CRUSH-2

Dr Natasha Barrett / Professor Karen Mair

Crush-2 is an interactive sound-art installation exploring the microscopic forces released during the process of crushing rock. The installation draws on geological research designed to help us understand the patterns and processes found in nature: 3D numerical simulations of grain fracture and fault gouge evolution during shear - the work of Dr Karen Mair (Physics of Geological Processes and Department of Geosciences, University of Oslo) and Dr Steffen Abe (Geologie-Endogene Dynamik, RWTH Aachen, Germany) - and the study of real acoustic emissions from granite, basalt and sandstone under compression - the work of Dr Alexandre Schubnel (Laboratoire de Géologie de l'École Normale Supérieure, Paris).

Crush uses 3D sound so the listener can move through a virtual, immersive space, experiencing the dynamics of deformation from "inside" the rock. The interactive side of Crush uses wireless headphones and a motion tracking system. It was first tested during Crush-1 in the SAS Radisson hotel in Oslo, supported by PGP as part of the Nordic Geological Winter meeting, January 2010.

Crush-2 is set up as part of the Sonic Interaction Design (SID) exhibition at the Norsk Teknisk Museum in Oslo. This version provides a choice of two interfaces: the motion tracking system from the first version and a '3D mouse'.

The composition work was carried out with support from a grant from the Norwegian Composers Remuneration Fund. A final revision, Crush-3, is planned as part of a large solo exhibition at Gallery ROM in November 2011.


Science, sonification and art

Work on Crush began with the accurate sonification of data from simulations and real acoustic emissions. These experiments each presented a different type of data, requiring different approaches to sound.

The real acoustic emissions

In the experiments carried out by Alexandre Schubnel and his research team, rock samples were subjected to pressure inside an apparatus within which a large number of ultrasonic transducers were mounted. The transducers, distributed evenly in three dimensions, recorded the sound of fracturing bonds within the rock structure.

Acoustic emissions experimental equipment. Photo: Karen Mair

The experiment lasts only 0.15 seconds, but the extremely high sampling frequency (4 MHz) captures data points every 0.25 microseconds. Amplitude and time-of-arrival differences of the acoustic emissions at each transducer allow the researchers to plot the magnitude and spatial location of each micro-fracture. Through time, this data maps the changes in stress within the rock sample and the dynamics of the rupture propagation.
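
The localisation step itself is a classic inversion. As an illustration of the principle only (not the researchers' code), here is a minimal least-squares sketch in Python; the transducer positions, arrival times and P-wave speed are all hypothetical:

    import numpy as np

    # Hypothetical inputs: transducer positions (metres), picked arrival
    # times (seconds) and an assumed P-wave speed in rock (m/s).
    sensors = np.array([[0.00, 0.00, 0.00],
                        [0.04, 0.00, 0.00],
                        [0.00, 0.04, 0.00],
                        [0.00, 0.00, 0.10],
                        [0.04, 0.04, 0.10]])
    t = np.array([1.20e-5, 1.45e-5, 1.38e-5, 1.05e-5, 1.30e-5])
    c = 5900.0

    # Each sensor gives |x - s_i|^2 = c^2 (t_i - t0)^2 for the unknown
    # source position x and origin time t0. Subtracting the equation for
    # sensor 0 yields a system that is linear in (x, t0).
    s0, t0ref = sensors[0], t[0]
    A = np.hstack([-2.0 * (sensors[1:] - s0),
                   (2.0 * c**2 * (t[1:] - t0ref))[:, None]])
    b = (c**2 * (t[1:]**2 - t0ref**2)
         - (np.sum(sensors[1:]**2, axis=1) - np.sum(s0**2)))
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    source, t0 = u[:3], u[3]
    print("estimated micro-fracture location:", source)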

Six plots of time slices from the acoustic emissions experiment: fracturing of sandstone. Credit: Schubnel et al. (2007).

From Schubnel and his team I received data files containing the spatial location, time and magnitude of 540 events for three rock samples displaying different rupture behaviour, along with one recorded ultrasonic signal from each sample. The samples were:
- Sandstone (from Fontainebleau, close to Paris, France)
- Granite (from La Peyratte, close to Poitiers, France)
- Basalt (from San Miguel Island, Azores, Portugal)

From this data the following sound-art experiments were conducted:

The ultrasonic recordings were transposed into the audible range. A standard downward transposition (tape transposition) was used, and depending on the transposition factor the original duration of 0.15 seconds produced outputs of between 15 and 60 seconds.
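
A minimal sketch of this kind of tape transposition, assuming the pysoundfile library and an illustrative filename and transposition factor:

    import soundfile as sf

    SRC_RATE = 4_000_000               # experimental acquisition rate: 4 MHz
    data, _ = sf.read("emission.wav")  # hypothetical source file

    # Tape transposition: keep the samples, reinterpret the playback rate.
    # A factor of 100 stretches the 0.15 s capture to 15 s and lowers all
    # frequencies by the same factor (about 6.6 octaves), bringing
    # ultrasonic content into the audible range.
    factor = 100
    sf.write("transposed.wav", data, SRC_RATE // factor)  # 40 kHz file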

The data files were scaled in time and space. The timeline was scaled to match the duration of the transposed acoustic emission, and the spatial dimensions were scaled to a more appropriate human scale (10 × 10 × 10 metres).
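
As an illustration, assuming the events arrive as a plain-text file with columns time, x, y, z and magnitude (a hypothetical layout), both scalings are simple linear maps:

    import numpy as np

    ev = np.loadtxt("events.txt")        # hypothetical: t, x, y, z, magnitude
    t, xyz, mag = ev[:, 0], ev[:, 1:4], ev[:, 4]

    # Stretch the 0.15 s experimental timeline to the transposed duration.
    t_scaled = t / t.max() * 15.0        # e.g. a 15 s transposition

    # Map the centimetre-scale sample onto a 10 m cube centred on the listener.
    lo, hi = xyz.min(axis=0), xyz.max(axis=0)
    xyz_scaled = (xyz - lo) / (hi - lo) * 10.0 - 5.0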

The acoustic emissions were spatialised in ambisonics using the micro-fracture coordinates from the data files. The ambisonics model included amplitude attenuation, Doppler shift and air absorption to enhance the auditory spatial dynamics.
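
A rough sketch of such distance cues together with a standard first-order B-format encode; the constants are illustrative rather than the installation's calibrated values:

    import numpy as np

    def distance_cues(src, listener, c=343.0):
        """Crude per-event distance cues: gain, propagation delay and a
        lowpass cutoff standing in for air absorption."""
        r = np.linalg.norm(src - listener)
        gain = 1.0 / max(r, 1.0)               # inverse-distance attenuation
        delay = r / c                          # re-computed per frame, a varying
                                               # delay yields Doppler shift
        cutoff = 20000.0 * np.exp(-0.05 * r)   # highs fade with distance
        return gain, delay, cutoff

    def encode_foa(signal, az, el):
        """First-order ambisonic (B-format) encode from azimuth/elevation."""
        w = signal * np.sqrt(0.5)
        x = signal * np.cos(az) * np.cos(el)
        y = signal * np.sin(az) * np.cos(el)
        z = signal * np.sin(el)
        return w, x, y, z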

A problem in using the original acoustic emissions is that aspects such as experimental noise and a narrow-band frequency range, of no consequence to the scientists, become problematic in sound-art. I therefore also made my own recordings of rocks being crushed, scraped and generally destroyed. Using an ultrasonic transducer and high-sampling-frequency recording equipment (normally used for bioacoustics), which captured frequencies up to 90 kHz, as well as normal microphones recording in the audible range, the palette of source sounds was significantly expanded. These sounds were also mapped to the researchers' data.


3D numerical simulations

Numerical simulations model real-world processes as closely as possible in an attempt to understand the dynamic patterns and processes that exist in nature. The work of Karen Mair and Steffen Abe involves modelling grain fracture and fault gouge evolution during shear: in other words, what happens in rock structures during high-force sliding compression. These experiments tackled the problem at the micro-level, in simulations using over 100,000 individual particles and involving a large number of descriptive parameters.

3D model of granular fault gouge constrained between jagged boundary blocks: the initial configuration (left) and final state (right) after vertical compression and sliding of the fault walls right and left (see arrows). The model contains 480,000 particles, coloured according to their original 'parent' grain. As the aggregate parent grains break up, some colours become smeared as grains are pulverised, whilst other 'survivor' grains remain relatively undamaged. (Mair 2011, modified after Abe and Mair, 2009)

With so much information it was necessary to reduce the dataset before considering how to approach the sonification. The researchers also needed to extract data from the model, and chose to group the data into 200 time-steps. The data showed a chaotically complex initial rupture followed by exponentially decreasing information distributed over the 3D space. In contrast to the acoustic emissions, the spatial distribution appeared more uniform and the temporal distribution more constant. This provided an interesting point of contrast. I then applied the following further data-reduction techniques (a small code sketch follows the list):

- Tracking the change in just 30 particles from different points in the sample, where changes in particle cluster mass indicate fractures.
- Selecting particle cluster mass changes within specific thresholds for the complete data set.
- Tracking the motion of a few individual particles from different points in the sample.
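
As an indication of the second technique, a minimal thresholding sketch, assuming the cluster masses are available as a time-steps × clusters array (the filename and thresholds are hypothetical):

    import numpy as np

    mass = np.load("cluster_mass.npy")   # hypothetical: (200, n_clusters)
    dm = np.diff(mass, axis=0)           # mass change per time-step

    # Keep only mass losses within a chosen window: large enough to
    # indicate a fracture, small enough to exclude the initial rupture.
    low, high = 0.01, 0.5                # illustrative thresholds
    step, cluster = np.nonzero((dm < -low) & (dm > -high))
    # each (step, cluster) pair becomes a sound event at that time-step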

Without any 'real' sound in this experiment, I used the audible-range sound from the acoustic emissions as input for the sonification. Particles and clusters were mapped to their own sound varieties, amplitude was mapped to magnitude, and time and spatialisation were mapped and synthesised in a similar way to the acoustic emissions.

Development and sound-art

The initial sonifications highlighted the most interesting mapping processes, which were then used as a point of departure for artistic experiments. Although sometimes at the expense of scientific accuracy, these new developments attempted to maintain the correlation between 3D sound and the patterns and processes found in the geological systems. Parameters such as sound type, volume, transience, frequency, filter, pitch shift, grain, continuation, resonance and spatial location were mapped in various ways to source data parameters such as fracture magnitude, fracture location and spatial displacement. Timelines and spatial dimensions were particularly malleable, and proved one of the most useful ways to focus in on (short time spans) or abstract from (longer time spans) the geological systems. Some results are many minutes in duration, others only a few seconds.

Interaction and exploring Crush-2

The 3D motion tracking system

Surrounding the installation space are a loudspeaker array and seven targets of infrared light constellations. A custom-made motion tracking system allows each user to physically navigate through the sound composition. The user wears head-mounted 3D accelerometers, gyroscopes and an infrared camera.

Crush helmet with motion sensors, infrared camera and wireless headphones. Photo: Natasha Barrett.

The seven infrared light constellations surround the interactive space and are used to continuously correct drift from the accelerometers and gyroscopes.

Sensor light constellation, also with visible light. Photo: Natasha Barrett.

Motion data are sent to a computer over Bluetooth and processed to render the user's position and direction of view. This information is used to modify the spatial sound. For the person wearing the headset, the 3D sound is rendered using head-related transfer functions (HRTFs) over wireless headphones. For other visitors, sound is decoded over the loudspeaker array using ambisonics.
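
At the core of this rendering is a change of reference frame: each source must be expressed relative to the listener's tracked position and view direction before an HRTF pair is selected or the field re-encoded for the loudspeakers. A minimal horizontal-plane sketch, ignoring pitch and roll:

    import numpy as np

    def relative_azimuth(src, listener_pos, listener_yaw):
        """Azimuth of a source in the listener's head frame (radians).
        yaw is rotation about the vertical axis; 0 means facing +x."""
        d = src - listener_pos
        world_az = np.arctan2(d[1], d[0])
        az = world_az - listener_yaw
        return (az + np.pi) % (2.0 * np.pi) - np.pi   # wrap to [-pi, pi]

    # The relative azimuth (plus elevation and distance) selects an HRTF
    # pair for the headphone render, or re-encodes the source for the
    # ambisonic loudspeaker decode.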

The 3D mouse

The user stands in the centre of the loudspeaker array and controls their virtual location and rotation with the 3D mouse. Wearing the headphones enhances the audio experience and assists accurate navigation. A simple visual display indicates the user's current location and rotation in the horizontal scene.

3D mouse and start-up screen. Photo: Natasha Barrett.

When interacting with Crush the user explores the following stages:

Stage 1.
Ten-minute sound-files play automatically when no one has interacted with the system for more than 60 seconds. These sound-files are made from a detailed sonification of the numerical simulations.

Stage 2.
When you first start to play with the installation, by touching the 3D mouse or by fitting the helmet (see separate information on using the helmet), you hear a number of repeating sounds called 'target sounds'. There are seven target sounds in total, located in free space just in front of the light constellations. These sounds are derived from the acoustic emissions and spatialised in real-time to maintain an accurate relation to the user's physical (or virtual) spatial location. You will hear each sound increase in volume as you approach. Use your spatial hearing to navigate. When in close proximity and 'looking' front-on, you will release the next stage.
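
In outline, the release condition combines proximity with view direction. A sketch with illustrative thresholds (the installation's actual values may differ):

    import numpy as np

    def target_released(listener_pos, listener_yaw, target_pos,
                        max_dist=1.0, max_angle=0.35):
        """True when the listener is close to the target and roughly
        facing it. Thresholds (1 m, ~20 degrees) are illustrative."""
        d = target_pos - listener_pos
        if np.linalg.norm(d) > max_dist:
            return False                           # not close enough yet
        az = np.arctan2(d[1], d[0]) - listener_yaw
        az = (az + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi]
        return abs(az) < max_angle                 # 'looking' front-on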

Stage 3.
Variations on longer sonifications from both the acoustic emissions and the numerical simulations are released when you have successfully located and approached one of the sounds from stage 2. These sonifications allow the user either to move around or to listen from a stationary position. Stage 3 automatically transfers either to stage 4 or back to stage 2 (depending on which target sound was chosen). There are seven versions of stage 3, one released by each target sound.

Stage 4.
Thirty very short repeating mono sounds are distributed in space at the locations of the 30 highest-amplitude fractures in the acoustic emissions data sets. The points are spatialised in real-time to maintain an accurate relation to the user's physical (or virtual) spatial location. You can investigate the space, moving away from, towards and around the sonified spatial fractures. There are three versions of stage 4, one released by three of the seven target sounds.

Stage 4 ends automatically and returns to stage 2. The target sound you first chose will have stopped playing, and you can choose whether to explore the rest of the targets. After 60 seconds of inactivity, stage 1 plays automatically.
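
Taken together, the four stages behave like a small state machine with a 60-second idle timeout. A sketch of the control flow (the class and method names are mine, not the installation's code):

    import time

    class CrushStages:
        IDLE_TIMEOUT = 60.0                      # seconds of inactivity

        def __init__(self):
            self.stage = 1                       # stage 1: automatic playback
            self.last_activity = time.monotonic()

        def on_interaction(self):                # mouse touched / helmet fitted
            self.last_activity = time.monotonic()
            if self.stage == 1:
                self.stage = 2                   # target sounds appear

        def on_target_released(self, target):    # stage 2 condition met
            self.stage = 3

        def on_stage3_end(self, leads_to_stage4):
            self.stage = 4 if leads_to_stage4 else 2

        def on_stage4_end(self):
            self.stage = 2                       # remaining targets explorable

        def tick(self):                          # called regularly
            if time.monotonic() - self.last_activity > self.IDLE_TIMEOUT:
                self.stage = 1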


Cited references:
Schubnel, A., Thompson, B. D., Fortin, J., Guéguen, Y. and Young, R. P. (2007). Fluid-induced rupture experiment on Fontainebleau sandstone: Premonitory activity, rupture propagation, and aftershocks. Geophysical Research Letters, Vol. 34, L19307, doi:10.1029/2007GL031076
Abe, S. and Mair, K. (2009). Effects of gouge fragment shape on fault friction: New 3D modelling results. Geophysical Research Letters, Vol. 36, L23302, doi:10.1029/2009GL040684


Images from Crush-2 at SID / Teknisk Museum, Oslo.

Crush-2 (left: 3D mouse interface, centre and right: helmet interface). Photos: Natasha Barrett.

Crush-2 (left: 3D mouse interface and loudspeaker array). Photos: Natasha Barrett.