Last updated: 5th November 2010.

Crush

Crush is an interactive sound-art installation exploring the microscopic forces released during the process of crushing rock. The installation draws from two research projects at PGP (Physics of Geological Processes) in Oslo: 3D numerical simulations of grain fracture and fault gouge evolution during shear, the work of Steffen Abe (Aachen) and Karen Mair (Oslo); and the study of real acoustic emissions from granite, basalt and sandstone under compression, the work of Alexandre Schubnel (Paris).

Crush involves 3D electroacoustic sound, a loudspeaker array, wireless headphones, a motion tracking system, still images and a real-time video projection. In this installation, the audience can move through a virtual, immersive space, experiencing the dynamics of deformation from 'inside' the rock.

This first test version of Crush was installed in the SAS Radisson hotel in Oslo as part of the Nordic Geological Winter Meeting, January 2010. In this test set-up Crush occupied two adjoining spaces: a room dedicated to the loudspeaker system and video, where visitors could watch and listen, and a large foyer where visitors wore the motion tracking system and wireless headphones to interact actively with the work using their ears and body. Link to photos, sound and video.

Crush-2 is a development of Crush and is currently a work in progress. At the end of this section is a brief list of how Crush-2 will be developed from Crush.

Crush was supported by PGP, Fond for Lyd og Bilde and TONO.

Science, sonification and artistic processes

Work on Crush began with the accurate sonification of data from simulations and real acoustic emissions. Subsequent stages involved degrees of abstraction through the choice of sound material, data mapping rules, interaction design and material montage. Maintaining a tight correlation between 3D sound and the patterns and processes found in the geological systems was an important consideration. In the final work, micro-scale processes are enlarged into a dynamic system audible through sound colour (timbre), texture, shape and spatial geometry.

The real acoustic emissions were recorded at a frequency of 4 MHz. Three of these recordings from different rock samples were transposed into the audible range and used as sound material in Crush. In addition I made my own recordings of rocks being crushed, scraped and generally destroyed, using ultrasonic transducers capturing up to 90 kHz as well as recordings in the audible range. Parameters such as sound type, volume, transience, frequency, filter, pitch shift, grain, continuation, resonance and spatial location were mapped in various ways to the source data parameters such as fracture magnitude, fracture location and spatial displacement.
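As an illustration of the transposition step, a recording sampled at 4 MHz can be brought into the audible range simply by reinterpreting its samples at an ordinary audio rate, which lowers every frequency by the ratio of the two rates (here roughly 90:1, about six and a half octaves). The sketch below is only a minimal example of that idea; the file names and normalisation are my own placeholders, not the installation's actual processing chain.

```python
import numpy as np
from scipy.io import wavfile

SOURCE_RATE = 4_000_000   # the acoustic emissions were recorded at 4 MHz
TARGET_RATE = 44_100      # ordinary audio rate for playback

def transpose_to_audible(samples: np.ndarray) -> np.ndarray:
    """Prepare ultrasonic samples for playback at an audio rate.

    Playing 4 MHz samples back at 44.1 kHz lowers every frequency by a factor
    of about 90 (roughly 6.5 octaves), bringing MHz-range emissions into the
    audible band. No resampling is needed; only the nominal rate changes.
    """
    samples = samples.astype(np.float64)
    peak = np.max(np.abs(samples))
    return samples / peak if peak > 0 else samples   # normalise to -1..1

# Hypothetical input: one channel of raw emission data stored as a NumPy array.
emission = np.load("granite_emission.npy")           # placeholder file name
audio = transpose_to_audible(emission)
wavfile.write("granite_emission_audible.wav", TARGET_RATE, audio.astype(np.float32))
print(f"transposition: {SOURCE_RATE / TARGET_RATE:.1f}x "
      f"({np.log2(SOURCE_RATE / TARGET_RATE):.1f} octaves down)")
```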

The simulations involved hundreds of thousands of fractures, so data reduction was necessary to make the processes audible. Different fracture magnitudes were mapped to different sound types. A chaotically complex initial rupture was followed by exponentially decreasing information distributed over the 3D space. In contrast, the data from the real emissions involved a moderate number of fractures and a less uniform spatio-temporal distribution. The rupture occurred after a build-up of compression energy, and all of the information was used in the sound-mapping process.
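The following is a minimal sketch of the kind of reduction and magnitude-to-sound-type mapping described above, assuming the simulation output is available as a table of fracture events (time, position, magnitude). The thresholds, category names and file format are invented for illustration.

```python
import numpy as np

# Hypothetical columns: time [s], x, y, z [model units], magnitude.
events = np.loadtxt("fracture_events.csv", delimiter=",")

# Data reduction: with hundreds of thousands of fractures, keep only a
# magnitude-weighted random subset so the result remains audible as discrete events.
rng = np.random.default_rng(1)
mags = events[:, 4]
keep_prob = np.clip(mags / mags.max(), 0.01, 1.0)   # louder events are kept more often
events = events[rng.random(len(events)) < keep_prob]

# Map fracture magnitude to a sound type (the categories here are invented).
def sound_type(magnitude: float) -> str:
    if magnitude < 0.2:
        return "grain_tick"       # faint, granular clicks
    if magnitude < 0.6:
        return "crack_mid"        # mid-range cracking sounds
    return "rupture_low"          # deep, resonant rupture sounds

for t, x, y, z, m in events[:10]:
    print(f"t={t:8.3f}s  pos=({x:5.2f},{y:5.2f},{z:5.2f})  mag={m:4.2f} -> {sound_type(m)}")
```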

Time and Space

The data was grouped into a number of time steps. In Crush the durations of these time steps are malleable: some layers of sound span 10 minutes, others a few seconds. Different timescales were chosen to highlight the processes described by the data and to allow interesting musical and sonic motifs to be heard. Likewise, the spatial dimension is malleable in the artwork, ranging from one metre to many metres.
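As a simple illustration of this malleability, the event times and positions of one data layer can be rescaled linearly onto whatever duration and physical extent that layer is given in the installation. The function and numbers below are purely illustrative.

```python
def rescale_layer(times, positions, layer_duration_s, room_extent_m):
    """Map data time and model-space coordinates onto installation time and space.

    times: event times in data units; positions: (x, y, z) tuples in model units.
    The same events can span ten minutes or a few seconds depending on
    layer_duration_s, and one model unit can become one metre or several.
    """
    t0, t1 = min(times), max(times)
    scaled_times = [(t - t0) / (t1 - t0) * layer_duration_s for t in times]
    scale = room_extent_m / max(max(abs(c) for c in p) for p in positions)
    scaled_positions = [tuple(c * scale for c in p) for p in positions]
    return scaled_times, scaled_positions

# Example: stretch three events onto a 10-minute layer inside a 4-metre listening space.
t, p = rescale_layer([0.0, 0.3, 1.2],
                     [(0.1, 0.0, 0.2), (-0.4, 0.2, 0.1), (0.3, -0.1, 0.0)],
                     layer_duration_s=600, room_extent_m=4.0)
```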

The interactive system

We designed a motion tracking system that allows each user to physically navigate through the 3D sound composition. The user wears a head-mounted Nintendo Wii Remote with the Wii MotionPlus attachment. Together these contain 3D accelerometers, gyroscopes and an infrared camera. Seven infrared-light target constellations surround the interactive space and are used to continually correct the drift accumulated by the accelerometers and gyroscopes. Motion data are sent to a computer over Bluetooth and processed to render the user's position and direction of view. This information is used to modify the spatial sound image and the video. For the person wearing the headset, the 3D sound is rendered using head-related transfer functions (HRTFs) over wireless headphones. Two people may interact with the work at any one time (a restriction due to radio interference between headphones). For other visitors, the sound is decoded over the loudspeaker array.
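The sketch below illustrates the general idea of combining fast but drifting inertial data with occasional absolute fixes from the infrared targets, using a simple complementary filter. It is not the installation's actual tracking code; the class, the blending constant and the yaw-only heading are invented for the example.

```python
import numpy as np

ALPHA = 0.98   # trust in the integrated inertial estimate between infrared fixes

class HeadTracker:
    """Toy position/heading estimator: integrate inertial data, correct with IR fixes."""

    def __init__(self):
        self.position = np.zeros(3)   # metres, in room coordinates
        self.velocity = np.zeros(3)   # metres per second
        self.heading = 0.0            # radians; yaw only, for simplicity

    def inertial_update(self, accel, gyro_yaw_rate, dt):
        """Dead-reckon from accelerometer (m/s^2) and yaw rate (rad/s); drifts over time."""
        self.velocity += np.asarray(accel) * dt
        self.position += self.velocity * dt
        self.heading += gyro_yaw_rate * dt

    def infrared_fix(self, position_fix, heading_fix):
        """Blend in an absolute fix computed from a recognised infrared target constellation."""
        self.position = ALPHA * self.position + (1 - ALPHA) * np.asarray(position_fix)
        self.heading = ALPHA * self.heading + (1 - ALPHA) * heading_fix
        self.velocity *= ALPHA        # damp accumulated velocity error as well
```

The estimated position and heading are what then drive the binaural (HRTF) rendering and the virtual camera.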

In Crush the fracture simulations occupy 10-minute 3D layers, while the acoustic emissions occupy time spans from 10 seconds to two minutes. The listener navigates through the crushing rock structure using their ears.

The loudspeaker array

In a dedicated space, an eight-channel loudspeaker system plays the 3D sound-field encoded with higher-order ambisonics based on one of the two interactive streams. This gives a higher quality and more precise sound picture than that heard over the headphones. If no one is currently interacting with the work, a preset cycle begins.
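For readers unfamiliar with ambisonics, the sketch below shows a basic 2D encode/decode chain for a ring of eight loudspeakers using a plain sampling decoder. It is only illustrative: the installation used its own higher-order encoding and decoding, and the order, layout and decoder choice here are my assumptions.

```python
import numpy as np

ORDER = 3                     # 2D ambisonic order (8 speakers support up to order 3)
N_SPEAKERS = 8
speaker_azimuths = 2 * np.pi * np.arange(N_SPEAKERS) / N_SPEAKERS  # regular ring

def encode_2d(signal, azimuth):
    """Encode a mono signal at a given azimuth into 2D circular-harmonic channels."""
    channels = [signal]                                    # order 0
    for m in range(1, ORDER + 1):
        channels.append(signal * np.cos(m * azimuth))      # cosine component
        channels.append(signal * np.sin(m * azimuth))      # sine component
    return np.stack(channels)

def decode_2d(b_format):
    """Sampling decoder: project the harmonic channels onto each speaker direction."""
    feeds = np.zeros((N_SPEAKERS, b_format.shape[1]))
    for i, phi in enumerate(speaker_azimuths):
        out = b_format[0].copy()
        k = 1
        for m in range(1, ORDER + 1):
            out += 2 * (b_format[k] * np.cos(m * phi) + b_format[k + 1] * np.sin(m * phi))
            k += 2
        feeds[i] = out / N_SPEAKERS
    return feeds

# Example: a short noise burst encoded 45 degrees to the left of straight ahead.
burst = np.random.default_rng(0).standard_normal(4410) * np.hanning(4410)
speaker_feeds = decode_2d(encode_2d(burst, np.pi / 4))
```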

Video projection (optional)

A 3D interactive virtual world is created with the open-source 3D engine Irrlicht. As the user navigates through the real space, their movements control the camera view inside the virtual world, which contains the following elements:
• The targets / infrared light constellations corresponding to the real interactive space.
• A point cloud derived from the numerical simulations.
• 80 images derived from X-ray CT scans of a fracture in sandstone, one image per millimetre. The projected location, size and brightness of each image are controlled by the location, amplitude and frequency of the loudest component of the 3D sound-field (a minimal mapping sketch follows below). In this way the sound may be regarded as a mediator between visualised science and art.
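The sketch below shows the kind of sound-to-image mapping described in the last point. The analysis input, the frequency range and the log-frequency-to-size rule are invented for illustration; the installation's actual mapping may differ.

```python
import numpy as np

def map_loudest_to_image(components, min_freq=50.0, max_freq=5000.0):
    """Map the loudest currently sounding component to image parameters.

    components: list of (position_xyz, amplitude_0_to_1, frequency_hz) tuples.
    Returns (image_position, image_scale, image_brightness) for the projection.
    """
    position, amplitude, frequency = max(components, key=lambda c: c[1])
    brightness = float(np.clip(amplitude, 0.0, 1.0))       # louder -> brighter
    # Higher-pitched components give smaller projected images (log-frequency scale).
    log_span = np.log(max_freq) - np.log(min_freq)
    rel = (np.log(np.clip(frequency, min_freq, max_freq)) - np.log(min_freq)) / log_span
    scale = 1.5 - rel            # between 0.5 (high pitch) and 1.5 (low pitch)
    return np.asarray(position), scale, brightness

# Example with three invented components; the loudest (second) one drives the image.
components = [((0.2, 1.0, -0.5), 0.3, 1200.0),
              ((1.1, 0.4, 0.0), 0.8, 300.0),
              ((-0.7, 0.2, 0.9), 0.5, 2500.0)]
pos, scale, brightness = map_loudest_to_image(components)
```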

New in Crush-2

Learning from the experiments in Crush, in Crush-2 we are:
• Exploring new simulations and new acoustic emissions.
• Developing the sonification process and data-to-sound mapping.
• Exploring interactivity more thoroughly: what it means for the visitor and how it connects to the real space in which the installation is set up.
• Revising the visual information and its effect on (a) the aural sonification and (b) the general meaning the visitor may find.
• Designing a durable solution for the head gear.
• Developing solutions for long-term running.