INTERACTION

1. Sampling

2. Spatialisation 1

3. Sound processing layer 1

4. Memory log

5. Sound processing layer 2

6. Spatialisation 2

7. Processing stage 3

8. Spatialisation 3

9. Processing stage 4

10. Pre-prepared sound retrieval

11. Spatialisation 5

The variety and activity of Adsonora are in the hands of you - the users of the building.

Microphones record sound produced in the stair tower. The microphones (hypercardioid boundary-layer condenser microphones) are chosen to collect sound in a local area, minimising crosstalk between floors.

1. Sampling

1. Sound input is filtered to remove some of the constant background noise.
2. An envelope follower detects whether the sound input rises above a threshold; only above this threshold is sound sampled and a response evoked in the installation. If you don't want to interact, walk quietly - and watch out for someone noisy behind you!
3. Three infrared (retroreflective) sensors are located on each floor. One is located at each door and monitors entry/exit activity; two are located across the intermediate landing. The sensors provide simple motion-direction information through the order of their messages and the delay times between them. When a sensor beam is broken, it tells the record function that it may sample sound, provided the volume threshold has been exceeded.
4. The recorded sound is analysed by the audio classifier. If it is classified as a new sound, it is saved to disk with a unique file name. If it is classified as an old sound, the original old sound is retrieved for second layer processing (see below).
5. On recording the sound to disk, its duration is analysed in samples and silence is eliminated. The maximum duration of any sound chunk is 4000 ms. This value is chosen for three reasons: (a) it allows a large number of sounds to be active in computer memory at any one time; (b) it acts as a default time value if the on/off data from the sensors is ambiguous (see below); (c) 4000 ms is a reasonably long duration for a person to walk across the landing where the sensors are located. If the sensors do not provide clear direction information, and therefore no clear 'record-off' message is sent to the record function, an off message is sent automatically after 4000 ms.
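The gating logic of steps 2-5 can be sketched as follows. The class name, envelope representation, and threshold value are assumptions for illustration; only the 4000 ms fallback comes from the text above.

```python
THRESHOLD = 0.1       # hypothetical amplitude threshold (linear, 0..1)
MAX_CHUNK_MS = 4000   # maximum sound-chunk duration, as stated above

class RecordGate:
    """A broken sensor beam permits sampling only while the envelope
    exceeds the threshold; if no clear 'record-off' message arrives,
    recording stops automatically after 4000 ms."""

    def __init__(self):
        self.recording = False
        self.start_ms = 0

    def sensor_broken(self, envelope, now_ms):
        # A broken beam starts recording only above the volume threshold.
        if envelope > THRESHOLD and not self.recording:
            self.recording = True
            self.start_ms = now_ms

    def tick(self, now_ms, record_off=False):
        # Stop on an explicit 'record-off', or after the 4000 ms fallback.
        if self.recording and (record_off or now_ms - self.start_ms >= MAX_CHUNK_MS):
            self.recording = False
        return self.recording
```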

2. Spatialisation 1

On each floor, recorded sound is delayed by 1000 ms and first spatialised over three small loudspeakers (moving up or down from the secondary landing) without further processing. If clear motion information is received from the sensors, the sound is spatialised in the direction of this motion. On reaching the main landing, it is played over the two main speakers for that floor in a left-right oscillating motion, whose speed varies randomly between 0.1 and 100 Hz. If the sound is classified as an old sound, the old sound is retrieved and played over the main loudspeakers instead of the sound recorded by the microphone. In general, if motion information is ambiguous, the sound remains stationary. If many people are entering the tower, the sound will occasionally be processed with a resonant filter effect (giving it a ringing, harmonic quality).
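The left-right oscillation could be sketched as an equal-power pan driven by a sine wave at a randomly chosen rate. The function name and the equal-power law are assumptions; the 0.1-100 Hz range is from the text.

```python
import math
import random

def oscillating_pan(t, rate_hz):
    """Equal-power left/right gains for a sinusoidal left-right oscillation
    at rate_hz. Returns (left_gain, right_gain) with L^2 + R^2 == 1."""
    # Map the oscillation to a pan angle between 0 (full left) and pi/2 (full right).
    theta = (math.sin(2 * math.pi * rate_hz * t) + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

# The oscillation speed is drawn at random within the range given above.
rate = random.uniform(0.1, 100.0)
```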

3. Sound processing layer 1

When the sound reaches the main speakers, if it is an old sound, it is looped a few times and processed with a harmonisation effect. The changing parameters of the harmonisation (transposition, delay, window size) are constrained to a narrow range to limit the degree of transformation in the early stages.
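A minimal sketch of drawing the harmonisation parameters within narrow ranges; the specific ranges here are assumptions, not values from the text.

```python
import random

def layer1_harmonisation_params(rng=random):
    """Draw harmonisation parameters within deliberately narrow ranges
    (the ranges are illustrative assumptions) so that the early
    transformations stay subtle."""
    return {
        "transposition_semitones": rng.uniform(-1.0, 1.0),
        "delay_ms": rng.uniform(10.0, 60.0),
        "window_ms": rng.uniform(40.0, 80.0),
    }
```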

4. Memory log

If the sound is a new sound, it is stored in a log file that is cleared every two days. This log file is accessed by the layer2-splicer (see below). If the sound is an old sound, it is stored in a different log file called a 'memory file'. Every 21 days this memory file is written to disk, and its contents are retrieved for sound processing layer 4.

5. Sound processing layer 2

After a delay of 30000 ms (sufficient to allow all first-layer processing of the sound to complete), old and new sounds are retrieved for use in the 'splicer'. Each floor has its own splicer to handle floor-specific sounds. The splicer is a granulation programme designed to imitate the basic antibody DNA-splicing process. Small segments of sound are cut from numerous recently heard sound files and combined with older sound files. If old sounds are detected, the size of the splice window reduces over a 21-day period. Although this is not true to the antibody splicing process, the audible effect is a function that develops over time. The splicer output is heard on the left main speaker of each floor.
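The splicer's cut-and-combine behaviour, with the window shrinking over the 21-day period, might be sketched like this. The source-file representation and function name are assumptions.

```python
import random

def splice(sources, window_ms, n_segments, day_in_cycle=None, rng=random):
    """Cut short segments from several source sounds and list them for
    concatenation. When old sounds are involved (day_in_cycle is set),
    the splice window shrinks linearly over the 21-day period."""
    if day_in_cycle is not None:
        window_ms *= max(0.0, 1.0 - day_in_cycle / 21.0)
    segments = []
    for _ in range(n_segments):
        src = rng.choice(sources)  # mix recent and older files
        start = rng.uniform(0.0, max(0.0, src["dur_ms"] - window_ms))
        segments.append((src["name"], start, window_ms))
    return segments
```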

6. Spatialisation 2

When the sensors at each entry door are triggered, the outputs of all splicers are spatialised vertically through the tower, to and from the direction of the trigger points. This is a rapid process lasting only 900 ms.

7. Processing stage 3

If the sound is an 'old sound', it is further transformed with more extreme effects: harmonisation, amplitude modulation, spectral inversion, and convolution.

8. Spatialisation 3

The processed sound is dynamically spatialised over the small and large loudspeakers in the direction indicated by the motion information. If the direction information is ambiguous, this is the one instance where the sound is spatialised in the direction of the last clear motion reported by the sensors.

9. Processing stage 4

Every 21 days a log file of classified 'old' sounds is written to disk (a 'memory log'). These log files accumulate indefinitely. A file is retrieved at random each day from the available pool of 'memory files', and its contents are used to recall 'old' sounds. Processing stage 4 parallels the 'memory cells' of the immune system. These sounds are processed with longer, continuous developments than in the previous stages: the retrieved sounds are looped and gently transformed with delays, filtering, and reverberation.
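The daily draw from the ever-growing pool, together with the two-process limit, could be sketched as follows (function and variable names are hypothetical):

```python
import random

def schedule_memory_processes(memory_files, active, rng=random):
    """Draw one memory file at random from the ever-growing pool,
    keeping at most two memory processes active at once."""
    if memory_files and len(active) < 2:
        active = active + [rng.choice(memory_files)]
    return active
```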

A maximum of two memory processes are active simultaneously - one set beginning on floors 1-4, another set on floors 7-8. This dual process is not designed to mark specific floors, but was the best-sounding combination.

Spatialisation 4

The sounding result of stage 4 is a 'murmuring' effect that wanders slowly up and down the main loudspeaker chain.

10. Pre-prepared sound retrieval

Even if you present all sorts of sounds to the installation, real-time sound processing has many limitations. At present these are conditioned by the speed of the computer processor and the speed of the hard disk. More relevant to long-term developments in computer technology, where speed will no longer be an issue, are the limitations imposed by the background noise in the live space and the imagination of the user. For these reasons the installation must be able to access pre-composed sound material.

A separate audio classifier is 'primed' with analysis data from pre-composed sounds. Incoming live sound is compared by the classifier to the 'primed' log file. The pre-composed sound most closely matching the incoming sound is then retrieved by the computer.
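The matching step might work as a nearest-neighbour search over the primed analysis data; the feature-vector representation and function name here are assumptions.

```python
def closest_match(live_features, primed):
    """Return the name of the pre-composed sound whose analysis vector
    is closest (by squared Euclidean distance) to the features of the
    incoming live sound."""
    def dist2(item):
        return sum((a - b) ** 2 for a, b in zip(item["features"], live_features))
    return min(primed, key=dist2)["name"]
```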

The probability that a pre-composed sound will be retrieved is determined by how far the system is through a 21-day cycle. At the cycle's peak on day 5, there is a greater probability that a pre-composed sound will be selected for playback. This probability decreases again over the remaining 16 days of the cycle.
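The probability envelope over the cycle can be sketched as a rise to day 5 followed by a fall over the remaining 16 days; the base and peak probability values are assumptions, not from the text.

```python
def retrieval_probability(day, p_base=0.05, p_peak=0.6):
    """Probability of retrieving a pre-composed sound on a given day of
    the 21-day cycle: rising to a peak at day 5, then falling back over
    the remaining 16 days. p_base and p_peak are illustrative values."""
    day = day % 21
    if day <= 5:
        frac = day / 5            # rising phase, days 0-5
    else:
        frac = (21 - day) / 16    # falling phase, days 5-21
    return p_base + (p_peak - p_base) * frac
```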

11. Spatialisation 5

Pre-composed sounds are spatialised over all speakers. When a door sensor is triggered, any sound currently playing moves instantly to the main pair of speakers on that floor. In addition, a single small speaker nearby is turned on until another process turns it off.