Composition and the 'Live-Electroacoustic': sound, time, control and ensemble.

This text was written in 2009. I'm returning in depth to this subject from winter 2013 through to 2015

This paper discusses how differences between the 'instrument and tape' format and works involving real-time sound transformation or automatic computer synchronisation have aesthetic consequences for both the compositional process and the final work. The author separates aesthetic issues from technical solutions that simplify the performance of an electroacoustic work which, for a listener, is equivalent to a 'perfect' instrument and tape performance. Composition, rather than improvised performance, is the focus. The ideas discussed derive from the author's compositional activity spanning a period of technological change, and from working with performers in solo and ensemble contexts. The presentation draws on examples from the electroacoustic ensemble composition 'Crack' (Barrett 2006).

1. Introduction
Are today's and tomorrow's live performance technologies building a framework for the evolution of a new compositional aesthetic? Or is the main effect found in areas of improvisation and interaction? The ideas below deal with composition as a practice where the majority of dimensions (note, timbre, gesture, timing, space, interaction and overall sound-world) are defined and detailed in a way that allows recognisable, repeatable performances by different performers and allows the work to live beyond a single period in time, even though some dimensions remain open to freedom of interpretation. Integral to my recent live-electroacoustic works, electroacoustic sounds and structural-temporal electroacoustic variations are produced in real-time in interaction with the acoustic performance: tracking systems capture detailed multi-dimensional data, which in turn is used to control real-time sound transformation, mixing and montage of the live audio signal across a range of time domains. In these works the 'real-time' systems address the reality of a contemporary music ensemble on tour, where a single person is normally responsible for all technical and sound-related matters. The acoustic and computer materials are composed in detail, yet little sound is realised beforehand in the studio. After two performances of Crack, the technical gymnastics required to calibrate the computer's performance and sense of ensemble optimally, under extremely variable and unpredictable concert situations, were evident. These experiences led to an assessment of the computer's value as a 'live' element in the performance. Adding a large MaxMSP patch and specialist hardware increases the compositional workload and the technical set-up, and unless the composition is 'upgraded' as technology changes the work is locked to a period in technological history.
Immediate questions arose: how much of the real-time concert sound manipulation could have been pre-made in the studio, the audience being none the wiser? How much live variation is important to the 'one-off listener' (a typical listener who experiences one performance of a new work and afterwards a CD recording, and is therefore unable to compare the differences between two live performances)? Unless the listener is actively alert throughout several performances, where variation is more than a factor of room acoustics, loudspeakers and mix, how much of the material would they realise was pre-made? Would this realisation make the work less exciting?

2. Studio sound and live sound
The differences between sound moulded in the studio and material transformed live during a performance are clearly audible. Sound from the studio conveys precision and clear intent: controlled and precise in its temporal-spectral morphology and yielding a broad diversity. Material transformed live during a performance is imprecise and variable from one articulation to the next, of variable quality in terms of signal-to-noise ratio and spectral representation, problematic to control in temporal-spectral morphology, and yields significantly lower diversity. Regardless of how advanced our real-time computer systems may be, differences such as these are inescapable for the following reasons:

2.1 Microphone placement
In the studio, close microphone techniques are normally used to capture interesting natural spectral morphologies. A centimetre variation in microphone placement results in a markedly different sound. Furthermore, the composer searches with the microphone for interesting features of the resonating body in a controlled acoustic, often recording an object or instrument from a multitude of angles. In a performance situation, even if set-up time permits centimetre-accurate microphone placement, recreating the studio equivalent is unrealistic. We encounter unpredictable concert hall acoustics that change when the audience enters the room, microphone placements that are fixed for the complete performance and limited to locations unobtrusive to the performed action, and instruments acoustically blended on stage. An approach to live sound that places microphones further from the source presents additional problems: even less separable sources and the danger of feedback. Although some performers use contact microphones or microphones built into the body of the instrument, in most situations this is not a realistic option, and those I have experienced are less representative of the instrument's timbre than high-quality condenser microphones.

2.2 The studio based performer-composer relationship
In the studio the composer often works with the performer to capture the most interesting timbres, gestures or articulations. The process extends over many hours or days. Time to think, experiment, modify, and to segment non-performable material into short units is a common need. Besides capturing a more diverse sound collection, this way of working may suggest compositional directions unanticipated at the outset. Such a process is unrealistic in the instant of the concert, even though the intention of a gesture may be worked out between composer and performer in advance. Interesting material is likely to come from the stage action but is unlikely to recreate the 'chosen take' from the studio session, which for composition, rather than improvisation, is a significant issue.

2.3 The composer's studio work
Composing in the studio is a non-real-time activity. We are of course hearing the sound in real-time (rather than mentally reading an instrumental score), but the process of modifying a multitude of sound composition parameters takes weeks of reflection, experimentation and development in the context of the growing work. In a concert, even if we were able to overcome microphone placement issues, the scale of the transformation routine is on the whole unrealistic to implement in real-time. We are not necessarily limited by computer power, but by programming time and complexity: in the studio, hundreds of actions may be used to transform a five-second source sound into the appropriate composed sound.

In terms of sound alone, the reasons for incorporating a live computer look bleak. However, if we view the 'human' angle, a different picture emerges. It is here we now turn, drawing on examples from Crack.

3. Technical infrastructure for Crack
Whether controllers and gesture-tracking devices will offer as much control to the performer as their acoustic instrument, and indeed whether this is desirable, is the subject of a different discussion. Here we are interested in ways to capture the performance in terms of sound- or motion-tracking, methods for extracting appropriate information from the data, and the mapping of sound and temporal transformations in a meaningful way. Crack is scored for percussion, electric guitar, trumpet and computer. The percussionist was the focus for tracking, and a method for capturing the most significant visual-audio performance information was needed. One accelerometer and one gyroscope sensor are attached to each of the percussionist's hands and used to track two-dimensional motion. The sensors are connected to a UDP interface. The motion data is processed to provide attack instant, velocity of attack, and motion and speed of gesture from the percussionist's performance. The sensor selection, positioning on the performer's hands and method for filtering the raw data were designed by Tomas Bouaziz (who originally developed the sensor system at La Kitchen in Paris). Bouaziz initially intended his design for a rock-drumming technique, which is somewhat different from contemporary percussion technique, and the first task was to investigate which data types were robust for a contemporary technique. Some aspects were reliable and repeatable, other aspects too variable to use. Attack synchronisation was consistent, and attack velocity and motion direction in the vertical plane were meaningful for some specific playing techniques. Motion direction was reliable only for clear articulations in the vertical plane.
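The extraction of attack instants and velocities from a raw sensor stream can be sketched as follows. This is a minimal illustration only: the smoothing window, threshold and function names are invented for the sketch, and the actual filtering designed by Bouaziz is not documented here.

```python
# Hypothetical sketch: deriving attack instants and peak velocities from a
# raw accelerometer stream. Thresholds and the moving-average filter are
# invented for illustration, not taken from the actual sensor system.

def smooth(samples, window=4):
    """Moving-average filter over the raw sensor values."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

def detect_attacks(samples, threshold=0.5):
    """Report (index, velocity) wherever the smoothed signal crosses the
    threshold on a rising edge - a crude stand-in for attack detection."""
    s = smooth(samples)
    attacks = []
    for i in range(1, len(s)):
        if s[i - 1] < threshold <= s[i]:
            attacks.append((i, s[i]))
    return attacks
```

A rising-edge test like this is robust for clear articulations in one plane, which is consistent with the observation above that only vertical-plane data proved reliable.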

4. The score
As the percussionist steers all computer action, the notated score is somewhat different from what it would have been in an instrument and tape situation. The percussionist's part is required to interface with the needs of the complete ensemble, where the computer is an active player.

5. Compositional issues connected to the real-time format
At a basic level, automatic synchronisation results in more reliable performances. Only when composers place themselves on stage 'behind' the concert loudspeaker system do they realise the difficulty of hearing what happens in the electroacoustic material. Knowing that the computer will 'follow' removes much confusion in synchronisation and simplifies the learning process. Furthermore, as with other types of score following, in synchronising with pre-made material a degree of rubato, if desirable, is possible. These basic features can be regarded as solutions to performing a work that could just as well have been scored for instrument and tape. The following concerns features that influence the composition and the compositional process.

5.1 Composed interactive virtuosity
Fast and accurate performance data-capture and sound transformation opens the possibility for rapid and accurate synchronisation, which in turn facilitates greater complexity in the counterpoint between composed acoustic and electroacoustic materials. Under these conditions interactive virtuosity becomes a compositional concern, where composed interaction may enter into detail beyond that possible in the instrument and tape format. The possibilities for this complex counterpoint emerged during the composition of Crack and are by no means fully realised in this work. I will however draw on a few illustrations. Crack contains three movements: Atomic Crack, Deep Ice and Crack Horizon. One layer of compositional intent in the first two movements explored ways to connect musical structure to phenomena found in nature. Two contrasting approaches are used: modelling and observation. Atomic Crack begins at the atomic level of a crack process - an imperceptible process and therefore conceptual in its sounding manifestation. Without delving into detail, data extracted from an atomic model was used to create acoustic instrumental material hand in hand with the live sound manipulation. The extent of timing and synchronisation required to realise both the concept and the musical material would have been impossible with a fixed tape part. All electroacoustic material is created live through sampling, playback and transformation. The percussionist controls live sampling, processing and synchronisation of the complete ensemble in the following way: live sound is continuously sampled to disk and stored in a log or pool of individual sounds, the duration of each segment determined by the duration between peak attack velocities (from 100 ms to 6 seconds). Simultaneously, attack instants trigger playback of the recorded segments. Four stereo sounds may play simultaneously via the playback engine, and each sound loops until it is replaced when a new attack occurs.
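The logging and playback behaviour described above, including the wrap-around selection discussed in the following paragraph, can be sketched as follows. The class and method names are invented for illustration and the audio I/O is abstracted away; only the four-voice limit and the chronological log follow the description in the text.

```python
# Minimal sketch of the Atomic Crack sampling/playback logic: peak attack
# velocities close and log recorded segments; attack instants trigger
# looping playback from the chronological log. Names are hypothetical.

class SegmentPool:
    def __init__(self, max_voices=4):
        self.log = []            # chronological list of recorded segments
        self.voices = []         # currently looping segments (max four)
        self.max_voices = max_voices
        self.next_index = 0      # which logged segment the next attack selects

    def on_peak_velocity(self, segment):
        """A peak attack velocity closes the current recording and logs it."""
        self.log.append(segment)

    def on_attack(self):
        """An attack instant replaces the oldest looping voice with the next
        logged segment; when that segment is not yet written to disk, wrap
        to the start of the log ('live sound number 1')."""
        if not self.log:
            return None
        if self.next_index >= len(self.log):
            self.next_index = 0  # jump back to the beginning of the log
        segment = self.log[self.next_index]
        self.next_index += 1
        if len(self.voices) >= self.max_voices:
            self.voices.pop(0)   # oldest voice is replaced
        self.voices.append(segment)
        return segment
```

Because attack instants arrive faster than peak velocities, `on_attack` soon outruns the log and wraps, which is precisely the mechanism that produces the increasing temporal detachment described in the text.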
The sequence of recorded segments is logged to create a chronological list from which sounds for the play-engine are chosen. The frequency of attack instants is greater than the frequency of peak attack velocities. This means that during the opening bars the interaction appears straightforward in both sampling and synchronisation. Soon the number of recorded segments has increased such that the possible range for temporal detachment and rhythmic complexity increases (a peak velocity may be recording 'live sound number 30', while the attack instant, being unable to select 'live sound number 30' as it has not yet been written to disk, jumps to the beginning of the log and selects 'live sound number 1'). This whole sequence occurs in a relatively short duration (approximately 30 seconds). The result is designed to produce an increasing density of counterpoint and rhythm where the electroacoustic part is a player within, rather than accompaniment to, the ensemble. All events - recording, playback, degree of change and dynamics - are inherently synchronised with the percussionist's action. The opening of movement 2, Deep Ice, deals interactively with pre-made sound. The sound of ice cracking as it is submerged in water was recorded in the studio with high-quality close microphones. After a four-octave tape transposition a clearly resonant percussion-style attack texture became apparent. Attack, timbral and pitch centres were extracted from this sound and mapped to the live percussion part. In the composition the percussion plays rapidly and simultaneously with, and modifies, the pre-made electroacoustic sound. Here, live synchronisation is vital. If the percussionist were to play with a continuous fixed sound-file, we would perceive only a chance co-ordination of two streams running in parallel.
Instead, peak attack velocities from the percussion cause sound playback to jump to one of 80 preloaded cue points, each positioned at a moment of attack in the electroacoustic sound. Performed attack velocities are also used to control the volume of the pre-made material. The difference between the 'instrument and tape' version and the 'live' version is subtle, and is intended to create an uncannily accurate synchronisation of two complex rhythmic fields verging on the point of texture.
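The cue-point mechanism can be sketched as below. The text does not specify how a cue point is selected from the 80, so the random choice here, the function name and the velocity-to-gain clamping are all invented for illustration.

```python
# Hypothetical sketch of the Deep Ice cue-point mechanism: a peak attack
# velocity jumps playback to one preloaded cue point (a moment of attack
# in the pre-made sound) and scales playback volume by performed velocity.
import random

def on_peak_velocity(cue_points, velocity, rng=random):
    """Return (cue_time, volume) for the playback engine."""
    cue = rng.choice(cue_points)           # jump to one of the 80 attack moments
    volume = max(0.0, min(1.0, velocity))  # clamp performed velocity to a gain
    return cue, volume
```

Because every jump lands on an attack moment in the pre-made sound, each performed attack coincides with an electroacoustic attack, which is what makes the synchronisation of the two rhythmic fields sound uncannily exact.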

5.2 Sound and the live action
Even in a strictly defined score, each performance - the energy profile, the subtlety of articulation, timbre and gesture - will differ. When set against fixed electroacoustic sound these differences become apparent. For now we will take the viewpoint that significant sonic detachment of the electroacoustic sound from the real-time performed action is undesirable, as such sound could more easily have been pre-made in the studio. To create a greater sense of ensemble between the acoustic and electroacoustic action, a number of techniques can be employed. At the simplest level, amplification brings the audience 'closer' to the acoustic instrument. In more complex ways, real-time processing captures the performed energy. Although we may argue that the expected type of behaviour associated with an acoustic instrument relates to its size, we often experience an increasingly vague correlation as the size of the ensemble increases or if the audience is far from the stage. It is thus unnecessary for sound to follow the energetic profile of the performer in a one-to-one mapping for the listener-viewer to experience the 'live-ness' of the spectacle. In fact, such correlations can be tiring and distract from the complexities of the composition. In general I find that real-time transformation may depart significantly from the source sound before the ear experiences a sense of complete detachment from the uniqueness of the performance. Relationships may be reversed, inverted, delayed or involve one-to-many mapping functions, and the totality of the 'performance action' may nevertheless be conveyed. Live sound identity is such that even after a multitude of sound transformations a strong flavour of the original source remains - its 'rough edges' (discussed above in the difference between studio and live sound) flavouring the material with unique qualities.
Specifically, I find that the live processing of gestures, dynamics and short timbral changes is perceptually allied to the performed source, while pitch and textural transformations, although connected to the live action, could as effectively have been pre-made. Temporal segmentation (shuffling and granulation) involving a large range and a small window size will clearly effect a greater identity dislocation, yet in general the result of a transformation depends on how perceptually marked the material is within the context of the work. For example, as the first movement of Crack progresses, the idea of 'cracking' moves away from the atomic scale - reference is instead drawn from the formation of visual crack clusters and the audible-perceptual connection to cracking processes in nature. Here the electroacoustic sound consists of a combination of re-triggered segments recorded from the opening and transformation by, and of, the live signal. The re-triggered elements result in an extreme temporal dislocation of the live-electroacoustic material (discussed further below), while sound transformation is controlled by performance articulations happening in the 'now' of immediate performance energy. Attack instants control post- and pre-fade volume and filter envelopes; attack velocity controls modulation depths, filter Q and centre frequencies of processing routines affecting the complete ensemble.

5.3 Temporal issues
When computers, sound hardware and loudspeakers are involved in a performance, the idea of 'exact synchronisation' warrants brief discussion. If a loudspeaker is located away from the ensemble, a delay between acoustic and live sound will result for the listener, albeit very small (approximately 2.9 ms per metre of difference from the listener). Also, sound hardware adds an inherent degree of latency. However, maybe more significant is the general issue of amplification. Electronic works are normally louder than acoustic works of the same performer line-up. The amplification draws the listener closer to the correlation between sound and visual action, where we more clearly hear the crispness of the articulation and visually focus on the exact moment of the performed action. A side effect is that the difference between the speed of light and the speed of sound from stage to audience can be sizeable. Just 15 metres from the stage will lead to a delay of roughly 44 ms between visual attack and sounding attack, even before we deal with hardware latencies. Exact synchronisation is therefore only possible with some degree of 'anticipation' in the electroacoustic part, and this 'anticipation' would need to be calibrated for each performance geometry. Putting this fine-tuning aside, live electroacoustic sound can be exactly synchronised with the performed action or involve significant displacement in time. From a compositional perspective I encounter three main timescales:
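The acoustic delays involved can be checked directly. Assuming sound travels at roughly 343 m/s (dry air at about 20 degrees C), the per-metre delay and the stage-to-listener delay work out as:

```python
# Acoustic travel delay from stage to listener, assuming ~343 m/s
# (speed of sound in dry air at roughly 20 degrees C).

SPEED_OF_SOUND = 343.0  # metres per second

def acoustic_delay_ms(distance_m):
    """Milliseconds for sound to travel distance_m metres."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# acoustic_delay_ms(1.0)  -> about 2.9 ms per metre
# acoustic_delay_ms(15.0) -> about 44 ms at 15 metres from the stage
```

Light covers the same 15 metres in about 50 nanoseconds, so effectively all of the visual-to-sounding offset comes from the acoustic path.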

(a) 'Immediate live transformation': the performed action is immediately coupled to a live electroacoustic action. The transformation takes the live audio input as its source. Immediate live transformation is dependent on real-time sound processing.

(b) 'Structural live transformation': the electroacoustic action may happen before the performed action (via score following or gesture tracking anticipating the 'present' sounding event) or after the performed action, within a time delay in which our perception finds relevance to the present. Structural action therefore includes normal delayed actions, but also compositional decisions that are derivatives of earlier actions. Complex mixing or montage, instant synchronisation of other sounds, algorithmically controlled passages or a separation of human physical energy from the energy implied in the computer-transformed sound are all possibilities. The extent to which structural live transformation may traverse temporal scales depends on the character of the performed gesture and how memorable it is within the specific context. During some sections of the first movement of Crack, although most sound transformation is exactly synchronised with the performed action, we also hear downward tape transpositions. This simple technique allows a natural temporal detachment while maintaining clear reference to the performed gesture-profile. Performed attack instants are used to re-synchronise the timeline at different locations appropriate for the music. In the above example from the second movement of Crack, attack articulations in pre-made sound are both synchronised with and modified by the live action in a way that makes the direction of control ambiguous.

(c) 'Conceptual live transformation': the electroacoustic material, although taken directly from the live performance at one point in time, involves a temporal dislocation so large that traces of 'real-time' are remote. What is important is the concept of being 'live' - the audience knows a computer is involved in the performance (even if the technology is hidden, earlier instances of 'real-time' will make the computer conspicuous), and the 'live versus studio' sound quality issue further reinforces this assumption. Furthermore, micro-variations in performed timing and articulation are captured, embodying performance characteristics from the work as a whole. In the second half of Crack movement 1, the re-triggering of segments recorded from the opening effects a temporal detachment of some minutes from the point at which these segments were originally heard. These sounds are clearly connected to the performed material in a way that would have been impossible to 'fake' in the studio: making a studio mock-up proved impossible due to the complexity and speed of the percussionist's gestural performance data.

5.4 Spatialisation
In an acousmatic work, spatial composition and spatial performance are often central elements. In a live work, space may be an important concern, but in performance it is one of many urgent tasks to be tackled. Spatial issues tend to fall into the realm of scene-setting with a surround-sound feel, effects-style dramatic panning, or simply balancing the sound in the room. Furthermore, to prevent spatial detachment of acoustic and electroacoustic materials it is often desirable to localise the blend of sound in the area of the performers and thus restrict spatial development to spatial implication rather than explicit spatial motion. Performance gesture that in some way connects to the spatial gesture may allow a performance where the materials find meaning in spatial separation, not only spatial blend. In investigating compositionally how useful this may be, there are a number of obstacles to consider:
(i) The success of spatial projection is tightly linked to the concert situation. Differences in the loudspeakers, the space, the audience and performer locations create variations from the composed optimum.
(ii) Spatialisation of an amplified acoustic signal, without any intermediate processing, inevitably involves feedback dangers.
(iii) Pre-made sound material and pre-made spatialisation in the form of a fixed multi-channel source defeats the object of the current 'live' discussion.
Unless one performer is dedicated to a manual spatial-technical performance, which in most situations is an unrealistic burden on touring costs, an alternative type of spatialisation approach is required. A simple solution in Crack involved a multi-dimensional spatial framework. This framework consisted of motion distances, speeds, trajectories, filtering and perceptual cues, constrained within ranges. Gesture data was mapped to these variables such that the performance action played within the framework. Spatialisation was therefore automated and composed, yet embodied variation derived from the performance dynamics. This system is still under development and remains highly sensitive to each concert situation.
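The constrained framework can be sketched as a mapping from normalised gesture data into composed parameter ranges. The variable names and range values below are invented for illustration; only the principle - performance data playing within composed constraints rather than controlling space absolutely - follows the text.

```python
# Hypothetical sketch of a constrained spatial framework: each gesture
# value (normalised 0..1) is scaled into a composed range, so performance
# dynamics vary the spatialisation without escaping the framework.

SPATIAL_RANGES = {
    "distance":  (1.0, 8.0),      # perceptual distance cue, metres
    "speed":     (0.0, 2.0),      # trajectory speed, revolutions per second
    "filter_hz": (500.0, 8000.0), # low-pass cutoff used as a distance cue
}

def map_gesture(gesture):
    """Scale normalised gesture values into the composed ranges."""
    params = {}
    for name, value in gesture.items():
        lo, hi = SPATIAL_RANGES[name]
        v = max(0.0, min(1.0, value))  # clamp stray sensor data first
        params[name] = lo + v * (hi - lo)
    return params
```

Clamping before scaling is the important design point: noisy or extreme sensor data can push a variable to the edge of its composed range, but never beyond it.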

6. Discussion
Changes in compositional technique have been identified in areas of interactive virtuosity, sound type, temporal issues and spatialisation. To assess whether live performance technologies are building a framework for the evolution of a new compositional aesthetic, a thorough analytical research project would need to be undertaken. It is clear that with the use of technologies capturing micro-temporal variations, aspects normally part of the performer's interpretation become a compositional concern. The difference in sound type between pre-made and live transformation may be used constructively. Non-instrumental sounds, pre-made 'special sounds' based on gestural archetypes and real-time processing may create a continuum between a close relationship to the sound of the live instrument and a sense of otherness that enriches the complete picture. Instrument-electroacoustic works involving significant quantities of pre-made sound allow the composer to take control outside the format of the score - 50% of the work may be fixed in an electronic domain. If however all sound is created in a real-time performance process, compositional control falls predominantly on the notated score and the real-time computer implementation. Although the score-based paradigm has been the norm for works where live electronics are a subtle layer over what is essentially an amplified acoustic composition, in the 50/50 acoustic-electronic work we see a change in how the composition is represented.
