1. General considerations
One of the main contributions of acousmatic music to musical practice and thinking is the awareness of space as a central aspect of music composition. Space has been explored in instrumental and vocal music (see Zvonar 2005a and 2005b), but its exploration in acousmatic music is enhanced by the possibilities offered by recording technology, studio composition techniques and sound diffusion. Electroacoustic composition is, therefore, a privileged realm for spatial exploration, since it provides the means to control spatial settings, sonic distributions and movements both during the compositional stage (through refined ways of controlling spectral space and the registration of spatial fingerprints in the sound files) and in the performance space (through sound diffusion techniques).
Harrison (2000) notes that ‘the most exciting, the most thrilling notion’ related to space as revealed in acousmatic music is that, ‘as well as moving sounds in space, one is also transporting the listener to other listening spaces, to parallel listening universes – and a major contributor to this fact is the ability of the acousmatic medium to conjure spatial and locational references in sound’. According to Smalley (see Harrison 1999: 124–5), sound diffusion with multichannel concert systems provides a means to overcome the conflicts that result from the superimposition of composed space (understood as the spatial characteristics of a work, as conceived by the composer in the studio during the compositional process) and listening space (understood as the acoustic characteristics and constraints of the room where the work will be presented), creating diffused space as a consequence. When an electroacoustic work is diffused at a concert, its sounds are not merely placed in an inert space. The spatial characteristics of the sounds – conceived during the compositional process and sometimes embedded in the source recordings themselves – are enhanced by diffusion, articulating the space and the way it is perceived by the listeners. The notion of a pre-existent and inert space is replaced by the notion of space as something dynamic that can be moulded by the sounds. This is an interesting and challenging concept that can be incorporated by the composer as a poetic stimulus during the compositional process. In other words, the sounds come to be seen as events that create their own space. A natural step from this notion is the creation, during the compositional stage itself, of strategies for handling the space in multichannel works.
Multichannel systems expand the musical possibilities offered by the stereo set-up, since they provide enhanced ways of controlling prospective space and circumspace – to use the terminology devised by Smalley (2007). Enveloping sonic images can therefore be articulated in the studio during the compositional stage. This does not prevent composers from envisaging further enhancements of the multichannel spatial image in the concert room whenever the number of identical loudspeakers is two or more times the number of channels in the work. Although the origins of multichannel composition are linked to the compositional intention of pre-determining the spatial distribution of the sounds and precisely reinstating it at the concert, composers are no longer bound to pursue their explorations in that direction. In the performance of eight-channel works, for example, the use of two or more rings of eight loudspeakers can aid the exploration of the nuances suggested in the works, such as different distances and elevations in relation to the audience.
2. Some technical alternatives
According to Otondo (2008), the first decade of the twenty-first century has witnessed a growing interest in multichannel composition (5.1, quadraphonic and eight channels) and a decreasing interest in stereo. Over the same period, technology has become considerably cheaper and the processing power of computers has increased. There also seem to be cultural and aesthetic reasons for this interest, probably related to the enhanced experience of space afforded by multichannel composition. The possibility of composing for multiple channels increases the complexity involved in the handling of space during the compositional process, since the spatial features of the work become more evident, which, in turn, highlights their role in the articulation of the musical discourse. Research into compositional tools and ways of controlling space in multichannel composition is therefore a highly significant task.
In general terms, the handling of space in multichannel electroacoustic compositions tends to follow two different approaches. The first concerns the precise positioning of sounds at specific locations in the space and the depiction of clearly identifiable trajectories within that space. The second approach is based on a more generalised (or diffused) distribution of the sounds, without concern for their precise localisation. In this case, the identification of spatial regions such as ‘frontal’, ‘lateral’, ‘at the periphery of the space’, ‘close’ and ‘distant’, for example, is considered significant enough to articulate the spatial dimension of a work. With multichannel systems, this generalised or diffused spatial approach can generate sounds that envelop the whole audience, causing the sensation that the sounds are the space.
During the compositional stage, one strategy for controlling the distribution of the sounds in a multichannel array is based on amplitude panning, which involves, among other possibilities, the handling of volume automation and panning curves in the mixing software. This approach is usually effective when the position and movement of the sounds are intended to be precisely identifiable. Granulation of sampled sounds and spectral diffusion (using the Fast Fourier Transform, FFT), on the other hand, tend to generate convincing results when the aim is a generalised and diffused distribution of the sounds. The granulation of sampled sounds in multichannel systems produces multichannel sounds by placing numerous grains among the loudspeakers. The results obtained with this technique depend largely on the characteristics of the sound being granulated and on the parameters used in the granulation process. In general terms, however, the results tend to be diffused, although localised, sonic images that can move in the space or remain static. Spectral diffusion occurs when the spectrum of a mono or stereo sound is sliced into many narrow frequency bands (FFT bins), which are then spread over the various channels. One interesting aspect of this technique, at least in conceptual terms, is that the resulting sound is produced by all loudspeakers working together. The sound is not placed in one loudspeaker or between two loudspeakers, but scattered all over the multichannel array. The result (although this depends on the spectral characteristics of the original sound) is that the listener is enveloped by the multichannel sound. One possible interpretation of this process is that the sound creates its own space – or that the sound becomes the space. Granulation and spectral diffusion have been implemented by different authors – a few examples are discussed below.
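As a rough illustration of the spectral diffusion principle just described, the following sketch (a minimal offline approximation in Python/NumPy, not any of the implementations discussed below) slices a mono signal into FFT bins and routes each bin to one channel of an eight-channel array, so that the resulting sound is produced by all loudspeakers together:

```python
# Minimal sketch of spectral diffusion over an N-channel array (assumption:
# offline processing with NumPy; real implementations typically run as
# real-time FFT processes in environments such as Max/MSP).
import numpy as np

def spectral_diffusion(mono, n_channels=8, fft_size=1024, hop=256, seed=0):
    """Slice the spectrum of a mono signal into FFT bins and scatter the
    bins over n_channels, so the sound is reproduced by all loudspeakers."""
    rng = np.random.default_rng(seed)
    n_bins = fft_size // 2 + 1
    # Fixed random routing: each bin is sent to exactly one channel.
    routing = rng.integers(0, n_channels, size=n_bins)
    window = np.hanning(fft_size)
    out = np.zeros((n_channels, len(mono) + fft_size))
    for start in range(0, len(mono) - fft_size, hop):
        frame = mono[start:start + fft_size] * window
        spectrum = np.fft.rfft(frame)
        for ch in range(n_channels):
            masked = np.where(routing == ch, spectrum, 0)
            out[ch, start:start + fft_size] += np.fft.irfft(masked, fft_size) * window
    return out

# Usage: diffuse one second of noise over eight channels.
mono = np.random.randn(44100)
eight_ch = spectral_diffusion(mono, n_channels=8)
```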
Keyes (2004) presents three approaches to the spatialisation of stereo sounds in multichannel systems as compositional tools: amplitude panning, spectral diffusion and granulation. Spectral diffusion is also discussed by Torchia and Lippe (2004), who refer to some of their own Max/MSP patches developed for this purpose. Spectral diffusion and granulation in multichannel systems are discussed by Kim-Boyle (2006), who also dedicates a specific paper to granular techniques (Kim-Boyle 2005) and another to spectral diffusion (Kim-Boyle 2008). The granulation of sampled sounds is also the focus of the approach taken by Wilson (2008).
The Max/MSP patch designed by Kim-Boyle (2006) distributes sounds over four channels using either granulation or spectral diffusion. The positions of the sonic grains or the FFT bins are controlled by Craig Reynolds’s Boids algorithm, using a Max implementation by Eric Singer. The Boids algorithm simulates the movements of flocks of birds, in which coherent group patterns emerge from very simple rules such as the following (a minimal code sketch of these rules appears after the list):
• avoiding collision with neighbouring individuals;
• keeping approximately the same direction and speed as the neighbouring individuals; and
• not staying too far away from the other members of the group.
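The sketch below is a minimal, generic implementation of these three rules in Python/NumPy, intended only to make the behaviour concrete; it is not Reynolds’s original code nor Singer’s Max external, and all parameter values are illustrative:

```python
# A minimal sketch of the three Boids rules (separation, alignment, cohesion),
# assuming a flat 2D space; not Reynolds's or Singer's implementation.
import numpy as np

def boids_step(pos, vel, dt=0.05, radius=1.0,
               w_sep=1.5, w_align=0.5, w_coh=0.3, max_speed=2.0):
    """Advance all boids one time step. pos and vel have shape (n, 2)."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        neighbours = (dist > 0) & (dist < radius)
        if neighbours.any():
            # 1. Separation: steer away from close neighbours.
            sep = -offsets[neighbours].mean(axis=0)
            # 2. Alignment: match the neighbours' average velocity.
            align = vel[neighbours].mean(axis=0) - vel[i]
            # 3. Cohesion: drift towards the neighbours' centre of mass.
            coh = offsets[neighbours].mean(axis=0)
            new_vel[i] += w_sep * sep + w_align * align + w_coh * coh
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:
            new_vel[i] *= max_speed / speed
    return pos + new_vel * dt, new_vel

# Usage: 16 boids starting from random positions and velocities.
rng = np.random.default_rng(1)
pos, vel = rng.uniform(-1, 1, (16, 2)), rng.uniform(-0.5, 0.5, (16, 2))
for _ in range(100):
    pos, vel = boids_step(pos, vel)
```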
In Kim-Boyle’s implementation, the position of the Boids controls the amplitude of the signal sent to each of the loudspeakers, allowing the virtual positioning of the sounds in the space. The patch allows the visualisation of the movement performed by the Boids, and the x and y coordinates can be altered with simple mathematical functions before they are sent to the visualisation screen. It is therefore possible to restrict the movement of the Boids to certain regions of the space or to make them rotate in the space, among other behaviours that are not available with Singer’s implementation alone. The patch developed by Kim-Boyle also allows data from the real-time analysis of an audio signal to be used as an interactive way of controlling the attractor that influences the movement of the Boids, thus modifying the movements in the space.
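The kind of mapping described here can be sketched as follows: a boid’s position determines a gain for each loudspeaker, with nearer loudspeakers receiving more signal. The inverse-distance weighting and the circular layout are assumptions made for the sake of illustration, not details of Kim-Boyle’s patch:

```python
# Hedged sketch: a boid's x/y position controls the gain sent to each
# loudspeaker. The inverse-distance law is an illustrative assumption.
import numpy as np

# Eight loudspeakers on a unit circle (0 degrees = front centre).
angles = np.deg2rad(np.arange(0, 360, 45))
speakers = np.column_stack([np.sin(angles), np.cos(angles)])  # shape (8, 2)

def position_to_gains(xy, rolloff=2.0):
    """Return per-loudspeaker amplitudes for a virtual source at xy."""
    dist = np.linalg.norm(speakers - xy, axis=1)
    gains = 1.0 / (dist + 1e-3) ** rolloff      # closer speaker -> louder
    return gains / np.sqrt(np.sum(gains ** 2))  # constant-power normalisation

# Usage: a source just right of centre feeds mostly the right-hand speakers.
print(np.round(position_to_gains(np.array([0.5, 0.0])), 3))
```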
The Boids algorithm is also used by Davis and Rebelo (2005) as a tool to control the distribution of sounds in the space. One of the examples mentioned by the authors is a computational model based on the distribution of male and female frogs in a certain area. Nature, in this case, is used not only as a model for controlling the position of the sounds, but also as an inspiration for the generation of the sounds themselves.
Kim-Boyle (2005) discusses a Max/MSP/Jitter patch for multichannel spatialisation that makes use of three Jitter objects designed for working with particle systems: jit.p.shiva, jit.p.vishnu and jit.p.bounds. The purposes of this implementation are similar to those presented in Kim-Boyle (2006), although in this case the movement of the particles is not controlled by the Boids algorithm. The trajectories of the particles control the movement of the grains generated by a sampled-sound granulator over five channels. Kim-Boyle notes that assigning grain positions according to the movement of the particles produces more natural and complex results than distributing the grains randomly in the space. Changes in the parameters of the three Jitter objects transform the movement of the particles and allow the specification of regions that they must avoid, among other aspects. Real-time alterations to the parameters allow the generation of complex behaviours. According to Kim-Boyle, one of the advantages of using particle systems is that they model and emulate natural phenomena. In his patch, the positions of the particles are continually mapped to the amplitudes of the five channels, thus controlling the movements of the sounds in the space.
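For readers unfamiliar with particle systems, the following minimal sketch shows the general idea in Python/NumPy: particles are emitted with a lifespan, drift each frame and are culled when they expire, and their positions can then be mapped to channel amplitudes in the same way as the boid positions above. It is a generic illustration, not a reconstruction of the Jitter objects named here:

```python
# A generic particle-system sketch (not the Jitter objects jit.p.shiva,
# jit.p.vishnu or jit.p.bounds): emit, move, age and cull particles.
import numpy as np

rng = np.random.default_rng(2)

def emit(n):
    """Emit n particles: columns are x, y, vx, vy, remaining life (frames)."""
    return np.column_stack([rng.uniform(-1, 1, (n, 2)),
                            rng.normal(0, 0.02, (n, 2)),
                            rng.integers(30, 120, n)])

def step(particles, gravity=-0.001):
    """Move particles, apply a gravity-like pull, age them and cull the dead."""
    particles[:, 0:2] += particles[:, 2:4]   # position += velocity
    particles[:, 3] += gravity               # constant downward acceleration
    particles[:, 4] -= 1                     # one frame of life spent
    return particles[particles[:, 4] > 0]

# Usage: run 100 frames, topping the system up with fresh particles.
particles = emit(50)
for _ in range(100):
    particles = np.vstack([step(particles), emit(2)])
```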
Kim-Boyle (2008) discusses some of his previous implementations (already mentioned in Kim-Boyle 2005 and 2006) as well as a more recent one based on a stochastic spatialisation technique. The latter allows the control and transformation of coordinates for several particles with just a few spatial controls that do not ‘overwhelm the user with massive banks of control data’ (2008: 5). In this implementation, ‘the spatial location of FFT bins is randomly assigned by using a noise control signal’ (2008: 5). They are ‘contained within boundaries defined by a global X/Y parameter and the overall location of the bins within two-dimensional space can be controlled with the mouse, joystick or assigned to circular trajectories’ (2008: 5). According to Kim-Boyle, this implementation yields perceptual results equivalent to or better than his implementation with Boids, with the advantage of a more intuitive user interface.
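One possible reading of this stochastic scheme can be sketched as follows: every FFT bin receives a random location, perturbed by a noise signal and clipped to a boundary around a movable centre. Function and parameter names here are illustrative assumptions rather than Kim-Boyle’s:

```python
# A sketch of the stochastic idea described above: random bin locations,
# jittered by a noise signal and clipped to a global X/Y boundary around a
# centre the user can move. Names and values are illustrative assumptions.
import numpy as np

def stochastic_bin_positions(n_bins, centre, bound=0.3, jitter=0.05,
                             rng=np.random.default_rng()):
    """Return an (n_bins, 2) array of bin locations inside the boundary."""
    pos = centre + rng.uniform(-bound, bound, (n_bins, 2))  # random assignment
    pos += rng.normal(0, jitter, (n_bins, 2))               # noise control signal
    return np.clip(pos, centre - bound, centre + bound)

# Usage: 513 bins scattered around a centre that could follow mouse/joystick
# input or a circular trajectory.
positions = stochastic_bin_positions(513, centre=np.array([0.2, -0.1]))
```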
Wilson (2008) describes a powerful spatial swarm granulator based on the Boids algorithm and largely implemented in SuperCollider. The system is based on three SuperCollider classes that control the movement of the Boids within a 2D or 3D space filled with a large number of loudspeakers (52 in one of the figures shown). It can provide ‘diffuse but localised’ effects when the granulation is assigned to sub-arrays of loudspeakers. The system offers visualisation capabilities in 2D and 3D that can help the user to control the distribution of the sounds. After determining the position of a Boid (which represents a granulation stream), the system calculates the nearest loudspeaker and hard-assigns the grain to that speaker. According to the author, this hard-assigning approach shows no perceptual disadvantages in comparison with other alternatives when a large number of short grains is used. This is because, when dealing with several simultaneous grain streams, ‘individual channels consisting of short grains with small variations in offset, delays due to distance, etc. serve adequately to create decorrelation between active channels (and thus the desired impression of diffuseness and/or increased physical volume)’ (Wilson 2008: 2). Subjective tests showed that the system ‘was deemed successful in both creating the effect of “localised but diffuse” granular sources, and in convincingly moving those sources through a performance space’ (2008: 4).
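The nearest-loudspeaker hard assignment can be sketched in a few lines; the coordinates, speaker layout and array sizes below are illustrative assumptions, and the sketch is in Python/NumPy rather than Wilson’s SuperCollider classes:

```python
# A sketch of the hard-assignment step described above: each grain (one boid
# per granulation stream) is sent entirely to whichever loudspeaker is
# nearest to it. Layout and sizes are illustrative assumptions.
import numpy as np

def nearest_speaker(grain_pos, speaker_pos):
    """Return the index of the loudspeaker closest to each grain.
    grain_pos: (n_grains, 3), speaker_pos: (n_speakers, 3)."""
    dist = np.linalg.norm(grain_pos[:, None, :] - speaker_pos[None, :, :], axis=2)
    return np.argmin(dist, axis=1)

# Usage: 200 grains scattered in a 3D volume, 52 loudspeakers at random points.
rng = np.random.default_rng(3)
grains = rng.uniform(-1, 1, (200, 3))
speakers = rng.uniform(-1, 1, (52, 3))
assignment = nearest_speaker(grains, speakers)  # one speaker index per grain
```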
3. Examples of compositional applications
Percursos Enredados, Maresia and Sons Adentro are eight-channel acousmatic works that I composed using the eight-channel configuration presented in figure 1. In these works, I explore different kinds of spatial behaviour. The precise position of a sound and the tracking of its trajectory within the eight-channel array are not necessarily important. Rather, generalised perceptions are considered to be effective ways of appreciating the spatial dimension of the works: ‘sounds that move’ as opposed to those that ‘do not move’; trajectories that cross the space versus movements at the periphery of the eight channels; ‘close’ versus ‘far away’; and movements with a certain direction (e.g. ‘front to back’) versus erratic movements. Likewise, the identification of different speeds of movement is meant to be more significant than the trajectories displayed by the sounds. Several passages in the works explore the articulation of ‘multichannel sonic images’ achieved by an integrated control of the whole set of channels. Instead of panning sounds between pairs of loudspeakers within the eight-channel array, the sonic images derive, most of the time, from the sum of the frequencies and amplitudes delivered by all (or most of) the channels working in conjunction. This was done with Max/MSP patches that take stereo sound files as input and allow the manipulation of the processing parameters in real time. Spectral diffusion and granulation of sampled sounds are among the techniques used for creating some of the eight-channel sounds articulated in these works, as discussed below.
Figure 1 Eight-channel configuration.
3.1. Percursos Enredados
One of the main features of Percursos Enredados (‘interwoven paths’, in Portuguese) is the use of different kinds of sound sources and spectromorphologies to ‘sculpt’ the space. The musical discourse is articulated by the relationship between sounds with a strong gestural content and layers of texture, and by the contrast between resonant and ‘dry’ sounds. The creation of enveloping eight-channel sonic images is the compositional aim in many passages of the work. The opening section presents the two main spatial behaviours explored throughout the piece: a diffused and enveloping sound of a resonant attack is set to interact with another sound that moves around the space (Sound Example 1). In this case, the ‘moving’ sound occurs at ‘ground level’, but in other passages the role of ‘moving sound’ is carried out by sounds that ‘fly’ over the space (Sound Example 2). The role of ‘enveloping’ sound is usually carried out by resonant attacks, but in some of the middle sections long and slowly moving textures also provide this spatial outcome. There are other spatial behaviours in the work as well. Sounds of wind chimes, for example, are not set to move through the space, nor to sound as a diffused image: their frequencies scintillate over the eight channels as if they were positioned on the loudspeakers. At the end of the work, the sound of a stone moving over a rough surface appears localised as a wide frontal sonic image.
Spectral diffusion is a technique used extensively in Percursos Enredados in order to obtain enveloping sounds. A spectral diffusion patch was implemented in eight channels as a development of a four-channel example designed by Erik Oña. The main processing engine of the patch is displayed in figure 2. It slices the spectrum of the sounds using the FFT and places the bins in different channels according to the allocations defined in a look-up table written into a buffer that can be changed in real time.
Figure 2 Spectral diffusion patch – FFT engine.
The left and right channels of a stereo file are ‘spectrally diffused’ to four different channels each. The patch also allows the rotation of the whole eight-channel image through the manipulation of parameters such as the angle and speed of rotation. An envelope follower (which can be turned on and off) tracks the amplitude of the source sound and triggers random changes in the processing parameters whenever it detects an amplitude above a user-defined threshold.
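The core routing idea can be sketched as follows, under stated assumptions: the left channel feeds four of the eight outputs and the right channel the other four, a look-up table decides which output each bin goes to, and rotating the image simply shifts every output index. This is an offline Python/NumPy approximation of the principle, not the Max/MSP patch itself:

```python
# Minimal sketch of the look-up-table routing: which output each FFT bin of
# a stereo frame is sent to, with a rotation of the whole eight-channel image.
# The channel assignments and table contents are illustrative assumptions.
import numpy as np

N_BINS, rng = 513, np.random.default_rng(4)
# Look-up table: for each bin, which of the four destinations it goes to
# (in the patch this table lives in a buffer and can be edited in real time).
lookup = rng.integers(0, 4, N_BINS)
LEFT_OUTS, RIGHT_OUTS = [0, 2, 4, 6], [1, 3, 5, 7]

def route_frame(left_spec, right_spec, rotation=0):
    """Distribute the bins of one stereo FFT frame over eight output spectra."""
    out = np.zeros((8, N_BINS), dtype=complex)
    for slot in range(4):
        mask = lookup == slot
        out[(LEFT_OUTS[slot] + rotation) % 8, mask] = left_spec[mask]
        out[(RIGHT_OUTS[slot] + rotation) % 8, mask] = right_spec[mask]
    return out

# Usage: route one frame, then the same frame rotated by two speaker positions.
left = rng.normal(size=N_BINS) + 1j * rng.normal(size=N_BINS)
right = rng.normal(size=N_BINS) + 1j * rng.normal(size=N_BINS)
frame_a, frame_b = route_frame(left, right), route_frame(left, right, rotation=2)
```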
The results obtained with this patch vary according to the spectrum of the original sound. In general, sounds with a broad spectral content tend to sound diffused, providing an enveloping sonic image. Sounds with energy concentrated in specific regions of the spectrum, on the other hand, usually sound more localised. In such cases, and depending on the time interval defined for the interpolation between two different magnitude distributions, fast movements of the mouse on the look-up table result in convincing sonic movements in the eight-channel set-up. Both possibilities were used for handling the space in Percursos Enredados and other works.
An eight-channel FFT equaliser (Equalator), designed by Jonty Harrison, was extended with some additional features and also used to generate some of the sounds in Percursos Enredados. The Equalator patch filters each channel of a stereo sound file in four different ways and distributes the eight filtered versions (four for each channel of the original stereo file) among the eight loudspeakers. The added feature is a module that gradually defines the magnitudes of the FFT bins over time and writes them to eight separate look-up tables (one for each channel), according to configurations set beforehand by the user (including a parameter that controls the speed at which the look-up tables change over time). The magnitudes registered in the look-up tables are used by the FFT equaliser and ultimately define the spectrum of the resulting eight-channel sound. As with the spectral diffusion patch, this version of the Equalator enables the rotation of the whole eight-channel image by any angle, at speeds defined by the user.
The main difference between the results obtained with the Spectral Diffusion patch and the Equalator is that, in the former, the whole spectrum of the sound is diffused over the eight-channel array, whereas in the latter only certain regions of the spectrum are kept. In Percursos Enredados, this version of the Equalator is used as a tool to generate pitched material from source sounds that are not necessarily pitched. This is done by keeping only a few narrow frequency bands of the original sound and gradually changing them over time. The results are long sounds with a slowly changing spectrum that do not tend to appear localised in the space, which, in turn, creates diffused sonic images such as the ones presented in Percursos Enredados between 7′59″ and 10′24″ (Sound Example 3).
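One way to picture this kind of processing is sketched below: per-channel magnitude tables keep only a narrow frequency band each and drift slowly over time. This is a hedged illustration of the general idea described above, not Harrison’s Equalator or its added module, and all parameter values are assumptions:

```python
# Hedged sketch: per-channel magnitude look-up tables that keep one narrow,
# slowly drifting band each, turning unpitched input into pitched material.
import numpy as np

N_BINS, N_CH = 513, 8
rng = np.random.default_rng(5)

def drift_band_masks(centres, width=4.0, drift=0.5):
    """Drift each channel's band centre slightly and return the new centres
    together with an (N_CH, N_BINS) array of magnitude look-up tables."""
    centres = np.clip(centres + rng.normal(0, drift, N_CH), width, N_BINS - width)
    bins = np.arange(N_BINS)
    # Gaussian-shaped magnitude curve around each channel's centre bin.
    masks = np.exp(-0.5 * ((bins - centres[:, None]) / width) ** 2)
    return centres, masks

# Usage: filter one FFT frame of noise into eight narrow, slowly drifting bands.
centres = rng.uniform(20, N_BINS - 20, N_CH)      # one band centre per channel
frame = np.fft.rfft(np.random.randn(1024))        # 513 bins
centres, masks = drift_band_masks(centres)
eight_channel_spectra = masks * frame             # (8, 513) filtered spectra
```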
3.2. Maresia
In Maresia (‘sea mist’, in Portuguese), the clear reference to sea sounds has an evocative character that is enhanced by the eight channels as the audience becomes surrounded by, or even ‘submerged’ in, water sounds. The shape of the waves, the energy they accumulate and release, and their amplitude and spectral profile over time were taken as abstract models for structuring the composition. They depict the most striking spatial behaviour, which happens a few times in the work: a big wave that starts in the front loudspeakers and sweeps across the space towards the rear speakers, returning to the frontal ones at the end of the gesture. The image of a wave that ‘washes’ through the audience and returns to the sea is conveyed by such events (Sound Example 4). Using the terminology devised by Smalley (2007), it can be said that they depict a vectorial space that invades the listener’s personal space. Vectorial space is understood as ‘the space traversed by the trajectory of a sound’ (Smalley 2007: 37). In a concert situation, the use of more than one ring of eight channels is particularly useful to enhance the gesture of the big wave. Another kind of spatial setting occurs towards the end of the piece, when the sonic image is concentrated in the frontal loudspeakers (as in Percursos Enredados). This quality, together with a fade-out, gives the impression of the sea receding into the distance.
In contrast with the spatial behaviour depicted by the big waves, some sections explore the erratic movements of granular sounds, sometimes superimposed on the sounds of the waves (Sound Example 5). Sounds generated by granulation in eight channels are explored both in Maresia and in Sons Adentro. An eight-channel granulation patch was adapted from a 12-channel patch designed by Peter Batchelor (ultimately derived from a stereo granulator by Erik Oña). The placement of each grain is decided randomly within a range of channels defined beforehand by the user. The results obtained with this patch vary from sounds that are diffused in the space to more localised ones, depending on several factors such as the characteristics of the sounds themselves and the values used for each parameter of the granulation process. Some of the sounds obtained with this tool present fleeting trajectories with very interesting compositional potential, such as the zigzag movement from the front to the rear loudspeakers which opens Sons Adentro.
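The random channel assignment described here can be sketched as follows: each grain is windowed out of the source and sent whole to a channel drawn at random from a user-defined range. This is a minimal offline Python/NumPy illustration of the principle, not Batchelor’s or Oña’s patch, and all values are assumptions:

```python
# Minimal sketch of multichannel granulation with random channel assignment
# within a user-defined range. Parameter values are illustrative assumptions.
import numpy as np

def granulate(source, n_channels=8, channel_range=(2, 6), n_grains=400,
              grain_len=2205, out_len=441000, rng=np.random.default_rng(6)):
    """Scatter windowed grains of `source` over the channels in `channel_range`."""
    out = np.zeros((n_channels, out_len))
    window = np.hanning(grain_len)
    lo, hi = channel_range
    for _ in range(n_grains):
        src_start = rng.integers(0, len(source) - grain_len)
        out_start = rng.integers(0, out_len - grain_len)
        channel = rng.integers(lo, hi + 1)          # random pick within the range
        grain = source[src_start:src_start + grain_len] * window
        out[channel, out_start:out_start + grain_len] += grain
    return out

# Usage: ten seconds of grains confined to channels 2-6 of an eight-channel bed.
source = np.random.randn(44100 * 5)
eight_ch = granulate(source)
```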
3.3. Sons Adentro
In Sons Adentro (‘into the sounds’, in Portuguese), the erratic movement of the opening granular gesture is subsequently contrasted with resonant attacks that present an enveloping spatial character (Sound Example 6). It can be noticed, therefore, that the same kind of dichotomy observed in the first section of Percursos Enredados – spatial movement versus enveloping sound – is explored in this passage of Sons Adentro.
The diversity of sound materials is noticeable throughout the work. Environmental sounds provide diversity in terms of sonic content, time flow and spatial setting. The ‘night scene’ that gradually grows out of the ‘abstract’ sounds from around 6′50″, for example, depicts a certain space (clearly an outdoor one), carrying the sensation of distance, openness and vastness that is enhanced in the eight-channel configuration. The sound of a bouncing ball is superimposed on the sounds of the night, conveying the image of an indoor space. This, in turn, is enhanced by the fact that, after ‘rolling’ around the front loudspeakers, the sound appears to ‘bump’ into a wall at the front left of the audience – this sound occurs precisely when the panning movement reaches the frontal left loudspeaker (Sound Example 7). There is, therefore, a clear contrast between ‘outdoors’ and ‘indoors’, which is also explored in other parts of the piece.
Space, as an element of connection between ‘abstract’ and environmental sounds, becomes prominent when sounds of birds are introduced at 9′59″, triggered by a sound that ‘flies’ over the eight-channel set-up (Sound Example 8). One of the strategies used to relate different materials was to assign the role of ‘triggers’ to sounds that would put other materials in motion. Among the three works mentioned here, Sons Adentro is the one that most explores discrete positions and trajectories of the sounds. In some passages of the work, gestures that happen only in the frontal channels were used to set other sounds in motion by ‘triggering’ their (sometimes) erratic movements. Marbles and bouncing balls that roll inside the eight-channel configuration or that ‘bounce’ between the loudspeakers are some of the sounds set in motion by others (Sound Example 9). Another spatial aspect explored in Sons Adentro is the enhancement of the space already present in the environmental sounds. The sensation of distance, observed in the ‘night scene’, for example, is revisited at the end of the work with the sound of an aeroplane that takes up the canopy of the sonic space.
4. Summary
This article presents some notions of space in electroacoustic music and discusses the handling of space in three eight-channel works by the author. The works are addressed both in their musical and technical aspects – in the latter case, through references to the Max/MSP patches used for controlling the spatial dimension of the sounds in eight channels. Possibilities explored by other authors in the design of technical solutions for multichannel composition are also mentioned.
Further developments in this area may include the design of Max/MSP patches capable of providing refined ways of mapping the spectromorphological characteristics of sounds onto parameters for their spatial distribution. The control of sound distribution in live-electronic works using data collected in real time from the instruments is another interesting field of research. The use of sensors and gestural interfaces (accelerometers, for example) as devices for controlling the spatial distribution of sounds is also a thriving area of creative exploration.
Multichannel electroacoustic composition is a challenging and stimulating field that integrates aesthetic and technical explorations in the service of musical expression.