
Audification and Non-Standard Synthesis in Construction in Self

Published online by Cambridge University Press:  26 February 2014

Ryo Ikeshiro*
Affiliation:
Department of Music and Department of Computing, Goldsmiths, University of London, New Cross, London, SE14 6NW, UK

Abstract

The author's Construction in Self (2009) belongs to the interdisciplinary context of auditory display/music. Its use of data at audio rate could be described as both audification and non-standard synthesis. The possibilities of audio-rate data use and the relation between the above descriptions are explored, and then used to develop a conceptual and theoretical basis of the work.

Vickers and Hogg's term ‘indexicality’ is used to contrast audio with control rate. The conceptual implications of its use within the digital medium and the possibility for the formation of higher-order structures are discussed. Grond and Hermann's notion of ‘familiarity’ is used to illustrate the difference between audification and non-standard synthesis, and the contexts of auditory display and music respectively. Familiarity is given as being determined by Dombois and Eckel's categories of data. Kubisch's Electrical Walks, Xenakis's GENDYN and the audification of seismograms are used as examples. Bogost's concept of the alien is introduced, and its relevance to the New Aesthetic and Algorave is discussed. Sound examples from Construction in Self are used to demonstrate the varying levels of familiarity or noise possible and suggested as providing a way of bridging the divide between institutional and underground electronic music.

Copyright © Cambridge University Press 2014

1. Introduction

Sonification is a practice found in various disciplines from auditory display to music and other sound arts. While attempts have previously been made to restrict the usage of the term to the domain of science, notably by Hermann (2008: 3), even he now appears to concede that this is no longer applicable (Grond and Hermann 2012: 213).

The factor determining whether a sound is sonification has been variously claimed to be the intention of the composer or the sound's function (Barrass and Vickers 2011: 146, 152), and the perspective of the listener (Vickers and Hogg 2006: 213) or the circumstances of listening (Grond and Hermann 2012: 214). Furthermore, Grond and Hermann imply the possible formation of a new discipline and criteria, stating that ‘sound becomes sonification when it can claim to possess explanatory powers: when it is neither solely music nor serves as mere illustration’ (Grond and Hermann 2012: 213). Vickers and Barrass go so far as to describe this as the second aesthetic turn, referring to the middle ground between information visualisation and data art (Barrass and Vickers 2011: 156).

The author's Construction in Self (2009) – referred to as CiS from here on – is an example of an art work which straddles these blurred boundaries. It is a generative work based on the Lorenz dynamical system, a representation of forced dissipative hydrodynamic flow and a model for convection currents proposed by Edward Lorenz (Lorenz 1963: 130). An excerpt can be heard in Sound example 1. The work can be considered both as audification of a mathematical system and as non-standard synthesis – a term first used by Holtzman to describe an approach to synthesis where ‘sound is specified in terms of basic digital processes rather than by the rules of acoustics or by traditional concepts of frequency, pitch, overtone structure, and the like’ (Holtzman 1979: 53). This article investigates the relation between these two frames of reference in which CiS operates in order to assess the work's theoretical and conceptual basis.
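For reference, the system consists of three coupled ordinary differential equations, given in Lorenz (1963) as:

```latex
\begin{aligned}
\dot{x} &= \sigma (y - x), \\
\dot{y} &= r x - y - x z, \\
\dot{z} &= x y - b z,
\end{aligned}
```

where σ is the Prandtl number, r the Rayleigh number – the parameter varied in CiS, as discussed in section 6 – and b a geometric factor.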

Audification and non-standard synthesis both use data at audio rate, which distinguishes them from other techniques of sonification and algorithmic composition that usually operate at the more common control rate. Audio-rate data use thus offers alternative possibilities. Most discussions concerning the use of data at audio rate have focused either on its design in the context of auditory display (see for example Dombois and Eckel 2011; Grond and Hermann 2012: 218–19) or on its potential as compositional material in the context of non-standard synthesis (see for example Di Scipio 1995; Hoffmann 2009). Research on non-standard synthesis from the viewpoint of auditory display, and on audification as material in the context of music, has been limited. Such discussions, as well as a comparison between audification and non-standard synthesis, can certainly enrich the debate concerning the use of data at audio rate and further the possibilities offered by the interdisciplinary context outlined above.

2. Data rate

2.1. Audio rate and control rate

‘Audio rate’ and ‘control rate’ are accepted terminology within the field of computer music. Typically in audio programming environments, the former refers to rates of around 44,100 Hz or more, and the latter to rates from around 500 Hz down to far less. These correspond to Karlheinz Stockhausen's time spheres of ‘frequencies’ and ‘rhythms’ respectively (Stockhausen and Barkin 1962: 42–4). Data at audio rate is by definition sufficient by itself to be used directly as amplitude values. This equates to audification (Dombois and Eckel 2011: 301). Within the context of auditory displays, it is one of several sonification techniques (Walker and Kramer 2004: 152–5). In contrast, parameter mapping sonification (PMSon) (Grond and Berger 2011: 363) usually operates at slower speeds such as control rate.
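To make the distinction concrete, the following minimal Python sketch treats an arbitrary data series directly as sample amplitudes – the defining gesture of audification. The function name and the simple peak normalisation are illustrative assumptions rather than any standard tool; only generic NumPy and standard-library calls are used.

```python
import wave

import numpy as np

def audify(data, path="audification.wav", sr=44100):
    """Audification: use a data series directly as sample amplitudes.

    The only intervention is signal conditioning: removing the DC
    offset and normalising into [-1, 1] before quantising to 16-bit PCM.
    """
    x = np.asarray(data, dtype=np.float64)
    x = x - x.mean()                    # remove DC offset
    peak = np.abs(x).max()
    if peak > 0:
        x = x / peak                    # normalise to [-1, 1]
    pcm = (x * 32767).astype(np.int16)  # quantise to 16-bit samples
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(sr)
        f.writeframes(pcm.tobytes())

# e.g. one second of arbitrary measurement data played back at audio rate:
audify(np.random.randn(44100), "noise.wav")
```

A control-rate stream, by contrast, would first have to be mapped onto the parameters of some synthesis process before any such audio file could be produced.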

2.2. ‘Indexicality’

Vickers and Hogg present a two-dimensional aesthetic perspective space,Footnote 1 which illustrates the aforementioned blurring of boundaries between different contexts for sonification. Table 1 shows a simplified outline of a section from their schematic. In their diagrams, the crossover is most evident in the horizontal axis, which represents a continuum from ars informatica to ars musica – or auditory display to music respectively. The vertical axis represents their notion of ‘indexicality’, which refers to ‘how strongly a sound sounds like the thing that made it’ (Vickers and Hogg 2006: 213–14). The use of data at audio rate is given the highest indexicality, being the most direct method. Their measure of indexicality offers one possible starting point in conceptualising the difference between audio rate (audification) and control rate (PMSon).

Table 1 A simplified version of Vickers and Hogg's schematic

On the auditory display side of the diagram (the second column of Table 1), indexicality is related to the nature of the mappings, which is represented by a continuum from the direct to the metaphorical or interpretive. Audification is the most direct (e.g. that of seismic data). The more metaphorical and interpretive mappings have low indexicality (e.g. parameter mapping sonification). In differentiating between the various sonification techniques, the notion of indexicality ranging from direct to metaphorical is comparable to Scaletti's use of the level of directness (Scaletti 1994: 224) and Kramer's semiotic spectrum ranging from the analogic to the symbolic (Kramer 1994: 21–9).

On the music side of the diagram (the third column of Table 1), low indexicality is associated with the use of traditional music scores – in other words, at the level of individual notes, or control rate. The use of sound recordings such as soundscapes is the only example of audio-rate data use in the context of music given by Vickers and Hogg,Footnote 2 and thus it has the highest level of indexicality. The inclusion of works based on field recordings can be somewhat justified. The playback of audio recordings is itself a form of audification in Dombois and Eckel's classification (Dombois and Eckel 2011: 302). Aesthetic similarities between the use of audification in auditory display and soundscapes have also been noted (Polli 2012: 257–68). However, in the reduction proposed in Table 1, perhaps a more suitable alternative to soundscapes which also uses data at audio rate within the context of music might be non-standard synthesis: see Table 2.

Table 2 A modification of Vickers and Hogg's schematic with the addition of non-standard synthesis

3. Non-standard synthesis

3.1. Non-standard synthesis

Most synthesis methods can be described as being ‘standard’, in the sense that they follow acoustic models. These include additive and subtractive synthesis, waveshaping, FM, RM, AM and physical modelling. In contrast, the ‘non-standard’ approach involves no pre-existing acoustic model (Laske 1989: 55): the main early examples of its implementation are Gottfried Koenig's SSP, Herbert Brün's SAWDUST and Iannis Xenakis's GENDYN.

On the music side of Vickers and Hogg's diagrams, the equivalent practice to parameter-mapping sonification could be described as algorithmic composition. As Di Scipio notes, the distinction between non-standard synthesis and algorithmic composition is a matter of degree (Di Scipio 1994: 203–4): the former could be described as an example of the latter at audio rate. This corresponds to the distinction between audification and sonification generally, as shown in Table 3.

Table 3 Categories of the use of data as audio

3.2. Digital

The aforementioned historical examples of non-standard synthesis occur in the digital realm. Similarly, although audification predates the digital age,Footnote 3 it has certainly been greatly facilitated by it. Various advantages offered by digital technology are due in part to the numerical representation of data as digital code, resulting in the possibility of their algorithmic manipulation (Manovich 2001: 49–50). This defining characteristic of new media, according to Lev Manovich, enables ‘transcoding’, the translation of input such as real-world data into a different format (Manovich 2001: 64), which forms the basis of sonification. Codification of signals also occurs in the analogue realm – in video, for example. However, digital technology has certainly simplified practices of transcoding such as audification. Friedrich Kittler also notes how any medium can be translated into any other, ‘erasing the very concept of medium’ (Kittler 1999: 2). Therefore, data is data, regardless of whether it originates from sensors taking readings from the external world or from processes occurring within a computer. Thus sonification of real-world phenomena and sonification of simulations of phenomena, or even of data unrelated to phenomena based in reality, are equivalent in the digital domain. Similarly, audification of data from the real world is analogous to non-standard synthesis.

3.3. ‘Out of nothing’

Audification is the simplest and most direct of all sonification techniques. Although subjective decision-making is still necessary, most notably at the signal conditioning stage (Dombois and Eckel 2011: 313–15), it requires far less human intervention than PMSon. This is because, unlike audio-rate data, control-rate data is insufficient by itself to be used directly as audio. As Dombois and Eckel state, with audification, ‘no sound engines are needed, no instruments, no libraries of samples, no acoustic inputs’ (Dombois and Eckel 2011: 319), which is not the case with PMSon. Correspondingly, GENDYN is described as being able to produce music ‘out of nothing’, with no audio or control input: its output is dictated purely by the lines of code and the probability distributions that constitute the program (Hoffmann 2000: 31). Thus the difference between audification and PMSon is not merely quantitative in the rate of data, but also qualitative in the amount of external input necessary in order to produce audio.
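The principle can be illustrated with a deliberately simplified sketch of dynamic stochastic synthesis in the spirit of GENDYN. This is not Xenakis's actual program, which uses second-order random walks and elastic barriers among other refinements: here a first-order random walk on the breakpoints and a simple clip to fixed bounds stand in for those mechanisms.

```python
import numpy as np

rng = np.random.default_rng(1)  # the only 'input' is a seed

def gendyn_sketch(n_points=12, n_periods=400):
    """Simplified dynamic stochastic synthesis (after GENDYN).

    A waveform period is a polygon of breakpoints. Between periods,
    each breakpoint's amplitude and duration take a random step and
    are clipped to fixed bounds (a crude stand-in for Xenakis's
    elastic barriers). Linear interpolation between breakpoints
    yields the samples.
    """
    amps = rng.uniform(-0.5, 0.5, n_points)  # breakpoint amplitudes
    durs = rng.uniform(4.0, 40.0, n_points)  # breakpoint durations in samples
    out = []
    for _ in range(n_periods):
        amps = np.clip(amps + rng.normal(0.0, 0.05, n_points), -1.0, 1.0)
        durs = np.clip(durs + rng.normal(0.0, 1.0, n_points), 2.0, 60.0)
        for i in range(n_points):
            a0, a1 = amps[i], amps[(i + 1) % n_points]
            out.append(np.linspace(a0, a1, int(durs[i]), endpoint=False))
    return np.concatenate(out)

signal = gendyn_sketch()  # audio 'out of nothing': no external input at all
```

Because both the waveform's shape and its period length drift, pitch, timbre and amplitude all emerge from the same micro-level process – the higher-order structures discussed in section 4.5.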

3.4. Materiality

Non-standard synthesis takes advantage of the minimum time-unit at which sound can be manipulated, allowing digital audio samples to become directly the material upon which compositional processes are applied (Di Scipio 1995: 39–40). The sonic results of non-standard synthesis are sometimes regarded as idiosyncratic anomalies when compared to the electroacoustic canon. In the case of Xenakis, this is in part due to the lack of affinities with IRCAM and GRM after his split with Schaeffer. Using Gerhard Eckel's terms to contrast Xenakis's approach with the two leading electroacoustic institutions in France, Peter Hoffmann identifies IRCAM with the ‘Technology of Writing’, GRM with the ‘Technology of Editing’, and CEMAMu, Xenakis's own research institute, with the ‘Technology of Computing’. Broadly speaking, at IRCAM, audio analysis is undertaken for the purposes of score-following and the combining of instruments with electroacoustic or tape parts, while at GRM, combinations and transformations of recorded sounds are explored. Both approaches involve the use of computers, but merely to extend the possibilities of notation in the former and of tape music in the latter.Footnote 4 CEMAMu, however, is concerned with the use of computers for the advancement of music that is unique to computers – more specifically, the manipulation of audio samples. Therefore, although Xenakis's programming style is certainly idiosyncratic, his use of the computer is highly idiomatic to the medium and to the technology with which much electroacoustic music is currently created (Hoffmann 2009: 59–63). Hence audio-rate use of data is one possible idiomatic use of digital technology, taking into account its materiality.

4. Audification

4.1. Electrical Walks

Christina Kubisch's Electrical Walks (2003) is an example of audification of real-world data in real time. In the piece, participants walk through a city or town wearing a specially designed device with headphones that converts electromagnetic waves into audio. They are free to follow a prescribed route or to explore the streets of their own accord. It has been presented in various countries. Being a combination of audification and soundwalk, it would appear to belong halfway between audifications of seismic data and soundscapes in Vickers and Hogg's schematic.

Kubisch states that her work reveals a previously hidden and undetected world (IKON 2006). However, Seth Kim-Cohen is dismissive of such claims, which he finds to be typical of interpretations of Walks. The uncovering of the presence of electromagnetic waves alone is insufficient as an aesthetic validation of the work for Kim-Cohen, as the resulting sounds cannot be understood by humans in any meaningful way:

To ‘read’ the work as if it is conveying a message – as if it is the product of a legible intention – seems forced …

As far as the experience of art is concerned, the revelation of phenomena is not enough. Kubisch's walks may introduce us to a normally inaudible by-product of the city's activities. But what can we do with those sounds? What kind of aesthetic value do they deliver? (Kim-Cohen 2009: 111–12)

As Kim-Cohen also rightly notes regarding the use of EEG data as a source of audio, sonification in aesthetic contexts often fails to make full and appropriate use of the information contained in the data (Kim-Cohen 2009: 100–1).

The issue appears to be especially acute in the case of audification. The aforementioned lack of subjective intervention in comparison to PMSon can often result in noise that seems meaningless. It may appear that some form of higher-level analysis and mapping beyond audification is necessary in order to convey the information as audio. The same charges could be levelled at non-standard synthesis, where the resulting sounds may appear to be meaningless noise.

4.2. Familiarity

The matter can be illustrated through Grond and Hermann's concept of familiarity. It refers to the extent to which sounds may be identified with a certain process – which may be abstract – or with an existing example of a sound. One example of data that produces familiar sounds through audification is the seismogram, mentioned previously. Familiarity is dependent on both our prior experience of earthquakes – for example, initial tremors preceding the actual quake which are then followed by aftershocks – and our ability to interpret the audified sounds through their resemblance to recognisable sounds or processes – for example, a hard click indicating a sharp attack for the main earthquake, or a gong-like sound indicating irregular tremors (Grond and Hermann 2012: 218–19).

In contrast, Grond and Hermann state that with the audification of EEG signals, ‘as we cannot directly experience the dynamics of electric potentials, the resulting listening experience remains unfamiliar’ (Grond and Hermann 2012: 219). Furthermore, the audified data cannot be interpreted to the same extent due to the lack of perceptible processes.

Familiarity is not equivalent to Vickers and Hogg's term ‘indexicality’. The latter depends only on the technical procedure involved, whereas the former also concerns how the audified results may be associated with the original phenomena. One would presume that in Vickers and Hogg's schematic, Pierre Schaeffer's musique concrète would be almost interchangeable with Murray Schafer's Soundscape project. The difference in placement would be determined only by the amount of treatment of the sound recordings, and not by the presence or lack of contextual references. By contrast, Schafer's soundscapes would be considered familiar whilst Schaeffer's musique concrète would not, at least in their intention: see Table 4.

Table 4 Familiarity and indexicality in different uses of sound recordings

4.3. Categories of data

The familiarity of the results of audification is dependent on the type of data. Dombois and Eckel identify four groups (Dombois and Eckel 2011: 302–3):

  • sound recordings (mentioned previously) such as soundscapes;

  • general acoustical data from elastomechanical measurements such as seismograms;

  • non-mechanical physical data such as electromagnetic waves (Walks) and EEG readings;

  • abstract data such as processes behind non-standard synthesis – for example, the random walks in GENDYN.

Familiarity is determined by how much the data conforms to the wave equation. Usually the first two categories do, whereas the last two do not. Thus the former produce familiar sounds and the latter unfamiliar sounds. In the case of the first two categories, this is further assisted by our prior knowledge of the phenomena and by their interpretation having become common practice. Walks audifies electromagnetic waves, which belong to the third category of physical data. Thus unfamiliar sounds are produced, which is partly the basis of Kim-Cohen's criticism of the work.Footnote 5
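For clarity, the wave equation in question is the classical equation of mechanical vibration, in one dimension:

```latex
\frac{\partial^2 u}{\partial t^2} = c^2 \, \frac{\partial^2 u}{\partial x^2}
```

where u(x, t) is displacement and c the speed of propagation. Sound recordings and elastomechanical measurements record processes that approximately obey such an equation, and so behave like acoustic vibration when played back; electromagnetic potentials and abstract processes are under no such constraint at the time scales of audible vibration.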

4.4. Familiarity of auditory display and music

Strictly for the purposes of auditory display, the success of audifying data would certainly depend on the level of familiarity of the resulting sounds. In the arts, the context determines the importance of familiarity, which can again be illustrated by its placement on Vickers and Hogg's ars informatica–ars musica continuum, as shown in Table 5.

Table 5 Familiarity in audio rate use of data

Processes driving non-standard synthesis are abstract. By definition, no acoustical models are followed, and hence the sounds produced are unfamiliar in Grond and Hermann's sense. In fact, its primary aim is precisely the production of new sounds. The adjective ‘unfamiliar’ is also used by many in a general sense in describing such works in comparison to existing music. Regarding GENDY3 (1991), the first work produced with Xenakis's GENDYN, even electronic musicians and Xenakian scholars state how at the time it was unlike anything they had heard previously (see for example Di Scipio 2002: 22; Hoffmann 2009: 10). Within the context of contemporary music and art, where novelty, experimentation and the radical are eventually celebrated, unfamiliarity can be a positive attribute that non-standard synthesis is capable of providing.

The concept of familiarity can characterise the difference between audification (for auditory display) and non-standard synthesis. As a generalisation, the former aims for familiar results while the latter aims for unfamiliar results. To a certain extent, this occurs due to the use of general acoustical data in the former and abstract data in the latter. These distinctions are also blurred within the interdisciplinary context outlined at the beginning, and CiS incorporates different levels of familiarity as shown below.

4.5. Higher-order time structures

However, the unfamiliar sounds produced from abstract data do not necessarily remain meaningless. Returning to GENDYN, Xenakis's primary objective may not have been the communication of the intricate workings of the systems behind his compositions to the listener, as in auditory display. Nonetheless, he was concerned with whether music based on rules of probability – or ‘voids of rules’, as he described it – could still have meaning or information (Xenakis 1992: 260). One possibility for the formation of meaning is through alterations in the spectral properties of the wavetable and the higher-order time structures that appear as by-products of the manipulation of digital samples. Di Scipio describes the process as ‘sonological emergence’ (Di Scipio 1994: 205), due to macro-level epiphenomena being created through micro-level dynamic processes. Earlier, Xenakis described the same procedure as the production of sonorities of higher orders through ‘microcomposition’ (Xenakis 1992: 47). These could also include the formation of certain pitches, as shown in GENDY3 (Hoffmann 2004: 138). What differentiates the presentation of data at audio rate from other rates is the increased possibility of the production of higher-order time structures.

The gradual formation of order from apparent disorder is reminiscent of Jacques Attali's conception of noise:

Noise does … create a meaning: … the very absence of meaning in pure noise … by unchanneling auditory sensations, frees the listener's imagination. The absence of meaning is in this case the presence of all meanings … a construction outside meaning… . It makes possible the creation of a new order on another level of organization, of a new code in another network. (Attali 1985: 33)

Noise operates on numerous levels, according to Attali, and the above statement could refer to how the listener may attempt to make sense of the sounds through imposing their own meaning through a very free form of interpretation. This would not be sonification in the strict sense. However, Attali's description of the creation of new organisation applies in a more concrete manner in non-standard synthesis through the formation of higher-order time structures.

The expertise that may be required in interpreting audifications of abstract data such as non-standard synthesis may appear to be of a far higher degree than is usually necessary for auditory display. However, even the audification of general acoustical data often requires prior knowledge. Returning to the aforementioned example of the audification of seismograms, Dombois describes how timbre is a good indicator of the material – for example, metallic sounds indicate sediments and wooden sounds indicate bedrock. In addition, he shows how an earthquake's tectonic source mechanism can be inferred from its amplitude envelope – for example, a sharp hard beat indicates one plate subducting beneath another and a ‘plop’ indicates two plates moving apart (Dombois 2002: 28). Such audio features can easily be recognised, but they also require prior knowledge of their method of interpretation.

5. Concept

5.1. (New) Aesthetics

Comparisons with visualisation are common in the study of sonification (see for example Barrass and Vickers 2011: 154–5). They are useful in exploring its aesthetic potential as well as in defining and describing the practice through reference to a more recognised parallel domain. The New Aesthetic is one example of an analogous trend that operates within the visual arts and design (Bridle 2011). It is a stylistic predilection for the effects of the processes via which computers receive, display and transmit data. For Bruce Sterling it includes the following:

Information visualization. Satellite views. Parametric architecture. Surveillance cameras. Digital image processing. Data-mashed video frames. Glitches and corruption artifacts. Voxelated 3D pixels in real-world geometries. Dazzle camo. Augments. Render ghosts. And, last and least, nostalgic retro 8bit graphics from the 1980s. (Sterling 2012)

Many of the examples above have obvious counterparts in audio: data audification/sonification, field recordings at a wide range of amplitudes, hacking/scanning of radio/mobile communication, digital signal processing, sampling/mash-up, glitch and the aesthetics of failure, and 8-bit ‘chiptune’. Greg Borenstein highlights perhaps its most promising aspect with reference to Ian Bogost's concept of alien phenomenology:

New Aesthetics is not simply an aesthetic fetish of the texture of these images, but an inquiry into the objects that make them. It's an attempt to imagine the inner lives of the native objects of the 21st century and to visualize how they imagine us. (Borenstein 2012)

5.2. Alien phenomenology

Translated to the domain of audio, the above could also serve as a validation for the use of sonification in an aesthetic context. However, the scope of the New Aesthetic is too restrictive for Bogost due to its limitation to computational media and their relationship to human beings. He concedes the special status afforded to computers due to their influence and import, but nevertheless regards them as only one type of thing among many others. Likewise, there are many other relationships that exist between things as well as with ourselves. The irreducibility of objects does mean that humans may never be able to fully comprehend computers or other things and their relations on their own terms. But there is no reason why this should not be speculated upon. This is the general basis for ‘alien phenomenology’ (Bogost 2012a: 32–4), his version of object-oriented phenomenology (Bogost 2012b: 5–6):

A really new aesthetics might work differently: instead of concerning itself with the way we humans see our world differently when we begin to see it through and with computer media that themselves ‘see’ the world in various ways, what if we asked how computers and bonobos and toaster pastries and Boeing 787 Dreamliners develop their own aesthetics. The perception and experience of other beings remains outside our grasp, yet available to speculation thanks to evidence that emanates from their withdrawn cores like radiation around the event horizon of a black hole. The aesthetics of other beings remain likewise inaccessible to knowledge, but not to speculation – even to art. (Bogost 2012b)

Extending the New Aesthetic's project of imagining the inner life of a computer, sonification involves attempts at rendering in audio the perception and experience of other things that may otherwise be inaccessible to us, whether these be physical phenomena such as the vibrations of earthquakes or abstract entities such as mathematical systems. Sonification can thus be conceptualised as a speculative task. As such, it is also a creative act (Bogost 2012a: 31). It shares affinities with Goodiepal's tongue-in-cheek notion of Radical Computer Music, one of whose aims is the composition of music for hypothetical alternative life-forms such as sewage and electrical systems (Goodiepal 2009: 15–16).

The alien shares similarities with the concept of familiarity – or, more precisely, the unfamiliar. As the examples above illustrate, alien aesthetics present phenomena of varying degrees of familiarity in an attempt to reveal aspects of their underlying characteristics.

5.3. Alien music

Algorave, or algorithmic rave, is a recent series of club nights featuring performances of generative beat-based music. In promotional material, they state that ‘alien sounds of rave music are augmented with the alien structures of algorithmic composition’ (Algorave 2013). Their description conveys a feeling of discontent which many experimental electronic musicians can perhaps relate to: namely with the use of inappropriately traditional structures – such as the pop song format or regular metre and western harmony – framing what are at least initially novel timbres. In addition, one could contend that the opposite is also the case: that the alien structures of algorithmic composition – in other words, the use of formalised processes at control rate – often rely inappropriately on traditional timbres, for example through the triggering of piano samples. Alien structures of algorithmic composition necessitate the use of appropriately alien timbres.

Algorave belongs to the aforementioned interdisciplinary context of music and auditory display. On the one hand, it is possible to enjoy the beat-based sounds as dance music without fully comprehending the underlying algorithmic processes. On the other hand, deviations from the conventional four-square construction of dance music as a result of the use of algorithms are perceptible to varying degrees and can further one's engagement with the sounds. As they state, the use of algorithms results in the ‘alien’ at control rate. Audification and non-standard synthesis provide one possibility for the ‘alien’ at audio rate as an attempt at presenting the meaning behind familiar and unfamiliar timbres.

6. Construction in Self

6.1. The Lorenz as audio

The Lorenz dynamical system is capable of producing data with varying characteristics corresponding to different categories of data, from general acoustical to abstract. Thus the audification of the Lorenz displays different levels of familiarity, dependent on its parameter values. Construction in Self uses twelve different settings of the Rayleigh number (r). Figures 1 to 12 show values of one dimension of the trajectories (referred to as i) plotted against time; a sketch of the procedure is given after the figures. The audification of the data can be heard in order in Sound example 2, and also transposed higher in Sound example 3.

Figure 1 r = 10 (i). In all figures, amplitude (y-axis) is plotted against time (x-axis). The range for the x-axis is [0,1] seconds for Sound examples 2 and 4, and [0,0.25] seconds for Sound examples 3 and 5

Figure 2 r = 14 (i)

Figure 3 r = 18 (i)

Figure 4 r = 19 (i)

Figure 5 r = 19.5 (i)

Figure 6 r = 20 (i)

Figure 7 r = 20.5 (i)

Figure 8 r = 21 (i)

Figure 9 r = 21.5 (i)

Figure 10 r = 22 (i)

Figure 11 r = 23 (i)

Figure 12 r = 24 (i)
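The procedure behind these figures can be sketched as follows, reusing the audify function from the sketch in section 2.1. The integration scheme (fourth-order Runge–Kutta), the step size, the initial condition and the classic values σ = 10 and b = 8/3 are illustrative assumptions; beyond the twelve values of r, the actual settings used in CiS are not documented here.

```python
import numpy as np

SIGMA, B = 10.0, 8.0 / 3.0  # classic Lorenz parameters (assumed)

def lorenz_trajectory(r, n=44100, dt=0.01, p0=(1.0, 1.0, 1.0)):
    """Integrate the Lorenz system with fourth-order Runge-Kutta,
    returning n values of the first coordinate for use as audio."""
    def f(p):
        x, y, z = p
        return np.array([SIGMA * (y - x), r * x - y - x * z, x * y - B * z])
    p = np.array(p0, dtype=np.float64)
    xs = np.empty(n)
    for i in range(n):
        k1 = f(p)
        k2 = f(p + 0.5 * dt * k1)
        k3 = f(p + 0.5 * dt * k2)
        k4 = f(p + dt * k3)
        p = p + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        xs[i] = p[0]
    return xs

# The twelve Rayleigh-number settings used in CiS:
R_VALUES = (10, 14, 18, 19, 19.5, 20, 20.5, 21, 21.5, 22, 23, 24)
for r in R_VALUES:
    audify(lorenz_trajectory(r), f"lorenz_r{r}.wav")  # audify: see section 2.1
```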

Overall, the level of familiarity of the sounds produced decreases as the trajectories alter from resembling acoustical data to physical or abstract data with the increase of r. As a result, wave-like trajectories gradually become noise-like.

At low values of r (Figures 1 to 3), the trajectories resemble general acoustical data and the audified results resemble percussion sounds as heard in the first three sounds in Sound examples 2 and 3. Characteristics of the data such as the length of approach towards a fixed point and the speed of the orbit can be interpreted from audio features such as duration and pitch respectively.

As r is increased (Figures 4 to 8), the results of the audification begin to become unfamiliar and a short amount of noise is produced at the beginning, as heard in the fourth to the eighth sound in Sound examples 2 and 3. Detailed features of the noise produced may be difficult to interpret accurately. But the presence of noise can be very easily detected, from which the presence of aperiodic orbits at the beginning of the trajectory can be inferred. In these trajectories, the data resembles non-mechanical physical data or abstract data initially, after which it resembles general acoustical data.

At higher values of r (Figures 9, 11 and 12), the results could be described as being completely unfamiliar as noise is heard for the whole duration of the ninth, eleventh and twelfth sounds in Sound examples 2 and 3. Again, accurate features of the noise are difficult to perceive but it is possible to deduce that the trajectories are aperiodic for their whole duration. They now resemble non-mechanical physical data or abstract data.

On the one hand, as non-standard synthesis (or music), a rich source of timbral variety can be produced from the Lorenz system. On the other hand, as audification (or auditory display), many features of the data can be deduced from the sounds.

6.2. The ‘butterfly effect’

The difference between two sets of trajectories beginning in close proximity to each other is also calculated, producing an additional twelve trajectories (referred to as i–ii). This demonstrates the system's sensitive dependence on initial conditions, whereby, no matter how close two initial points are, their paths eventually diverge (Hirsch, Smale and Devaney 2004: 305) – otherwise known as the ‘butterfly effect’. Figures 13 to 24 show values of one dimension against time for the twelve values of the Rayleigh number; a sketch of the calculation follows the figures. The audification of the data can be heard in order in Sound example 4, and also transposed higher in Sound example 5. A variety of higher-order features such as timbre, amplitude envelope and pitch are again produced.

Figure 13 r = 10 (i–ii)

Figure 14 r = 14 (i–ii)

Figure 15 r = 18 (i–ii)

Figure 16 r = 19 (i–ii)

Figure 17 r = 19.5 (i–ii)

Figure 18 r = 20 (i–ii)

Figure 19 r = 20.5 (i–ii)

Figure 20 r = 21 (i–ii)

Figure 21 r = 21.5 (i–ii)

Figure 22 r = 22 (i–ii)

Figure 23 r = 23 (i–ii)

Figure 24 r = 24 (i–ii)
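In terms of the preceding sketch, the (i–ii) signals can be produced by integrating twice from minutely separated initial points and audifying the difference. The size of the perturbation here is an illustrative assumption:

```python
def butterfly_difference(r, eps=1e-6, **kwargs):
    """Difference signal between two Lorenz trajectories started a tiny
    perturbation apart: near-silence while the paths shadow each other,
    then, for chaotic settings of r, audible divergence as the
    'butterfly effect' takes hold."""
    a = lorenz_trajectory(r, p0=(1.0, 1.0, 1.0), **kwargs)
    b = lorenz_trajectory(r, p0=(1.0 + eps, 1.0, 1.0), **kwargs)
    return a - b

for r in R_VALUES:
    audify(butterfly_difference(r), f"lorenz_diff_r{r}.wav")
```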

6.3. Construction in Self as sonification

Construction in Self demonstrates the possibilities offered by the interdisciplinary context outlined above through the use of data at audio rate. As a characterisation of the alien being of the Lorenz, it succeeds in attaching meaning to the level of unfamiliarity itself, which depends on how much the data conforms to the wave equation. From the perspective of audification, the ‘noisiness’ or level of unfamiliarity of the sounds becomes an indication of the underlying data. As mentioned, it may be argued that ‘noisy’ audifications such as Walks are difficult to ‘read’. However, the above examples from CiS are no harder to interpret than the aforementioned audification of seismic data. Furthermore, an abstract phenomenon such as the butterfly effect can be characterised through its level of familiarity as it traverses the space outlined in Table 5. From the perspective of non-standard synthesis, this offers the possibility of a formal method for the control of noisiness.

7. In closing

Recent examples of non-standard synthesis within academic institutions include the use of Chua's Circuit (Mayer-Kress, Choi, Weber, Barger and Hubler 1993),Footnote 6 iterated functions (Di Scipio and Prignano 1996; Di Scipio 2001), waveform segmentation (Chandra 1996) and Lindenmayer systems (Manousakis 2009). However, as noted, neither audification nor non-standard synthesis is considered a canonical method of sound production, the above serving as exceptions rather than the rule. In the case of chaotic systems, although they are used at control rate (see for example Pressing 1988), their use at audio rate is less common, as Hecker also remarks in the liner notes to his album Recordings for Rephlex (2006).

Similarly, research into direct audification is uncommon in comparison to sonification generally within professional scientific research using auditory displays (Dombois and Eckel 2011: 316). In addition, just as audification appears to be far more widespread among amateurs for science popularisation or general amusement (Dombois and Eckel 2011: 316), its use in the arts appears to be more common outside than inside academic institutions. This is evident in a whole variety of practices found in alternative settings: from hardware hacking, to software implementations of chaotic systems at audio rate in audio programming environments such as SuperCollider and Max/MSP, to a combination of the two, as with Martin Howse's ‘data carvery’ involving the audification of discarded hard disks (Reboot FM 2011). The often abstract data evokes an unfamiliar sound-world containing elements of noise, glitch and drone typically found in underground electronic music. This may partly explain why these techniques are embraced by artists operating in alternative scenes whilst being ignored by leading institutions – for example in Paris, as mentioned above in the case of Xenakis. The formal rigour provided by audification may offer one possibility for such sounds to become more accepted within academia. Put differently, the use of data at audio rate may provide a method for producing works valid in both contexts.

While all the tables in this article have clearly delineated columns, a fluid continuum or space is a more accurate characterisation of the various concepts illustrated. This is also the case with Table 6, a summary of the various concepts discussed in using data at audio rate, which offers an alternative to control rate.

Table 6 Use of data at audio rate

The interdisciplinary context outlined considers sonification, or the use of data as sound, as both music and auditory display. It reiterates the two concerns of aesthesis (perception) and poiesis (construction) within art, with sonification advocating the importance of both factors. Thus technique should not and need not be separated from aesthetics: ‘We do not distinguish between æsthetics and mappings – the two are inextricably linked’ (Vickers 2005: 5). The two frameworks of auditory display and music, or audification and non-standard synthesis, offer a way of characterising the issue through current techniques involving the use of data.

As Grond and Hermann state: ‘Sonification can only succeed as a cutting-edge practice that transcends either discipline [of science and art]’ (Grond and Hermann 2012: 221). One possibility of producing an effective work in such a context may lie in considering the issues in both columns of Table 5 through the use of data at audio rate – as audification and as non-standard synthesis – as demonstrated by Construction in Self.

Footnotes

1 There are two diagrams: the first is two-dimensional; the second is a circular version of the first, emphasising the continuity of the ends of the spectrum. Readers are encouraged to refer to their original version for a more detailed and refined categorisation (Vickers and Hogg 2006: 213–14).

2 See below concerning Mayer-Kress et al.'s work based on Chua's Circuit.

3 For early examples of audification, see Dombois and Eckel 2011: 303–7.

4 This should not, however, be taken as criticism of the numerous remarkable achievements, both musically and technically, of these two institutions.

5 Kim-Cohen's main criticism of Walks stems from Kubisch's lack of engagement with the work's extra-musical context.

6 Their work features on Vickers and Hogg's diagrams, but it appears to be assigned a much lower indexicality than it should be, given that it uses data at audio rate.

References

Algorave. 2013. Algorave 18th April. MS Stubnitz. Accessed 1 March 2013. http://ms.stubnitz.com/content/algorave-18th-april.
Attali, J. 1985. Noise: The Political Economy of Music. Minneapolis: University of Minnesota Press.
Barrass, S. and Vickers, P. 2011. Sonification Design and Aesthetics. In T. Hermann, A. Hunt and J.G. Neuhoff (eds.) The Sonification Handbook. Berlin: Logos.
Bogost, I. 2012a. Alien Phenomenology, or What It's Like to Be a Thing. Minneapolis and London: University of Minnesota Press.
Bogost, I. 2012b. The New Aesthetic Needs to Get Weirder. The Atlantic, 13 April. http://www.theatlantic.com/technology/archive/2012/04/the-new-aesthetic-needs-to-get-weirder/255838.
Borenstein, G. 2012. In Response to Bruce Sterling's ‘Essay on the New Aesthetic’: What It's Like to Be a 21st Century Thing. The Creators Project, 6 April. http://thecreatorsproject.vice.com/blog/in-response-to-bruce-sterlings-essay-on-the-new-aesthetic#4.
Bridle, J. 2011. The New Aesthetic. Really Interesting Group, 6 May. http://www.riglondon.com/blog/2011/05/06/the-new-aesthetic.
Chandra, A. 1996. Composing with Composed Waveforms having Multiple Cycle Lengths and Multiple Paths. Accessed 30 May 2013. http://ada.evergreen.edu/~arunc/papers/paper2.htm/road.htm.
Di Scipio, A. 1994. Formal Processes of Timbre Composition Challenging the Dualistic Paradigm of Computer Music: A Study in Composition Theory (II). Proceedings of the International Computer Music Conference. Aarhus, Denmark: ICMA, 202–8.
Di Scipio, A. 1995. Inseparable Models of Materials and of Musical Design in Electroacoustic and Computer Music. Journal of New Music Research 24(1): 34–50.
Di Scipio, A. 2001. Iterated Nonlinear Functions as a Sound-Generating Engine. Leonardo 34(3): 249–54.
Di Scipio, A. 2002. Systems of Embers, Dust, and Clouds: Observations after Xenakis and Brün. Computer Music Journal 26(1): 22–32.
Di Scipio, A. and Prignano, I. 1996. Synthesis by Functional Iterations: A Revitalization of Nonstandard Synthesis. Journal of New Music Research 25(1): 31–46.
Dombois, F. 2002. Auditory Seismology: On Free Oscillations, Focal Mechanisms, Explosions, and Synthetic Seismograms. Proceedings of the 8th International Conference on Auditory Display. Kyoto: ICAD, 27–30.
Dombois, F. and Eckel, G. 2011. Audification. In T. Hermann, A. Hunt and J.G. Neuhoff (eds.) The Sonification Handbook. Berlin: Logos.
Goodiepal the Århus Warrior (P.K.B. Vester). 2009. Radical Computer Music & Fantastisk Mediemanipulation: A Corrected and Illustrated Transcript of the Official Mort Aux Vaches Ekstra Extra Walkthrough. Los Angeles, CA and Copenhagen: MPH/Pork Salad Press.
Grond, F. and Berger, J. 2011. Parameter Mapping Sonification. In T. Hermann, A. Hunt and J.G. Neuhoff (eds.) The Sonification Handbook. Berlin: Logos.
Grond, F. and Hermann, T. 2012. Aesthetic Strategies in Sonification. AI & Society 27(2): 213–22.
Hermann, T. 2008. Taxonomy and Definitions for Sonification and Auditory Display. Proceedings of the 14th International Conference on Auditory Display. Paris: ICAD.
Hirsch, M.W., Smale, S. and Devaney, R. 2004. Differential Equations, Dynamical Systems, and an Introduction to Chaos. Boston, MA: Academic Press.
Hoffmann, P. 2000. The New GENDYN Program. Computer Music Journal 24(2): 31–8.
Hoffmann, P. 2004. ‘Something Rich and Strange’: Exploring the Pitch Structure of GENDY3. Journal of New Music Research 33(2): 137–44.
Hoffmann, P. 2009. Music Out of Nothing? A Rigorous Approach to Algorithmic Composition by Iannis Xenakis. PhD diss., Technische Universität Berlin.
Holtzman, S.R. 1979. An Automated Digital Sound Synthesis Instrument. Computer Music Journal 3(2): 53–61.
IKON. 2006. Exhibition Guide: Christina Kubisch – Electrical Walks. Birmingham: IKON.
Kim-Cohen, S. 2009. In the Blink of an Ear: Toward a Non-Cochlear Sonic Art. New York and London: Continuum.
Kittler, F. 1999. Gramophone, Film, Typewriter. Stanford, CA: Stanford University Press.
Kramer, G. 1994. An Introduction to Auditory Display. In G. Kramer (ed.) Auditory Display. Reading, MA: Addison-Wesley.
Laske, O. 1989. Composition Theory: An Enrichment of Music Theory. Journal of New Music Research 18(1–2): 45–59.
Lorenz, E.N. 1963. Deterministic Nonperiodic Flow. Journal of the Atmospheric Sciences 20(2): 130–41.
Manousakis, S. 2009. Non-Standard Sound Synthesis with L-Systems. Leonardo Music Journal 19: 85–94.
Manovich, L. 2001. The Language of New Media. Cambridge, MA: The MIT Press.
Mayer-Kress, G., Choi, I., Weber, N., Barger, R. and Hubler, A. 1993. Musical Signals from Chua's Circuit. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 40(10): 688–95.
Polli, A. 2012. Soundscape, Sonification, and Sound Activism. AI & Society 27(2): 257–68.
Pressing, J. 1988. Nonlinear Maps as Generators of Musical Design. Computer Music Journal 12(2): 35–46.
Reboot FM. 2011. Substrat Radio #2 Data Carvery. Accessed 1 May 2013. http://reboot.fm/2011/08/14/substrat-radio-2-data-carvery.
Scaletti, C. 1994. Sound Synthesis Algorithms for Auditory Data Representation. In G. Kramer (ed.) Auditory Display. Reading, MA: Addison-Wesley.
Sterling, B. 2012. An Essay on the New Aesthetic. Wired, 2 April. Accessed 1 March 2013. http://www.wired.com/beyond_the_beyond/2012/04/an-essay-on-the-new-aesthetic.
Stockhausen, K. and Barkin, E. 1962. The Concept of Unity in Electronic Music. Perspectives of New Music 1(1): 39–48.
Vickers, P. 2005. Ars Informatica – Ars Electronica: Improving Sonification Aesthetics. Understanding and Designing for Aesthetic Experience Workshop at HCI 2005, the 19th British HCI Group Annual Conference. Edinburgh: BCS.
Vickers, P. and Hogg, B. 2006. Sonification Abstraite/Sonification Concrète: An ‘Aesthetic Perspective Space’ for Classifying Auditory Displays in the Ars Musica Domain. Proceedings of the 12th Meeting of the International Conference on Auditory Display. London: ICAD, 210–16.
Walker, B.N. and Kramer, G. 2004. Ecological Psychoacoustics and Auditory Displays: Hearing, Grouping, and Meaning Making. In J.G. Neuhoff (ed.) Ecological Psychoacoustics. Waltham, MA: Elsevier Academic Press.
Xenakis, I. 1992. Formalized Music: Thought and Mathematics in Music. Hillsdale, NY: Pendragon Press.

Cited art works

Ikeshiro, R. 2009. Construction in Self. Independent, generative. http://www.ryoikeshiro.com.
Kubisch, C. 2003. Electrical Walks. Independent, installation.
Xenakis, I. 1991. GENDY3. On Aïs – Gendy3 – Taurhiphanie – Thalleïn, Neuma Records, 450–86, 1994, CD.