
Anacoustic Modes of Sound Construction and the Semiotics of Virtuality

Published online by Cambridge University Press:  04 March 2020

Robert Seaback*
Affiliation:
Dakota State University, Madison, SD, USA

Abstract

This article discusses technical and aesthetic aspects of sound synthesis in the context of anacoustic modes of sound construction – a neologism that underscores the unique ontology of information in the digital domain. Anacoustic modes address the computer at its most fundamental level: the syntactic level of information. This changes the nature of signification as sound is considered first as an informational construct rather than a material circumstance, rupturing the front-loaded meaning that arises from our acoustic experience. Following certain concepts encompassed by N. Katherine Hayles’s posthumanism, anacoustic modes can be viewed as an expression of the materiality of information.

Copyright
© Cambridge University Press, 2020

1. INTRODUCTION

The composition of music typically and traditionally presupposes its ultimate manifestation in sound – in physical, acoustic vibrations that can be heard by humans. It follows that composers often employ rationales grounded in sound ideas.

There is, however, a body of work that challenges this seemingly fundamental notion. Its compositional strategies are enacted in the domain of digital sound synthesis in which abstract schemata (kinds of information) have the potential to become sonic phenomena, but not inevitably or predictably so. We witness these processes by hearing what they leave as a trace, which is perceptually distinct from acoustically recorded sound (i.e., sound captured with a microphone) or synthetic sound derived from acoustic principles. This type of sound construction, divorced from representational intention, is suggestive of anacoustic as opposed to acoustic origins.¹

The common thread that links each example of anacoustic composition is the conception of data as sound. This article examines anacoustic modes from a technical and aesthetic standpoint, drawing attention to the way informational constructions (of sound or otherwise) relate to the material circumstances by which information is necessarily instantiated. Drawing from N. Katherine Hayles’s semiotics of virtuality (1999: 247–82), the term anacoustic links different discursive formations of sound, information and materiality. When applied as an analytical device to the creation and interpretation of acousmatic music, the semiotics of virtuality illuminate relationships between embodied complexities, representational absence, informational patterns and noise.² This orientation attributes significance to the digital audio medium as an idiomatic voice in itself as well as a hyperreal window into the physical, acoustic world.

For preliminary clarification, I borrow the term sound construction from Joanna Demers (2010) to describe the production of ‘new’ sounds through synthesis, which is distinct from sound reproduction: the re-presentation of an acoustical event via recording or the use of pre-existing, sampled materials.

2. ANACOUSTIC MODES OF SOUND CONSTRUCTION

One possible extreme in virtual semiotics, anacoustic modes of sound construction reflect a conception of data as sound. They employ types of synthesis that address the computer at the most fundamental and abstract level of coding: the syntactic level of information. Claude Shannon (the ‘father of information theory’) described this form of information in his mathematical theory of communication, or MTC. MTC focuses on the effective transmission of signals via communication channels, and ‘is the theory that lies behind any phenomenon involving data encoding and transmission’ (Floridi 2010: 38).

Floridi explains, ‘MTC deals with messages comprising uninterpreted symbols encoded in well-formed strings of signals. These are mere data that constitute, but are not yet, semantic information’ (2010: 45). The meaning of a message is, in Shannon’s words, ‘irrelevant to the engineering problem’ (Shannon 1948: 381). In this view, information is dimensionless, non-material, free-floating and decontextualised. MTC is a study of information at the syntactic level, which is why it is so effective in information and communication technologies, as computers are syntactical devices.

Digital audio encoding follows the principles of MTC; hence the Nyquist theorem is also known as the Nyquist–Shannon sampling theorem. Sound construction that establishes digital audio encoding itself as a site for sound specification is known as nonstandard synthesis. As described by Holtzman, in nonstandard synthesis, ‘sound is specified in terms of basic digital processes rather than by the rules of acoustics or by traditional concepts of frequency, pitch, overtone structure, and the like’ (Holtzman 1979: 53). This changes the nature of signification as sound is considered first as an informational construct rather than a material circumstance, rupturing the front-loaded meaning that arises from our acoustic experience.

The hallmark of anacoustic processes is that they model neither the physical behaviour of sounding objects nor the acoustic characteristics produced by such behaviour. Whether through the knowing use of an arcanely coded translation process, through an incomplete characterisation of physical models, or both, they operate in an immaterial space – one that is neither physical nor acoustic – while producing acoustic artefacts.

Central to nonstandard synthesis is the waveform, or time-domain representation. At an atomic level, the waveform is composed of a finite number of discrete time points and amplitude values which collectively outline the time-pressure curve on a computer. Nonstandard synthesis operates at this micro-sonic level, using abstract data sets or logical operations to build waveforms from their smallest digital elements. Nonstandard synthesis applications by Koenig (1971), Brün (1976) and Xenakis (1991) are well-documented examples of this activity and will later be examined in turn. This article also analyses a more idiosyncratic use of nonstandard synthesis in the work Musica Iconologos by Yasunao Tone and outlines how the concept of anacoustic operates as broad aesthetic territory.
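
To make this sample-level address concrete, the sketch below builds a waveform from nothing but a bitwise rule applied to the sample index – a bytebeat-style toy of my own devising, not a reconstruction of any of the systems examined below. The sample rate, the rule and all identifiers are illustrative assumptions.

```python
import numpy as np

SR = 44100  # assumed sample rate

def logical_waveform(n_samples, rule=lambda t: (t * (t >> 5 | t >> 8)) & 0xFF):
    """Build a waveform by applying a bitwise rule to each sample index.

    No acoustic model is involved: the 'instrument' is the rule itself,
    operating on integers; the audible result is whatever trace it leaves.
    """
    t = np.arange(n_samples, dtype=np.int64)
    samples = rule(t).astype(np.float64)   # 8-bit values, 0..255
    return samples / 127.5 - 1.0           # rescale to -1..1

wave = logical_waveform(SR * 2)  # two seconds of anacoustically specified samples
```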

2.1. Parametric and non-parametric models

While the sound encoding process is, by design, linked to acoustics and the Fourier series via the Nyquist theorem, the ontology of data has an effect on sound constructions that use data exclusively as the fundamental (im)material element. As Holtzman points out with regard to nonstandard synthesis (or synthesis by instruction), ‘instruction synthesis samples are determined through diacritically-defined relationships among samples which do not refer to some superordinate acoustic model or function’ (Holtzman 1979: 53). This is contrary to standard synthesis, in which sound is specified according to high-level acoustic or psychoacoustic parameters.

Perry Cook’s definition of non-parametric models of sound synthesis similarly articulates the anacoustic condition. While a parametric model is ‘one that has a (relatively) few variable parameters that can be manipulated to change the interaction, sound, and perception’, a non-parametric model is not predicated on perceptually determined variables and ‘has no small set of parameters that allows us to modify the sound in meaningful ways’ (Cook 2011: 198). Similar to a single pixel on a high-resolution computer screen, a single sample is, at best, barely perceptible as a sound event. Roads reinforces the idea that ‘individual samples are sub-symbolic – perceptually indistinguishable from one another. [Furthermore,] it is intrinsically difficult to string together samples into meaningful music symbols’ (Roads 2001: 31).

This assessment emphasises the dependent hierarchic levels of coding in a parametric model, adherence to which allows the coded sound to be manipulated and transferred back in a perceptually meaningful way. An analogy can be drawn with text structures in language from the level of the letter, to the word, to the sentence, paragraph and so forth. The words I am writing here are well formed in sentences, which allows them to carry meaning. To alter the order of words would damage the intelligibility of my sentences. Corruption at the lower level – the arrangement of letters within words – would be even more damaging to communication.

Similarly, when sound is represented digitally, different levels of coding intervention introduce different degrees of failure for sound to carry and transmit information. Manipulating the shape of a digitised sound through parametric models (such as pitch and partial specifications for an oscillator bank or the cutoff frequency of a filter) is far less damaging to the encoded sound than the manipulation of samples or segments of samples, or, at an extreme, the order of the bits that comprise each sample, which could lead to complete (acoustic) noise.
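
A rough illustration of these degrees of intervention, written for this article under my own assumptions (the filter, the test tone and every name are invented): a parametric change such as lowering the cutoff of a one-pole low-pass filter leaves the tone recognisable, whereas flipping a bit inside each quantised sample pushes the result towards noise.

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR
tone = np.sin(2 * np.pi * 440 * t)  # a well-formed, acoustically meaningful signal

def one_pole_lowpass(x, cutoff_hz, sr=SR):
    """Parametric intervention: one high-level variable (the cutoff) shapes the sound."""
    a = np.exp(-2 * np.pi * cutoff_hz / sr)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = (1 - a) * x[i] + a * y[i - 1]
    return y

darker = one_pole_lowpass(tone, cutoff_hz=500)   # still recognisably the same tone

# Bit-level intervention: operate on the binary words themselves.
quantised = np.round(tone * 32767).astype(np.int16)
corrupted = (quantised ^ (1 << 13)).astype(np.float64) / 32767  # broadband distortion
```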

A highly parametric model consolidates numerous functions into a top-down hierarchical control structure, in which small or large parameter changes yield corresponding changes in the acoustic output. Cook addresses ‘parametricity’ in the context of auditory display, where it plays a large role in the communicative prospects of data mapping (Cook 2011). Auditory display and sonification rely on the most perceptually salient features of music and sound synthesis to articulate relations within data sets.

Non-parametric models, such as those used in nonstandard synthesis techniques, do not present a sufficient analogue to sound since the conditions of sampling, manipulating and transferring sound as a message have been corrupted, which may cause aspects of the coding process to become evident instead of transparent.

2.2. The information/matter duality

Beginning in the mid-twentieth century, digital computing, advanced via the inter-discipline of cybernetics, effected a shift in epistemology as our material world became more and more easily equated with the digital abstractions we use to represent and understand it. Hayles describes this ‘contemporary pressure toward dematerialization … as an epistemic shift toward pattern/randomness and away from presence/absence’ (Hayles 1999: 29). The proliferation and ubiquity of information and communication technologies have given rise to the posthuman conception of information and materiality as distinct entities – that information is ‘separate from the material forms in which it is thought to be embedded’ (1999: 2). In Shannon’s MTC, for example, information is defined as a probability function ‘with no dimensions, no materiality, and no necessary connection with meaning. It is a pattern, not a presence’ (1999: 18).

A dualistic conception of information and materiality can be seen in various strands of sonic computing – from the practices of auditory display to algorithmic composition to real-time analysis and feature extraction. The information/matter duality also links directly to nonstandard synthesis, which on the one hand represents a posthuman vision of information as a reified entity, and on the other, an appeal to the materiality of digital systems via non-parametricity. While it is not my intention here to discredit the expressive power of our simulated or modelled versions of the natural world, I find Hayles’s reflection on the materiality of information a provocative reminder:

It can be a shock to remember that for information to exist, it must always be instantiated in a medium, whether that medium is the page from the Bell Laboratories Journal on which Shannon’s equations are printed, the computer-generated topological maps used by the Human Genome Project, or the cathode ray tube on which virtual worlds are imagined. The point is not only that abstracting information from a material base is an imaginary act but also, and more fundamentally, that conceiving of information as a thing separate from the medium instantiating it is a prior imaginary act that constructs a holistic phenomenon as an information/matter duality. (1999: 13)

The posthuman primacy of information necessitates the recuperation of materiality in the critical navigation of informatics. For Hayles, the distinction between presence/absence and pattern/randomness – the central dialectics in posthuman epistemology – is meaningful because presence/absence, she argues, ‘connects materiality and signification in ways not possible within the pattern/randomness dialectic’ (1999: 247). Presence, for example, is allied with ‘an originary plenitude that can act to ground signification and give order and meaning to the trajectory of history’ (1999: 285) – order, meaning and history being the realm of humans.

Acousmatic music, as a technologically mediated form, presupposes representational absence – the physical objects that may be heard are never truly present. Yet, acoustically recorded or physically modelled sound can signify presence as a trace left by physical action. Sounds can imply, by their spectromorphologies alone, human agency or material embodiment in physical space. By contrast, part of what distinguishes anacoustic sound constructions is their blunt deviation from such acoustic patterns as they cross over to exclusively informational origins – ‘meaning is not front-loaded into the system, and the origin does not act to ground signification’ (Hayles 1999: 286). This writing assumes that presence is a meaningful branch in the semiotics of virtuality and is central to listener comprehension as he or she navigates digital acousmatic sound worlds with (non-)representational potential ranging from idiomatic voice to the virtually real to the impossible or imagined.

Anacoustic strategies, by foregrounding the mechanical, processual, abstract nature of encoding, evoke a metaphoric distance between material and encoded realities. Hayles poses a relevant question with regard to digital, on-screen text, which might apply equally to the sonic arts: ‘How should we fundamentally change our idea of signification when language is bound up with code in the integral way that it is today?’ (Hayles 2010: 327)

3. WAVEFORM SEGMENT TECHNIQUES

Gottfried Michael Koenig’s Sound Synthesis Program (1971) is an early example of nonstandard synthesis reflective of his foundation in serialism. Serial techniques were central to Koenig’s concept of programmed music, as shown in his earliest computer experiments, Project 1 (1964) and Project 2 (1966): ‘by programmed music we mean the establishment and implementation of systems of rules or grammars, briefly: of programs, independent of the agent setting up or using the programs, independent too of sound sources’ (Koenig 1978: 4).

Of the Sound Synthesis Program (SSP), completed in 1971, Koenig writes:

My sound synthesis program SSP endeavours to transfer the generating principles of musical form to sound synthesis, and hence has common links with electronic music which, particularly in its developmental phase in Cologne, stressed the inseparable unity of sound and sound structure. My aim was to apply the idea of a form-generating principle, as can be studied in Project 1 and Project 2, to the genesis of sound; the changing sound-field should represent the development of the form ‘directly,’ as it were, without being communicated by musicians and traditional instruments. (Koenig 1985: 3)

Berg, Rowe and Theriault (1980) provide a detailed account of the SSP’s operation, which can be viewed as algorithmic composition at the audio rate. Waveform segments are constructed and manipulated with algorithmic functions before they are sequenced and looped for a specified duration.

Similar to SSP, Herbert Brün’s SAWDUST, completed with the help of Gary Grossman in 1976, implements a waveform segment technique whereby a complex wave is constructed from the combination of segments of variable length and amplitude:

The computer program which I called SAWDUST allows me to work with the smallest parts of waveforms, to link them and to mingle or merge them with one another. Once composed, the links and mixtures are treated, by repetition, as periods, or by various degrees of continuous change, as passing moments of orientation in a process of transformations. (Brün 1998)

In contrast to the fixed waveforms generated by SSP, the most novel aspect of SAWDUST was its ability to gradually transform from one waveform to another by computing a new wave (or link in the program’s terms) upon each wavetable iteration (Brün n.d.).
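
The following sketch gestures at this family of waveform segment techniques: flat segments of arbitrary length and amplitude are concatenated into a period, which is then looped while drifting towards a second period. It is a loose paraphrase under my own assumptions, not an implementation of SSP or SAWDUST, and every function name and parameter is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def random_period(n_segments=8, max_len=64):
    """Concatenate flat segments of arbitrary length and amplitude into one period."""
    lengths = rng.integers(1, max_len, size=n_segments)
    amps = rng.uniform(-1, 1, size=n_segments)
    return np.concatenate([np.full(l, a) for l, a in zip(lengths, amps)])

def merge_loop(period_a, period_b, n_iterations=400):
    """Loop period_a while gradually interpolating towards period_b."""
    n = min(len(period_a), len(period_b))
    a, b = period_a[:n], period_b[:n]
    out = []
    for i in range(n_iterations):
        w = i / (n_iterations - 1)
        out.append((1 - w) * a + w * b)  # each repetition is a new 'link'
    return np.concatenate(out)

wave = merge_loop(random_period(), random_period())
```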

Both SSP and SAWDUST represent a distancing of composition from the embodied experience of sound – from performative bodies and from human listening alike – as sound is rendered an epiphenomenon superseded by abstract code structures. Koenig sought another level of structuralist intervention (using self-contained, form-generating principles) that might give rise to a new musical language, while Brün was fascinated by the waveform itself as a carrier of complexity with the possibility of new modes of communication.

Despite the anacoustic origins of sounds specified in SSP and SAWDUST, both systems entertained the possibility of perceptually governed interventions. In SSP, a sound is generated, and modifications are made following listening and evaluation, thus adding a level of aural logic to the composition process. Brün’s work exhibits a tendency to temper the unpredictable spectra of nonstandard synthesis with traditional modes of pitch organisation. Luc Döbereiner suggests, for example, that Brün’s sketches ‘reveal that he was constantly linking waveform lengths to tempered pitch scales and even producing twelve-tone rows and chords for the organization of waveforms’ (Döbereiner 2011: 33). While this technically contradicts a premise of sound construction that is entirely anacoustic, I acknowledge that poietic compromises are inevitably made that tether otherwise anacoustically generated material to the physical world of sound. It is actually this dynamic between code (message) as received by the computer versus message as received by a human that is of interest as artist programmers address both humans and intelligent machines in their activities. In the context of digital text-based practices, Hayles argues, ‘the fact that it is a double address has a very significant impact on how language operates and what language means’ (Hayles 2010: 327). The sound negotiations made on behalf of the (post-)human in an anacoustic modality are an acoustic salvage.

4. DYNAMIC STOCHASTIC SYNTHESIS

Iannis Xenakis is known for his use of mathematics and in particular stochastic processes in composition. Stochastic processes begin from a state of noise or chaos and apply constraints at various levels to create pseudo-deterministic patternings. This approach affords novel structural control over sound masses, group behaviour and sound aggregates – that is, perceptual gestalts – in contrast to deterministic serial processes that grow outward from local relationships. In a commentary on the inadequacy of Fourier series construction, Xenakis proposed a synthesis method according to this orientation:

Instead of starting from the unit element concept and its tireless iteration and from the increasing irregular superposition of such iterated unit elements, we can start from a disorder concept and then introduce means that would increase or reduce it. (Xenakis 1992: 245)

With dynamic stochastic synthesis, Xenakis moved beyond the ‘lifeless sound made up of a sum of harmonics produced by a frequency generator’ (1992: 244) and closer to the noisy complexity of transient sound phenomena. He first experimented with the application of stochastic processes to waveform construction at Indiana University in the 1970s and returned to this project in 1991 at CEMAMu, Paris, where he wrote the computer program GENDY (GÉNération DYnamique).

Focused discussions on GENDY’s operation can be found in Di Scipio (1998), Hoffman (2000), Luque (2009), Serra (1993) and Xenakis (1992), among others. GENDY divides a waveform into several segments and applies continuous variation (via stochastic functions) to the end-points of each segment, affecting both time and amplitude components of the wave. The elastic barriers that define each wave cycle allow control over the degree of regularity or randomness of the waveform shape, thus creating a spectral continuum from static pitch (high periodicity and symmetry) to noise (low periodicity and symmetry). GENDY is a unique approach to waveform construction because it attempts to link waveform patterns with types of spectra.
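
In the spirit of those published descriptions, the sketch below applies random walks to the breakpoints of a single waveform period and reflects them at elastic barriers. It is a simplified approximation written for this discussion, not the GENDY program itself; the step sizes, breakpoint counts and names are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def reflect(value, low, high):
    """Elastic barriers: fold a random walk back into its permitted range."""
    while value < low or value > high:
        if value < low:
            value = 2 * low - value
        if value > high:
            value = 2 * high - value
    return value

def dynamic_stochastic(n_periods=200, n_breakpoints=12, step=0.05,
                       min_dur=4, max_dur=40):
    """Random walks on the breakpoints of a waveform, one step per period."""
    amps = rng.uniform(-1, 1, n_breakpoints)
    durs = rng.integers(min_dur, max_dur, n_breakpoints).astype(float)
    out = []
    for _ in range(n_periods):
        # Perturb each breakpoint's amplitude and duration, then reflect at barriers.
        amps = np.array([reflect(a + rng.normal(0, step), -1, 1) for a in amps])
        durs = np.array([reflect(d + rng.normal(0, 1), min_dur, max_dur) for d in durs])
        # Linearly interpolate between consecutive breakpoints to draw one period.
        for i in range(n_breakpoints):
            a0, a1 = amps[i], amps[(i + 1) % n_breakpoints]
            out.append(np.linspace(a0, a1, int(durs[i]), endpoint=False))
    return np.concatenate(out)

wave = dynamic_stochastic()
```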

4.1. The materiality of informatics

For Xenakis, dynamic stochastic synthesis was not an avenue towards the simulation of acoustic sounds, but towards simulation of the acoustic condition. His assertion that transient states can be effectively simulated as stochastic variations in the time-pressure curve represents an experimental model for carrying sound information: a system abstracted from the visual waveform patterns of acoustic sound integrated with the system of nonstandard synthesis.

While it is true that acoustically derived waveforms exhibit patterning in visually discernible periodicities and symmetries, the noise that affects said patterns in the material world resists the normalising tendencies of inscription practices such as digital encoding. The noises of the acoustic (transients, spatial effects, vibrational irregularities, etc.), antagonistic to pattern, reflect the complexity of context-bound, material circumstances and are not formulaically embedded in waveforms. This particular pattern/noise configuration is described by Hayles as an exchange between inscription and incorporation: one of two polarised relations that together articulate the dynamics of posthuman dematerialisation:

The first polarity unfolds as an interplay between the body as a cultural construct and the experiences of embodiment that individual people within a culture feel and articulate. The second polarity can be understood as a dance between inscribing and incorporating practices. (1999: 193)

In this analysis, body/embodiment and inscription/incorporation forge connections between ideologies of immateriality and the material conditions that enable such ideologies. The body as a cultural construct differs from embodiment in that it is always normative relative to certain criteria. Meanwhile, embodiment never exactly coincides with ‘the body’ because it is contextual, ‘enmeshed within the specifics of place, time, physiology, and culture, which together compose enactment’ (Hayles 1999: 196). Hayles elaborates further:

Whereas the body is an idealized form that gestures toward a Platonic reality, embodiment is the specific instantiation generated from the noise of difference. Relative to the body, embodiment is other and elsewhere, at once excessive and deficient in its infinite variations, particularities, and abnormalities. (1999: 196–7)

Inscription and incorporation are concerned with writing and action respectively. ‘Like the body, inscription is normalised and abstract, in the sense that it is usually considered as a system of signs operating independently of any particular manifestation’ (Hayles 1999: 200). An incorporating practice, on the other hand, cannot be separated from its material instantiation and exists only in a particular embodied circumstance: I move my hand and my wireless mouse, delete some text and return to the keyboard, translating my incorporation to a reduced account in prose.

Inscription and incorporation, along with the body and embodiment, inform and modify each other in a recursive feedback loop: ‘Incorporating practices perform the bodily content; inscribing practices correct and modulate the performance’ (Hayles 1999: 200).

While all digital sound constructions and reproductions are forms of inscription, different approaches to the medium articulate certain orientations related to body/embodiment and inscription/incorporation. Standard synthesis relates to the body and is constructed out of abstractions of our physical mechanism of hearing or the physics of sound, while nonstandard synthesis arises from the mechanics of inscription in the digital domain.

GENDY takes as a point of departure the observable patterns of sound inscriptions representative of physical circumstances – acoustic phenomena that carry an inextricable link to the ‘noise of difference’ out of which they emerge through incorporation. GENDY underscores the interplay between incorporating practices and the inscriptions that abstract the practices into signs. Xenakis saw dynamic stochastic synthesis as a channel towards embodied complexity, hence his posthuman suggestion that ‘following these principles, the whole gamut of music past and to come can be approached’ (1992: 289). But the complexity of the acoustic as a physical, summative phenomenon can barely be approximated through non-parametric synthesis models, which relinquish perceptually or physically meaningful controls.

In acoustic recording, the digital means of abstraction are secondary to the surface representations they manifest. When Xenakis foregrounds inscription, the particularities of incorporation that characterise acoustic recording tend to fade from view, obfuscated by artefacts that arise from the inscriptive mechanics.

5. MUSICA ICONOLOGOS

Yasunao Tone’s first CD project, Musica Iconologos (1993), presents an idiosyncratic implementation of nonstandard synthesis that reconfigures the practice of audification into a noise-incurring process. Starting with excerpts from the Shih Ching, the earliest Chinese poetry anthology, Tone matched text characters with photographic images derived from the text’s ancient pictographic forms (Tone 2003).

After the images were digitised, Tone, with the help of engineer Craig Kendall, analysed each image using functions from Optical Music Recognition (OMR) software developed by Ichiro Fujinaga at McGill University. OMR collects projection data by analysing an image as a two-dimensional matrix of pixels, scanning through the x- and y-axes to detect the presence of black or white. A histogram is then generated from the projection data for each axis, as seen in Figure 1 (Kendall 1993).

Figure 1. X/Y projection data from OMR software (Fujinaga 1997: 35).

Waveforms were constructed with the C language program Projector based on combinations of the X and Y projection data, such as X+Y, X-Y, Y-X, and X*Y (Kendall 1993). These combinations represent collections of binary words – bits (zero or one) generated from pixels (white or black).
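
The inner workings of Projector are documented only in the liner notes, so the sketch below is no more than a guess at the general shape of such a procedure: project a black-and-white image onto its two axes and combine the resulting histograms into a short burst of samples. The function names, the normalisation and the stand-in random image are all assumptions of mine.

```python
import numpy as np

def projection_histograms(image):
    """Count black pixels along each axis of a binary image (1 = black, 0 = white)."""
    x_proj = image.sum(axis=0)  # one value per column
    y_proj = image.sum(axis=1)  # one value per row
    return x_proj, y_proj

def projections_to_waveform(x_proj, y_proj, mode='sum'):
    """Combine the two projections into a short burst of samples, normalised to -1..1."""
    n = min(len(x_proj), len(y_proj))
    x, y = x_proj[:n].astype(float), y_proj[:n].astype(float)
    combos = {'sum': x + y, 'x_minus_y': x - y, 'y_minus_x': y - x, 'product': x * y}
    w = combos[mode]
    return 2 * (w - w.min()) / (w.max() - w.min() + 1e-12) - 1

image = (np.random.default_rng(2).random((64, 64)) > 0.5).astype(int)  # stand-in image
burst = projections_to_waveform(*projection_histograms(image))
```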

The resulting 187 sounds – each derived from a single image – averaged only about 20 milliseconds in duration. Kendall’s task as digital editor was to ‘uncover and shape the larger sounds that lay within each short 20 ms burst’ (Kendall 1993). In another form of acoustic salvage, he used common digital signal processing techniques such as time stretching and pitch shifting to reflect the time span and phonetic inflections of the ‘words’ as if they were spoken.

The two tracks that comprise Musica Iconologos are entitled ‘Jiao Liao Fruits’ and ‘Solar Eclipse in October’. Kendall asserts that ‘Tone always remained true to the poem’s structure regardless of his personal impressions of the music’ (1993). The form of Musica Iconologos – its continuity as a string of isolated sound events drawn, like words, from a finite vocabulary – is its one remaining tangible connection to the Shih Ching.

Musica Iconologos is anticommunicative (following the Brünian concept) in the way that it channels and integrates fragments of language into a new system in which they fail to mean what they once meant. Anticommunication was composer Herbert Brün’s view of the role of noise in the reshaping/evolution of language. Anticommunication focuses on relationships between disparate systems of communication and forges links between them that instigate transformative periods whereby noise becomes assimilated by pattern (Brün 2004). Links can be drawn, for example, at a syntactic or structural level.

Analogy: two systems are guided by one structure.

To make an analogy: construct a system in relation to another system such that the constructed system points at a structure which both share. (Brün and Enslin n.d.)

In Musica Iconologos, traditionally communicative systems – the communication of poetry (language), image and sound – are transformed via their digital deconstruction and integration.

5.1. Flickering signifiers

The binary foundation of digital code implies that meaning must be constructed at higher levels of the coding hierarchy, and arbitrarily so, as there is no inherent correlation between high-level parameters and low-level code structures. This altered mode of signification underlying the operation of information and communication technologies is referred to by Hayles as flickering signification. Flickering signifiers are characterised ‘by their tendency toward unexpected metamorphoses, attenuations, and dispersions’ (Hayles 1999: 30).

Information technologies operate within a realm in which the signifier is opened to a rich internal play of difference. In informatics, the signifier can no longer be understood as a single marker, for example an ink mark on a page. Rather it exists as a flexible chain of markers bound together by the arbitrary relations specified by the relevant codes … A signifier on one level becomes a signified on the next-higher level. Precisely because the relation between signifier and signified at each of these levels is arbitrary, it can be changed with a single global command. (1999: 31)

Flickering signifiers expose the undercurrent of abstraction and transformation beneath representations in the virtual stimuli we experience daily. When an analogue signal, for example, is sampled and converted to an abstract code, a sequence of binary words, it becomes susceptible to dramatic transformations in a chain of flickering signification. Parametric models construct layers of translations to correlate some perceptually meaningful variable to a change in the binary sequence. To operate on the code itself – as in non-parametric models – corrupts any kind of perceptually grounded patterning.

Although flickering signifiers specifically apply to information and communication technologies in How We Became Posthuman, Musica Iconologos extends this computational dynamic to compositional method through the intentional foregrounding of the disparity between an information source, its encoded form(s), and its (re-)embodiment in sound.

6. ANACOUSTIC TRACES

This section examines the immanent dimension of nonstandard synthesis – the trace in sound – which exhibits spectromorphological attributes engendered by the time-domain synthesis paradigm. I suggest that the immanent level carries information that can be identified for its contribution to listener impressions of artificial (or digital) origins. I do not claim this analysis to be comprehensive, but rather, a selective contribution to my heuristic framework built around the anacoustic concept.

Of central importance in analysing the anacoustic trace is its relation to the phenomenon of resonance. Acoustic resonance is a marker of the physicality of sound as the direct result of physical input. We observe resonance in the varied vibrational patterns of bodies when they are subject to noisy external forces. The anacoustic trace is marked by its deviation from acoustic resonance patterns, the details of which will be examined in the sections that follow.

6.1. Acousmatic bodies

Resonant bodies are set in motion by incorporated practices. Part of what marks sound constructions as distinct from acoustic recordings is a lack of those ephemeral sonic details that emerge from physical action – details that collectively signify presence.

Central to presence is embodiment, and central to embodiment is the human body. At an extreme, composer Bob Ostertag shares the view of his collaborator Pierre Hébert, who says ‘the measure of a work of art is whether one can sense in it the presence of the artist’s body’ (Ostertag 2002: 11). In the acousmatic scenario, where there is no visual source of embodiment or action, we often listen for traces of the body, tuned in to those specific sound qualities that signify the movements or utterances of a human agent. Katharine Norman provides a uniquely poetic account of this process in her journal-entry response to Petit Jardin by Magali Babin, in which she navigates the cognitive dissonance of hearing a performance without seeing it, calling forth the presence/absence duality: ‘nearly all the sounds imply actions. Someone or something “did” these things’ (Norman 2004: 5). The way that we relate sound to incorporated practices (and by extension, embodiment) has been referred to by Andrew Mead as kinesthetic empathy (Mead 1999). Marc Leman’s synopsis of Broeckx’s theory of expressive meaning formation in music goes further when he discusses kinesthetic processing:

Kinesthetic processing concerns the sensing of musical dynamics. Music is dynamic in the sense that physical properties (frequency, amplitude, and so on) evolve through time and generate in our perception segregated streams and objects that lead, via ideomotor processing, to impressions of movement, gesture, tension, and release of tension. (Leman 2008: 93–4)

Embodiment is also central to composer Denis Smalley’s concept of gesture in acousmatic listening:

A human agent produces spectromorphologies via the motion of gesture, using the sense of touch or an implement to apply energy to a sounding body. A gesture is therefore an energy-motion trajectory which excites the sounding body, creating spectromorphological life …

When we hear spectromorphologies we detect the humanity behind them by deducing gestural activity. (Smalley 1997: 111)

The concepts of gesture and kinesthetic empathy provide ways of understanding presence as a key node in the semiotics of acousmatic listening as they articulate our recognition and affective response to certain patterns related to sounding bodies in the physical world.

Sound constructions, on the other hand, are distinguished by their tenuous relation to embodiment, as Norman again demonstrates in her response to zero degrees by Ryoji Ikeda: ‘He is absent now. Nobody performs, hits a gong, or trails a hand through implicitly substantial sounds. Instead the sound is apparently laid bare and has no aural secrets’ (Norman 2004: 13). This kind of impression can be traced back to the erasure of transient complexity in favour of replication and invariance. There is nothing below the surface; no one behind the acousmatic veil. Norman’s notion that sounds can carry implicit substance is a testament to the ‘front-loaded’ meaning of presence, whereas the crossover to informational pattern is subject to contingencies. Exasperated, she continues, ‘identifying with a click is to become brutally irradiated by sound. No time at all. Quick, get rid of it in favour of records of human presence!’ (2004: 14). In a perceptually dispassionate manner, zero degrees presents starkly digital material at polarised extremes of timbre, frequency, and duration, extending the binary ontology of data to a formal continuity.

Musica Iconologos and other anacoustic works similarly place the listener in a scenario where sound origins may be unrecognisable or at least untraceable to known physical causes. But despite the absence of physical origins, many sound constructions in today’s predominantly electronic-industrial soundscape are associated with the tools (digital or otherwise) used to create them or the environments in which they are typically found. Some general observations can be made about the spectromorphological ramifications of sound construction, which help to identify immanent characteristics of digital audio as an idiom.

6.2. Invariance

Invariance might be used to describe many facets of an informational ontology: invariances manifest in quantisation, in replication, or in the widely applicable but ultimately invariable syntax of MTC. The computer can be conceptualised as a template for sound in the form of an invariant time and frequency grid to be filled with discrete sound particles, or acoustic quanta. Quantum approaches to sonic computing have been comprehensively documented by Roads and defined as microsound (Roads 2001). Nonstandard synthesis represents an extreme approach to time-domain microsound, operating below the level of micro-time at the sample time scale. The perceptual effects of constructive or destructive interventions at this scale are most easily observed in the frequency domain.

6.2.1. Spectral invariance

The property of tonal balance – the distribution of spectral energy across the audible range – is significant in the relation of sounds to artificial modes of production. Nonstandard synthesis naturally generates spectra characterised by (at an extreme) a flat frequency response, meaning a relatively equal (and invariant) distribution of energy among partials. This property deviates from the spectral patterns of acoustic instruments, which are predicated on resonance and the natural concentration of energy around vibrational modes close to the fundamental frequency.

Certainly, one of the affordances of synthesis is the ability to explore and manipulate high register partials that are inaccessible or transient in the acoustic instrumental domain. The difficulty with nonstandard synthesis lies in the formation of meaningful relationships between waveform shapes and timbre. Some correlations (well known to electronic musicians) can be made towards this end, as summarised by Moore:

Waveforms exhibiting impulsive behaviour tend to have spectra that do not decrease with increasing frequency.

Waveforms that tend to have sharp steps tend to have spectra that roll off at the rate of 6 dB per octave.

Waveforms that tend to have sharp corners tend to have spectra that roll off at the rate of 12 dB per octave. (Moore 1990: 95)

SAWDUST generates sharp steps by design and GENDY, sharp corners. Musica Iconologos is characterised by waveforms that exhibit pulse and step-like behaviour (Figure 2) – spectra do not significantly decrease in energy over the audible range (Figure 3).

Figure 2. Waveform segment from ‘Jiao Liao Fruits’.

Figure 3. Average frequency distribution of ‘Jiao Liao Fruits’.
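
These rule-of-thumb correlations can be checked informally. The sketch below is my own, with an arbitrary fundamental chosen to divide the sample rate exactly; it compares the harmonic rolloff of an impulse train, a square wave (sharp steps) and a triangle wave (sharp corners).

```python
import numpy as np

SR, F0, N = 48000, 375, 48000        # F0 divides SR exactly, so every period is whole
phase = (np.arange(N) / 128) % 1.0   # 375 Hz at 48 kHz is a 128-sample period

impulses = (np.arange(N) % 128 == 0) * 1.0   # impulsive behaviour
square = np.where(phase < 0.5, 1.0, -1.0)    # sharp steps
triangle = 4 * np.abs(phase - 0.5) - 1       # sharp corners

def odd_partial_db(x, n_partials=8):
    """dB of the first n odd harmonics relative to the fundamental."""
    spectrum = np.abs(np.fft.rfft(x))
    harmonics = 2 * np.arange(n_partials) + 1   # 1, 3, 5, ...
    mags = spectrum[harmonics * F0]             # N == SR, so bin k is k Hz
    return np.round(20 * np.log10(mags / mags[0]), 1)

# The impulse train stays essentially flat; square-wave partials fall at roughly
# 6 dB per octave; triangle-wave partials fall at roughly 12 dB per octave.
for name, x in [('impulses', impulses), ('square', square), ('triangle', triangle)]:
    print(name, odd_partial_db(x))
```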

6.2.2. Temporal invariance

The temporal invariance of sound constructions can be heard in the mechanical precision of pulse-based computer music such as zero degrees or in the sample-accuracy of time-domain granular synthesis. It is also evident in fixed-wavetable lookup synthesis (i.e., digital oscillators), which generates spectra with invariant morphologies – acoustic quanta are conceived as static spectral units of infinite duration (such as a sinusoidal wavetable looping continuously). Koenig’s SSP, despite its nonstandard operation, was inhibited by the spectral invariance of the fixed-waveform paradigm. SSP segments, upon specification, were iterated periodically without input parameters that might allow dynamic change to successive periods.
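
A minimal fixed-wavetable oscillator makes this invariance plain: once the table is specified, every period is an exact repetition of the last. The table size, lookup method and names below are illustrative assumptions rather than a description of any particular system.

```python
import numpy as np

SR = 44100
table = np.sin(2 * np.pi * np.arange(2048) / 2048)  # one fixed period, specified once

def wavetable_oscillator(freq_hz, duration_s, table=table, sr=SR):
    """Loop a fixed wavetable: every period is an exact repetition of the last."""
    n = int(duration_s * sr)
    phase = (np.arange(n) * freq_hz * len(table) / sr) % len(table)
    return table[phase.astype(int)]   # truncating lookup; no period-to-period change

tone = wavetable_oscillator(220.0, 2.0)
```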

6.2.3. Anechoic spaces

Anacoustic sound constructions are often characterised by an anechoic spatial profile. Anechoic means without reflections or echo. When an acoustic source vibrates, it causes the air molecules surrounding it to vibrate in an analogous manner, expanding outward from the source. When sound arrives at the ears of the listener, it is typically a combination of this direct sound vibration with many time-delayed reflections that have bounced off surfaces within the listener’s space. These reflections constitute the acoustic properties of the space – its resonant and reverberant characteristics. Sound constructions do not typically carry intrinsic attributes that signify diffusion within a physical space, nor the resonant patterns of objects that, as micro spaces, reinforce and cancel different frequencies over time. They are completely ‘dry’. The impact of the anechoic profile might be compared to the threat response elicited in listeners by equally dry, close-miked acoustic recordings, especially of sounds without ambient resonance. A loud, near-field noise, lacking in resonance, signals a threat as it associates with something that is breaking, being no longer contained or in vibrational balance. In many cases, anacoustic composers embrace the starkness, proximity and noise of the anechoic profile, and do not attempt to salvage reflective properties through further processing.

7. SYNTHESIS AND VIRTUAL SEMIOTICS

Hayles’s semiotic square, encompassing her semiotics of virtuality, is a heuristic device used to map the posthuman concept as a literary phenomenon, linking the central dialectics of presence/absence with pattern/randomness. When applied as an analytical framework to the creation and interpretation of acousmatic music, the semiotic square (Figure 4) illuminates relationships between embodied complexities, representational absence, informational patterns and noise (synonymous with randomness in Hayles’s language, that is, the absence of pattern). Importantly, I view the semiotics of virtuality as a creative springboard towards musical expression that considers the computer for both its productive and its reproductive capacity, with an ear towards the extrinsic significance of materials as related to their modes of production and the metaphoric networks to which they give rise.

Figure 4. The semiotics of virtuality (Hayles 1999: 248).

The terms that comprise Hayles’s semiotic square interact dynamically with their partners:

The dialectics can be set in motion by placing presence/absence along the primary axis, with pattern/randomness located along the secondary axis. The relation of the secondary axis to the primary axis is one of exclusion rather than opposition. Pattern/randomness tells a part of the story that cannot be told through presence/absence and vice versa. The diagonal connecting presence and pattern can conveniently be labelled replication, for it points to continuation. An entity that is present continues to be so; a pattern repeating itself across time and space continues to replicate itself. By contrast, the axis connecting absence and randomness [noise] signals disruption. Absence disrupts the illusion of presence, revealing its lack of originary plenitude. Randomness tears holes in pattern, allowing the white noise of the background to pour through. (1999: 248)

Bringing the dynamic interaction between the primary dialectics to the foreground, new synthetic terms emerge (Figure 5):

On the top horizontal, the synthetic term that emerges from the interplay between presence and absence is materiality. I mean the term to refer both to the signifying power of materialities and the materiality of signifying processes. On the left vertical, the interplay between presence and randomness gives rise to mutation. Mutation testifies to the mark that randomness leaves upon presence … On the right vertical, the interplay between absence and pattern can be called, following Jean Baudrillard, hyperreality. (1999: 249)

Figure 5. Semiotic square with synthetic terms (Hayles 1999: 249).

On the concept of hyperreality, Hayles elaborates:

Baudrillard has described the process as a collapse of the distance between signifier and signified, or between an ‘original’ object and its simulacra. The terminus for this train of thought is a simulation that does not merely compete with but actually displaces the original. (1999: 250)

Finally, the bottom horizontal is labelled information, ‘to include both the technical meaning of information and the more general perception that information is a code carried by physical markers but also extracted from them’ (1999: 250).

Anacoustic modes highlight the ontology of information and articulate a distance from material circumstances. In doing so, they treat information as a reified entity in itself, collapsing the top-down structural hierarchy used in parametric coding in favour of a low-level address that corrupts the waveform as a carrier of meaning. Unlike parametric modes, which encode the physicality of sound objects into mechanisms of performative control, the computational techniques of nonstandard synthesis have no ground in embodied human experience. This calls to mind Hayles’s analysis of the science-fiction dynamic of materiality in Galatea 2.2 by Richard Powers (1995), in which she describes the precarious relation between the artificial intelligence Helen and its creator, Rick:

[For Helen,] there is nothing in her embodiment that corresponds to the bodily sensations encoded in human language … To feel estrangement in language, as Rick comes to feel as he works with Helen, is to glimpse what it might be like to be incorporated in a body that finds no image or echo in human inscriptions. (Hayles 1999: 265–6)

Standard synthesis techniques are allied with presence in their parametric links to acoustic properties. They represent these acoustic properties through information, yielding simulacra that can be seen as a play between representational absence and informational pattern.

In Musica Iconologos, Yasunao Tone channels the information of materiality through the digital abstraction of image while, in sound, presenting something closer to the materiality of information as there is no parametric barrier (at least at the production stage) between the sound and its code structure (remember that all sounds were edited to make them ‘acoustically’ viable).

Beyond anacoustic modes, I imagine music that engages with other dimensions of the semiotic square – those equally anchored in presence, which can be co-opted by pattern or transfigured by noise. A voice sings and becomes frozen in time – a static aggregate of overtones – a body representation. Or it is mutated by noise – a physical model extended beyond physical boundaries, or a distorted semblance of language. Certainly, many well-known works in the electroacoustic canon perform these relationships, which is a promising area for further research beyond the scope of this article. Current practices in recording, digital signal processing, and sound field simulation such as ambisonics and wave field synthesis approach the complex multidimensionality of our embodied experiences of sound – functioning as powerful signs of presence in a hyperreal aesthetic space.

8. FINAL THOUGHTS

In digital sound construction, the computer functions as a channel which, through its own unique ontology, demands a posthuman methodology whereby sound exists simultaneously as informational pattern and material trace. Sound becomes a real but malleable substance that can be altered via commands sending chain reactions through a multitude of informational, structural hierarchies. Diverging from the instant and continuous feedback a performer receives from an acoustic instrument, the computer musician must constantly alternate between an embodied experience of sound and its abstract underpinnings in code, tracing connections in a continuous feedback loop until a sufficient model is attained.

Sound construction based on the inscriptive mechanics of digital audio – aggregates of samples strung together from logical operations – is an idiomatic approach to sonic computing which foregrounds the disparity between code-specific and perceptually governed logic. This anacoustic endeavour serves as a reference to the materiality of information – the binary undercurrent flowing beneath the most complex representations and simulations. Combined with acoustic modes of synthesis, the digital medium offers the potential to function both as an idiomatic voice (the materiality of information) and as a window into the physical world (the information of materiality). I find the engagement between these two polarities an interesting prospect in the formation of an aesthetic around what might be called an (an)acoustic discourse.

Footnotes

1 An anacoustic zone, such as the upper region of the atmosphere or space, is unable to support the propagation of sound. The term, as it functions in this writing, is synonymous with ‘soundless’.

2 For convenience, I borrow the definition of acousmatic from Bayle to ‘demarcate music on a fixed medium – representing a wide aesthetic spectrum – from all other contemporary music’ (Bayle 1993: 18).

REFERENCES

Bayle, F. 1993. Musique Acousmatique: Propositions … Positions. Paris: Buchet/Chastel.
Berg, P., Rowe, R. and Theriault, D. 1980. SSP and Sound Description. Computer Music Journal 4(1): 25–35.
Brün, H. n.d. A Manual for SAWDUST, ed. Chandra, Arun. https://sites.evergreen.edu/arunchandra/wp-content/uploads/sites/395/2018/05/sawdust.pdf (accessed 4 July 2019).
Brün, H. 1998. Liner notes to SAWDUST Computer Music Project. Albany, NY: EMF CD 00644.
Brün, H. 2004. When Music Resists Meaning, ed. Chandra, Arun. Middletown, CT: Wesleyan University Press.
Brün, H. and Enslin, M. n.d. Traces Left by Ten Dialogues. www.herbertbrun.org/BrunTexts.html (accessed 2 May 2017).
Cook, P. R. 2011. Sound Synthesis for Auditory Display. In Hermann, T., Hunt, A. and Neuhoff, J. G. (eds.) The Sonification Handbook. Berlin: Logos Verlag, 197–235.
Demers, J. 2010. Listening Through the Noise: The Aesthetics of Experimental Electronic Music. New York: Oxford University Press.
Di Scipio, A. 1998. Compositional Models in Xenakis’s Electroacoustic Music. Perspectives of New Music 36(2): 201–43.
Döbereiner, L. 2011. Models of Constructed Sound: Nonstandard Synthesis as an Aesthetic Perspective. Computer Music Journal 35(3): 28–39.
Floridi, L. 2010. Information: A Very Short Introduction. New York: Oxford University Press.
Fujinaga, I. 1997. Adaptive Optical Music Recognition. PhD dissertation, McGill University, Montreal.
Hayles, N. K. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
Hayles, N. K. 2010. How We Became Posthuman: Ten Years On. An Interview with N. Katherine Hayles. Paragraph 33(3): 318–30.
Hoffman, P. 2000. The New GENDYN Program. Computer Music Journal 24(2): 31–8.
Holtzman, S. R. 1979. An Automated Digital Sound Synthesis Instrument. Computer Music Journal 3(2): 53–61.
Kendall, C. 1993. Liner notes to Musica Iconologos. Yasunao Tone. New York: Lovely Music, Ltd. LCD 3041.
Koenig, G. M. 1978. Composition Processes. www.koenigproject.nl/indexe.htm (accessed 20 May 2017).
Koenig, G. M. 1985. Programmed Music. www.koenigproject.nl/indexe.htm (accessed 21 September 2017).
Leman, M. 2008. Embodied Music Cognition and Mediation Technology. Cambridge, MA: MIT Press.
Luque, S. 2009. The Stochastic Synthesis of Iannis Xenakis. Leonardo Music Journal 19: 77–84.
Mead, A. 1999. Bodily Hearing: Physiological Metaphors and Musical Understanding. Journal of Music Theory 43(1): 1–19.
Moore, F. R. 1990. Elements of Computer Music. Englewood Cliffs, NJ: Prentice-Hall.
Norman, K. 2004. Sounding Art: Eight Literary Excursions through Electronic Music. Aldershot: Ashgate.
Ostertag, B. 2002. Human Bodies, Computer Music. Leonardo Music Journal 12: 11–14.
Powers, R. 1995. Galatea 2.2. New York: Farrar, Straus and Giroux.
Roads, C. 2001. Microsound. Cambridge, MA: MIT Press.
Serra, M.-H. 1993. Stochastic Composition and Stochastic Timbre: GENDY3 by Iannis Xenakis. Perspectives of New Music 31(1): 236–57.
Shannon, C. E. 1948. A Mathematical Theory of Communication. Bell System Technical Journal 27: 379–423, 623–56.
Smalley, D. 1997. Spectromorphology: Explaining Sound-Shapes. Organised Sound 2(2): 107–26.
Tone, Y. 2003. John Cage and Recording. Leonardo Music Journal 13: 11–15.
Xenakis, I. 1992. Formalized Music: Thought and Mathematics in Composition, revised edition. Stuyvesant, NY: Pendragon Press.