1. NOTATING MUSICAL OBJECTS
In electroacoustic music, the question of notation has been at the centre of many experiments. How do we notate the unwritten: should we describe or transcribe? At times, some form of notation has been devised to graphically represent sonic events, and it has also been used to convert them into abstract symbols. In other words, notation can traditionally take the shape of a pictographic approach or a symbolic one. A comparable dichotomy was expressed by Charles Seeger when he discussed two opposite approaches to notation (or ‘music writing’, as he put it), which he defined as prescriptive and descriptive. The former addresses ‘how a specific piece of music shall be made to sound’, while the latter is more like a ‘report of how a specific performance of it actually did sound’ (Seeger 1958: 184). Common musical notation is best suited for prescription; graphic notation has been used for a variety of situations, including the representation of events in electroacoustic music. Today, there is an extensive body of scores involving multiple types of notation. However, we still cannot rely on a conventional graphical notation system: Alan Tormey remarked that, at the Princeton Laptop Orchestra, most music was either created as improvisation or involved some sort of openness, such as aleatoric music; he explained that the cause of this limitation could be found in the ‘lack of any defined and effective paradigms within which to develop and communicate more strictly specific musical ideas’ (Tormey 2011: 1). In his view, lacking any system of conventions other than common musical notation, which was ill-suited to the particular situation of that ensemble, composers were tempted to leave decisions to the performers.
An even deeper question had been raised much earlier, in the late 1930s, by the French poet and thinker Paul Valéry, who coined two new words. One was meant to address what we perceive, regardless of how the object of perception was made; he called this the aesthesic position.Footnote 1
I would constitute a first group, which I would call Aesthesic, and in which I would put all that is related to sensations. (Valéry 1937a: 1311)
The other word invented by Valéry, poietic, aimed at dealing with the production side of things.
Another stack would gather everything concerned with the production of works; and a general idea of the complete human action, from its mental and physiological roots up to its enterprises upon material or individuals, would make it possible to subdivide this second group, which I would call Poetic, or rather Poietic. (Valéry 1937a: 1311)
Valéry mentions elsewhere that he derived this word from the Greek poïein, to make (1937b).
In electroacoustic music, a good deal of research has been undertaken from the aesthesic point of view. Indeed, Pierre Schaeffer built his theory of the ‘musical object’ from the listening experience. He researched the way we perceive and listen, and he invented the concept of reduced listening, which occurs when we lose the awareness of the origin of a sound, of its cause, even of its source. His theory aimed at providing musicians with a terminology and a particular behaviour when selecting and processing sounds, which combine typological classification and morphological description. His typo-morphology, developed in his 1966 book, Traité des objets musicaux, is presented in a detailed manner throughout its 670 pages.Footnote 2
However, his typo-morphology was mostly based on the question of classification, a problem which had haunted him since the early days of musique concrète. He devised a specific class of symbols to notate the characteristics of sounds as an aid to categorising them in a particular typology (Schaeffer 1966: 466ff.). In fact, when Schaeffer first formed a team devoted to the research and production of musique concrète, the Groupe de recherches de musique concrète (GRMC), a staff member, Michèle Henry, was hired for the main purpose of identifying and labelling the sound samples collected by the studio. In practice, Mrs Henry, who later became Michèle Thierry, described the sounds verbally, creating a collection of textual notations. One of her functions was to investigate the possibility of establishing a notation system for musique concrète, an effort which ultimately led nowhere but was felt to be a worthwhile attempt in the early days of this music. In his book À la recherche d’une musique concrète (1952), Pierre Schaeffer not only includes several examples of transcription of musique concrète, but also comments on the efforts at notating sound objects and the problems in doing so.
Schaeffer’s typo-morphology was abstract, complex, symbolic and, in the end, not entirely convincing. Denis Smalley considered the same problem from a different perspective; the theory he proposed, known as spectromorphology, is also based on an aesthesic approach. Can spectromorphology be helpful in composing? What role does it play in performance? Across the Electroacoustic Music Studies Network conferencesFootnote 3 held since 2003, Pierre Schaeffer and spectromorphology are the two topics that have appeared most often. Clearly, they are of interest to musicologists and to composers.
More recently, there has been much research into aesthesic considerations. The UST (unités sémiotiques temporelles), for instance, come to mind (Laboratoire musique et informatique de Marseille 1996), but there are many others. Another question is whether a better understanding of electroacoustic music should be based on aesthesics. François Delalande (2013), Lasse Thoresen (2007, forthcoming) and Stéphane Roy (2003), among others, seem to think so.
2. MUSICOGRAPHIES
A remarkable attempt at inventing notations was realised in 1994 by Dominique Besson for an exhibition in Grenoble (France) devoted to graphics. It emphasised the concept of interactive listening, which had been put forward during a research seminar held by François Delalande at the GRM. Some animations relied on new software, the Acousmographe, developed by Olivier Koechlin and Hugues Vinet. In 1997, a CD-ROM of these notations was produced by the Institut national de l’audiovisuel (INA), with the technical assistance of Muriel Bonfils. It remains to this day a good example of research into representing music which is either not notated or whose notation is not suitable for interactive listening. It ranges from Romanian gypsy music and Japanese shakuhachi to jazz and electroacoustic music. For instance, Figure 1 shows a screen capture of the Romanian piece Ciocîrla, by Taraf of Haîdouks, with Georghe Falaru and Anghel Georghe. The top frame is a video clip of the performance (flute and violin), while the bottom one is a coloured spectrographic analysis displaying the two musical lines (Figure 1).
Figure 1 Les Musicographies. Taraf of Haîdouks, Ciocîrla.
Among the multiple examples, some stand out as paradigms of representation. In particular, the work done on a short movement of François Bayle’s ‘Rosace 5’ from Vibrations composées (1973) demonstrates the potential of representation. This is a rather short piece, in which a small number of musical objects are constantly played with variations in some of their dimensions: sounds from electronic sources and piano sounds are presented throughout the piece with various degrees of transposition, spatial trajectories and inversions.
A first animation followed the natural course of the piece, while the graphical transcriptions were coloured according to their spatial placement: green for right, red for left and brown for centre (Figure 2). The same symbols were then used for a short explanation of the processes involved in the making of the piece (Figures 3 and 4).
Figure 2 Les Musicographies. François Bayle, Rosace 5.
Figure 3 Les Musicographies. François Bayle, Rosace 5. Explanation of terms.

The CD-ROM offered a classification of each basic sound in the form of its graphic transcription, linked to a sound file. The reader was presented with a window in which these sounds were gathered, so that it was possible to freely assemble the sounds to create a personal version of the piece. This had a strong pedagogical impact, as it enabled anyone to play with the musical objects of the piece in a creative fashion. It also made these sounds accessible to anyone who could use this CD-ROM.
Figure 4 Les Musicographies. François Bayle, Rosace 5. Game.
3. ELECTROACOUSTIC MUSIC SPECTRAL REPRESENTATION
The opposite point of view, the poietic, was in fact successfully employed by Karlheinz Stockhausen in the early works he produced after arriving at the NWDR studio in Cologne: his second Electronic Study and, to a lesser degree, his first. Later, other pieces were composed along similar lines: Franco Evangelisti, Incontri di fasce sonore (1957); Włodzimierz Kotoński, Study on One Cymbal Stroke (Etiuda na jedno uderzenie w talerz, 1959); or Ivana Loudová and Miloš Haase, Res humana campanorum canticum (1970). In each case, the score provides enough information to make an exact reproduction of the original, albeit with minor idiosyncrasies related to the production techniques of the time. An extreme example is the score by Bogusław Schaeffer, Symphony – Muzyka Elektroniczna (1964–66) (Schaeffer 1968), a realisation score which enables anyone to produce a version of the piece.
In the early days of musique concrète, when notation was still very much on the minds of Pierre Schaeffer and Pierre Henry, as well as of the young composers temporarily associated with the GRMC, such as Pierre Boulez, a number of musique concrète pieces were accompanied by graphical scores. A notable case is the only tape piece composed by Olivier Messiaen, Timbres-durées (Battier 2008), which I would like to take as an example of the graphical approach to notation at that time. It is a particularly interesting case as the piece, which lasts fifteen minutes, has three different scores, two of which are graphical. The first score uses common musical notation. Figure 5 shows the first two sections out of twenty-four. The rhythmic nature of the piece is illustrated by the various cells, each being applied to a particular sound. The score displays a verbal annotation of the nature of the sounds as well as their code, which refers to the label identifying the tape (Figure 5).
Figure 5 Olivier Messiaen. Timbres-durées. Sections 1 and 2. With permission from INA/GRM.
The second score was probably drawn for the actual realisation of the piece in the GRMC studio. During the process of splicing the tapes, a number of adjustments were made, resulting in the removal of several ‘notes’ or sounds. This score is crucial to further analysis of the piece, as it shows how the careful organisation that appears in the common notation score was modified, resulting in several rhythmic and structural ambiguities (Figure 6). These can only be clarified by comparing the two scores.
Figure 6 O. Messiaen. Timbres-durées. Linear score. Sections 1, 2 and beginning of 3. With permission from INA/GRM.
The last score was made for multi-channel diffusion. At the time of the performance, the studio was experimenting with a new system conceived for the spatial projection of music, the ‘pupitre d’espace’ (space console).Footnote 4 It was thus decided to produce the piece in four channels. Three of them were read off the three-track tape recorder, an invention of Schaeffer and Poullin which was used throughout the 1950s at the studio, and a mono tape recorder was added for the fourth track. The channels were organised as follows: 1: right, 2: left, 3: kinematic, 4: centre and back. The third channel used the gestural control system associated with the pupitre d’espace, which was a coil moved by hand in front of large circular receivers, the effect being that the sounds emitted from this track were freely moved about the space. There is no record of the movements, though, so in this respect, any spatial reconstruction of the piece is left to the imagination (Figure 7).
Figure 7 O. Messiaen. Timbres-durées. Spatial score. Section 1. With permission from INA/GRM.
For the purpose of analysis, it may not be very useful to oppose the aesthesic and the poietic, or, in musical terms, aural analysis and analysis of the production process. An example of this can be found in the notation of musique mixte. In Kontakte by Karlheinz Stockhausen (1958–60), the performers are presented with common musical notation for their own parts, and pictographic notation for the tape. A graphical notation is used to illustrate some features and behaviours of the sound events on the tape (in four channels) and provides cues for the performers. At the same time, it helps the analyst understand to some extent how the sounds were made, and this is reinforced by the description of the various patches used in the making of Kontakte. However, composers, to my knowledge, do not write a score for the benefit of musicologists. The analyst must deal with whatever is available: scores, sketches, patches, as Laura Zattra has eloquently shown (2004, 2011).
If we look at some of Stockhausen’s electronic music scores, such as Kontakte and Telemusik (1966), we mostly see two types of notation. The first is a thorough and painstaking description of the patches created for each section or for each type of sound, in Cologne’s WDR studio for the former and in Tokyo’s NHK studio for the latter. The second is a somewhat free graphical description of the sonic material. In this respect, each piece is quite different. In Kontakte, the overall texture is represented regardless of the channel distribution, while in Telemusik the tracks appear independently as separate staves; some sections at the beginning show up to six tracks, as the Sony prototype tape recorder available in Tokyo offered that number of tracks, while there are only three channels in section 16, which acts as the fulcrum of the piece.
What was, for Stockhausen, the purpose of these notations? Probably not to enable the realisation of a new version of the piece, and surely not to please the musicologists. It seems that the composer needed to document his own production process, as I observed when I worked as his assistant on the making of the digital tracks of Kathinkas Gesang als Luzifers Requiem (1982–83). The notation could also have served as documentation during the production itself. In any case, Stockhausen no longer documented his subsequent pieces in such a manner, although his article on the production process of Kathinkas Gesang at Ircam may prove me wrong (Stockhausen 1985). That article, however, did not present any notation apart from the computer code used to drive the 4X digital processor, but was indeed a thorough documentation of the digital processes involved at Ircam in the realisation of the piece.
4. NOTATION OF TIMBRE
The second class of notation concerns timbre, particularly its evolution over time. Here we can often find a representation of density, texture, pitch or pitch range, inharmonicity and amplitude (if not loudness). This is most useful for performers, especially when a time scale with chronometric values is added.
It can be argued, therefore, that aesthesic notation serves various purposes. Musicologists who deal with electroacoustic music, or with any kind of unwritten music, need to transcribe recordings and, for this, should rely on a notation system. They can choose among several, be they pictorial, symbolic or data-driven (such as the acoustical and psychoacoustical approaches found in Sonic Visualiser’s Vamp plugins, the Acousmographe, EAnalysis, AudioSculpt and BStD, amongst others). Performers are often presented with electroacoustic music scores bearing such notations, in which some prominent features of the sounds are notated. It is not infrequent for composers, while writing a piece, to resort to some sort of graphical sketch for electroacoustic sounds.
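To give a concrete sense of the data-driven option, the short sketch below extracts a few common descriptors (amplitude envelope, an estimate of fundamental frequency and spectral flatness as a rough noisiness index) from a recording, producing the kind of time-indexed raw material a transcriber might then annotate. It is a minimal sketch, assuming Python with the librosa and numpy libraries and a placeholder file name; it does not reproduce the workflow of any of the tools named above.

```python
# Minimal sketch of a data-driven starting point for transcription:
# a handful of descriptors computed over time from a recording.
# Assumes librosa and numpy are installed; 'extract.wav' is a placeholder file name.
import numpy as np
import librosa

y, sr = librosa.load("extract.wav", sr=None, mono=True)
hop = 512

rms = librosa.feature.rms(y=y, hop_length=hop)[0]                     # amplitude envelope
flatness = librosa.feature.spectral_flatness(y=y, hop_length=hop)[0]  # ~1 = noise-like, ~0 = tonal
f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=2000, sr=sr, hop_length=hop)  # pitch estimate
times = librosa.times_like(rms, sr=sr, hop_length=hop)

# Print a coarse, time-indexed summary that could serve as a basis for annotation.
for t, a, fl, p, v in zip(times[::20], rms[::20], flatness[::20], f0[::20], voiced[::20]):
    pitch = f"{p:7.1f} Hz" if v and not np.isnan(p) else "  (no pitch)"
    print(f"{t:6.2f} s  amp={a:.3f}  noisiness={fl:.2f}  {pitch}")
```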
There are some notable efforts to devise consistent and comprehensive reference systems of aesthesic notation. Some are symbolic, such as Lasse Thoresen’s; others are quasi-algebraic, such as that of Brian Fennelly (1967, 1968), with its codes, subscripts and superscripts, or functionalist, such as the system developed in Quebec by Stéphane Roy.
It is useful to study them to see how they deal with timbre. However, most of them rely on Schaeffer’s typo-morphology, or variations of it such as the categories of spectromorphology, which implies that they are more concerned with how we perceive sounds according to certain predetermined patterns and, thus, belong to the aesthesic domain.
In this respect, it is understandable that most notation attempts are perceptually based. Especially in musique mixte performance, where instrumentalists must converse with invisible partners such as electronic and digital systems, aesthesic notation has an important role to play. Why, then, has no general convention yet emerged? One can point to the evolving nature of the field, but that seems a feeble explanation, as the history of the domain spans well over sixty years. It can also be observed that the existing examples of such notation are often either too complex or too simple.
5. DEVELOPMENTS
Important steps have recently been taken and are constantly being developed. I will cite just two. One is EAnalysis, a program written by Pierre Couprie for the MTI Research Centre at De Montfort University, Leicester, UK (Couprie and Malt 2014). Couprie, also the developer of iAnalyse, has been studying multimodal analysis of music, including electroacoustic music, for over ten years, and EAnalysis is particularly well suited to a number of types of music. Another project, BStD (Brightness, Standard Deviation), is led at Ircam by Mikhail Malt. It is based on the representation of fine-grained analyses of multiple acoustical properties and leads to interesting types of notation (Malt and Jourdan 2011).
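To illustrate what such descriptor-based representations can look like, the sketch below plots a brightness curve (spectral centroid) surrounded by a band of one spectral standard deviation (spectral bandwidth). It is offered in the spirit of, but not as a reproduction of, the BStD display described by Malt and Jourdan; a minimal sketch assuming Python with librosa and matplotlib, and a placeholder file name.

```python
# Sketch of a brightness / standard-deviation style timbral representation:
# spectral centroid (brightness) with a +/- spectral bandwidth band around it.
# An illustration in the spirit of BStD, not Malt and Jourdan's implementation.
import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load("piece.wav", sr=None, mono=True)   # placeholder file name
hop = 512

centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]  # "brightness"
spread = librosa.feature.spectral_bandwidth(y=y, sr=sr, hop_length=hop)[0]   # spectral standard deviation
times = librosa.times_like(centroid, sr=sr, hop_length=hop)

fig, ax = plt.subplots(figsize=(10, 4))
ax.fill_between(times, centroid - spread, centroid + spread,
                alpha=0.3, label="centroid ± bandwidth")
ax.plot(times, centroid, linewidth=1.0, label="spectral centroid")
ax.set_xlabel("time (s)")
ax.set_ylabel("frequency (Hz)")
ax.set_title("Descriptor-based timbral representation (sketch)")
ax.legend()
plt.tight_layout()
plt.show()
```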
As of this writing, several initiatives have recently been launched. One is the result of a Malaysian Government Fundamental Research Grant to explore the possibility of developing a form of timbral musical notation based on the use of spectrography. It is headed by Andrew Blackburn at the Universiti Pendidikan Sultan Idris (Sultan Idris Education University) in Tanjong Malim, Perak, Malaysia, and the research group is composed of ethnomusicologists, composers and international researchers. The benefits of this effort should be shared across the fields represented by the associated researchers, which clearly shows the breadth of impact expected for various types of studies and endeavours.
The other project, which appeared at about the same time, focuses on music notation and representation. This initiative was started in France by researchers belonging to several organisations (University of Paris Sorbonne, Institute of Research in Musicology – UMR8223, Ircam, Grame and others) and aims to gather an international group to collaborate on this topic. Here too, the scholars and researchers come from fields as different as musicology, contemporary composition, electroacoustic music, ethnomusicology and computer science.Footnote 5
While the theme of notating music that did not, in the past, necessarily require such an approach has surfaced only episodically, it appears that we are now at a turning point. The development of software instruments, and of their relations with performers and gestural controls, has made it more urgent to think about representing data, interaction and control, whether through description, transcription or notation.
New forms of notation have appeared to address this new situation, aiming to reflect the changing rapport between performers and technology. Interactive systems and the data-driven and gestural control of digital systems offer performers different types of action, which may not rely on standard notation. There has been quite a bit of experimentation in real-time notation, sometimes called ‘screen scores’ (Hope and Vickery 2011; Vickery 2012). Scores can, for instance, be mediated by the computer, which computes and displays the score in real time; the computer itself can react to input from the performers and even from the audience. In fact, Gerhard Winkler contends that the real-time scores generated by his dynamic system can be projected on a large screen for the benefit of the performers and the audience, a set-up which promotes a better understanding of what is happening on stage (Winkler 2010). This creates a situation not unlike the score of Mauricio Kagel’s Prima Vista (1962–64),Footnote 6 a complex work in which any number of performers are divided into two groups, each reacting to the projection of slides triggered by the other group as they appear on the screen.
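A toy version of such a reactive ‘screen score’ can be sketched in a few lines: the fragment below listens to a microphone and, once per second, displays a playing instruction chosen according to the incoming level. It is a drastic simplification of systems such as Winkler’s, a minimal sketch assuming Python with the sounddevice and numpy libraries and a working audio input; the instruction texts are invented for the example.

```python
# Toy real-time "screen score": maps the level of a live input to a displayed
# playing instruction, updated once per second. A drastic simplification of
# actual real-time notation systems; assumes sounddevice and a microphone.
import numpy as np
import sounddevice as sd

level = 0.0  # most recent RMS level, updated by the audio callback

def callback(indata, frames, time, status):
    global level
    level = float(np.sqrt(np.mean(indata ** 2)))

def instruction(rms):
    # Very crude mapping from input level to a verbal instruction (hypothetical texts).
    if rms < 0.01:
        return "tacet - wait"
    elif rms < 0.05:
        return "sustain a quiet, granular texture"
    elif rms < 0.2:
        return "answer with short, pitched gestures"
    return "tutti - dense, noise-based material"

with sd.InputStream(channels=1, samplerate=44100, callback=callback):
    for _ in range(30):              # run for about 30 seconds
        print(f"score> {instruction(level)}   (input level {level:.3f})")
        sd.sleep(1000)               # milliseconds
```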
Such systems also find their way into network performances, where the performers are located in various places around the world. In telematic concerts, while participants rely on audio and visual cues as in any performance, the need to intervene directly in the notation, in the form of annotations, is often felt. Hence, new forms of notation are invented, though they may still include conventional sets of symbols. Georg Hajdu and Nick Didkovsky introduced MaxScore in 2008 (Hajdu and Didkovsky 2009). The software combines graphical and common musical notation. As telematic performances develop, we will certainly see the problems of interactive notation tackled increasingly often.
6. OUTLOOK
The question of notating performance music was raised with particular intensity during the second half of the twentieth century. The issue then was how to handle the innovative musical material with which composers were experimenting, while giving performers sufficient means to play the piece. Some composers wanted to go as far as they possibly could, and performers often followed them by finding new approaches to playing their instruments. As David Behrman put it:
Traditional notation has been abandoned in so much of the last decade’s music that players are no longer shocked by the prospect of tackling a new set of rules and symbols every time they approach a new composition. Learning a new piece can be like learning a new game or a new grammar, and first rehearsals are often taken up by discussion about the rules – about ‘how’ to play rather than ‘how well’ (which must be put off until later). (Behrman 1976: 74)
What Behrman alludes to is now considered a thing of the past, a pre-digital era in which computers were seldom put to use to solve notation problems. Furthermore, a culture of new modes of playing has been instilled in composers and performers, and pages of lengthy explanations rarely appear these days at the beginning of musical scores. What the new technology has brought about are novel performance situations, including telematic performances and interactive systems. The problem described by Alan Tormey, which led him to give up on actual composition because existing notation tools could not address the reality of a network performance, has to disappear. Another aspect of notation has to do with music analysis, and several authors mentioned in this article have tried to define the problems and offer solutions. Whether for music study or performance, new forms of notation have become a major issue facing electroacoustic music.