
Generative Music for Live Performance: Experiences with real-time notation

Published online by Cambridge University Press:  13 November 2014

Arne Eigenfeldt
School for the Contemporary Arts, Simon Fraser University, Vancouver, Canada

Abstract

Notation is the traditional method for composers to specify detailed relationships between musical events. However, the conventions under which the tradition evolved – controlled relationships between two or more human performers – were intended for situations apart from those found in electroacoustic music. Many composers of electroacoustic music have adopted the tradition for mixed media works that use live performers, and new customs have appeared that address issues in coordinating performers with electroacoustic elements. The author presents generative music as one method of avoiding the fixedness of tape music: coupled with real-time notation for live performers, generative music is described as a continuation of research into expressive performance within electroacoustic music by incorporating instrumentalists rather than synthetic output. Real-time score generation is described as a final goal of a generative system, and two recent works are presented as examples of the difficulties of real-time notation.

© Cambridge University Press 2014

1. INTRODUCTION

Real-time notation is already an established, albeit experimental, tradition (Hajdu and Didkovsky 2009). The potential for the immediate generation of performance instructions suggests the creation of a unique musical experience with each performance, coupling the mercurial nature of improvisation with the more contemplative aspects of composition. Composers, adept at organising multiple and simultaneous gestures in time, can now explore more immediate creative methods involving live musicians, many of whom are more comfortable reading directions from a score. However, the complexities of musical notation (Cole 1974; Stone 1980), which have evolved to high degrees of expressivity within the Western paradigms of pitch and pulse, have restricted the type of directions that can be generated live. Despite this, the possibility of generating musical notation during performance now exists, thanks to tools that are increasingly readily available (Freeman and Colella 2010).

The exploration of real-time notation has been approached from various viewpoints: collaborative paradigms for musical creativity (Freeman 2010), controlled improvisation (Gutknecht, Clay and Frey 2005), open-form scores (Kim-Boyle 2006), and composer–performer interactions (McClelland and Alcorn 2008). My own work is based within generative music, and my more recent explorations of real-time notation can be seen as a final consequence of a longer process of musical metacreation (Whitelaw 2004), involving the use of intelligent musical systems. Like Hajdu (2005, 2007), Winkler (2004) and others, I see notation for live performers as offering the opportunity to combine the expressiveness of live musicians with the excitement of generative music, two formerly disparate areas.

2. NOTATION IN ELECTROACOUSTIC MUSIC

Western musical notation allows for the precise control of complex interactions between musical events. Just as composing acousmatic music involves creating complex and permanent gestures – and determining relationships between these gestures – notation allows for a similar design and control over the performance of acoustic material. A major difference between notated acoustic music and electroacoustic music is the notion that the former is a symbolic representation – a set of instructions to performers regarding the desired sounds to be made – while the latter allows the composer to work with the sounds directly. These symbolic representations define relationships of time (including both onset and duration), frequency and even (to a limited extent) timbre through descriptions of playing style (e.g. ponticello for stringed instruments). As has been the case for centuries, highly trained specialists have developed their craft so as to interpret these instructions with a high degree of accuracy and reproducibility, yet coupled with an elusive notion of ‘expressivity’ (Sloboda 1996).

The attraction of live performance within electroacoustic music has long been discussed (Emmerson 1991; Appleton 1999; McNutt 2003), including the unique relationship between performer and audience (Putman 1990). Live performers offer electroacoustic composers new possibilities:

skilled live performers provide a level of musicality and virtuosity that is entirely different from what is possible with electroacoustic means, and therefore can be regarded as complementary. The live performer provides not merely a visual focus for the audience, but can act as a ‘persona’ in their interaction with the soundtrack; this can lead to a sense of theatricality and drama. Lastly, an interesting timbral dialogue can be established between the performer and soundtrack, particularly if the sound sources used for the soundtrack are derived from the performer, or other related sounds. (Truax 2014)

For many of these musicians, traditional Western musical notation remains the most direct method of communication: it is a near universal language that most performers within the Western tradition understand. A secondary benefit of using notation with such musicians is a practical one: trained interpreters can reproduce the complex directions accurately with only a limited amount of rehearsal time.

Electroacoustic music has a long history of mixing electroacoustic elements with live instrumental performance; an important early work within all of electroacoustic music is Maderna’s 1952 composition Musica su due dimensioni, for tape and flautist, the first such mixed work. This medium allows for the relationships between acoustic events to be prescribed, and, to a lesser extent, relationships between the acoustic and the electroacoustic; however, until recently, only the acoustic performer’s events have been flexible.

One pioneer of both electroacoustic and computer music, Barry Truax, has produced mixed works throughout his entire career, with over half of his catalogue comprising works for live performer and soundtrack. Truax composes the notation after the fixed soundtrack is complete, preferring to ‘make it fit the pre-recorded part’ (Truax 2014, personal communication). Truax also achieves a high degree of integration between the live instrumental part and the soundtrack, as the melodic material is often derived from the ‘inner melodies’ he later discovers and transcribes from within the soundtrack.

Notated examples from Truax’s mixed works provide methods for allowing the performer some freedom, while maintaining a certain control over aspects of time and frequency. Figure 1 demonstrates his use of defined rhythmic units within an overall free rhythm through the use of free metre: by specifying exact rhythmic values, Truax enforces relationships between events, while allowing some liberty in overall phrasing.

Figure 1 Excerpt from page 1 of the bass oboe part to Barry Truax’s Inside (1995), demonstrating fixed pitches in free metre. Used by permission of the composer.

Truax most often foregoes using a time signature, which provides the larger rhythmical groupings and allows the musician(s) to segment music (see Section 4); as such, he maintains complexity and precision at the microlevel (in terms of acoustic music) while allowing for flexibility at the phrase level. This micro-complexity can also provide the performer with notation that requires virtuosity in execution, a desirable aspect for many performers.

In some cases, the interaction between performer and soundtrack is quite strict, and Truax transcribes the fixed tape part in detail. Performers are usually adept at this type of interaction through years of practice interacting with other musicians; although, as previously mentioned, the flexibility in performance here remains with the live musician.

Truax’s use of free metre is adapted when using more than one live performer. For example, in Twin Souls (1997), for chamber choir and tape, the ensemble must stay aligned rhythmically: as such, Truax maintains a consistent pulse (of 72 beats per minute) and a subdivision of that pulse (triple, modulating to duple in the first movement), while the metre remains unstated, suggested only by dotted barlines. Although stopwatch timings are indicated in the score, few tape notations are provided, other than textual descriptions.

Another method of retaining notation’s specificity while allowing performance freedom is through the use of complex rhythms that obfuscate the pulse, rather than enforcing it. Truax’s use of unison rhythm in Twin Souls emphasises a pulse, although his avoidance of repetitive rhythms loosens this notion somewhat. Figure 2 provides an example for three percussionists and soundtrack, in which a pulse is useful in visually coordinating the three performers to a vertical grid; however, this is not necessarily heard in the performance. As a result, the intricate and non-metrical performance by the live musicians reflects the non-metrical complexity within the soundtrack.

Figure 2 Excerpt from page 4 of Arne Eigenfeldt’s Les Grandes Soleils Rouges (1992), demonstrating inter-part relationships requiring a metric grid.

2.1. The relationship between performer and composition

The previous examples are methods employed by many electroacoustic composers to address the issue of the relationship between a live performer’s potential expressiveness and tape’s fixedness. A different approach has been an attempt to make the electroacoustic part more pliable and responsive to interaction with human performers through the concept of score-following. In this research, effort is made to automatically align the live performance of musicians with a pre-conceived score, which itself is usually in symbolic format. Dannenberg’s work in this area is an early example of score-following (Dannenberg 1984), as is that of Vercoe and Puckette (1985). Unfortunately, space does not permit a detailed description of this concept; Cont gives an excellent history of this work and its artistic use (Cont 2011).

While score-following is one method to align a fixed score with a potentially expressive live performance, other options have been pursued by composers as well. For example, Keith Hamel has created interactive systems in which the electroacoustic response to a fixed score is itself variable, by allowing generative MaxMSP patches to be triggered as part of the score-following (Hamel 2006). Another approach is allowing greater performer freedom through the use of graphic scores (Kim-Boyle 2005, 2006), or utilising structured improvisation on the part of the performer. If the electroacoustic aspect is responsive to an improvising performer, this would be considered within the paradigm explored by interactive systems such as those of Chadabe (1984), Rowe (1993), Lewis (2000) and Pachet (2004), among others.

A further relationship between score and performer is possible: the score is fixed (but varies between performances), and the performance is not improvised, but made expressive through live performance. Such a relationship becomes possible through generative music, when live performers read real-time notation (see Table 1).

Table 1 Potential relationships between score and performer

3. NOTATION AS AN EXTENSION OF GENERATIVE MUSIC

More mercurial aspects of music-making within electroacoustic music have a history as long as the genre itself, dating back to at least the 1930s with John Cage’s Imaginary Landscape #1 (1939). Digital technologies have allowed composers to explore multi-gestural improvisation in ways that bridge performance and composition, avenues that are impossible in acoustic music (Vaughan 1994; Garnett 2001; Dean 2003). Generative music, a branch of generative art, offers a potential for composers to explore more extemporaneous composition.

Philip Galanter defines generative art as ‘any art practice where the artist uses a system … which is set into motion with some degree of autonomy contributing to or resulting in a completed work of art’ (Galanter 2003: 4). Likewise, generative music can be understood as music created by a process – for example, a computer program – that operates on its own to generate entire compositions, which change with each run of the system.

Generative music can be considered a branch of algorithmic music, which is music created by a consistent set of instructions, or algorithms (Ariza 2005). Whereas algorithmic music does not require the immediate acceptance of the process – allowing the user to select from multiple runs of the system – generative music is most often created directly in performance, thereby removing any possible introspection and rejection of outcomes by the artist; as such, it implies the added pressure of ‘guaranteed results’ that are not ‘cherry-picked’ from a series of outputs. Generative music does borrow many aspects from algorithmic music, including the use of mathematical models – for example, stochastic processes and Markov chains (Ames 1989) – statistical approaches that learn from a model or corpus (Cope 1991), and evolutionary methods (Romero, Machado, Santos and Cardoso 2003). However, while an algorithmic composition may incorporate aspects of these methods, generative music assumes a work entirely created by process, what Galanter considers the ‘completed work of art’ (Galanter 2003: 4).
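
To make the Markov example concrete, the following Python sketch (an illustrative toy, not drawn from any of the systems discussed here) builds a first-order transition table from a short training melody and samples a new phrase from it:

```python
import random

def build_markov(pitches):
    """Collect first-order transition counts from a training melody (MIDI note numbers)."""
    table = {}
    for a, b in zip(pitches, pitches[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length):
    """Sample a new phrase by a random walk over the transition table."""
    phrase = [start]
    for _ in range(length - 1):
        choices = table.get(phrase[-1])
        if not choices:          # dead end: restart from the opening pitch
            choices = [start]
        phrase.append(random.choice(choices))
    return phrase

corpus = [60, 62, 64, 62, 60, 67, 65, 64, 62, 60]   # a toy training melody
print(generate(build_markov(corpus), start=60, length=12))
```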

The application of digital technologies to the creation of both algorithmic and generative music began in 1956 with Hiller’s Illiac Suite, the first use of a computer in music as well as the first computer-composed composition (Hiller 1981). The advent of affordable personal computers and digital synthesisers in the 1980s proved to be a boon to generative music; coupled with programming environments such as Pd (http://puredata.info), MaxMSP (http://cycling74.com) or SuperCollider (www.audiosynth.com), composers can now immediately hear their process-based ideas. Indeed, it is the immediacy of such tools that has created a burgeoning interest in generative music: code a process, listen to it right away, and tweak it to taste.

4. INHERENT DIFFICULTIES IN GENERATING NOTATION

Due to its symbolic nature, note-based music, with its straightforward and discrete representations of pitch and time, has been one of the first art forms to explore generative processes in great depth (see Cope 1991). However, a dilemma of sorts is presented to the generative music composer: the output of their systems is most often note-based, and while commercial synthesisers are easily capable of creating note-based audio, these sounds are often pale representations of their highly complex acoustic models (Risset and Mathews 1969; Grey and Moorer 1977). In order to utilise the full timbral potential of actual acoustic instruments, combined with the expressive potential of human performance, the generative music must be notated for the performer, thereby taking it out of real time and losing its distinction from algorithmic music. As such, generative music composers have looked for ways to produce notation for musicians immediately in performance; this has proven to be a difficult task, for several reasons, which follow.

4.1. Specificity of the language

Although a great degree of complexity is possible within musical notation, performers are capable of interpreting some instructions, but not others. For example, the tempo canons of Conlon Nancarrow are straightforward in their conception – simple monophonic melodies that progress at independent rates until they arrive at a pre-determined convergence point – but are unplayable by humans untrained in performing multiple tempi (Gann 2006).
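
The arithmetic behind such a convergence point is simple, even if its performance is not. A small sketch, using invented tempi, computes how much later the faster of two voices must enter so that both attack a chosen note simultaneously:

```python
# Two voices play the same melody of equal note values at tempi of 60 and 75 bpm.
# If they are to converge (attack together) on note 21, the faster voice must
# enter later by exactly the accumulated time difference up to that note.
slow_bpm, fast_bpm, converge_note = 60, 75, 21
d_slow, d_fast = 60.0 / slow_bpm, 60.0 / fast_bpm       # seconds per note
delay = (converge_note - 1) * (d_slow - d_fast)         # late entry of the fast voice
slow_onsets = [i * d_slow for i in range(converge_note)]
fast_onsets = [delay + i * d_fast for i in range(converge_note)]
print(f"fast voice enters {delay:.2f}s late; both attack note {converge_note} at "
      f"{slow_onsets[-1]:.2f}s and {fast_onsets[-1]:.2f}s")
```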

This tradition of a single pulse within a musical work is one main distinction between electroacoustic music and acoustic music that is considered ‘on the grid’. Standard gestures in live electroacoustic music, such as triggering independent gestures on the fly, are only possible in notation if the music isn’t conceived or notated in score form; as a result, a great deal of effort must be made to relate individual parts to the grid assumed within a score: somewhat ironically, a great deal of contemporary music involves rhythmic complexity that is an effort to remove itself from this grid, at least aurally.

Furthermore, many algorithmic procedures, such as altering all durations in a gesture by a linear amount, could require complex changes in notation.
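
A minimal sketch of the problem: scaling durations is a one-line operation on numbers, but the resulting values rarely correspond to single note symbols (the gesture and scaling factor below are illustrative):

```python
from fractions import Fraction

# Durations of a gesture expressed in crotchet (quarter-note) units.
gesture = [Fraction(1), Fraction(1, 2), Fraction(1, 2), Fraction(1, 4)]

# Stretching by a linear factor is trivial as data...
stretched = [d * Fraction(5, 4) for d in gesture]
print([str(d) for d in stretched])   # ['5/4', '5/8', '5/8', '5/16']

# ...but 5/8 of a crotchet has no single symbol: it must be written as a
# semiquaver tied to a dotted semiquaver, or inside a tuplet bracket --
# a structural change to the notation, not merely a change of numbers.
```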

4.2. Readability

Composers may conceive music in ways that are outside notational traditions, but are forced to translate their ideas into representations that are readable by musicians; in many cases, music that sounds natural is often notated in an extremely complex fashion (see footnote 1).

Conventions have arisen within notation that are in place for the benefit of the performer, rather than the composer. For example, ‘the bar’ is essentially a method of parsing the symbols into readable and understandable segments. Although certain musical traditions evolved around this notion (e.g. emphasising certain beats within the bar) contemporary art-music composers have discarded most of these conventions. Forcing gestures into these boxes can be unnecessarily difficult, and may alter the gestures themselves.
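
The following sketch (hypothetical, in Python) illustrates how forcing a free gesture into such boxes alters its notation: any duration that crosses a barline must be split into tied segments, changing how the gesture appears on the page:

```python
from fractions import Fraction

def pack_into_bars(durations, bar_length=Fraction(4)):
    """Force a free gesture into bars of fixed length (crotchet units): any note
    that crosses a barline is split into two tied segments."""
    bars, current, fill = [], [], Fraction(0)
    for d in durations:
        while d > 0:
            room = bar_length - fill
            seg = min(d, room)
            tied = d > room                  # this note spills into the next bar
            current.append((seg, tied))
            fill += seg
            d -= seg
            if fill == bar_length:           # barline reached: start a new bar
                bars.append(current)
                current, fill = [], Fraction(0)
    if current:
        bars.append(current)
    return bars

# A 3-beat note starting on beat 3 of a 4/4 bar must be split into 2 + 1
# and tied across the barline.
gesture = [Fraction(2), Fraction(3), Fraction(3)]
print(pack_into_bars(gesture))
```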

4.3. Performability

Reading music is a developed skill, especially reading it on the spot (‘sight-reading’). Some musicians (e.g. those working in recording studios for film and television production) are adept at it, and can sight-read extremely well. There is, however, a limit to what performers can be expected to interpret without prior knowledge. As a result, composers of generative music who have explored live notation are often forced to simplify their music for these practical reasons (see Brown 2005).

5. REAL-TIME NOTATION FOR HUMAN PERFORMANCE

For many of these reasons, live score generation has tended to favour graphic scores, which allow for freer interpretation by the performer. Until recently, composers interested in exploring this paradigm were forced to develop their own tools. The following is by no means an exhaustive list of examples of live notation: Hajdu and Didkovsky (2009) give a more thorough overview of the history of music notation in network environments.

McAllister, Alcorn and Strain (2004) describe a work in which audience members drew on wireless devices; these drawings were then displayed on monitors for performers to interpret. Winkler (2004) offers a theory of real-time scores, and describes a work that involved projecting a combination of musical symbols, text instructions and graphics open to interpretation by performers. Kim-Boyle (2005) used images of existing piano music, transforming and manipulating the images themselves, and presenting the new images as graphic scores to a pianist reading from a computer screen. Wulfson, Barrett and Winter (2007) developed LiveScore, which displays musical notation in proportional representation.

All of these are examples of graphic notation that suggest performative actions by musicians; generating explicit directions has been more elusive. Baird (2005) used Python to generate score instructions, and LilyPond (Nienhuys and Nieuwenhuizen 2003) to create PDFs of the music, which were sent via a network to musicians for display on computer screens. As LilyPond does not run in real time, Baird used the time during which the musicians performed a given page to render a new one. Additionally, Baird used audience interactions to influence the generation of new material. Freeman has worked extensively with live score generation, particularly with audience interaction and participation, including web-based collaboration (Freeman 2010). Freeman and Colella (2010) provide an excellent overview of the available tools for real-time music notation, while Freeman (2008) gives a description of his own motivations and challenges in real-time score generation.
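
In the spirit of Baird’s pipeline – though this is only a rough sketch, not his actual code – a page of notation can be produced by writing a LilyPond file from Python and invoking the external lilypond renderer, accepting that rendering happens outside real time:

```python
import random
import subprocess

PITCHES = ["c'", "d'", "e'", "f'", "g'", "a'", "b'"]

def random_bar(n_notes=4):
    """One 4/4 bar of crotchets with randomly chosen pitches (LilyPond syntax)."""
    return " ".join(random.choice(PITCHES) + "4" for _ in range(n_notes))

def render_page(n_bars=8, basename="page"):
    """Write a minimal LilyPond file and render it to PDF with the lilypond binary.
    Rendering is not real time; in such a setup the previous page is performed
    while the next one renders."""
    music = " | ".join(random_bar() for _ in range(n_bars))
    with open(basename + ".ly", "w") as f:
        f.write('\\version "2.24.0"\n{ ' + music + ' }\n')
    subprocess.run(["lilypond", basename + ".ly"], check=True)

render_page()
```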

Brown (2005) describes a generative system whose final output is displayed to performers on computer screens, and explains the trade-offs made within the algorithmic design in order to produce music that is readable by performers on the spot. An image of the example music is provided, suggesting music of quite simple design.

Hajdu’s Quintet.net (Hajdu 2005) is a mature platform that allows a networked ensemble to share notation through MaxScore (Didkovsky and Hajdu 2008). MaxScore is one of the first powerful software packages readily available for the generation of standard Western musical notation, allowing for complexities almost on the level of offline notational environments such as Finale (www.finalemusic.com).

The first use of MaxScore for live score generation was Hajdu’s Ivresse ’84 for violin and four laptops (Hajdu 2007), and it continues to be used by composers such as Nicolas Collins (Roomtone Variations, 2011), Georg Hajdu (Swan Song, 2001), Peter Votava (Pixelache, 2010) and Fredrik Olofsson (The choir, the chaos, 2009). MaxScore is a bridge between Didkovsky’s Java Music Specification Language (Didkovsky and Burk 2001), which is itself a tool for algorithmic composition, and MaxMSP, a visual programming environment for musicians that has been widely adopted due to its ease of use. Recent additions to MaxScore include an extension for its use within Ableton Live (www.ableton.com) – a popular sequencer used by many electroacoustic composers – thereby allowing musicians less familiar with symbolic representation to generate scores to accompany their electroacoustic music.

6. EXPERIMENTS IN REAL-TIME NOTATION

Much of my own work attempts to bridge those aspects that can be created efficiently within electroacoustic music (e.g. the creation of complex gestures) with those that are more easily accomplished through human performance (e.g. expressivity). For this reason, many of my earlier real-time systems were designed to generate complex and ever-varying background material over which a human performer would improvise in the foreground (Eigenfeldt 1989). While my systems have increased in complexity since then, expressivity, whether through melodic or rhythmic means, remains extremely difficult to generate digitally, and thus I still utilise human performers for this very reason.

Many real-time systems have used the paradigm of improvisation, rather than composition, as a model (Lewis 2000), and can be considered reactive, listening systems (Chadabe 1984); my systems have always been compositional, and could be considered as being based in ‘thinking’ rather than ‘listening’ (Eigenfeldt, Bown, Pasquier and Martin 2013). Within performance, these systems make autonomous high-level musical decisions, and these internal choices often need to be communicated to the performer; as such, it has been necessary to provide information about the current musical environment, as well as upcoming material. As this information varies with every run of the generative system, such notation needs to be generated in real time.

6.1. Coming Together: Notomoton

In 2011, percussionist Daniel Tones and I created Coming Together: Notomoton (see Movie example 1), for live percussionist, synthesisers and robotic instruments: Ajay Kapur’s MahaDevibot and the NotomotoN (Kapur, Darling, Murphy, Hochenbaum, Diakopoulos and Trimpin 2011). The use of robotic instruments in this work, and many of my subsequent works, is an attempt to overcome what could be considered the ‘Uncanny Valley’ (Gee, Browne and Kawamura 2005) of electroacoustic music: rather than attempting to produce more realistic synthetic representations of acoustic instruments – which invariably led to greater criticism of their failure to reproduce the acoustic instruments accurately – I have begun to use acoustic instruments under computer control.

For each performance, the system generated a large-scale form made up of several sections differentiated by density, tempo and tala: a specific time signature with a repeating emphasised beat pattern. In order to fully interact with the system at a musical level, the performer was provided with a live overview of the entire musical structure (see Figure 3), as well as the current beat (acting as a kind of conductor). This was my first attempt to provide the live musician with a score, albeit one that did not provide performance instructions, but indications of the generated environment.

Figure 3 Performer display for Coming Together: Notomoton. Five sections have been generated, as displayed by the density, tempo and tala; note density will be highest in the second section, tempo will be slowest in the third section and tala will be longest in the final two sections. Tala is displayed in the circles at bottom (2+2+3+2+2). Progress through the entire composition is displayed at top: it is currently about half way through the third section.
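
A sketch of the kind of top-level form generation just described – sections differentiated by density, tempo and tala – might look as follows (the parameter ranges are invented, not those of Coming Together: Notomoton):

```python
import random

def random_tala():
    """An additive beat pattern of 2s and 3s, e.g. (2, 2, 3, 2, 2)."""
    return tuple(random.choice((2, 3)) for _ in range(random.randint(3, 6)))

def generate_form(n_sections=5):
    """One dict per section: the performer display summarises exactly these values."""
    return [{
        "density": round(random.uniform(0.2, 1.0), 2),   # relative note density
        "tempo": random.randint(60, 132),                 # beats per minute
        "tala": random_tala(),
    } for _ in range(n_sections)]

for i, section in enumerate(generate_form(), 1):
    print(i, section)
```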

Various methods were attempted to display the tala, including a rotary display and a progress bar to indicate progress through the bar, as well as flashing buttons of different colours to indicate strong and weak beats. The method preferred by the musician is shown in Figure 3, which allows the performer(s) to quickly see the beat groupings for the bar (indicated by large circles for strong beats), progress through the bar (indicated by the highlighted circle), and the pulse itself (indicated by the progression of the highlighted circle from left to right).
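
The preferred display can be approximated in a few lines: strong beats (the start of each group) are drawn larger, and the current beat is highlighted as it advances. The sketch below is a text-mode stand-in for the graphical display:

```python
def tala_display(tala=(2, 2, 3, 2, 2), current_beat=5):
    """Render one frame of the beat-circle display: 'O' marks the strong beat that
    starts each group, 'o' the weak beats, and brackets mark the current beat."""
    cells = []
    beat = 0
    for group in tala:
        for pos in range(group):
            symbol = "O" if pos == 0 else "o"
            beat += 1
            cells.append(f"[{symbol}]" if beat == current_beat else f" {symbol} ")
    return "".join(cells)

# Progress through an 11-beat bar (2+2+3+2+2), one frame per pulse.
for beat in range(1, 12):
    print(tala_display(current_beat=beat))
```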

What I found interesting, and what can be seen from the video documentation, is that the performer did look at the live score occasionally, but for the most part relied upon what he heard from the robotic instruments to govern his musical actions; the score seemed to offer the performer reassurance in what he heard. Furthermore, the score also provided the performer with an indication of what was coming next: upcoming section changes were indicated by a flashing red bar (not shown in Figure 3). This proved to be particularly important for musical interaction that involved anticipation, and substituted for the eye contact and subtle performance movements that performers give one another at such moments.

6.2. More Than Four

More Than Four (see Movie example 2) was my first attempt to display traditional music notation generated in real-time for live musicians: in this case, two marimbas, two vibraphones, and double bass. It used the generative engine of Coming Together: Notomoton, and borrowed that work’s complex rhythmic talas composed of additive rhythms: anywhere from 7/8 (3+2+2) to 18/8 (3+3+2+2+3+3+2) per section. In order to maintain a group pulse, the Notomoton robotic percussionist played the role of time-keeper, essentially improvising upon the additive rhythms. The work consisted of several movements of two to four minutes each, with the Notomoton beginning each movement with a one-bar ‘solo’ so as to establish the tempo and tala. Notation was handled by MaxScore, which interfaced smoothly with the composition software written in MaxMSP.

After extensive workshops with the five musicians, it was determined that the best method for displaying newly generated music would be to notate three bars at a time across individual laptop screens, with an indication of which bar was currently to be played (see Figure 4). The performer would read and perform the top two bars; the display would then update, with the third bar becoming the top bar of the new screen. This allowed the musicians to scan ahead to see the upcoming music, so as not to be surprised by the virtual page turn of a screen update.

Figure 4 Example part display of bars 3–5 of a movement from More Than Four.
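
The scrolling logic itself is straightforward; a hypothetical sketch of the three-bar window advancing by two bars, so that the third bar of one screen becomes the top bar of the next:

```python
def display_windows(bars, window=3, step=2):
    """Yield successive screenfuls: three bars shown, the display advancing by two,
    so the last bar of one screen becomes the first bar of the next."""
    i = 0
    while i < len(bars):
        yield bars[i:i + window]
        i += step

movement = [f"bar {n}" for n in range(1, 10)]
for screen in display_windows(movement):
    print(" | ".join(screen))
```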

During rehearsal, after reading through some particularly tricky parts, the musicians often asked if they could have a second run through the music. Interestingly, they knew the music was generated live, and, as such, impossible to repeat; however, it could not be ignored that a second attempt at reading through the music would not only allow for greater precision, but also greater expressiveness. This seeming paradox underscores the dilemma faced by composers interested in generative music using live performers: the potential complexity of the music is limited by the immediate readability by the performers. This problem can be somewhat mitigated in ways Brown has suggested: ‘In an attempt to make this less difficult the algorithmic rules are described to the performers so that they know what music to expect and what pitches or rhythms are unlikely or impossible’ (Brown 2005). In the case of More Than Four, the performers knew to anticipate pulses of 2 or 3, subdivisions of no more than a semiquaver, and durations that did not include tied notes.
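
Such constraints are easy to enforce at the generation stage. The sketch below (hypothetical values, not the actual More Than Four code) fills each 2- or 3-quaver group of an additive bar with durations no finer than a semiquaver and with no ties:

```python
import random
from fractions import Fraction

QUAVER = Fraction(1, 2)   # durations expressed in crotchet (quarter-note) units

def fill_group(quavers):
    """Fill a 2- or 3-quaver beat group with durations no finer than a semiquaver
    and no tied values, matching what the performers were told to expect."""
    remaining = quavers * QUAVER
    allowed = [Fraction(1, 4), Fraction(1, 2), Fraction(1), Fraction(3, 2)]
    out = []
    while remaining > 0:
        d = random.choice([a for a in allowed if a <= remaining])
        out.append(d)
        remaining -= d
    return out

tala = (3, 2, 2)   # a 7/8 bar
bar = [fill_group(g) for g in tala]
print(bar)
```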

MaxScore handles certain low-level aspects of notation – note positioning, for example – but requires the user to override defaults in many instances. In the case of More Than Four, using rhythmic groupings of 2 and 3, rather than the more common 4+4, required keeping track of which notes needed to be beamed together. As one of the defining features of the composition was its additive rhythm, the musicians became more familiar and comfortable with sight-reading music in unusual time signatures; however, any disruption in the consistency of this notation, such as failing to beam two quavers together properly, would almost always cause the musician to pause and subsequently lose track of their location within the music. While this may seem to be a trivial aspect of implementation, it proved to be less than straightforward, and is an example of the many exceptions and permutations within musical notation which composers need not consider when generating music directly as audio: generating arbitrarily complex rhythms for MIDI playback is trivial, but notating them requires adherence to the many rules of musical notation.
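
The bookkeeping involved can be illustrated with a simple sketch that assigns notes to beam groups defined by the additive metre, assuming for simplicity that no note crosses a group boundary:

```python
from fractions import Fraction

def beam_groups(durations, tala=(3, 2, 2)):
    """Assign each note to a beam group defined by the additive metre (in quavers),
    so that e.g. a 3+2+2 bar is never beamed across a group boundary.
    Assumes the bar is complete and no note straddles a boundary."""
    boundaries, edge = [], Fraction(0)
    for g in tala:
        edge += Fraction(g, 2)              # group length in crotchet units
        boundaries.append(edge)
    groups, current, pos, b = [], [], Fraction(0), 0
    for d in durations:
        current.append(d)
        pos += d
        if pos >= boundaries[b]:            # group complete: close the beam
            groups.append(current)
            current, b = [], b + 1
    if current:
        groups.append(current)
    return groups

bar = [Fraction(1, 2)] * 7                  # seven quavers in 7/8
print(beam_groups(bar))                     # three beams: 3 + 2 + 2 quavers
```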

6.3. An Unnatural Selection

My most recent work involved eight instrumental performers playing notation generated live and displayed on individual tablets (see Movie example 3). Since the commission was for a professional ensemble of non-improvising musicians who are exceptional sight-readers, the generated music was of greater complexity. In order for the generated melodic, harmonic and rhythmic material to make ‘musical sense’, methods were incorporated that expanded upon previous research into corpus-based analysis and generation, including genetic algorithms (Eigenfeldt 2012) and harmonic generation based upon variable Markov models (Eigenfeldt and Pasquier 2010). This resulted in music that was more intricate in its musical conception, as well as in its notation, while still necessitating sight-readability by human musicians (see Figure 5).

Figure 5 Example display from An Unnatural Selection, balancing complexity with playability. Six bars are displayed at a time; every four bars, the screen is updated, so that the last two bars become the top two bars in the new display.
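
As an illustration of the harmonic side, the following sketch implements a generic variable-order Markov selection over chord symbols – trying the longest available context first and backing off to shorter ones – using a toy corpus rather than the corpora used in the piece:

```python
import random

def train(progressions, order=2):
    """Build transition tables for every context length up to `order`
    from a corpus of chord progressions."""
    tables = {k: {} for k in range(1, order + 1)}
    for prog in progressions:
        for k in range(1, order + 1):
            for i in range(len(prog) - k):
                ctx = tuple(prog[i:i + k])
                tables[k].setdefault(ctx, []).append(prog[i + k])
    return tables

def next_chord(history, tables, order=2):
    """Variable-order selection: try the longest context first, back off if unseen."""
    for k in range(min(order, len(history)), 0, -1):
        ctx = tuple(history[-k:])
        if ctx in tables[k]:
            return random.choice(tables[k][ctx])
    return random.choice([c for outs in tables[1].values() for c in outs])

corpus = [["i", "iv", "V", "i"],
          ["i", "VI", "iv", "V", "i"],
          ["i", "V", "VI", "iv", "V", "i"]]
tables = train(corpus)
progression = ["i", "iv"]
for _ in range(6):
    progression.append(next_chord(progression, tables))
print(progression)
```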

The inclusion of tied durations significantly increased the intricacy of the notation; furthermore, the main genetic operator, which spliced beats within segmented phrases, augmented this complexity. As previously mentioned, such algorithms can be rather trivial when only dealing with onset information (as in More Than Four), but musical notation can require stipulating fractional durational information in which alignment to a grid necessitates consideration of hundreds of years of evolved common practice: in order for it to look right, many additional steps needed to be taken. Additional musical intelligence was required of the system, such as determining the best choice of accidentals for each bar of non-tonal music, and selecting articulations suggested by the music. Since the music was beyond my ability to control in any meaningful way interactively, a metacreative system was necessary to guarantee logical and musical relationships between all parts.
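
As an example of the kind of decision involved, a crude (and purely hypothetical) heuristic for choosing accidentals might compare an all-sharp with an all-flat spelling of each bar and keep whichever repeats fewer letter names:

```python
# Pitch classes that require an accidental, with their two possible spellings.
SPELLINGS = {1: ("C#", "Db"), 3: ("D#", "Eb"), 6: ("F#", "Gb"),
             8: ("G#", "Ab"), 10: ("A#", "Bb")}
NATURALS = {0: "C", 2: "D", 4: "E", 5: "F", 7: "G", 9: "A", 11: "B"}

def spell_bar(midi_notes):
    """Choose one accidental direction (all sharps or all flats) per bar,
    keeping whichever spelling repeats fewer letter names -- a crude stand-in
    for the kind of decision the system had to automate."""
    best = None
    for idx in (0, 1):                       # 0 = sharps, 1 = flats
        names = [SPELLINGS.get(n % 12, (NATURALS.get(n % 12),) * 2)[idx]
                 for n in midi_notes]
        letters = [name[0] for name in names]
        clashes = len(letters) - len(set(letters))
        if best is None or clashes < best[0]:
            best = (clashes, names)
    return best[1]

print(spell_bar([60, 61, 63, 66, 68]))       # ['C', 'Db', 'Eb', 'Gb', 'Ab']
```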

I was extremely lucky to be afforded extended rehearsal time with the musicians, beginning with the presentation of printed parts of generated music and later moving to notation on tablet displays. While the musicians found it unusual to rehearse music that they would not actually perform, they did learn the musical limits of what would be presented to them, as well as the particular quirks of the system, thereby underlining Brown’s notion of demonstrating the predictability of output. For example, each of the three movements was based upon a different musical corpus; as a result, successive generations of a movement, while quite different from one another, retained a certain similarity, as the corpus remained constant.

The musicians’ ability to interpret the displayed notation accurately and, more importantly, expressively continually amazed me. There was only one awkward moment in the first rehearsal, when one musician asked whether they were supposed to play ‘musically’, incorrectly assuming that computer-generated music should somehow be performed differently from human-composed music. The greatest dilemma on my part was one inherent in generative music: there is no guarantee that the material generated for a given performance will be of the highest quality. In fact, we had a particularly wonderful generation for the dress rehearsal, while the premiere had less musically interesting results (involving some periods of awkward silences). As we had two performances, I was then faced with an artistic quandary after the first performance: should I generate several movements in advance, and select the best one for the final performance? Despite the fact that neither the audience nor the musicians would ever know, I would essentially have forsaken the conceptual purity of generative music over algorithmic music, and I decided to retain live generation.

7. CONCLUSION AND FUTURE DIRECTIONS

The system for An Unnatural Selection did produce some wonderful music, and demonstrated the viability of real-time generation of complex notation performable by expert humans. Certain notational aspects had to be left out due to time constraints – phrase markings, hairpin dynamics, a greater variety of articulations; however, these omissions could readily be addressed in future implementations.

In an informal survey of the musicians following the performance, what I found most interesting was what they considered to be missing from the system. While I had concentrated on presenting a notational display that most closely reflected historical practices, the musicians consistently commented upon the inability to incorporate rehearsal practices into the performance. It is during the rehearsal process that musicians learn the relationship of their part to the other performers, and it is this information that they often add to the printed part; for example, knowing that a particular melody is in counterpoint to another musician’s, and thus learning with whom to interact. Suggestions were made that the system could provide this information to the performers, which, while requiring them to absorb even more information while sight-reading, would give them more meaningful direction.

The use of Western musical notation for live performers within electroacoustic music has a long tradition, and much work has gone into making the relationship between fixed score and expressive performance more viable. From this perspective, generative music, using live musicians and real-time notation, can be seen as one possible avenue for the composer interested in electroacoustic music. As outlined, real-time notation already has a rich history, one that has tended towards an experimental, controlled-improvisation approach rooted in live electroacoustic practice. The approach outlined here follows a somewhat different trajectory, in which the performance is specified through notation, but the relationships between performers can be expressive. The complexities of musical notation, coupled with real-time composition, require the arbitration of an intelligent metacreative system; this new way of working combines the inherent expressiveness of individual musicians with the excitement of a generative system, and fully explores the potential complexity that musical notation affords.

Supplementary Material

To view supplementary material for this article, please visit http://dx.doi.org/10.1017/S1355771814000260.

Footnotes

1 See http://en.wikipedia.org/wiki/File:Rite_of_Spring_opening_bassoon.png, the opening of Stravinsky’s famous ballet; it sounds like a simple folk song, but its representation is extremely complex.

REFERENCES

Ames, C. 1989. The Markov Process as a Compositional Model: A Survey and Tutorial. Leonardo 22(2): 175–187.
Appleton, J. 1999. Reflections of a Former Performer of Electroacoustic Music. Contemporary Music Review 18(3): 15–19.
Ariza, C. 2005. Navigating the Landscape of Computer-Aided Algorithmic Composition Systems: A Definition, Seven Descriptors, and a Lexicon of Systems and Research. Proceedings of the International Computer Music Conference, Barcelona, 765–72.
Baird, K. 2005. Real-Time Generation of Music Notation via Audience Interaction Using Python and GNU Lilypond. Proceedings of the 2005 Conference on New Interfaces for Musical Expression, Vancouver, 240–1.
Brown, A. 2005. Generative Music in Live Performance. Proceedings of the Australasian Computer Music Conference, Brisbane, 23–6.
Chadabe, J. 1984. Interactive Composing. Computer Music Journal 8(1): 22–27.
Cole, H. 1974. Sounds and Signs: Aspects of Musical Notation. London, New York and Toronto: Oxford University Press.
Cont, A. 2011. On the Creative Use of Score Following and Its Impact on Research. SMC 2011: 8th Sound and Music Computing Conference.
Cope, D. 1991. Computers and Musical Style. Madison, WI: A-R Editions.
Dannenberg, R. 1984. An On-Line Algorithm for Real-Time Accompaniment. Ann Arbor, MI: MPublishing, University of Michigan Library.
Dean, R. 2003. Hyperimprovisation: Computer Interactive Sound Improvisation. Madison, WI: A-R Editions.
Didkovsky, N. and Burk, P. 2001. Java Music Specification Language: An Introduction and Overview. Proceedings of the International Computer Music Conference, Havana, 123–6.
Didkovsky, N. and Hajdu, G. 2008. MaxScore: Music Notation in Max/MSP. Proceedings of the International Computer Music Conference, Belfast, 483–6.
Eigenfeldt, A. 1989. ConTour: A Real-Time MIDI System Based on Gestural Input. Proceedings of the International Computer Music Conference. San Francisco: ICMC.
Eigenfeldt, A. 2012. Corpus-Based Recombinant Composition Using a Genetic Algorithm. Soft Computing: A Fusion of Foundations, Methodologies and Applications 16(12): 2049–56.
Eigenfeldt, A. and Pasquier, P. 2010. Realtime Generation of Harmonic Progressions Using Controlled Markov Selection. Proceedings of the First International Conference on Computational Creativity (ICCCX), 16–25.
Eigenfeldt, A., Bown, O., Pasquier, P. and Martin, A. 2013. Towards a Taxonomy of Musical Metacreation: Reflections on the First Musical Metacreation Weekend. Proceedings of the Artificial Intelligence and Interactive Digital Entertainment (AIIDE’13) Conference, Boston.
Emmerson, S. 1991. Computers and Live Electronic Music: Some Solutions, Many Problems. Proceedings of the International Computer Music Conference. San Francisco: ICMC, 135–8.
Freeman, J. 2008. Extreme Sight-Reading, Mediated Expression, and Audience Participation: Real-Time Music Notation in Live Performance. Computer Music Journal 32(3): 25–41.
Freeman, J. 2010. Web-Based Collaboration, Live Musical Performance and Open-Form Scores. International Journal of Performance Arts and Digital Media 6(2): 149–170.
Freeman, J. and Colella, A. 2010. Tools for Real-Time Music Notation. Contemporary Music Review 29(1): 101–113.
Galanter, P. 2003. What Is Generative Art? Complexity Theory as a Context for Art Theory. International Conference on Generative Art, Milan, Italy.
Gann, K. 2006. The Music of Conlon Nancarrow. Cambridge: Cambridge University Press.
Garnett, G. 2001. The Aesthetics of Interactive Computer Music. Computer Music Journal 25(1): 21–33.
Gee, F., Browne, W. and Kawamura, K. 2005. Uncanny Valley Revisited. IEEE International Workshop on Robots and Human Interactive Communication. Nashville, TN: IEEE Press, 151–157.
Grey, J. and Moorer, J. 1977. Perceptual Evaluations of Synthesized Musical Instrument Tones. The Journal of the Acoustical Society of America 62(2): 454–462.
Gutknecht, J., Clay, A. and Frey, T. 2005. GoingPublik: Using Realtime Global Score Synthesis. Proceedings of the 2005 Conference on New Interfaces for Musical Expression, Singapore, 148–51.
Hajdu, G. 2005. Quintet.net: An Environment for Composing and Performing Music on the Internet. Leonardo 38(1): 23–30.
Hajdu, G. 2007. Playing Performers. Proceedings of the Music in the Global Village Conference, Budapest, 41–2.
Hajdu, G. and Didkovsky, N. 2009. On the Evolution of Music Notation in Network Music Environments. Contemporary Music Review 28(4–5): 395–407.
Hamel, K. 2006. Integrated Interactive Music Performance Environment. Proceedings of the 2006 Conference on New Interfaces for Musical Expression, Paris, 380–3.
Hiller, L. 1981. Composing with Computers: A Progress Report. Computer Music Journal 5(4): 7–21.
Kapur, A., Darling, M., Murphy, J., Hochenbaum, J., Diakopoulos, D. and Trimpin. 2011. The KarmetiK NotomotoN: A New Breed of Musical Robot for Teaching and Performance. Proceedings of the 2011 Conference on New Interfaces for Musical Expression, Oslo, 228–31.
Kim-Boyle, D. 2005. Musical Score Generation in Valses and Etudes. Proceedings of the 2005 International Computer Music Conference, Barcelona, 810–12.
Kim-Boyle, D. 2006. Real Time Generation of Open Form Scores. Proceedings of Digital Art Weeks, ETH Zurich.
Lewis, G. 2000. Too Many Notes: Computers, Complexity and Culture in Voyager. Leonardo Music Journal 10: 33–39.
McAllister, G., Alcorn, M. and Strain, P. 2004. Interactive Performance with Wireless PDAs. Proceedings of the 2004 International Computer Music Conference, Miami, 702–5.
McClelland, C. and Alcorn, M. 2008. Exploring New Composer/Performer Interactions Using Real-Time Notation. Proceedings of the International Computer Music Conference. San Francisco: ICMC, 176–9.
McNutt, E. 2003. Performing Electroacoustic Music: A Wider View of Interactivity. Organised Sound 8(3): 297–304.
Nienhuys, H. and Nieuwenhuizen, J. 2003. LilyPond: A System for Automated Music Engraving. Proceedings of the XIV Colloquium on Musical Informatics, 167–172.
Pachet, F. 2004. Beyond the Cybernetic Jam Fantasy: The Continuator. IEEE Computer Graphics and Applications 24(1): 31–35.
Putman, D. 1990. The Aesthetic Relation of Musical Performer and Audience. The British Journal of Aesthetics 30(4): 361–366.
Risset, J. and Mathews, M. 1969. Analysis of Musical-Instrument Tones. Physics Today 22(2): 23–30.
Romero, J., Machado, P., Santos, A. and Cardoso, A. 2003. On the Development of Critics in Evolutionary Computation Artists. In Applications of Evolutionary Computing, LNCS 2611. Springer-Verlag, 559–69.
Rowe, R. 1993. Interactive Music Systems. Cambridge, MA: MIT Press.
Sloboda, J. 1996. The Acquisition of Musical Performance Expertise: Deconstructing the ‘Talent’ Account of Individual Differences in Musical Expressivity. In K. Ericsson (ed.), The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports, and Games. Hillsdale, NJ: Lawrence Erlbaum, 107–126.
Stone, K. 1980. Music Notation in the Twentieth Century. New York: W.W. Norton.
Truax, B. 2014. Combining Performers with Soundtracks: Some Personal Experiences. www.sfu.ca/~truax/live.html, accessed March 24, 2014.
Vaughan, M. 1994. The Human-Machine Interface in Electroacoustic Music Composition. Contemporary Music Review 10(2): 111–127.
Vercoe, B. and Puckette, M. 1985. Synthetic Rehearsal: Training the Synthetic Performer. Ann Arbor, MI: MPublishing, University of Michigan Library.
Whitelaw, M. 2004. Metacreation: Art and Artificial Life. Cambridge, MA: MIT Press.
Winkler, G. 2004. The Realtime-Score: A Missing Link in Computer-Music Performance. Proceedings of the Sound and Music Computing Conference, Paris.
Wulfson, H., Barrett, G. D. and Winter, M. 2007. Automatic Notation Generators. Proceedings of the 7th International Conference on New Interfaces for Musical Expression, New York, 346–51.
