1. INFLUENCES, OPINIONS AND POSITIONS
Knut Wiggen was born in rural Norway, and moved to Stockholm in 1950 as a budding pianist looking for education and training. His teacher Hans Leygraf brought him along to the Darmstadt summer courses, where he became familiar with modernist developments in contemporary music, especially electronic music. Wiggen lived at Marienhöhe in Darmstadt from 1952 to 1955.
In the wake of the Second World War, social democratic politics were hugely popular in Scandinavia, and these politics depended on spreading information about new developments in the arts, educating the population and securing broad participation in cultural activities and processes. Democratisation was on the agenda, and it was a time of radical views on culture.Footnote 3 The expectation was that people would want to participate in cultural activities and not just consume them.
In his book De två musikkulturerna (Wiggen 1971), Wiggen argued his points from a historical, social and musical perspective, with electronic music as the necessary culmination of musical development at that time. Musical structures had evolved from simple monophonic music into complex orchestral arrangements, and for Wiggen, this development was a reflection of increasingly complex societies. He believed that electronic tools would require new modes of organisation and communication, and that the acoustic ideals of the orchestral tradition would become outdated aesthetics of the past. The new mass media of radio and television required a new music, and as a new musical culture, electronic music would live side by side with interval-based traditions.Footnote 4
To Wiggen, the symbolic value of technology seems to have been the same as for media theorist Marshall McLuhan,Footnote 5 but we shall see that Wiggen’s vision also had a more practical side. His key point was that music had to incorporate new media to stay relevant amid social developments, and that it would become obsolete if this did not happen. Here, a social perspective joins his argument – that electronics would liberate composers and listeners from the rigid hierarchy of the bourgeois concert hall and its focus on the virtuoso. Electronic means would allow everyone to participate on nearly equal terms.
People needed to become familiar with technology to resist alienation,Footnote 6 and to Wiggen, this focus on technology coexisted with a focus on listening. Wiggen argued that music needed to relate to listeners and evoke responses, and if composers ‘are not successful in formulating symbols that people understand, the music is thrown out into cyberspace as rubble’ (Strömberg 1994). He found the Schaefferian reduced listening method useful in eliminating oft-held notions that certain sound types were non-musical, and praised Schaeffer’s steps towards a terminology for musical form that could address the challenges facing concrete music (Wiggen 1971: 88). Human psychology is essential in the perception of sound, and listening techniques are central in Wiggen’s argument for the music of the future. He elaborated, however, that listening technique needed to be rooted in a ‘world view’ or ‘perception of the world’ (Wiggen 1971: 50, 1970), and not be limited to the less socially dependent listening that Schaeffer described in his four listening modes.Footnote 7 In addition to Schaeffer’s categories, Wiggen proposed categories such as emotional, intellectual and technical listening (Wiggen 1971: 39), as well as a kind of listening ‘middle ground’, where experiences of sound would become building blocks that could later be combined into music. The working methods he designed into his computer program MusicBox were manifestations of this thought – music was an intellectual and structured entity beyond the emergent qualities of sound itself.
While technological tools were to provide structure for the composer’s investigations, these structures would only be available through listening and sound experience. For this reason, sounds should be crafted to provide moods and atmospheres. Wiggen wanted to combine hard data processing with listener experience to develop useful sound categories for composers to use, and went quite far in describing this as a precondition for the success of (electronic) music of the future. In this sense, and much like Schaeffer, he was focused on the intricate psychological web that forms cognition rather than emphasising the simplicity of direct metrics. For Wiggen, the success of a compositional logic lay in the audience recognising the composer’s intentions; he was opposed to calling the meaning-making processes of the listeners a creative activity (Wiggen 1972: 63).
2. CONCERTS, EVENTS AND CONFERENCES
Wiggen’s organisational work started in the concert organisation Fylkingen. When Wiggen was elected chairman in 1959, the programming profile of presenting important works from the last 50 years was replaced with a focus on current and contemporary music, and in particular electronic music. Fylkingen attained the status of an avant-garde institution with a new cross-disciplinary perspective, and in Wiggen’s opinion this also necessitated a deeper, theoretical reflection, not least because the aesthetic content and technical preconditions of the new music were difficult to understand.Footnote 8 In 1967, Fylkingen had work groups for music, visuals, tactile/spatial issues, language, theory, pedagogy and computer music – all formed by Wiggen.
Wiggen remained chairman until 1969, and during this period leading composers and artists such as Pierre Schaeffer, Karlheinz Stockhausen, Robert Rauschenberg, John Cage, David Tudor, the Cunningham ballet, Iannis Xenakis, Nam June Paik and Gottfried Michael Koenig were brought to Stockholm for electronic and multimedia performances. Concerts were moved to Moderna Museet, showing Wiggen’s interest in the cross-disciplinary changes brought by new technology and media, and his desire to reach new audiences. Clearly, concerts were not enough to establish the music of the future, and Wiggen believed that the country should have several studios for public access. He also envisioned a concert house with halls, projection rooms, library, archives and composers’ suites, much like how the French centre Ircam was set up when it opened in 1977, just a couple of years after Wiggen’s departure from EMS. (For further detail on Fylkingen’s history, including a complete listing of concerts and events, see Hultberg 1994.)
The first studio that Wiggen built was at the Workers’ Education Society (Arbetarnas bildningsförbund) in Stockholm, in 1961, and the location illustrates the strong links between avant-garde activities and social democratic politics at the time. Fylkingen’s aims fit well with the idea of building a society where new technical and aesthetic developments would be explored through democratic participation, and courses with Gottfried Michael Koenig from the Cologne studio recruited young composers to work with the new tools and to become Fylkingen members (Karlsson 1994: 47–54).
Wiggen believed that studios and new working conditions for composers were necessary for the music of the future to succeed, and through his travels, especially to the United States and to the studios in Cologne, Paris, Munich and Milan,Footnote 9 he was well informed about how the computer was being developed as a new tool for composition. In 1963, he arranged a conference with leading studio directors, including Iannis Xenakis from Paris, Jozef Patkowski from Warsaw and Herman Heiss from Darmstadt, to discuss the automation of studio processes and the transformation of the studio into an instrument initially called the ‘Symphon’.
It is important to note that Wiggen often criticised composers of electronic music for their technical focus, arguing that they were being carried away by the new possibilities to the extent that they were losing track of artistic intentions and opting for the easy way out, producing ‘novelty’. Paradoxically, there is an interesting duality between this view and his insistence that technology was a necessary component of music for the new age, placing novelty above tradition (Wiggen 1970).
In order to build theory about the new music and its technology, Wiggen organised several conferences, and the first large-scale initiative was Visioner av nuet, produced at the Museum of Technology in Stockholm in 1966 (Wiggen 1966). The festival was opened with a public announcement in which 14 prominent Swedish scientists expressed their concerns about the lack of connection between the new scientific and technical advances and the cultural sector. In essence, they called for an increase in artistic work that was rooted in – and expressed through – new perceptions of the technological world.Footnote 10 When describing the motivation for the conference and festival, and in order to prove its relevance, Wiggen often mentioned the much-read contemporary scientist and writer C. P. Snow’s main thesis: that there was a disconnect between the natural sciences and the humanities, and that this lack of connection significantly hindered efforts to solve the world’s problems (Snow 1959).
Four years later, in 1970, Wiggen invited participants to the UNESCO conference Music and Technology (Proceedings 1971), where issues of perception and psychology in music were discussed together with the latest advances in the use of computers for music analysis and composition. Comprehensive proceedings were produced from this conference, with contributions from, among others, Max Mathews, Jean-Claude Risset and Pierre Schaeffer, and the combination of the different texts shows Wiggen’s thinking – that the new composition methods that technology encouraged, perhaps also demanded, had to be related to human psychology for the expressions to become valuable. Wiggen criticised the situation where technological concerns overrode musical focus, and it is clear from the transcript of Pierre Schaeffer’s presentation that the two were in agreement on this critique.Footnote 11
3. EMS: DEVELOPMENT OF DIGITAL TECHNOLOGY IN THE INTERNATIONAL CONTEXT
It was with EMS that Wiggen could start to realise the ideas of a new composition method in the computer-based studio. However, EMS also needed to provide a practical workspace while the new tools were being developed, so when it opened in 1964, it did so with only a conventional studio for analogue tape composition.
Because computers were slow, digital sound processing and computer synthesis were out of the question, and early computers were only used to generate scores from instructions entered by the composer. Although Wiggen had moved away from interval-based music for acoustic instruments, he experimented with algorithmic composition for musicians, and made at least two works (Wiggen-1 and Wiggen-2) at the same time that he was setting up the first studio in Stockholm. Wiggen’s archive does not contain any information about these works other than a couple of newspaper clippings, but he did discuss them with Lejaren Hiller (1970: 87–8) only a few years after Hiller and Isaacson had produced their Illiac Suite (1957), a piece that is generally thought of as the first computer-generated musical work.
At every turn in developing computer tools, new problems waited for solutions, and the next step following computer-generated scores was to control sound processing by digital means. However, this was not straightforward, as it depended on controlling continuous processes with discrete methods. At the time EMS was established, Robert Moog had already introduced his first series of commercial synthesisers, but despite the information from Bell Labs that Moog had incorporated into his designs, the synthesisers were analogue in both control and sound generation, and Wiggen found that little help could be gained from Moog’s technical advances. Additionally, keyboard interfaces seemed an irrelevant technology for the field of computer music, which actively sought to go deeper into the construction of sound itself and away from the interval-based paradigm.
Changes in the financing and organisation of EMS put an abrupt end to Wiggen’s development work, just as it was ready to be made fully available to composers. This was a personal disaster for Wiggen, and led to his resignation and withdrawal from public life in 1975. Despite the conflicts that led up to the dramatic change of direction at EMS, there can be little doubt that the studio gained a worldwide reputation during Wiggen’s tenure (Groth 2010). Lars-Gunnar Bodin, whom Groth (2010: 78) describes as a strong critic, indirectly recognised Wiggen’s significance in his open letter to Per-Olov Broman (Bodin 2008), and the renewed attention to Wiggen’s music among younger generations of composers and musicians is further evidence of the value of his achievements. But it was clearly difficult for the Swedish composers in the mid-1970s to accept Wiggen’s approach.
3.1. Automation and digital control
A key goal for Wiggen was to speed up the production process by making several functions in the studio automatic, saving time and freeing up space for many more composers. Against the backdrop of tape studios that might be occupied for months in the realisation of one work, this is easy to understand. In his description of the studio in Interface (Wiggen 1972), Wiggen outlined the development plan at EMS, making clear that he focused on rule-based, automatic composition, and that this necessitated automating the sound control.
Much of the technology needed for building digital control systems had not yet been invented, and Wiggen was dissatisfied with the level of accuracy in what he could find commercially. The problem seemed overwhelming, but engineer Per-Olov Strömberg came up with a solution. Most of the sound apparatus was already finished in 1966, and could be operated from the control console. In 1968, it became clear that using an external computer (via the simplified means provided by the rudimentary language EMS-0) would be beneficial, and tests were made with a machine located at the University of Stockholm (Manning 1993: 243). The new PDP 15/40 computer that EMS ended up purchasing arrived in 1970, at a cost of more than €105,000 in today’s currency, and it had about 1/100 of the calculation power now found in a computer mouse costing approximately €1.Footnote 12
A team led by Klaus Appel at Uppsala University wrote EMS-1, which had been under development since 1969 and was completed in spring 1972. Zaid Holmin and Robert Strömberg were responsible for the testing at EMS. EMS-1 was written in assembler code, which ran faster than Fortran, the most common language at the time. Given the existence of the Fortran-based Music IV (1964) and Music V (1966), it might seem odd that Wiggen decided to write a new program for EMS, but the Music-N family of languages (starting with Music I in 1957) did not address external hardware, and this was what EMS needed for its hybrid approach. Instead of modifying, for example, Music V, which would otherwise have been the first choice, it seemed better to make new software that could be optimised for the EMS hardware. Wiggen also questioned Music V’s separate instructions for instrument and score, and whether this allowed enough compositional freedom to the composers. He praised Xenakis’s closer integration of sound and compositional structure, and pointed to a future direction beyond EMS-1, which he described as a program limited to ‘editing, education, improvisation and timbral exploration’ (Wiggen 1972: 141–3). Nonetheless, in 1972 EMS had built a digitally controlled analogue synthesiser that could be performed in real time.
The sounds in all the early computer music were simple in the sense that only a few types of waveforms could be synthesised, and that sampling of more complex waveforms was not possible. The difference from analogue synthesisers was that structuring and control were digital, and that complex automated performances were possible. The challenges of controlling analogue sound equipment with digital methods were the same everywhere, and parallel developments were happening at several institutions. The Institute of Sonology in Utrecht had created a system, and started their computer implementation in 1970; Peter Zinovieff’s studio EMS in London had completed their system in 1967, presenting it in a concert performance of Partita for Unattended Computer in Queen Elizabeth Hall the same year.Footnote 13 In 1965, James Gabura and Gustav Ciamaga developed The Piper at the University of Toronto, a system consisting of two computer-controlled Moog oscillators and an envelope generator, and in 1968, Max Mathews and Richard Moore (1970) started their development of the Groove system at Bell Labs in New Jersey. Groove was an acronym for Generated Real-time Output Operations on Voltage-controlled Equipment, and the computer was concerned not with synthesising the sounds but with reading a score that could be made expressive by human control. All these systems were similar in controlling analogue sound synthesis by digital means, and making real-time editing possible as the machine executed the performances.
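The principle these systems shared – a computer issuing discrete, timed parameter updates to analogue, voltage-controlled modules – can be illustrated with a minimal sketch in Python. The fragment below is only a schematic illustration of that general idea; the module names, parameter values and control rate are assumptions, and it does not reproduce the design of Groove, The Piper or the EMS system.

```python
# A minimal sketch of hybrid control: a digital 'score' of timed parameter
# changes is stepped through at a fixed control rate and sent to
# digital-to-analogue converters driving voltage-controlled modules.
# Module names, parameters and values are invented for illustration.

CONTROL_RATE_HZ = 100  # discrete control updates per second

# (time in seconds, module, parameter, value)
score = [
    (0.0, 'osc1', 'frequency_hz', 440.0),
    (0.0, 'osc1', 'amplitude', 0.5),
    (1.5, 'osc1', 'frequency_hz', 660.0),
    (2.0, 'osc1', 'amplitude', 0.0),
]

def write_to_dac(module, parameter, value):
    """Stand-in for sending a control voltage to the analogue hardware."""
    print(f'DAC <- {module}.{parameter} = {value}')

def run(score, duration_s):
    events = sorted(score)              # play events in time order
    i = 0
    for tick in range(int(duration_s * CONTROL_RATE_HZ) + 1):
        t = tick / CONTROL_RATE_HZ      # continuous sound, discrete control
        while i < len(events) and events[i][0] <= t:
            _, module, parameter, value = events[i]
            write_to_dac(module, parameter, value)
            i += 1

run(score, duration_s=2.0)
```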
Of these initiatives, EMS in Stockholm had the most impressive list of equipment, with 24 oscillators, a noise generator, four reverb modules, two filter banks with controls for each third-octave band, three ring modulators, two envelope generators, and two digital and several analogue tape recorders (Figure 1). It seems that although Zinovieff was the first to give a public performance with this type of technology, EMS was the first fully functioning studio of this kind. This is what Wiggen claimed (Wiggen 1972), and he undoubtedly knew the other efforts well, especially since the developers were all discussing their ideas and achievements at the previously mentioned conference, Music and Technology, in 1970. James Beauchamp describes EMS in the same way in his 1973 comparison of synthesis systems,Footnote 14 and Manning (1993: 230) describes the equipment as ‘second to none’.
Figure 1 The analog sound equipment at EMS. Photo: Unknown.
In an interesting anecdote, James Tenney, who in those years composed at Bell Labs, said that it would take eight days at Bell Labs to do what one could do in one hour in Stockholm (Wiggen 2004: 34). This was due to the digital tape automation at EMS versus the punch cards used in New Jersey. Programming time also came with a cost, and Wiggen explained to a Swedish newspaper how Tenney had told him that the programming alone for one of his works at Bell Labs had cost thirty thousand Swedish crowns (Wiggen n.d.). Wiggen also probably irked Karlheinz Stockhausen a bit when he sent him a version of Stockhausen’s Studie II that had taken only hours to realise using the control console in Stockholm, following Stockhausen’s original score. Wiggen estimated it had taken Stockhausen months to realise the original version in Cologne. From the more than 100 works that were produced in the large studio between 1971 and 1979 (Groth 2010: 198–9), it seems clear that EMS had arrived at a functional and efficient technology, although it was just as plagued by the tediousness of manual, physical data entry as all other systems at the time.
3.2. Control console and computer control
The oscillators and signal generators could initially be operated from both the control console and the digital tape recorders that stored the parameter change instructions. The computer keyboard could also be used for control. Programming in EMS-1 offline was an arduous task, and composer Lars-Gunnar Bodin described the cumbersomeness of programming: ‘You had to sit and write on a Teletype machine and the code was coming on punched tape which you had to feed into the computer. You could be sure you’d get hundreds of error messages. There wasn’t any good editing program, so you had to try and write another punched tape’ (Lars-Gunnar Bodin, quoted in Chadabe 1997: 167). Although the process must have been demanding, this was the reality of any manual data entry at the time, and all composers and programmers were used to this way of working. However, the process was slow, and this was most likely one of the reasons Wiggen was so focused on automation and real-time editing. It should also be remembered that programming was an entirely new method for making music, introducing new ways of thinking about it.
Compositions at EMS were punched out on paper tape at Teletype terminals, and although one could not edit what had already been punched, the paper tape programming could be stored on magnetic tape. Corrections could be made when reading from one terminal to the next, and edited files could be saved. When compositions were played, the control signals would be performed directly from magnetic tape, and not from the computer, since the computer would normally be too slow to make the necessary calculations in real time. Using digital tape was analogous to playing back from disk today, and was limited to control signals since the technology was not yet fast enough for digital audio. In addition to several Teletype terminals, EMS also had a Tektronix terminal where programmers and composers could enter data directly to magnetic tape by programming in EMS-1, thus avoiding the paper tape (EMS-information 1970) (Figure 2).
Figure 2 The PDP 15/40 computer. Photo: G. Lundholm.
Of the three control methods, the console was the most original and was also made first. The console was approximately 9 metres long, with all controllable parameters of the sound equipment laid out on the surface (Figure 3). By touching the surface controls with a copper brush, the composer could disengage and activate parameter values in real time while the control signals were being sent from one of the two tape machines and displayed via the console’s indicator lights. The console did not have built-in memory, so performance of revised sequences needed to be executed from tape. Wiggen’s plan was to use the computer to expand the possibilities of real-time control, but the control console was connected to the computer only in the sense that it received control signals – it did not send changes back to the computer.Footnote 15 It was the digital tape machines that made it possible to control the complex analogue equipment in real time, and this was the key element in making the link between the analogue and the digital studio. EMS had created a control console where near real-time adjustment of pre-programmed material was possible, and since the output could be recorded to magnetic tape without assistance from studio engineers or other personnel, this accelerated the composition process considerably.
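The interplay between the stored control stream and the composer’s real-time interventions can be pictured schematically: the tape supplies a steady stream of parameter values, while the performer can disengage individual parameters and hold them at values set by hand. The Python sketch below is a rough rendering of that idea only; the parameter names, data layout and merge logic are assumptions, not a description of the EMS hardware or its protocol.

```python
# Schematic sketch: a pre-programmed control stream (as if read from digital
# tape) is merged with manual overrides made at a console in real time.
# Parameter names and values are invented for illustration.

# One frame of control data per tick, as if read from the digital tape.
tape_stream = [
    {'osc1.freq': 440.0, 'osc1.amp': 0.5, 'filter.cutoff': 2000.0},
    {'osc1.freq': 466.2, 'osc1.amp': 0.5, 'filter.cutoff': 1800.0},
    {'osc1.freq': 493.9, 'osc1.amp': 0.4, 'filter.cutoff': 1600.0},
]

# Parameters the composer has 'disengaged' at the console, with the values
# they are being held at by hand.
manual_overrides = {'filter.cutoff': 900.0}

def merge(frame, overrides):
    """Tape values pass through except where the console has taken over."""
    return {param: overrides.get(param, value) for param, value in frame.items()}

for tick, frame in enumerate(tape_stream):
    out = merge(frame, manual_overrides)
    print(f'tick {tick}: {out}')   # stand-in for sending values to the synthesiser
```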
Figure 3 The control console. Photo: Unknown.
In secondary literature about EMS, it has been suggested that an important aspect of the console was its value as a tool for public relations – its futuristic, sleek look and impressive size were unparalleled and corresponded with what composer Åke Parmerud (1994: 55) once called the ‘cosmonaut aesthetics’ of EMS. The light show from the indicator lights must have added significantly to that experience. Although it seems unlikely that the console was purposely designed as a PR tool, it seems logical that it was used that way, especially since it so effectively displayed the status of the sound equipment. It also suited Wiggen’s eye for design, which was reflected in the furnishings and architecture of the 70 square metre studio. The console may also have been too large to be fully practical, and Wiggen commented at the opening of the exhibition Music Machines at the Norwegian Museum of Science and Technology in 2009: ‘only crazy composers used the console’. Nonetheless, at this console, the composer could control every aspect of synthesis and processing of the listed analogue equipment, making it the most powerful real-time tool of its time. Interestingly, from our current perspective, the development of real-time applications has actually been an integral part of computer music since the early years, and does not represent a break with the ‘old school’ approach.
3.3. MusicBox
The composition software MusicBox was the final element in Wiggen’s development of tools for the new music of the future. From early on, Wiggen believed that the new tools and media required new composition methods. He described several times how classic interval-based art music was a historically encapsulated tradition, and that new methods were necessary in order for music not to remain a rehash of old paradigms. However, he was not very specific in describing exactly how these tools would work in order to accomplish this goal. He had grown opposed to Darmstadt 12-tone aesthetics, since he felt they did not address the new technological reality adequately, and he found the existing electronic music too close to interval-based music in its conception (Wiggen 1970: 61). Conventional concrete music, on the other hand, was too limited in its sole focus on the perceptual qualities of sound (although he believed that the focus on sonic properties of recorded sound was a necessary step towards more structured musical investigations). So, what was his solution?
Wiggen thought it necessary to replace analogue working methods with automatic processes, but this was probably not his most important point. He wanted to use algorithmic processes in composition, without resorting to the interval-based paradigm of, for example, Hiller and Isaacson, or for that matter his own early works Wiggen-1 and Wiggen-2. There is after all not a big difference between writing a note in a score and typing in numbers for pitch, envelope, waveform and duration – if anything, writing a note on paper is quicker than typing these values.
Wiggen’s principal contribution to composition was software that allowed the composer to work primarily with compositional logic and structure, and to step back from the sounding pitch or recorded timbre. He found that a type of abstraction was necessary for a clear focus on the structures, and also so that cumbersome data entry would not limit compositional ideas. While building on several concepts from earlier EMS programs, he wanted a collection of functions that could be connected in multiple ways, and that would allow the user to control execution by selecting functions interactively. As is apparent in his hand-drawn diagrams, Wiggen imagined these functions as graphic objects (Figure 4).
Figure 4 Knut Wiggen, photographed during a presentation of MusicBox. At the bottom of the montage, a short stretch of programmed paper tape is copied in. Photo: Bo-Aje Mellin.
A composition in MusicBox was a network of interconnected boxes that described the control signals for the electronic equipment at EMS. Each box contained a mathematical function that would operate on the input and provide output, whereby the output message would be the input for the next box in the network flow. The originality of this approach lay in the data model of the composition, which was then ‘simulated’ by activating (instantiating) the model components. This kind of modelling is essential in object-oriented programming, where the composition can be ‘tested’ under different conditions. The composition process was quite different from EMS-1, where the composer needed to describe each event line by line. Wiggen also aimed to include aleatory principles in the compositions, another step away from the determinism of classical music, where changing notes would most often radically alter the music.
A key feature was the ease of encapsulation, where micro-boxes could be combined into macro-boxes. The encapsulations could be reused and further encapsulated, and increasingly complex events could be produced without notating each pitch. When using a layer or two of encapsulation, it would be very difficult to have an exact idea of the resulting sound, and the composer had then effectively taken a step away from the concrete-sounding result while working with the compositional concept. The specifications of the inputs for the boxes would be based not on the emergent qualities of the sound, but on logic. Logic had become the tool in the simulations of the composition rules or models.
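A rough impression of this box-and-connection model can be given in code. The Python sketch below is not a reconstruction of MusicBox; the box types, the pull-based evaluation and the macro mechanism are assumptions introduced only to illustrate how micro-boxes might be wired together and encapsulated into reusable macro-boxes.

```python
import random

# Illustrative sketch of a box network: each box wraps a function of its
# inputs, and boxes can be encapsulated into macro-boxes and reused.
# This is not MusicBox's actual design, only an illustration of the idea.

class Box:
    def __init__(self, func, *inputs):
        self.func = func          # the mathematical function of this box
        self.inputs = inputs      # upstream boxes feeding it

    def output(self):
        # Pull values from upstream boxes and apply this box's function.
        return self.func(*(box.output() for box in self.inputs))

def constant(value):
    return Box(lambda: value)

def gauss(mean_box, dev_box):
    return Box(lambda m, d: random.gauss(m, d), mean_box, dev_box)

def clip(low, high, source):
    return Box(lambda x: max(low, min(high, x)), source)

def macro_random_frequency(mean_hz, dev_hz):
    """A macro-box: a reusable encapsulation of several micro-boxes."""
    return clip(35.0, 14000.0, gauss(constant(mean_hz), constant(dev_hz)))

# Two instances of the same macro, used without re-specifying its internals.
freq_a = macro_random_frequency(800.0, 300.0)
freq_b = macro_random_frequency(3000.0, 1000.0)

for _ in range(5):
    print(round(freq_a.output(), 1), round(freq_b.output(), 1))
```

As in the description above, the macro can be instantiated repeatedly without re-specifying its internal boxes, so the composer works with the logic of the network rather than with individual sounding values.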
When searching for a software environment where MusicBox could be developed, Wiggen considered the object-oriented language Simula that had recently been launched (1964) at the University of Oslo by Kristen Nygaard and Ole Johan Dahl.Footnote 16 Simula has often been credited as the first object-oriented programming language in the world, and was already popular when work on MusicBox commenced. However, Simula could not address the EMS hardware (Wiggen 2004: 22), and MusicBox was thus realised in Fortran. After the programmer David Fahrland had finished the first version of MusicBox in 1971 and returned to the USA, Kaj Beskow programmed the functionality for moving sound in four channels, based on an article by John Chowning (1971).Footnote 17 Zaid Holmin joined the development in 1972, and added better features for making macros.
Several of Wiggen’s sketches (on paper) for boxes and compositions have been preserved, and may be found at the National Library of Norway. Although Wiggen published the ideas behind MusicBox in 1972 (Wiggen 1972), it was not made available to composers; Wiggen felt that he needed to ensure that the program worked as well as he had hoped before releasing it openly – and the proof was in the music. He composed several studies, and became convinced that he could make music with MusicBox, and not only sounds.
Wiggen explained that he had managed to create specific atmospheres that evoked emotional responses, and when he found that his logic successfully connected to this type of psychological experience, he felt that the software had proven itself. At the time, these pieces must have sounded quite refined, and it is important to remember that the sonic world of computer music was still quite simple, and that composers struggled with both synthesis methods and expressivity.
During the late 1990s and early 2000s, Dag Svanæs at the Norwegian University of Science and Technology supervised students Håvard Wigtil and Harald Stendal in rewriting the software in object-oriented Pascal, and then in Java. Nils Tesdal and Sigurd Stendal also participated in this effort, which was done in close cooperation with Wiggen. Since that time, Zaid Holmin has expanded the number of boxes and made a graphical user interface. The latest version of MusicBox (2013) runs on Microsoft Windows XP and is written in a combination of C++ and Java. An 8-channel version has also been envisioned.
Similar ideas to Wiggen’s software MusicBox have become common in musical practice today, and several software applications are built on the same type of object-oriented approaches. Among them, the most important are Max and PD, both created by Miller Puckette.Footnote 18
3.4. Spatialisation and sound quality
Early in his career, Wiggen wrote about how the concert hall listening experience was tied to the interval-based tradition, with the music coming from the stage and resonating in the concert space. He viewed this as limiting when compared with the electrophonic possibilities afforded by loudspeakers and how they could be placed flexibly in any concert space. Composers of electroacoustic music had been working with spatialisation since the late 1950s, as we know from Varèse’s and Xenakis’s music for the Philips Pavilion at the World Expo (1958) and Stockhausen’s re-recording of sound from a rotating speaker for the electronic part of Kontakte (1960). Wiggen envisioned MusicBox as an alternative to conventional diffusion over several sets of loudspeakers, using computer control to make spatialisation an integral compositional parameter.
Civil engineer Stig Carlsson at the Swedish Royal Institute of Technology (KTH) constructed the loudspeaker system used at EMS, using orthoacoustic principles that aimed to calibrate the sound output of the speakers to the rooms where the speakers were installed. The aim was to achieve a linear representation of the music in the space, rather than in the loudspeaker output isolated in anechoic conditions. The speakers were expensive and complicated in construction, and by placing them close to a wall, the 1–5 ms signal blurring was largely eliminated. Carlsson’s speakers were also used at Fylkingen concerts. The signal-to-noise ratio was an astounding 100 dB, and the quality of the speakers gave EMS engineer Per-Olov Strömberg a challenge, since a 100 dB ratio exceeded what tape machines could reproduce at the time. Strömberg launched his solution in 1965, and the studio could deliver CD-quality sound. In 1971, David Blackmer patented the same type of noise reduction in the USA under the name of dbx. As a compression/expansion system, dbx was useful when recording to noisy media such as magnetic tape.
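The general principle behind this kind of compression/expansion (companding) noise reduction can be shown with a small numerical sketch. The 2:1 ratio in the decibel domain used below reflects how dbx-type companders are commonly described; the figures are illustrative and are not taken from Strömberg’s design.

```python
# Sketch of 2:1 companding noise reduction in the decibel domain.
# Levels are compressed before the noisy medium (tape) and expanded after,
# so programme levels are restored while hiss in quiet passages is pushed down.

TAPE_HISS_DB = -60.0   # illustrative noise floor of the recording medium

def compress(level_db):
    return level_db / 2.0            # 2:1 compression around 0 dB

def expand(level_db):
    return level_db * 2.0            # matching 1:2 expansion on playback

for signal_db in (0.0, -20.0, -40.0):
    restored = expand(compress(signal_db))
    print(f'signal in {signal_db:6.1f} dB -> out {restored:6.1f} dB')

# In a silent passage the expander sees only the tape hiss:
print(f'hiss during silence: {TAPE_HISS_DB} dB on tape -> {expand(TAPE_HISS_DB)} dB out')
```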
In sum, it was possible to work at EMS with computer-controlled, four-channel amplitude panning in CD-quality audio as early as 1971, in a format much the same as the current 5.1 surround standard. Wiggen also had ideas for extending spatial panning techniques to three dimensions, and in a radio interview from 2003, he described work on making sound sources appear as though they were physically placed outside of the circle of speakers. This goal has since become important in research and development of 3D sound, for example in methods of wave field synthesis.Footnote 19
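How computer-controlled amplitude panning can place a sound between four loudspeakers may be illustrated as follows. The equal-power pairwise panning law in this sketch is a common textbook formulation used here as an assumption; it is not documented as the formula used at EMS, and Chowning’s technique additionally involved distance and Doppler cues that are omitted here.

```python
import math

# Sketch of equal-power pairwise amplitude panning over four loudspeakers.
# Speakers are assumed at 45, 135, 225 and 315 degrees; a virtual source is
# rendered by splitting gain between the two nearest speakers so that the
# summed power (g1^2 + g2^2) stays constant. Layout and law are assumptions.

SPEAKERS_DEG = [45.0, 135.0, 225.0, 315.0]

def quad_gains(source_deg):
    az = source_deg % 360.0
    gains = [0.0] * 4
    for i in range(4):
        lo = SPEAKERS_DEG[i]
        # Each arc between adjacent speakers spans 90 degrees.
        offset = (az - lo) % 360.0
        if offset < 90.0:
            frac = offset / 90.0                    # position within the pair
            gains[i] = math.cos(frac * math.pi / 2)
            gains[(i + 1) % 4] = math.sin(frac * math.pi / 2)
            break
    return gains

# A simple circular trajectory, one gain update per control tick.
for tick in range(8):
    azimuth = tick * 45.0
    g = [round(x, 3) for x in quad_gains(azimuth)]
    print(f'azimuth {azimuth:5.1f} deg -> gains {g}')
```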
3.5. Database of perceived sonic qualities
Wiggen’s ideas of connecting art and technology through the psychological testing and categorisation of synthesised sounds have perhaps received the least attention. Similar to Jean-Claude Risset (1971: 123), Wiggen believed that computer analysis of sound could be used to further the field of psychoacoustics. Risset published his spectral analysis of trumpet tones in 1965, and used this type of approach to generate a catalogue of computer-generated sounds that had a high degree of similarity with their acoustic counterparts. This catalogue was published in 1969 by Bell Labs, where Risset was a colleague of Max Mathews. But where Risset explored synthesis of natural sounds, Wiggen wanted to use computer-controlled synthesis to manufacture sounds that could subsequently be categorised in psychological terms, not based on similarity with naturally occurring sounds. Wiggen believed this would unite the pure logic of the new computer tools with the psychological effects of musical resonance (Wiggen 2004: 53), and produce reciprocity between external and internal objects to ‘try to bridge the gap between the physical and psychological descriptions of a sound object’ (Wiggen 1972: 134). Wiggen was familiar with Pierre Schaeffer’s Traité des objets musicaux (Schaeffer 1966; Groth 2010: 97, 98), and found his categorisations useful in developing his own, numerically based approach.
At the Music and Technology conference in 1970, Schaeffer praised Wiggen’s work and was positive towards this use of computers for analysis and composition (Schaeffer 1971: 57), although he generally had a more careful and sceptical tone regarding the current use of computers.Footnote 20 Unfortunately, since Wiggen left EMS just as MusicBox was nearing completion, and before work could start on this categorisation process, his ideas are only outlined and described in theory (Wiggen 1971, 1972, 2004).
4. MUSICBOX COMPOSITIONS AND PERFORMANCES
While Wiggen’s musical output during the 1950s for the most part consisted of interval-based works composed from 12-tone principles,Footnote 21 these works also show a romantic sensitivity. His works from this time are surprisingly expressive for a composer who would later make radical and principled arguments about the music of the future. The works have not been performed often, but his Quartet for Piano, Violin, Clarinet and Bassoon (1955) was performed at ISCM in Oslo in 1956, and Ny Musikk in Oslo performed several piano works at a concert in 1959.Footnote 22 His electronic sound installation Musikmaskin 1 from 1961 was part of the exhibition aspect 61 at Liljevalchs konsthall.Footnote 23
Wiggen’s two first works composed with a computer, Wiggen-1 and Wiggen-2, were algorithmically composed, and programmed in Algol (Hiller 1970: 87) by Gunnar Helström (Paus 1965). However, no record of any public performance of these works has been found, nor are any recordings available.
The works that best represent Wiggen’s ambitions for a new composition method are his five studies Sommarmorgon (1972), Etyd (1972), Resa (1972), Massa (1974) and EMS för sig själv (EMS by itself, 1975). A detailed description of these works would lead too far in this text; however, a brief discussion of Sommarmorgon will illustrate the principles of his composition method. It must be pointed out that Sommarmorgon was the first study that Wiggen made with MusicBox, and that it is a quite simple piece in comparison with, for example, Resa.
Wiggen was, much like Xenakis, interested in systematic approaches and in forming coherent structures without determining the details. In order to achieve this, he imagined extensive co-variation in combination with mathematical methods for generating numbers for control of the parameter changes. Figure 5 shows how he imagined a co-variant outcome of such processes.
Figure 5 This sketch from Wiggen’s archive shows a specific development of frequencies, partials, envelopes, durations, reverb and spatial movement. The sketch is not marked, and it is not clear which piece the diagram refers to. Scan by the National Library of Norway.
Wiggen composed with streams of numbers, and by constraining and sorting these streams he shaped the musical expression. The music did not emerge from micro-level detail, but from his control of the typical characteristics of the streams. The collection of boxes shown in Figure 6 is a graphical representation of the score for Sommarmorgon, where some of the functions are simple, just passing a value, and others more complex, generating streams of numbers that vary with input and internal processing. It can be difficult to see exactly what is happening in scores such as these, but today, this is a common method for display and control of signal processing. In the early 1970s it was not.
Figure 6 This graphical representation of the score for Sommarmorgon is realised in the Java-version of MusicBox.
For example, in Sommarmorgon, the Multiple Gauss Random Generator (MULG1.mac, square box no. 1) produces an irregular and unstable train of numbers ranging from 35 to 14,000, and these numbers are sent to two places. First, they go to two If boxes (square box no. 2), where the stream is sorted into three ranges: 0 to 200, 200 to 1,600 and 1,600 to 14,000. Choice boxes below these ranges are used to select from set values by weighted probability. The first Choice box will, for example, select the number 15,000 more often than any other value in the list. The outputs from the Choice boxes are used to set tone duration in the Multiple signal generator (MSG) (square box no. 6).
The outputs from the Choice boxes are also used to trigger Gauss boxes (square box no. 3) that produce values that are quantised and used for amplitudes. The Choice boxes also trigger another set of Choice boxes (square box no. 4) that determine waveform. In effect, each time a number is sent from the MULG1 box (square box no. 1) it is used for determining waveform, duration and intensity.
The numbers from MULG1 are also sent to a Deter box (square box no. 5), where they are passed on each time the If boxes (square box no. 2) receive a number. The numbers are sent directly to the MSG box as frequency values, and are used as triggers for selecting envelope types and values. This can be read from the bottom right of the figure.
The train of numbers sent from the MULG1 box is generated in a relatively complex manner at the left side of the score, and it is this output that provides the musical parameters available on the EMS synthesiser.
A musical example from the score: Sommarmorgon is largely dominated by high-pitched sine tones, while at some points deeper tones with other waveforms are easily heard. The deeper tones also have longer durations than the more glittering weave of sine tones. The deeper tones are generated as follows: the first If box passes all numbers below 200 to the first Choice box, and with the Gaussian distribution (bell shape) from the MULG1 box, these numbers are few. The numbers that can be triggered in the first Choice box are larger than in the other two Choice boxes, but they are triggered less often. The numbers are used for durations, and this means that longer tone durations are rare. When the same low numbers (0–200) are passed by the Deter box (square box no. 5), they are used as frequencies, and the result is that a dark sound is also a long sound. The waveform of this sound is determined by another Choice box (square box no. 4), where the likelihood of selecting each of the waveforms is equal: 25 per cent. Since two of the four waveform options are sine waves, 50 per cent of all dark, long tones will have a waveform other than a sine. Conversely, the third Choice box in the top group (square box no. 2) selects short durations and only sine waveforms.
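The logic described above can be paraphrased as a small simulation. The Python sketch below follows the structure of the walkthrough – a Gaussian stream sorted into ranges, with weighted Choice selections for duration and waveform – but the specific values, weights and distribution parameters are assumptions made for illustration, not figures read from Wiggen’s score.

```python
import random

# Illustrative simulation of the control logic described for Sommarmorgon:
# a Gaussian stream of numbers is sorted into ranges; each range drives a
# weighted choice of duration and waveform, and the same stream also supplies
# frequency. All numeric values and weights are assumptions for this example.

def mulg1():
    """Stand-in for the Multiple Gauss Random Generator (range 35-14,000)."""
    return min(14000.0, max(35.0, random.gauss(4000.0, 2500.0)))

def choose(values, weights):
    """Stand-in for a Choice box: weighted selection from a fixed list."""
    return random.choices(values, weights=weights, k=1)[0]

events = []
for _ in range(10):
    n = mulg1()
    if n < 200:                          # rare, low numbers -> long, dark tones
        duration_ms = choose([15000, 8000, 4000], [0.5, 0.3, 0.2])
        # Two of the four waveform options are sines, each option at 25%.
        waveform = choose(['sine', 'sine', 'square', 'sawtooth'],
                          [0.25, 0.25, 0.25, 0.25])
    elif n < 1600:                       # middle range
        duration_ms = choose([2000, 1000, 500], [0.3, 0.4, 0.3])
        waveform = choose(['sine', 'square'], [0.7, 0.3])
    else:                                # common, high numbers -> short sine tones
        duration_ms = choose([250, 125, 60], [0.3, 0.4, 0.3])
        waveform = 'sine'
    frequency_hz = n                     # the Deter path: the stream as frequency
    events.append((round(frequency_hz), duration_ms, waveform))

for event in events:
    print(event)
```

Running such a simulation with different weights or seed values yields different but recognisably related results, which corresponds to the relationship between the two existing versions of the piece discussed below.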
Had Wiggen changed the conditions in the Choice boxes, Sommarmorgon would have been a different piece. There are now two versions of Sommarmorgon in existence, one realised at EMS, and the other realised in Fredrikstad in preparation for a concert in Bergen in 2008. Because the seed values are not identical, the versions are different but still recognisable as different performances of the same work. This short discussion will have to suffice in this article, but for a more complete analysis, see Rudi (forthcoming).
Wiggen’s works for MusicBox have seen a few performances, but not many. They were played at EMS in December 1975,Footnote 24 and Resa was performed at MIT in October 1976, as part of the first international conference on computer music. This concert was produced as part of the ISCM festival, and a recording from the conference was later played on Swedish radio. During the same year, works were also performed at a conference for European radio stations, where Wiggen gave a lecture on synthesised music in studios. In 1977, he was invited by Luciano Berio to submit material for Ircam’s historical exhibition. In 1984, Fylkingen performed the study EX14C, and Resa was performed at a Gaudeamus concert in Amsterdam in 1985. His electronic works were performed during the 30-year celebration of EMS in 1994 (Strömberg 1994), at the Ultima festival in October 2003, in Trondheim in 2005, and at the Bergen International Festival in 2009. In January 2017, all works were part of celebratory events at Ringve Music Museum in Trondheim.
5. SUMMARY
There is a thread running through Knut Wiggen’s technical development work at EMS, starting with digital control of sound generation and processing, proceeding with a hardware tool for real-time interaction on a scale relevant for studio production, and finally integrating the computer in the development of a new composition method.
Knut Wiggen moved to Sweden as a young man, and became a major contributor to the musical development in Sweden through his tenure as chairman of Fylkingen and studio director at EMS. During his tenure, Fylkingen became the main Swedish hotbed for cross-media art, and was an important stage for international exchange and presentation of new approaches to the changing media. The research at EMS was at the forefront of international developments, and EMS gained a reputation that is arguably still beneficial for Swedish technology-based music.
When discussing Wiggen’s work, it is impossible not to recall how he was forced to resign his position at EMS. During the early 1970s, EMS had lost its standing as the major development project of the Swedish Broadcasting Corporation; the organisational base had shifted, and a new board with several composers among its members wanted a new direction. EMS was to receive less funding for research, and the activities were to become more pragmatically oriented towards composers’ immediate needs. Wiggen did not accept these ideas and changes, and resigned his position. In effect, this meant abandoning MusicBox development, as well as his aims for synthesis and psychology-based research. When Wiggen left EMS, he took the MusicBox code with him, partly because EMS did not want it, and partly to ensure that the software would not be used for other purposes, for example making ‘elevator music’. MusicBox was intended as a tool for art, and has never been published as a compiled program. This pioneering software more or less disappeared with Wiggen’s departure from EMS.
In a sense, it is possible to say that the composers at EMS who worked with text/sound chose a traditional approach when they insisted on funnelling funding into the tape studio (Klangverkstaden). It is, however, difficult to see how their artistic aims of semantic play could have been realised in MusicBox, and they also needed facilities and resources that were available immediately, not at some point in the future.
With his logic-based approach to composition, and the technology development to support it, Wiggen’s work was cutting edge in the early 1970s; however, his abrupt resignation severely hindered its dissemination. This article serves to make the history of early computer-controlled synthesis more complete, and to support the scholarship of future generations.
Acknowledgements
I am grateful to Carol van Nuys, Zaid Holmin and Dag Svanæs for granting access to source materials and their time spent correcting errors in my initial text, as well as to Ingrid Romarheim Haugen at the National Library of Norway, who has been of invaluable help in providing access to materials not yet catalogued from Wiggen’s archives.