1. Introduction
The use of computers in music is typically associated with algorithmic composition – that is, computational approaches to the organisation of musical form – and with sound generation – that is, the generation of digital audio signals as the sonic output of a computer-based musical system. A less evident possibility, however, concerns the use of a computer for the generation of acoustic sounds, in order to regain a specific acoustic physicality in the output. With respect to such a goal, a still new but now firmly established perspective is opened by ‘physical computing’, a term meaning ‘computation with physical objects’ (O’Sullivan and Igoe 2004; Igoe 2007). Physical computing fosters the idea that computation can be taken outside standard computers and embedded into real-world objects in order to create ‘a conversation between the physical world and the virtual world of the computer’ (O’Sullivan and Igoe 2004: xix). The key elements for developing physical computing are microcontrollers; that is, computation units packed onto a small circuit board, with input/output facilities allowing the connection of sensors and actuators. In a musical context, microcontrollers can drive physical objects as acoustic sound sources. In this way, it is possible to create ‘acoustic computer music’, a music entirely controlled by computational means but in which sounds are generated by acoustic bodies (see Goto 2006).

Musicians are considered to be the first to practise physical computing (O’Sullivan and Igoe 2004). Physical computing emphasises a creative use of found objects and technologies: in perspective, it can be seen as a computational reprise of issues and attempts from a long, even if partially submerged, tradition established in electronic music-making. Such a tradition, best exemplified by David Tudor (VV.AA. 2004), aims at creatively using, building and perverting electronic technologies by means of a DIY approach (Bowers and Archer 2005; Ghazala 2005; Collins 2006; Richards 2008). One of the key points of all these experiences is that hardware is no longer ‘hard’, as it can be ‘reprogrammed’, as in software development. In the digital reprise of this perspective, microcontrollers play a pivotal role, adding a versatility that derives from the computational, abstract nature of the control of the information flow.
2. Design principles for a computationally controlled, debris-based instrument
Physical immediacy of control, reprogrammability, DIY attitude, a softening approach to hardware, sound generation via physical objects: all these issues are at the origin of the design and realisation of the Rumentarium project.Footnote 1
The Rumentarium is rooted in different but often intermingled traditions: percussion instruments, musical robots, programmable mechanical devices, kinetic art, sound installations. Historically, the use of percussion instruments has deeply influenced twentieth-century Western music in its quest for new timbres. Experimentation with percussion instruments has revealed to composers that virtually every object in the world can be used as a sound body (including car brakes, for example; see Facchin 1989 for an exhaustive catalogue). Futurism underlined the role of percussion instruments, and the Futurist Luigi Russolo created the celebrated, and lost, ‘intonarumori’ by partly mechanising percussion sources.Footnote 2 Another relevant example from Early Modernism is George Antheil’s Ballet mécanique (1926), originally scored for percussion and mechanical instruments (three xylophones, electric bells, two wood propellers, a metal propeller, a tam-tam, four bass drums, a siren, two pianos and a pianola; Oja 2000).
Musical robots have a long history, dating back to at least the seventeenth century (Prieberg 1960). The myth of the playing machine has never ceased to foster new inventions, and the increasing diffusion of microcontrollers has led to a robust revitalisation of this tradition. In recent years we have witnessed a proliferation of working experiments, including mechanical pianos; turntable robots; percussion, string and wind robots; and ‘extensions’ (i.e. other robots more difficult to define) (Kapur 2005; a good example is the Man and the Machine robot orchestra: Maes, Raes and Rogers 2011).
While in robots the main issue is the autonomy of the object, mechanical pianos belong also to a different tradition, that of programmable devices for playing back music sequences, a tradition that in music dates back to fourteenth-century mechanical carillons (Prieberg 1960; Roads 1996). The use of a player piano by Conlon Nancarrow (Gann 1995) connects this tradition directly to the most advanced contemporary algorithmic strategies for musical composition and control: Nancarrow’s experiments lead directly to the digital reprise of the player piano, the Yamaha Disklavier.
Finally, and going beyond the domain of music, kinetic artists have created many works including mechanical devices capable of producing sounds. In 1956 Nicolas Schöffer was the first to create a physical installation implementing a cybernetic system – the CYSP 1 – that sensed the environment and reacted to it by generating sounds (Prieberg 1960; see also CYSP 1 2013). Starting with junk materials, Jean Tinguely assembled complex machines, often exhibiting a mechanical-sound-generation behaviour (Hulten 1987). The works by the German-American artist Trimpin (Leitman 2011) are probably the most representative in the domain of automated mechanical sound devices.
Sound art emphasises the role of sound in relation to specific places, and on many occasions sound artists have made use of physical devices (LaBelle 2006). In this sense, the research between acoustics and electronics by the musician-artist Jean-François LaPorte is particularly relevant (LaPorte 2013).
The Rumentarium is a set of handmade percussion sound bodies (resonators) that are acoustically excited by DC motors (sources). The motors are controlled via computer through microcontrollers. The name ‘Rumentarium’ originates from rumenta, a northern Italian word meaning ‘rubbish, junk’, while at the same time evoking the Latin instrumentarium.
With respect to the previously discussed traditions, the Rumentarium extends the percussion tradition by assembling resonators from a huge variety of recycled/reused materials. In relation to the tradition of musical robots, the Rumentarium – being resolutely non-anthropomorphic and avoiding the use of traditional instruments – is a sort of ‘extension’ robot. Its aim is to be programmable and interactive, so that different control strategies can be implemented, combining algorithmic and gestural control. Finally, it is intended both as an instrument to be performed in a musical context and as a standalone, site-specific sound installation. Similar works include ModBots, digitally controlled percussion robots developed by Bill Bowen in collaboration with Lemur since 2002. ModBots are miniature, modular instruments designed with an emphasis on simplicity, making use of a single electromechanical actuator (a rotary motor or a linear solenoid) remotely operated by a microcontroller (Lemur 2013). While Bowen’s ModBots are used mainly as components of sound installations (Beall Center Installation 2013), Lemur is the main technological provider of Pat Metheny’s recent Orchestrion project, in which solenoid actuators are used interactively in real-time by the renowned guitar player (Metheny 2013).Footnote 3 The ModBots project was also the origin of William Brent’s LudBots (2008): LudBots have been used as instruments in live performances, but also as components of the ‘False Ruminations’ installation (see False Ruminations 2013). In the ‘Constante’ project, Ivan Puig and his collaborators built a set of mechanical instruments from discarded objects and industrial materials that were remotely operable (in this case by means of analogue equipment), so that they could be played live but also automatically sequenced (Constante 2013). Finally, DC motors are at the core of most works by the Swiss artist Zimoun, where they typically act as mechanical sound sources (Zimoun 2013), and the Japanese artist Kanta Horio uses them to create complex, movable electromechanical installations (Horio 2013).
In the Rumentarium, the design and production of the instruments (from here on, ‘sound bodies’) follow three main principles, inspired by sustainable design (Tamborrini 2009): refabrication, softening, flexibility.
The design embraces an ecological perspective against the huge amount of wasted resources that characterises ‘mature’ capitalism and that has given recycling policies an increasingly relevant place in national and international agendas. At the same time, many practices around the world have traditionally developed specific attitudes towards the ‘refabrication’ of objects as a normal way of shaping and reshaping the semiotic status of material culture (Seriff 1996). An inspiring concept is the ‘System-D’ approach of recyclers from Dakar (Roberts 1996). Here ‘D’ stands for ‘débrouille-toi!’ (French for ‘cope with it’, ‘find your way’), a general philosophy of quickly finding a way forward with limited resources, leading to variable strategies of reuse of available materials. Accordingly, in the Rumentarium the DC motors are scavenged from discarded electronics (CD/DVD players, mobile phones, toys) and can be extended by adding parts of various materials (plastic, wood, metal), thus implementing different modes of excitation (percussion/friction). Resonators are constructed from an undefined variety of recycled/reused materials, such as pipe tobacco boxes, glass bowls, broken cymbals or kitchen pans. Sound bodies are generally assembled using metal wires, glue or soldering. They can include Lego parts, as Lego is suitable for fast prototyping, in particular when support structures are needed.
Figure 1 shows a minimal example of a sound body. Here the resonator is a broken crash cymbal, while the sound source is a vibrator motor from a mobile phone. Apart from powering the motor, the wire, attached to the cymbal’s surface with duct tape, is also used as a flexible support, so that, when powered, the vibrator bounces over the cymbal because of its eccentric mechanism.
Figure 1 Minimal example: a cymbal and the vibrating mechanism from a mobile phone.
Figure 2 shows some sound bodies making use of Lego supports. The ‘Technic’ variant of Lego is used, as it allows complex mechanical behaviours to be implemented (by means of gears, axles, pins and beams), but also for its robustness, as it withstands transportation well. In the tinkling ball (on the right), originally a toy generating sound while rolling, the beater is connected to a motor, this time inserted into the Lego structure. Figure 2 (left) also shows a sound body consisting of a little bell in which the beater is driven by motors. Between the bell and the ball, a tinkling toy is placed on a metal box. It oscillates thanks to a beater connected to the Lego structure behind.
Figure 2 A pitched section: toy lamellophones and bells mounted onto Lego structures (Centro Stabile di Cultura, San Vito di Leguzzano, photo courtesy Gabriele Grotto).
Figure 3 shows a 24-element setup. Sound bodies include different instrumental subfamilies of idiophones (see Hornbostel and Sachs 1961). The previous examples represent ‘struck’ idiophones. In the Rumentarium, ‘scraped’ idiophones are represented by metal pipes with a motor inserted at one end, which makes a metal or plastic piece rotate continuously against the internal surface. ‘Shaken’ idiophones are rattles, in particular vessel rattles, in which rattling objects enclosed in a vessel strike against each other, against the walls of the vessel, or usually against both. In the Rumentarium, vessel rattles are built by inserting movable objects (seeds, rice, plastic beads, buttons) inside metal boxes (typically, pipe tobacco containers): the motor is inserted in the bottom of the box, and shakes the objects by rotating a flexible plastic or metal wire.
A second design principle is the softening of the hardware. Being so simple and intrinsically costless, sound bodies can be ‘produced while designed’ in an improvisation-like manner, starting from the available materials. As a consequence, their hardware nature is quite ‘soft’: sound bodies, and their parts, can be replaced easily and effortlessly. This attitude is similar to software development, in particular to iterative and incremental development methodologies (Larman and Basili 2003), where fast software releases allow the inclusion of evaluation phases in the development process. In addition, the wiring connections in the Rumentarium are made exclusively with alligator clips (see Figures 3 and 8). In this way, it is easy to assemble or disassemble the whole setup, thus implementing another feature of sustainable design, design for disassembly (Tamborrini 2009). Such a wiring technique contributes to blurring the boundary between the phase of installation and the phase of usage, as sound bodies can be added or removed on the fly during a live performance, simply by connecting or disconnecting them through the alligator clips. If needed, broken pieces can be replaced, repaired and reinserted into the Rumentarium setup while it is at work.
Figure 3 Rumentarium installed at Share Festival 2009, Turin (courtesy Silvio Lucchini).
Finally, a third principle is flexibility. This is intended here as the capability of the Rumentarium to be modified in relation to specific needs: a performance may require a certain setup depending, for instance, on the presence of microphones for the amplification of sound bodies, while in an installation the setup is primarily defined according to the exhibition space. The number and kind of sound bodies can be adapted to the context of the setup. In order to ensure modularity, the Rumentarium uses a variable number of Arduino microcontrollers (Banzi 2009) that act as six-out,Footnote 4 real-time digital-to-analogue converter interfaces between the computer and the electrical devices. Each microcontroller, with its basic electronic components, is inserted into a plastic box (the ‘controller box’). For each controller box, a second box is used as a power hub (the ‘power box’). Thus, the Rumentarium is based on a six-element unit, made of six DC motors, a controller box and a power box. The Rumentarium may include up to four units, allowing the management of up to 24 sound bodies. The only parameter available to drive the sound bodies is the voltage applied to the motors (typically between 3 and 6 volts), which can be regulated by sending a numerical value to the microcontroller. In turn, the microcontroller converts the value into a voltage, and the voltage determines the speed of the motor. Thus, the numerical value acts as a general ‘dynamic’ parameter: higher values result in faster rotation and higher sound volume, even if the details depend strictly on the mechanics of each sound body. Sound bodies exhibit hysteresis: once a motor has started rotating, it is possible to decrease the voltage below the activation threshold, and the motor will keep on rotating. These nonlinearities and unpredictable behaviours are a structural part of the project, as they allow the introduction of noisy patterns from the physical (in this case, mechanical) environment, instead of using algorithmically generated noise (see O’Sullivan and Igoe 2004: 188).
3. The software interface
Various software applications have been created to drive the Rumentarium, all written in the SuperCollider language (Wilson, Cottle and Collins 2010), which provides both the features of a general-purpose programming language (e.g. Java) and DSP capabilities useful for audio/musical applications. In order to control the Arduino boards, SuperCollider offers software bindings for various Arduino libraries that make the boards listen on the USB port, where they can receive messages from the host computer. Figure 4 shows the general structure of the Rumentarium, including both hardware and software elements.
Figure 4 Hardware/software structure of the Rumentarium.
The main software component is the RuMaster class (1), which acts as the interface with the physical output (microcontrollers and sound bodies). A RuMaster instance allows a set of different Arduino microcontrollers (2) to be treated as a single abstract device with 24 abstract ports, indexed from 0 to 23, to which float values in the range [0, 1] are sent. On this abstract software layer it becomes easier to build mapping strategies. Such strategies can be related to real-time incoming data from external inputs, such as MIDI gestural controllers (3) or the keyboard (4). Data can also be gathered in non-real-time from external sources such as handmade graphic scores (5) or from digital information (6). In all these cases, mapping algorithms (7) are needed to specify a semantics in terms of the Rumentarium’s behaviour. It is also possible to directly generate control data for RuMaster (8). Finally, RuMaster allows the messages sent to the Arduinos to be stored in a log file, so that they can be played back on demand.
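As a minimal sketch of this abstraction layer – not the RuMaster implementation itself, whose code is not reproduced here – the following SuperCollider fragment routes an abstract port index and a float value to one of four hypothetical per-board output functions. The ~boards and ~ruSet names, and the serial transport they stand in for, are assumptions introduced for illustration only.

```supercollider
// Minimal sketch of a RuMaster-like abstraction (illustrative, not the original class).
// Each board is represented by a plain function that would write to the corresponding
// serial connection; the actual transport used by the project is not reproduced.
(
~boards = 4.collect { |i|
    { |port, byte|
        // placeholder: send 'byte' to local port 'port' of board 'i' here
        ("board % port % value %".format(i, port, byte)).postln;
    }
};

// Route an abstract port 0..23 and a float in [0, 1] to (board, local port, 8-bit value).
~ruSet = { |index, value|
    var board = index div: 6;                        // six sound bodies per unit
    var port  = index mod: 6;
    var byte  = (value.clip(0, 1) * 255).round.asInteger;
    ~boards[board].value(port, byte);
};

// Example: half power on sound body 13 (third unit, second local port).
~ruSet.(13, 0.5);
)
```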
4. On control: mapping strategies and emerging issues
Computers have been used for live performance since the late 1960s (Chadabe 1996), but it is only since the 1990s that the increasing computational power of processors has allowed a wide diffusion of real-time audio software for live situations. In this context, the use of the computer raises some relevant and possibly contradictory issues, particularly in relation to musical performance. For the first time in the history of music, a separation has arisen between ‘a gestural controller or input device that takes control information from the performer(s) and a sound generator that plays the role of the excitation source’ (Jordà 2002: 24). A relevant consequence of uncoupling the interface from the generator concerns the status of the digital instrument as a musical performing tool. From the point of view of controllers, a traditional acoustic instrument acts like a ‘servant’ following a stimulus–response logic: the sound generator responds closely to the gestures of the performer operating the controller. With a computer-based instrument, by contrast, it is easy to implement different degrees of agency, from stimulus–response behaviours to much more complex ones, including artificial-intelligence features (a memory, a set of action rules, etc.). In short, a continuum between the ‘instrument’ and the ‘machine’ emerges that allows us to investigate and exploit different forms of semiotic subjectivity in artworks (Fontanille 1998), posing the questions: is the tool an instrument? Or is it an agent? Is it something in between?
Of course, these issues do not emerge only in relation to real-time performance: rather, they are relevant in the wider context of computer music, whenever an interface (in the most abstract sense) is connected to a sound generator. But live performance, and improvisation in particular, highlights them (see Bowers 2002). Many of the design issues that emerged in relation to the Rumentarium have been discussed by Bowers and Archer (2005) in introducing the notion of the ‘infra-instrument’. In opposition to an ideology of the technological sublime that seems predominant in the field of interface research, the two authors identify the infra-instrument as a deliberately constrained, lo-fi instrument that forces the musician or improviser to focus on a ‘reductionist’ approach – that is, on an exploration of the instrument’s intrinsic limitations. They propose strategies for building infra-instruments that share many aspects with the Rumentarium’s design.
In the following I will discuss some methodologies that have been developed for the control of the Rumentarium in relation to some artistic applications.
5. MIDI controller and Loop station
Real-time input can benefit from standard MIDI controllers. In particular, a Behringer BCR2000 rotary controller has been chosen, as it includes up to 32 rotary encoders that can be directly associated with a variable number of sound bodies. By means of the MIDI controller, it becomes possible to play the Rumentarium as an ensemble of acoustic sound sources. While voltage control could be realised without resorting to microcontrollers – that is, by simply using potentiometers to vary the voltage applied to the motors – the computational nature of the system allows us to add a further layer of control. A loop station has been implemented, capable of recording MIDI input over time and then playing it back. The loop station allows us to start or stop recording input events into a history array, and to play the history back while new events arrive from the same MIDI controller. Recording start/stop, duration of the history in seconds, and number of repetitions (from one to infinite looping) are assigned to rotary encoders and can be controlled in real-time.
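The following SuperCollider fragment is a minimal sketch of such a layer, not the actual implementation: encoder values are forwarded to the sound bodies and, while recording, stored as timestamped events that a routine can later play back. The one-to-one mapping from CC numbers to sound bodies and the ~ruSet helper from the previous sketch are assumptions.

```supercollider
// Sketch of the MIDI layer and a simple loop station (illustrative assumptions throughout).
(
MIDIClient.init;
MIDIIn.connectAll;

~history = List.new;
~recording = false;
~loopStart = 0;

MIDIdef.cc(\rumCC, { |val, num, chan, src|
    var index = num;                 // assume CC numbers 0..23 map one-to-one onto sound bodies
    var value = val / 127;
    if(index < 24) {
        ~ruSet.(index, value);
        if(~recording) {
            ~history.add([Main.elapsedTime - ~loopStart, index, value]);
        };
    };
});

// Start/stop recording and play the recorded history back once.
~startRec = { ~history = List.new; ~loopStart = Main.elapsedTime; ~recording = true };
~stopRec  = { ~recording = false };
~playLoop = {
    Routine {
        var last = 0;
        ~history.do { |event|
            (event[0] - last).wait;      // wait until the event's recorded time offset
            last = event[0];
            ~ruSet.(event[1], event[2]);
        };
    }.play;
};
)
```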
6. Score reading
The MIDI controller makes it possible for the musician to play the Rumentarium like an instrument. A completely different strategy focuses on a non-real-time, composition-like interest and involves handmade graphic scores. While it would be possible to use common-practice percussion notation to drive the Rumentarium (see Facchin 1989), given the singularity of the Rumentarium with respect to standard instruments, other notations may be taken into account (e.g. Cage and Knowles 1969; Sauer 2009). As the only control parameter in the Rumentarium is voltage, the notation needs to represent only one (continuous) dimension. The devised score system uses n-space staves (where n is the number of sound bodies involved) printed on paper. Once the score template is printed, it is possible to draw on it with pencils or pens, assuming that the horizontal dimension represents time (as in time notation), the spaces on the staff represent sound bodies, and the relative darkness of the grey represents the amount of voltage to be sent to the motors.
Figure 5 reproduces a score for 18 sound bodies (three units). In order to be performed, the score is scanned, colour information is discarded, and the image is automatically processed to reconstruct a single staff from the separate staves. The image is then resampled into 18 rows (that is, as many as the sound bodies). The number of columns (i.e. the time quantisation) depends on the desired time resolution. The resulting matrix is the equivalent of a piano roll, where the grey values represent the required ‘dynamics’ of the sound bodies. In order to perform the score, a routine scans the matrix at the desired sample rate: at each time step, it takes a column and sets for each sound body the value specified by the corresponding grey level; it then waits for the scanning period and moves to the next column, until the score has ended. Figure 6 shows a screenshot of the software application. The digitised score is displayed at the top right. The graphic window at the front shows the quantised matrix, with a grey background displaying the advancement in time. The other graphic elements show some SuperCollider code and a chronometer.
Figure 5 SottoCopernico, a graphic score for Rumentarium.
Figure 6 A screenshot from the software that performs a graphic score.
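A minimal sketch of the scanning routine described above, assuming that the processed score is already available as a nested array of grey values in [0, 1] (here replaced by a random placeholder) and again using the hypothetical ~ruSet helper:

```supercollider
// Sketch of the score player: rows = sound bodies, columns = time steps.
// The matrix would come from image processing of the scanned score; here it is random.
(
~rows = 18;                  // one row per sound body
~cols = 64;                  // time quantisation
~dt   = 0.25;                // scanning period in seconds
~matrix = { { 1.0.rand } ! ~cols } ! ~rows;   // grey level in [0, 1], 0 = white

~playScore = Routine {
    ~cols.do { |c|
        ~rows.do { |r|
            ~ruSet.(r, ~matrix[r][c]);   // darker pixel -> higher voltage
        };
        ~dt.wait;
    };
    ~rows.do { |r| ~ruSet.(r, 0) };      // silence at the end of the score
}.play;
)
```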
7. Writing systems
In the development of control strategies for the Rumentarium, writing systems have been taken into account as an answer to the question: ‘What does it mean to play (with) a computer?’ The input capabilities of a ‘multimedia computer’, here intended as a standard computer with built-in audio/video, are very limited, being reduced to the mouse and the keyboard. In this sense, ‘laptop music performance’ has been criticised as an example of an invisible interaction between the performer and the computer (O’Sullivan and Igoe 2004: 187). On the other hand, it must be noted that keyboard input, even if limited to a suboptimal QWERTY system (see Gould 1991: chapter 4), can happen at a very fast pace (more than 200 characters per minute; Brown 1988). Such an input rate is comparable to musicians’ playing, but it is also accessible to non-musicians. Moreover, keyboard usage provides an easy mnemonic technique for sequences, as input sequences can literally be remembered by the user as words from natural language. Writing mappings are associations between characters on the computer keyboard and sound bodies in the Rumentarium (see below for two examples). They allow the Rumentarium to be played by typing texts, thus generating and exploring in real-time fast sequences of (sets of) sound bodies. The user is given a meaningful way to control the ensemble simply by typing different words. By means of writing mappings, the statistical phonological/lexicographic properties of languages and texts can be exploited in artistic applications. In this sense, poetic texts are particularly effective, as they can be thought of as very idiosyncratic ways of configuring ordinary languages by means of redundancy and uneven distributions of phonemes/graphemes (Jakobson and Waugh 1979).
When writing mappings are used in real-time, temporal information is defined by the typing process, and the performer can modulate input rhythm while typing. The mapping function includes handling for non-alphabetic characters (e.g. numbers, blank spaces, punctuation), while uppercase is typically turned into lowercase.
Two writing mappings have been explored so far: Braille and alphabet.
7.1. Braille mapping
While we were experimenting with a single, six-out unit of the Rumentarium, the Braille mapping emerged as a way to relate keyboard input to the control of the sound bodies. The Braille writing system for tactile reading consists of raised dots arranged in six-dot cells (BANA 2013). As each dot is given an index, Braille can be thought of as a six-bit alphabet (presence/absence of the raised dot for each position). In short, the Braille mapping treats a Rumentarium unit as a Braille encoder, activating the sound bodies in relation to the dots in the Braille cell.
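As a minimal sketch, and assuming the hypothetical ~ruSet helper introduced earlier, the following fragment looks a character up in a deliberately partial table of Braille cells and turns the resulting dots into a six-element activation pattern for one unit:

```supercollider
// Sketch of the Braille mapping: character -> Braille cell -> six on/off sound bodies.
// The table covers only a few letters, for illustration.
(
~braille = Dictionary[
    $a -> [1],       $b -> [1, 2],    $c -> [1, 4],
    $d -> [1, 4, 5], $e -> [1, 5],    $f -> [1, 2, 4]
];

~brailleTrig = { |char, amp = 1.0|
    var dots = ~braille[char.toLower];
    if(dots.notNil) {
        6.do { |i|
            // dot numbers are 1-based; port i is active if dot i+1 is raised
            ~ruSet.(i, if(dots.includes(i + 1)) { amp } { 0 });
        };
    };
};

~brailleTrig.($e);   // activates sound bodies 1 and 5 of the unit
)
```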
As shown in Figure 7, when a key is pressed (1), the associated character is retrieved (2) and converted into Braille (3): the resulting binary array (4) is used to drive the Rumentarium (5), mapping each dot to a sound body and assuming that 1 represents activation while 0 means no powering. In the example, the key ‘e’ simultaneously triggers sound bodies 1 and 5, connected to the first and fifth ports of the unit. When more than one unit is involved, the Braille mapping is applied simultaneously to all of them. In short, the mapping can be replicated over several units (connected to different sound bodies) in a sort of amplification process. A sound installation based on the Braille mapping is La terra guasta. Inspired by The Waste Land by T.S. Eliot (the Italian name of the installation being an etymological translation of the poem’s title), it has been shown at the Share Festival 2009 in Turin, at Spazio Orioles in Palermo (2009, Figure 8) and at the Generazioni elettroniche Festival in Gorizia (2011).
Figure 7 Braille mapping: from keyboard to Rumentarium unit.
Figure 8 La terra guasta installed in Palermo, Live iXem 2009 (courtesy Ester Lo Coco).
In Eliot’s text, the desolation of the human condition in modernity is often acutely rendered by a landscape (and a soundscape) of waste. This aspect emerges clearly in the third section of the poem, ‘The Fire Sermon’, where, not by chance, the main rhetorical device is the figure of the congeries (‘chaotic accumulation’): as an example, the Thames is described as bearing ‘empty bottles, sandwich papers, silk handkerchiefs, cardboard boxes, cigarette ends’ (Eliot 2001, III: 177–8). Inspired by these features, the installation places the Rumentarium on the floor of the exhibition space (Figure 8), in an unstructured form, underlining the randomness and heterogeneity of the whole and reflecting the residual nature of the objects and materials. The mass of cables and visible electronic components helps stress the ‘broken’, residual nature of the ensemble. While active, the installation scans the whole text of the poem, character by character. When the text is finished (the average duration is two and a half hours), it is read again from the beginning. A characteristic feature of The Waste Land is wandering: the hollow men (to quote another Eliot poem) wander through urban spaces with no apparent direction, a mood representing contemporary living’s loss of sense. This random walk is used as the inspiring concept for the installation’s behaviour. As a consequence, a single function is dedicated to updating all the control parameters: the number of units involved, the amplitude scaling factor and the scan rate. The function describes a random walk, with a particular behaviour near the extremes, as the conditions are intended to make the values fold over the maximum and the minimum. When the function folds over, the folded value is still near the opposite extreme, and it is highly probable that it will fold over again. In this way, a Brownian motion is periodically perturbed by series of jumps between the extremes.
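The exact update rule is not reproduced here; as a rough sketch, a value in [0, 1] can be perturbed by a small random step and wrapped at the extremes, so that a step past one boundary reappears near the other and is likely to jump back, yielding the alternation of slow drift and sudden leaps described above:

```supercollider
// Sketch of the installation's random walk (one possible reading of the folding behaviour).
(
~value = 0.5;
~step = {
    // small Brownian step, wrapping at the extremes of [0, 1]
    ~value = (~value + rrand(-0.15, 0.15)).wrap(0.0, 1.0);
    ~value
};
20.do { ~step.value.postln };
)
```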
7.2. Alphabet mapping
The alphabet mapping is an extension of the Braille mapping and is based on the same principle of injecting the structure of textual information into the Rumentarium. When four units are used, 24 sound bodies are fully available: thus, it becomes possible to associate each sound body with a particular character of the alphabet. For Italian orthography, 21 symbols are enough, while English (or French) requires 26 symbols, and thus as many sound bodies. In these cases, appropriate adjustments can be made, as the frequency of letters in usage varies enormously:Footnote 5 as an example, the letter ‘Q’, very rare in English, can be associated with the same port as the letter ‘U’, as it always appears in the digraph ‘QU’. Compared to the Braille mapping, the alphabet mapping allows us to explore and exploit the sound bodies’ setup in a more analytical fashion, as each of them is directly controlled by a letter, which consequently receives a unique sonic identity (see the sketch below). Analogously to the Braille mapping, the alphabet mapping is not limited to keyboard input, but can be used to drive the Rumentarium by implementing time-based automatic scanning of texts. Detti del tuono (‘Tales of the thunder’) is a cycle for alto flute and Rumentarium playing back a fixed sequence of events, commissioned by the Turin-based Fiarì ensemble for their 2009 season. Again, the piece owes its name to Eliot’s The Waste Land (Part V: ‘What the thunder said’). Both the flute part (not discussed here) and the percussion part refer to the same excerpts from the poem. In the Rumentarium, the 24 sound bodies have been organised into four different subsets (one for each unit): beat generators, cymbals, rattles and pitched sound bodies. In each piece, the percussion part is the result of different scheduling and layering techniques. Many different fragments from the poem have been selected, in relation both to their textual meaning and to the way they sound once mapped onto the sound bodies.
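A minimal sketch of the alphabet mapping, with illustrative choices for the port assignment and the treatment of surplus letters, and using the hypothetical ~ruSet helper:

```supercollider
// Sketch of the alphabet mapping: each letter addresses one sound body.
// Non-alphabetic characters are ignored and uppercase is lowered before lookup.
(
~letterToPort = Dictionary.new;
"abcdefghijklmnopqrstuvwxyz".do { |ch, i| ~letterToPort[ch] = i % 24 };
~letterToPort[$q] = ~letterToPort[$u];   // 'q' shares the port of 'u' (illustrative adjustment)

~typeChar = { |char, amp = 1.0|
    var port = ~letterToPort[char.toLower];
    if(port.notNil) { ~ruSet.(port, amp) };
};

// driving the Rumentarium from a text fragment, one character per call
"what the thunder said".do { |ch| ~typeChar.(ch) };
)
```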
8. (Self-)Listening control
Finally, the addition of machine listening capabilities allows the Rumentarium to interact with the acoustic environment. Audio events are detected via onset recognition: retrieved audio content can be analysed and used as input for a mapping function driving the Rumentarium. Again, these capabilities can be explored in relation both to performance and to installation. In the first case, a possible setup – explicitly aimed at interactive improvisation – includes a ‘bassanjola’, a modified, fretless banjo with mandola strings, tuned in fourths like a bass (Figure 9). Onset detection on the signal captured from the bassanjola triggers new events from an automatically generated pattern for the Rumentarium that works as a rhythmic background for improvisation.
Figure 9 Live performing with bassanjola and Rumentarium (Centro Stabile di Cultura, San Vito di Leguzzano, courtesy Gabriele Grotto).
When the pattern has reached its end, the Rumentarium proposes a new, automatically generated pattern to the performer. In the Rumentario autoedule installation (‘autoeating rumentarium’, a pun on the literal Italian translation of feedback, premiered at Villa Aldobrandini, Frascati, for the Quadratonomade exhibition in 2011), the audio environment, including the audience, is sensed via microphones. A software analysis module extracts onsets, pitch and loudness from the environment, which are used to drive the orchestra (see Figure 10). At each detected onset, the recognised pitch, quantised to quarter-tone pitch classes (one for each of the 24 sound bodies), selects the next sound body to play, while the current used to drive its motor is proportional to the detected loudness. As the acoustic environment coincides with the orchestra itself, scattered over a surface, the system reacts to itself. This configuration turns the Rumentarium into an audio feedback system,Footnote 6 in which the audio output of the sound bodies, captured by microphones, is mapped into control commands for the sound bodies themselves.
Figure 10 Diagram of the Rumentario autoedule installation.
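A minimal sketch of such an analysis-and-mapping chain in SuperCollider, with arbitrary input bus, FFT size and onset threshold, and the hypothetical ~ruSet helper standing in for the actual output layer:

```supercollider
(
// assumes the server is already booted
SynthDef(\rumListen, {
    var in = SoundIn.ar(0);                 // microphone input (bus 0 is an assumption)
    var chain = FFT(LocalBuf(512), in);
    var onset = Onsets.kr(chain, 0.3);      // onset detection, arbitrary threshold
    var freq = Pitch.kr(in)[0];
    var amp = Amplitude.kr(in);
    SendReply.kr(onset, '/rumOnset', [freq, amp]);
}).add;

OSCdef(\rumOnset, { |msg|
    var freq = msg[3], amp = msg[4];
    // quarter-tone pitch class: 24 classes per octave, one per sound body
    var body = (freq.cpsmidi * 2).round.asInteger % 24;
    ~ruSet.(body, amp.clip(0, 1));
}, '/rumOnset');
)

// once the SynthDef has reached the server:
Synth(\rumListen);
```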
9. Mapping reconsidered, and the soundscape that follows
Looking back at the various mapping schemes designed for the Rumentarium, some final considerations can be drawn. First of all, their general rationale is to respect the structure of what is being mapped onto, that is, the Rumentarium itself. In this sense, all the mappings take into account its three relevant features: the presence of a number of sound bodies, their grouping into six-element subsets, and a single continuous control parameter for each sound body. The use of the Rumentarium as a computationally augmented instrument by means of a MIDI controller has been extensively tested live since 2009 in the context of free improvisation, both in solo and in collective acts with the AMP2 and IVVN ensembles (featuring the Rumentarium with laptops, prepared guitars and acoustic percussion), and has proven to be very flexible, as required by the musical context of group improvisation.Footnote 7 As each motor keeps on moving until a zero voltage is applied, complex textures are easily created: a general layering technique involves progressively adding sound bodies and changing their speeds while they keep on playing. A possibility provided by the presence of a computational layer is to define a physical button on the MIDI controller as a ‘mute all’ utility, which sets all the sound bodies to zero at the same time: such a behaviour is crucial mainly for aesthetic reasons (in order to obtain a sudden silence from maximum dynamics) but also for technical ones (e.g. resetting the Arduino boards in case of unexpected behaviour).
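A possible realisation of such a utility, sketched with an arbitrary note number and the hypothetical ~ruSet helper from the earlier examples:

```supercollider
// Sketch of the 'mute all' utility: one controller button zeroes every sound body at once.
// The use of a note-on message and the note number 36 are assumptions.
MIDIdef.noteOn(\muteAll, { |vel, num|
    if(num == 36) { 24.do { |i| ~ruSet.(i, 0) } };
});
```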
In relation to the score mapping, one of the main criticisms of graphic notations has always been their lack of precision when applied to musical instruments (see Valle 2002 for a review). While this partially holds true for the Rumentarium too (there is indeed an information loss when comparing the score and the matrix in Figure 6, for example), the automatic process of scanning the graphic score and driving the Rumentarium gives the composer an immediate way to organise large temporal frames, with a good approximation from score to performance.Footnote 8 Its efficacy evidently depends on the fact that only a single parameter (voltage) needs to be set: but, differently from standard notation, gestural control is still strongly present in the score, as the latter literally contains the traces of the drawing hand. It must be noted that, even if score drawing is a non-real-time operation, because of the very electromechanical nature of the Rumentarium it is still a control strategy for real-time acoustic playback.
A complementary attitude emerges from the writing mappings: while still focusing on an analytical, mediated attitude towards musical organisation, they reveal the beauty of writing not so much as a disembodied cognitive activity but, rather, as a form of gestural control. In this sense, the Braille mapping shows an interesting chain of mediations: here writing – as a visible process operated by the user – is received by a ‘blind’ reader in the computer, which turns the keyboard input into the shape of an invisible pattern (through the Braille encoding) and uses the resulting information to drive a sort of audible reading through the Rumentarium. A crucial feature of writing mappings is that they specify neither how long the sound bodies have to stay active after receiving a message nor how much voltage has to be applied. These parameters can be set interactively, as they significantly affect the sonic output. Event duration allows control of the thickness of the overall texture. Voltage is crucial for dynamics, as the latter can vary from full volume to much quieter sounds, with motors hardly moving and their electric buzzes creating a specific textural mood. In the end, writing mappings associate characters with Arduino ports: as a consequence, the final acoustic behaviour depends on which physical sound bodies are connected to each port. This aspect is left to experimentation. In particular, the alphabet mapping has proven to be particularly fruitful on the performance side, leading to the definition of a ‘live writing’ approach in which the sonification of typing by means of the Rumentarium is seamlessly intermingled with live coding. In Valle (2011b), all the text entered by the performer is mapped onto the sound bodies; moreover, electronic sound is added, mapping keys to pitches and including processed sound from the Rumentarium and the typing action. The digital audio processing is controllable in real-time by means of SuperCollider code, the latter being sonified by the Rumentarium while it is entered. Finally, enhancing the Rumentarium with listening capabilities has made it possible to introduce a further ecological perspective into the project, as the mapping scheme prompts the exploration of physical, self-organised systems.
An evident conclusion is that there is no optimal mapping strategy per se: rather, each scheme is suitable for a specific use and situation, provided that the Rumentarium’s three features are taken into account.
All these strategies are ways of tuning a peculiar sound source. The Rumentarium’s typical soundscape is marked by a clear metallic and electrical nature, based respectively on bright, almost harsh percussion instruments and on small DC motors. Among the sound bodies, pitched elements play an important role, even though they are a numerical minority, as they provide clear acoustic cues that stand out in the typical grainy texture. Acoustic duos including the Rumentarium and flute (as in Detti del tuono) or bassanjola result in a sort of dazed exoticism. The motors are not only sources of mechanical energy but also sources of noise in themselves, each one having a distinct buzzing sound that can be exploited, in particular by playing near the activation threshold. Some sound bodies can be controlled with lower voltages (typically, vibrator motors), while others require higher voltages just to start moving. As a consequence, complex textures can be created with some elements emerging distinctly while others are still buzzing.
When treated as a musical instrument, peculiar as it may be, the Rumentarium does not alter the usual relationship established between performer and audience. In this respect, its use in installations is far more interesting, because the Rumentarium’s physical presence, coupled with its acoustic behaviour, suggests the presence of a set of communicative agents. As an example, in La terra guasta the overall behaviour has proved to be very interesting for audiences, as it alternates gradual changes in the parameters (‘normal’ Brownian motion) with hectic moments when the parameters rapidly jump between the extremes. The mapping type (that is, how many units are involved in the Braille conversion of each letter) makes an important contribution, as different subsets of the Rumentarium can be activated, up to a ‘tutti’ in which all the sound bodies play. The large range of different behaviours, unpredictable from the visitor’s perspective and injected into the mass of heterogeneous objects, leads visitors to attribute a sort of animistic nature to those discarded things. A final relevant element lies in the textual structure. The text of Eliot’s poem includes large blank spaces (e.g. separating two adjacent sections). As they are mapped to silence, the blanks determine long periods of inactivity, leading the audience to think that the installation has ceased its activity. But then, often surprisingly, all of a sudden it starts again. This animistic feature is emphasised further in the case of interaction between the Rumentarium and the audience, as happens in Rumentario autoedule. Here an environmental noise, be it accidental or caused by a visitor (e.g. a spoken word or the clapping of hands), may trigger the – until then – silent installation, and eventually lead it into a complex feedback loop, evoking a long reply to the visitor’s communicative act.
10. Conclusions
The Rumentarium project aims at generating complex sonic content starting from available, discarded materials and low-cost technologies. Its goal is acoustic computer music, for both performance and installation contexts. Its motto is ‘acoustic sources and computational control’. Bypassing the digitally generated and loudspeaker-diffused sound of electronic computer music, the Rumentarium rather suggests a world populated by acoustically interacting bodies. Unlike computer music, where all the details of sound synthesis are up to the composer, the semi-random variations introduced by nonlinear behaviours in the electromechanical environment let the artist/composer focus on high-level control while exploring the unpredictable behaviours of the sound bodies at the low level. The Rumentarium emphasises the indexical dimension of sound, its origin as the result of a mechanical gesture, in which bodies leave their traces in the sound by displaying the energetics of their actions. Built from common, discarded objects, it allows us to rethink the domestic soundscape around us in terms of its acoustic potential. In this sense, the Rumentarium – while indeed adhering to the notion of the infra-instrument – properly defines a regime of ‘proto-instrumentality’, which goes back to the instrument as an amplification of the body’s mechanics (Schaeffner 1994). But this anthropologically rooted physicality of the Rumentarium, which reverberates in the resulting sound, sharply contrasts with the pervasive computational control. In this sense, while it is less than an instrument, as it returns to a condition of ‘proto-instrumentality’, at the same time it is more than an instrument, as it is programmable and interactive. Thus, it allows the exploration of a wide and variable degree of subjectivity.