
Embodied Aesthetics in Auditory Display

Published online by Cambridge University Press:  26 February 2014

Stephen Roddy
Affiliation:
Department of Electronic and Electrical Engineering, Trinity College, Dublin 2, Rep. of Ireland
Dermot Furlong
Affiliation:
Music & Media Technologies, Trinity College, Dublin 2, Rep. of Ireland

Abstract

Aesthetics are gaining increasing recognition as an important topic in auditory display. This article looks to embodied cognition to provide an aesthetic framework for auditory display design. It calls for a serious rethinking of the relationship between aesthetics and meaning-making in order to tackle the mapping problem which has resulted from historically positivistic and disembodied approaches within the field. Arguments for an embodied aesthetic framework are presented. An early example is considered and suggestions for further research on the road to an embodied aesthetics are proposed. Finally a closing discussion considers the merits of this approach to solving the mapping problem and designing more intuitively meaningful auditory displays.

Type
Articles
Copyright
Copyright © Cambridge University Press 2014 

1. Introduction

The mapping of data to synthesis parameters poses a challenge to the field of auditory display. There is a very large set of possible mappings but a notoriously small subset of perceptually, or cognitively, valid mappings. The same issue is present in computer music (Roads 1996: 889; Hunt and Kirk 2000). It is more pronounced in auditory display, where mappings must faithfully relate data to a listener.
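
To make this concrete, the sketch below illustrates the naive, direct style of parameter mapping described above: each datum is scaled onto the frequency of a short sine tone. It is an illustrative sketch only; the frequency range, tone duration and choice of sine tones are assumptions of ours, not prescriptions from the works cited.

```python
# Illustrative sketch of direct parameter-mapping sonification:
# each data value is mapped linearly onto the pitch of a short sine tone.
# Frequency range and tone duration are arbitrary, assumed values.
import numpy as np

SR = 44100  # sample rate in Hz

def sonify_naive(data, f_lo=220.0, f_hi=880.0, dur=0.25):
    """Return an audio signal with one tone per data value."""
    data = np.asarray(data, dtype=float)
    span = data.max() - data.min()
    norm = (data - data.min()) / span if span else np.zeros_like(data)
    t = np.linspace(0.0, dur, int(SR * dur), endpoint=False)
    tones = [np.sin(2 * np.pi * (f_lo + n * (f_hi - f_lo)) * t) for n in norm]
    return np.concatenate(tones)

signal = sonify_naive([3.1, 4.7, 2.2, 9.0, 5.5])  # five tones, rising and falling
```

Any acoustic parameter (pitch, amplitude, tempo, panning) could have been chosen here, and each choice interacts perceptually with the others; this is the combinatorial space in which the mapping problem arises.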

Auditory display has inherited a disembodied and positivistic sense of auditory phenomena from, among other places, electroacoustic music. This paradigm implies an illusory sense of linearity and orthogonality to auditory dimensions, when the opposite is the case. For example, changes in timbre can effect changes in pitch while changes in pitch can effect changes in amplitude. This complicates the process of conveying a data set through parametric mappings to auditory dimensions.

The ‘mapping problem’ (Worrall 2010, 2013; Grond and Berger 2011) has grown from the erroneous treatment of the auditory system as a disembodied ‘computer’ of auditory symbols (see Searle 2004). The mapping problem treats the ‘perceptual entanglement’ of auditory dimensions as an obstacle to the accurate representation of data to the listener. This is a paradigmatic error resulting from a positivistic misrepresentation of auditory dimensions. The ‘standard model’ of hearing is not accurate to our everyday experiences of hearing (O'Callaghan 2010). A model that goes beyond the mapping problem to fully account for the non-linearities and entanglements of the auditory system is required for the advancement of data sonification. Semantic listening (the mode of listening by which the ear extracts coded information from a sound) does not focus on individual acoustical dimensions but processes sound in context (Chion 1994). To communicate data effectively the designer must shape this context, offering the listener some syntax by which to make sense of it. The aesthetic dimensions of speech (prosody, inflection, articulation, etc.) convey rich meaning to a listener. Might we use aesthetic dimensions in a sonification to the same effect?

There are many auditory display design frameworks. Some have drawn from ecological acoustics (Gaver 1993), ecological psychoacoustics (Walker and Kramer 2004) and studies of the perceptual faculties (Barrass 2005). Embodied cognition is still a largely unexplored area when it comes to designing auditory display mapping strategies. Embodied cognition principles have brought progress to other areas of auditory display in recent times (Antle, Corness, Bakker, Droumeva, van den Hoven and Bevans 2009; Diniz, Deweppe, Demey and Leman 2010). Most of this development has been focused on interaction (Wakkary, Hatala, Lovell and Droumeva 2005; Antle et al. 2009; Diniz et al. 2010; Maes, Leman and Lesaffre 2010; Antle, Corness and Bevans 2011; Diniz, Coussement, Deweppe, Demey and Leman 2012). The basic task of mapping data to sound has yet to benefit from such consideration.

Embodied cognition deals with auditory phenomena at a level abstracted from the individual auditory dimensions (pitch, tempo, etc.) dealt with in traditional sonification methodologies. The literature describes larger-scale aesthetic patterns which emerge from the organisation of these dimensions. Mapping sonification data to these patterns may help overcome perceptual entanglements. Embodied cognition also allows for embodied meaning to be exploited for sonification, reducing learning requirements and allowing for more intuitive understanding of auditory displays. The framework could be an important design resource for auditory display.

2. Embodied Cognition

The embodied cognition hypothesis that has become important in cognitive science represents a number of similar and inter-related approaches to cognition (Varela, Thompson and Rosch 1991; Hampe and Grady 2005), which derive from a shared belief in ‘experientialism’. Experientialism is the theory that knowledge derives from first-hand experience. The embodied hypothesis generally holds that the mind is thoroughly defined by the human body and perceptual faculties; bodily experiences provide the language by which cognitive processing unfolds. Cognitivism is the more traditionally popular alternative to embodied cognition. It derives from positivism: the view that knowledge derives from logical and mathematical analysis of experience. Cognitivism views the mind and body as fundamentally opposed, and thought as the computation of arbitrary symbols (Gardner 1987).

3. Historical Roots of Disembodiment in Music Technology

Cognitivism is deeply entrenched in Western cultural history (Todes 2001; Uttal 2004), and has become the dominant paradigm in science and the arts (Damasio 2008; Leman 2008; McGilchrist 2009). The influence of cognitivism has shaped phenomenology, psychology and cognitive science (Gardner 1987; Todes 2001). Though beyond the scope of this paper, a detailed account of these points as they relate to music and sonification is offered by Worrall (2010).

Descartes made a distinction between the mathematical and physical aspects of music in 1618 (Descartes 1647). He detached these from their subjective counterparts, which he thought were unworthy of scientific analysis (Augst 1965). His philosophy had a profound influence on Immanuel Kant's aesthetic theory. Kant believed in a universal formal aesthetics and logic that are independent of human cognition (Kant 1929). Schopenhauer considered music to be a disembodied manifestation of ‘the will’, a ‘noumenal’ or objective reality represented to cognition by phenomena. This served to re-enforce subject–object duality in musical thinking (Schopenhauer 1844; Magee 1999). Richard Wagner subscribed to Schopenhauer's view of music as the manifestation of an objective reality (Darcy 1994). Arnold Schoenberg would borrow from Wagner's musical philosophy and expand his Hauptmotiv technique (Brand and Hailey 1997) into serialism. Both of these techniques focus on the objective, positivistic formalisation of musical structure in accordance with some assumed universal aesthetic code. Anton Webern and Pierre Boulez inherited and further extended Schoenberg's techniques and thinking in pursuit of an idealistically ‘democratic’ serial music (Grant 2005). Through these developments, disembodied and positivistic models came to dominate the landscape of Western art music, setting a dualistic tone for further musical and technological developments to come during the mid-1900s.

Pierre Schaeffer adapted Husserl's epoché (Husserl 1931) in his ‘reduced listening’: a mode of listening that requires the suspension of judgement on the source of a sound in order to reveal more about the sound itself. Such sounds, theoretically decoupled from their sources, he termed objets sonores (sound objects). This concept carries with it dualistic assumptions. Steeped in Husserl's disembodied worldview, it asserts a division between subjective sound and objective cause (Kane 2007). A new field of acousmatic (‘behind the veil’) music grew from the application of this methodology.

Early proponents of musique concrète recorded sound objects to tape for use as materials in a compositional process determined by the reduced listening method. In their decoupled context, sound objects were explored and elaborated through tape manipulation techniques like cutting and pasting, looping, reverb, speed manipulation and eventually overdubbing. Many groundbreaking techniques pioneered by Schaeffer and others during the 1950s were built around the same disembodied assumptions as reduced listening and epoché. Today disembodiment in music technology is a global phenomenon (Terrugi 2007). Data sonification is often undertaken using these disembodied techniques and technologies. The mapping problem is a result of the positivistic cognitivism which auditory display has inherited from these technologies.

4. Aesthetics in Auditory Display

Aesthetic issues were less explored in the early days of sonification research. Today, they are gaining an appreciation within the field (Vickers and Hogg 2006). However, despite some notable examples to the contrary (Goßman 2010; Barrass and Vickers 2011), the discussion on aesthetics rarely ranges beyond the classification of individual sonifications as ‘aesthetically pleasing’. Aesthetics are treated as a means of reducing annoyance and guaranteeing listener engagement. The general rule of thumb is to ‘design aesthetically pleasing (e.g., musical, etc.) sonifications to the extent possible while still conveying the intended message’ (Walker and Nees 2011). Such approaches emphasise the cosmetic value of aesthetics without fully exploring the potential they offer for more semantically rich and complex auditory displays. Aesthetic concerns no doubt weigh heavily on the design of attractive sonifications, but an important finding by Leplâtre and McGregor (2004) shows that aesthetics and functionality cannot be dealt with independently in auditory display. This highlights aesthetics as being more important to the sonification process itself than was previously accepted.

Music and sound art practitioners have harnessed sonification (and sonification-like processes) as an artistic technique (see Xenakis 1985; Dunn and Clark 1999; Quinn 2001; McKinnon 2013). Sonification also turns to these arts to inform its aesthetic choices (Childs 2002; Vickers and Hogg 2006). This has led to much debate about the place of art in sonification. It has been argued that overly artistic sonifications might miss the point by opting for expression over the revelation of data features. A debate between utilitarians and aestheticians has run since the field's inception, characterised by a misreading of aesthetics as an exclusively artistic pursuit, when in reality art and aesthetics are not synonymous (Barrass and Vickers 2011). Aesthetics need not take away from the faithful communication of the data. Rather, they can provide new channels for sonification that fit our perceptual and cognitive faculties better than the old model of linear, independent auditory dimensions.

John Dewey (1934) demonstrated how meaning unfolds within the aesthetic dimension of human experience. To convey meaning is to shape this aesthetic domain to semantic effect. Mark Johnson (1987, 2007) took the idea of aesthetics as the substrate of meaning and mapped the syntax (termed embodied schemata) by which that meaning is expressed. Lawrence Zbikowski (2005) demonstrated how embodied schemata define our auditory experiences. As designers and creators of auditory displays, systems with the sole intention of meaningful communication, we cannot afford to overlook or relegate the very medium in which meaning unfolds: aesthetics. A semantically rich aesthetic framework could serve the sonic expression of data by offering more meaningful aesthetic channels along which to sonify data.

5. The Aesthetics of Embodied Music Cognition

Nothing is beautiful, only man: on this piece of naïveté rests all aesthetics, it is the first truth of aesthetics.

Friedrich Nietzsche

For an auditory display to be broadly accessible it must appeal to some commonly shared aesthetic framework. However, aesthetic values tend to differ from person to person. For example, musicians routinely outperform non-musicians in interpreting auditory displays (Walker and Nees 2011). In 1781, Kant proposed that the universe provided its own aesthetics and logic to which the embodied human had but limited access (Kant 1929). Today, in stark contrast, embodied cognition researchers argue that the human body provides its own aesthetics and logic. These are common to all similarly embodied organisms although not universal in the Kantian sense (Johnson 2007). It is to such a broadly inclusive embodied aesthetics that auditory display must look.

In Conceptualizing Music: Cognitive Structure, Theory, and Analysis Lawrence Zbikowski gathers together concepts and theories from across the embodied cognition corpus which ‘are visible at every turn in our encounters with music’ (Zbikowski 2005: 333) and applies them in musical analysis. His framework provides an alternative interpretation of sound, music and meaning. These cognitive processes imbue our auditory experiences with meaning (Zbikowski 2005: 328), shaping our understanding of rhythm, pitch, tempo and timbre. This meaning-making unfolds on the aesthetic level, and aesthetics are critical to conceptual meaning and reason. Experientialists argue that aesthetic experience is embodied meaning-making at its most potent. Johnson (2007: 261) describes aesthetic effect as the heightened stimulation of our embodied meaning-making capacities.

Embodied schemata (Johnson 1987) are the basic units of cognition. These dynamic, gestalt-like frameworks are derived from the recurrent perceptual patterns encountered in daily life. By virtue of having similar physical bodies, entire populations use similar embodied schemata in their meaning-making. They provide the logical and aesthetic syntax by which auditory experiences are interpreted (Zbikowski 2005; Johnson 2007).

In ‘conceptual metaphor theory’ (Lakoff and Johnson 1980) embodied schemata are mapped to lend familiar structure to new perceptual and cognitive domains. This cross-domain mapping is an essential mechanism of cognition and provides a basis for building conceptual models and networks. Metaphorical mapping allows for the perception of, for example, movement in music by mapping schemata from bodily experiences to the musical domain (Johnson 2007; Zbikowski 2012).

Conceptual blending describes how meaning emerges from the combination of multiple spaces (perceptual and cognitive) so that elements of the input spaces are integrated and elaborated (Fauconnier and Turner 2002). It has been used to account for musical features like ‘text-painting’ (Zbikowski 2005). Blends draw from conceptual domains that are rooted in embodied schemata. Outside of the work of Zbikowski (2005), and Fauconnier and Turner (2002), the process has also been studied in the context of human–computer interaction (Imaz and Benyon 2007) and education (Tolentino, Birchfield, Megowan-Romanowicz, Johnson-Glenberg, Kelliher and Martinez 2009).

Prototype theory (Rosch 1999) states that members of mental categories are graded on a scale of prototypicality where the most prototypical member acts as a representation of the entire category. We then reason about categories in terms of their prototypical members. Music is understood at a basic level in terms of graded categories of musical events. Motive-based, rhythmic and atonal structures exhibit categorical structuring where prototypical members and their graded counterparts give rise to a unique syntax within a piece of music (Zbikowski 2005). These prototypes tend to be gleaned from the first-hand embodied experiences of day-to-day life. All category members, prototypical or otherwise, derive from embodied experience and are rooted in embodied schemata.

These competencies come together in conceptual models: aesthetic knowledge structures comprising multiple cross-mapped concepts around a common theme (Zbikowski 2005). They determine the meaning and aesthetic nature of what we hear. Aesthetic experiences, such as the quality of a certain timbre, perceived movement in a specific section or the proximity of a perceived source, are all formed through the interaction of these cognitive competencies. Even one's perception of pitch relies on a conceptual model that maps the schemata from bodily experiences of ‘up’ and ‘down’ onto pitch scales, thus rendering it meaningful (Zbikowski 2005).

The crucial mechanism by which sound patterns become aesthetic experiences is the cross-domain mapping of embodied schemata. Embodied schemata provide the perceptual basis by which metaphors, blends and category members are built up into the meaningful conceptual models that define aesthetic experience and meaning-making (Zbikowski 2005). They provide the aesthetic and logical content on which the rest of cognition rests.

6. Towards an Embodied Cognitive Aesthetics for Auditory Display

Sonification has borrowed many concepts and insights from its sister field, visualisation. In visualisation designers do not map data to random visual dimensions, such as line height or thickness, as sometimes seems to be the case in sonification. Data are represented through the organisation of the features of a visual symbol in keeping with an overarching syntax. In a pie chart, for example, a sectored circle is used as a visual symbol. Features of the circle (the sectors) are organised to represent features in the data. It has its own syntax where the percentage area of each sector corresponds to a point in the data set. The syntax relates the data to the symbol. Without this overarching syntax any relationship between the data and the symbol would be arbitrary. In auditory display, the syntax that relates data to audio is referred to as the mapping strategy.
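
As a toy illustration of the role such a syntax plays (a sketch of ours, not drawn from the sources cited): the pie chart's syntax can be stated in a few lines, and it is this rule, rather than the circle itself, that lets a reader recover the data from the symbol.

```python
# The pie chart's 'syntax': each value claims a share of 360 degrees
# proportional to its share of the total.
def pie_sector_angles(values):
    total = sum(values)
    return [360.0 * v / total for v in values]

print(pie_sector_angles([25, 25, 50]))  # -> [90.0, 90.0, 180.0]
```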

The human body provides its own logic and aesthetics in the form of embodied schemata. These can be called on to inform mapping strategies for sonification. Data need not be directly mapped to entangled auditory dimensions. Instead they can be mapped at the level of embodied schemata. Dimensions within an embodied schema can be represented in sound, allowing for the expression of data as sonic changes along these dimensions. This approach has merit beyond offering possible solutions for the mapping problem. Embodied schemata are acquired in early childhood and do not require learning. They are subconsciously recognised when encountered in perception and their meanings are intuitively felt (Johnson 1987; Hampe and Grady 2005). Mapping strategies based on embodied schemata can be understood without the need for much learning. This makes them critical to the design of intuitively familiar auditory displays. A cohesive account of embodied schematic mappings would be of benefit to the further development of intuitive auditory displays and to solving the mapping problem. Preliminary research in this area has been promising (Roddy and Furlong 2013a).
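
A minimal sketch of what a schema-level mapping might look like in code, under our own assumptions (the choice of a verticality schema and the particular parameter ranges are illustrative, not taken from the cited studies): a single schema dimension drives several entangled acoustic cues together, so the data are expressed as coherent movement along the schema rather than as changes to one isolated parameter.

```python
# Hypothetical schema-level mapping: one 'verticality' value (0 = down/low,
# 1 = up/high) jointly drives pitch, brightness and level so the entangled
# acoustic cues move together in a schema-consistent way.
def verticality_to_cues(v):
    v = max(0.0, min(1.0, v))                     # clamp the schema dimension to [0, 1]
    return {
        "pitch_hz": 110.0 * (2.0 ** (3.0 * v)),   # rises over three octaves
        "filter_cutoff_hz": 500.0 + 7500.0 * v,   # brighter as it 'rises'
        "amplitude": 0.4 + 0.4 * v,               # slightly louder when 'up'
    }

print(verticality_to_cues(0.0))  # 'low' end of the schema
print(verticality_to_cues(1.0))  # 'high' end of the schema
```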

In parameter mapping sonification (PMSon) the relationship between sound and data is often arbitrary. The task of relating data to sound can fall solely to the mapping function. This is not the case with auditory icons, which mimic real-world sounds to represent data. Auditory icons are chiefly used in human–computer interaction to add sonic feedback to a system. They are not typically used to sonify multi-dimensional data, as they do not provide individual auditory dimensions to which data can be mapped. They do, however, provide higher aesthetic and cognitive dimensions to which data can be mapped. In recent years, a theory of ‘auditory affordances’ (drawing from ecological psychoacoustics) has been recommended as a design framework for such mappings (Brazil and Fernström 2011). Preliminary research into the links between conceptual metaphor and auditory icons has shown positive results (Brazil and Fernström 2006). Parameterised auditory icons (icons that express data through sonic changes) with embodied schematic mapping strategies have been used to sonify rainfall data (Roddy and Furlong 2013b), as more complex everyday sounds lend themselves well to embodied schematic mappings. Further research in the use of embodied auditory icons for multivariate data will help to develop a better comprehension of the role that complex sounds can play in an embodied cognitive aesthetics for auditory display.

An understanding of how best to map specific embodied schemata for specific sonification tasks would be of great value. This knowledge would be invaluable in the creation of a repository of task-oriented, reusable mapping strategies in auditory display, such as that proposed by Degara, Nagel and Hermann (2013). A deeper understanding of a user's perception of auditory symbols in terms of prototype theory would also be of great use to the design of meaningful sounds. The first step along this road is the recognition of aesthetics as critical to the process of sonification itself, beyond merely cosmetic concerns. The second will be the recognition of embodied meaning-making as critical to auditory display aesthetics.

The point of an embodied aesthetic framework is to enhance the current design methodologies in auditory display. It should always be applied alongside a chosen sonification methodology. It is not possible, or even preferable, for an aesthetic framework to replace these methodologies outright. Aesthetic concerns must serve the overall function of the auditory display. In the same way designers look to auditory scene analysis (Bregman 1994) or ecological psychoacoustics (Neuhoff 2004) to inform their work, it is suggested here that they also consider what embodied cognition has to offer. It is best suited to creating semantically rich auditory displays. An awareness of embodied meaning-making in design facilitates the creation of artefacts that fit our meaning-making capacities. We envision two scenarios for its application. Firstly, frameworks of this type should be considered when a designer wishes to convey a rich sense of meaning. Secondly, an embodied approach should be considered when the designer wishes to circumvent the mapping problem.

7. Example

The mapping of data along embodied schematic auditory channels has been referred to repeatedly in the current article. What would such a process actually look like and how can it be achieved? Firstly, a data-relevant embodied schema must be chosen to provide the sonic framework for the sonification. One example uses the ‘centre–periphery’ schema to structure a sonification of rainfall data (Roddy and Furlong 2013b). This schema, and the ‘near–far’ sub-schema, describe the conceptual structure common to bodily experiences of centrality and periphery. The schemata used are said to be implicit in the comparison and evaluation of states in a dynamic situation (near–far) and the treatment of important information as being centrally located in space (centre–periphery) (Lakoff, Espenson, Goldberg and Schwartz 1991). A short parameterised auditory icon of rainfall is then modulated in keeping with the logic of this schema. There are two data points to be communicated: days of the week, and probability of rain. The seven days of the week are represented using amplitude envelopes structured in terms of the source–path–goal schema. This schema has been implicated in our understanding of time (Lakoff and Johnson 1999). Audio fades in to represent the beginning of a day, it continues for an explicit time period, and then fades out. This pattern is repeated seven times, to represent each day. Modulating the distal cues of the rainfall icon in the auditory stage, in accordance with the ‘near–far’ and ‘centre–periphery’ schemata, represents the probability data. The greater the chance of rain the closer the rain sounds to the listener.
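
The sketch below gives a rough sense of how such a mapping could be realised in code. It is our own reading of the description above, not the implementation reported in Roddy and Furlong (2013b): a noise burst stands in for the parameterised rainfall icon, each day is shaped by a fade-in/fade-out envelope (source–path–goal), and the rain probability drives simple distance cues (level and high-frequency roll-off) so that likelier rain sounds nearer. All durations, gains and filter settings are illustrative assumptions.

```python
# Hypothetical realisation of the rainfall example: seven day-long segments,
# each faded in and out (source-path-goal), with near-far distance cues
# (gain and a one-pole low-pass filter) driven by the probability of rain.
import numpy as np

SR = 44100          # sample rate in Hz
DAY_DUR = 1.0       # seconds of audio per day
FADE = 0.15         # fade in/out time in seconds

def rain_icon(dur):
    """Stand-in for the parameterised rainfall icon: white noise."""
    return np.random.uniform(-1.0, 1.0, int(SR * dur))

def day_envelope(n, fade_n):
    """Fade in at the start of the day, fade out at its end."""
    env = np.ones(n)
    ramp = np.linspace(0.0, 1.0, fade_n)
    env[:fade_n] = ramp
    env[-fade_n:] = ramp[::-1]
    return env

def apply_distance(x, probability):
    """Near-far cues: higher probability sounds louder and brighter (nearer)."""
    gain = 0.2 + 0.8 * probability
    alpha = 0.05 + 0.9 * probability      # low-pass coefficient: duller when 'far'
    y = np.empty_like(x)
    acc = 0.0
    for i, sample in enumerate(x):
        acc += alpha * (sample - acc)     # simple one-pole low-pass filter
        y[i] = acc
    return gain * y

def sonify_week(rain_probabilities):
    n, fade_n = int(SR * DAY_DUR), int(SR * FADE)
    days = [apply_distance(rain_icon(DAY_DUR) * day_envelope(n, fade_n), p)
            for p in rain_probabilities]
    return np.concatenate(days)

week = sonify_week([0.1, 0.3, 0.8, 0.9, 0.5, 0.2, 0.0])  # seven days of forecasts
```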

The approach articulates the latent relationships that exist between the data points within a data set. It is not intended to convey discrete data points, which tend to require some form of symbolic mediation. This angle is pursued in the interests of answering the call for more ‘meaningful’ data-to-sound mappings (Neuhoff and Heller 2005; Walker and Nees 2011). This approach places minimal learning requirements on the user by employing an already familiar syntax and symbol.

Design approaches of this type call for the use of phenomenological research methodologies for analysis and evaluation. Barrass and Vickers (2011) discuss in greater detail the need for phenomenological methodologies in evaluating auditory display aesthetics before suggesting IPA (interpretative phenomenological analysis) as one such tool. The example presented here used a co-operative evaluation methodology borrowed from human–computer interaction. Future explorations of the aesthetic dimension in auditory display would benefit greatly from a standardised phenomenological evaluation methodology.

8. Discussion

Stevan Harnad (1990) presented the definitive formulation of the symbol-grounding problem. This philosophical quandary originally arose from Searle's ‘Chinese Room’ argument (see Searle 2004). The problem asked: if thought is simply symbol manipulation, how are those symbols connected to the things to which they refer? This problem deeply troubled leading figures in philosophy, cognitive science and artificial intelligence during the 1980s (Gardner 1987). A solution to this problem has come from embodied cognitive science: symbols relate to reality by way of our embodied meaning-making capacities (Varela et al. 1991; Steels 2008). This answer could not come from the positivistic, computationalist paradigm in which the question arose. That paradigm precluded any possibility of providing a suitable answer because it relied too heavily on positivism and did not account for the role of the human body in cognition.

The historical treatment of aesthetics as a second-class citizen in auditory display reflects a similar positivistic bias. The misinterpretation of auditory cognition as the computation of context-free symbols transmitted along individual auditory dimensions is reflective of computationalism. This has led to the mapping problem, which, like the symbol-grounding problem, cannot be solved in the same paradigm in which it has arisen. It requires a shift in focus towards embodied meaning-making, or, more accurately, embodied symbol grounding. Organising auditory symbols in terms of an embodied syntax reconfigures entangled auditory dimensions into more comprehensible channels, to which the mapping problem does not apply.

Information is data coupled with context. Data without context is incomprehensible. Embodied meaning-making can provide that context by grounding the auditory symbols we use for sonification in our shared bodily experiences. Grounding auditory symbols in this way can also diminish the learning requirements a sonification places on its users. The extent of this reduction depends only upon how fully we exploit our shared embodied meaning-making capacities.

Taking embodied meaning-making seriously in auditory display requires that we first take aesthetics seriously. Auditory display is still quite far removed from any consensus on a common aesthetic framework. Embodied cognition has found multiple applications in auditory display. Conceptual metaphors are used to structure mapping strategies (Gaver 1989; Walker and Kramer 2004, 2005) and interactions with auditory display environments (Antle et al. 2009, 2011). Studies of the micro-gestures of trained musicians promise more comprehensible data-to-sound mappings (Worrall 2011, 2013). Applications of interactive affect design have resonated widely (Barrass 2013). Still, the basic aesthetic issues raised by the use of sound to relate data have yet to be broached from the side of embodied cognition, and the very question of aesthetics is still met with controversy.

We propose erring on the side of the pragmatic in the march towards any unified aesthetic framework. A middle ground can be gained where aesthetics serve the functionality of auditory display, and where embodied cognition is recognised as foundational to both meaning-making and aesthetics. The embodied cognition literature asserts no meaningful distinction between meaning-making and aesthetics. Both are a product of our shared cognitive competencies. We suggest that this lends itself to a democratised aesthetics, making for more widely useful auditory displays. This is timely considering the recent aesthetic turn towards sonification as a cultural medium (Barrass and Vickers 2011; Barrass 2012). Aesthetics have more to offer auditory display than is currently recognised in the field. The unification of meaning-making and aesthetics is a promising avenue for future research. The embodied aesthetic domain expresses the mundane and the imaginative in a common tongue. It represents a framework for the creation of semantically rich and intuitive auditory displays which are experientially grounded, and free from the constraints of the mapping problem.

References

Antle, A.N., Corness, G., Bakker, S., Droumeva, M., van den Hoven, E., Bevans, A. 2009. Designing to Support Reasoned Imagination through Embodied Metaphor. In Proceedings of the Conference on Creativity and Cognition. Berkeley, CA: ACM Press, 275–84.
Antle, A.N., Corness, G., Bevans, A. 2011. Springboard: Designing Embodied Schema Based Embodied Interaction for an Abstract Domain. In D. England (ed.) Human-Computer Interaction Series: Whole Body Interaction 2011. Berlin: Springer.
Augst, B. 1965. Descartes's Compendium on Music. Journal of the History of Ideas 26(1): 119–132.
Barrass, S. 2005. A Perceptual Framework for the Auditory Display of Scientific Data. ACM Transactions on Applied Perception (TAP) 2(4): 389–402.
Barrass, S. 2012. The Aesthetic Turn in Sonification Towards a Social and Cultural Medium. AI & Society 27(2): 177–181.
Barrass, S. 2013. ZiZi: The Affectionate Couch and the Interactive Affect Design Diagram. In K. Franinović and S. Serafin (eds.) Sonic Interaction Design. Cambridge, MA: The MIT Press.
Barrass, S., Vickers, P. 2011. Sonification Design and Aesthetics. In T. Hermann, A. Hunt and J.G. Neuhoff (eds.) The Sonification Handbook. Berlin: Logos.
Brand, J., Hailey, C. (eds.) 1997. Constructive Dissonance: Arnold Schoenberg and the Transformations of Twentieth-Century Culture. Berkeley: University of California Press.
Brazil, E., Fernström, M. 2006. Investigating Concurrent Auditory Icon Recognition. In Proceedings of the 12th International Conference on Auditory Display, London, UK.
Brazil, E., Fernström, M. 2011. Auditory Icons. In T. Hermann, A. Hunt and J.G. Neuhoff (eds.) The Sonification Handbook. Berlin: Logos.
Bregman, A.S. 1994. Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: The MIT Press.
Childs, E. 2002. Achorripsis: A Sonification of Probability Distributions. In Proceedings of the 8th International Conference on Auditory Display, Kyoto, Japan.
Chion, M. 1994. Audio-Vision: Sound on Screen. New York: Columbia University Press.
Damasio, A. 2008. Descartes' Error: Emotion, Reason and the Human Brain. London: Random House.
Darcy, W.J. 1994. The Metaphysics of Annihilation: Wagner, Schopenhauer, and the Ending of the ‘Ring’. Music Theory Spectrum 16(1): 1–40.
Degara, N., Nagel, F., Hermann, T. 2013. SonEX: An Evaluation Exchange Framework for Reproducible Sonification. In Proceedings of the 19th International Conference on Auditory Display, Lodz, Poland.
Descartes, R. 1647. Meditations on First Philosophy: With Selections from the Objections and Replies. Cambridge: Cambridge University Press.
Dewey, J. 1934. Art as Experience. New York: Minton, Balch and Company.
Diniz, N., Deweppe, A., Demey, M., Leman, M. 2010. A Framework for Music-Based Interactive Sonification. In Proceedings of the 16th International Conference on Auditory Display, Washington, DC, USA.
Diniz, N., Coussement, P., Deweppe, A., Demey, M., Leman, M. 2012. An Embodied Music Cognition Approach to Multilevel Interactive Sonification. Journal on Multimodal User Interfaces 5(3–4): 211–219.
Dunn, J., Clark, M.A. 1999. Life Music: The Sonification of Proteins. Leonardo 32(1): 25–32.
Fauconnier, G., Turner, M. 2002. The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. New York: Basic Books.
Gardner, H. 1987. The Mind's New Science: A History of the Cognitive Revolution. New York: Basic Books.
Gaver, W.W. 1989. The SonicFinder: An Interface that Uses Auditory Icons. Human-Computer Interaction 4(1): 67–94.
Gaver, W.W. 1993. How Do We Hear in the World? Explorations of Ecological Acoustics. Ecological Psychology 5(4): 285–313.
Goßman, J. 2010. From Metaphor to Medium: Sonification as Extension of our Body. In Proceedings of the 16th International Conference on Auditory Display, Washington, DC, USA.
Grant, M.J. 2005. Serial Music, Serial Aesthetics: Compositional Theory in Post-War Europe. Cambridge: Cambridge University Press.
Grond, F., Berger, J. 2011. Parameter Mapping Sonification. In T. Hermann, A. Hunt and J.G. Neuhoff (eds.) The Sonification Handbook. Berlin: Logos.
Hampe, B., Grady, J.E. 2005. From Perception to Meaning: Embodied Schemas in Cognitive Linguistics. Berlin: Mouton de Gruyter.
Harnad, S. 1990. The Symbol Grounding Problem. Physica D: Nonlinear Phenomena 42(1): 335–346.
Hunt, A., Kirk, R. 2000. Mapping Strategies for Musical Performance. In M. Wanderley and M. Battier (eds.) Trends in Gestural Control of Music. Paris: IRCAM – Centre Pompidou.
Husserl, E. 1931. Ideas, trans. W.R. Boyce Gibson. London: George Allen & Unwin.
Imaz, M., Benyon, D. 2007. Designing with Blends: Conceptual Foundations of Human-Computer Interaction and Software Engineering Methods. Cambridge, MA: The MIT Press.
Johnson, M. 1987. The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. Chicago, IL: University of Chicago Press.
Johnson, M. 2007. The Meaning of the Body: Aesthetics of Human Understanding. Chicago, IL: University of Chicago Press.
Kane, B. 2007. L'Objet Sonore Maintenant: Pierre Schaeffer, Sound Objects and the Phenomenological Reduction. Organised Sound 12(1): 15–24.
Kant, I. 1929. Critique of Pure Reason, trans. N.K. Smith. New York: St Martin's Press, 1965.
Lakoff, G., Johnson, M. 1980. Metaphors We Live By. Chicago: Chicago University Press.
Lakoff, G., Johnson, M. 1999. Philosophy in the Flesh: The Embodied Mind and its Challenge to Western Thought. New York: Basic Books.
Lakoff, G., Espenson, J., Goldberg, A., Schwartz, A. 1991. Master Metaphor List. Manuscript (revised version). Berkeley: Cognitive Linguistics Group, University of California at Berkeley.
Leman, M. 2008. Embodied Music Cognition and Mediation Technology. Cambridge, MA: The MIT Press.
Leplâtre, G., McGregor, I. 2004. How to Tackle Auditory Interface Aesthetics? Discussion and Case Study. In Proceedings of the 10th International Conference on Auditory Display, Sydney, Australia.
Maes, P.J., Leman, M., Lesaffre, M. 2010. A Model-Based Sonification System for Directional Movement Behavior. In Proceedings of the 3rd Interactive Sonification Workshop, Stockholm, Sweden.
Magee, B. 1999. Confessions of a Philosopher: A Personal Journey through Western Philosophy from Plato to Popper. London: Random House.
McGilchrist, I. 2009. The Master and His Emissary: The Divided Brain and the Making of the Western World. New Haven, CT: Yale University Press.
McKinnon, D. 2013. Dead Silence: Ecological Silencing and Environmentally Engaged Sound Art. Leonardo Music Journal 23.
Neuhoff, J.G. (ed.) 2004. Ecological Psychoacoustics. Amsterdam: Elsevier Academic Press.
Neuhoff, J.G., Heller, L.M. 2005. One Small Step: Sound Sources and Events as the Basis for Auditory Graphs. In Proceedings of the 11th International Conference on Auditory Display, Limerick, Ireland.
O'Callaghan, C. 2010. Sounds and Events. In C. O'Callaghan and M. Nudds (eds.) Sounds and Perception: New Philosophical Essays. Oxford: Oxford University Press.
Quinn, M. 2001. Research Set to Music: The Climate Symphony and Other Sonifications of Ice Core, Radar, DNA, Seismic, and Solar Wind Data. In Proceedings of the 7th International Conference on Auditory Display, Espoo, Finland.
Roads, C. 1996. The Computer Music Tutorial. Cambridge, MA: The MIT Press.
Roddy, S., Furlong, D. 2013a. Embodied Cognition in Auditory Display. In Proceedings of the 19th International Conference on Auditory Display, Lodz, Poland.
Roddy, S., Furlong, D. 2013b. Rethinking the Transmission Medium in Live Computer Music. Paper presented at the Third Irish Sound Science and Technology Convocation. Available at: www.researchgate.net/publication/256473641_Rethinking_the_Transmission_Medium_in_Live_Computer_Music_Performance/file/60b7d522f0f830f85c.pdf
Rosch, E. 1999. Principles of Categorization. In E. Margolis and S. Laurence (eds.) Concepts: Core Readings. Cambridge, MA: The MIT Press.
Schopenhauer, A. 1844. The World as Will and Representation, Vol. 1, trans. E.F.J. Payne. New York: Dover, 1969.
Searle, J.R. 2004. Mind: A Brief Introduction. Oxford: Oxford University Press.
Steels, L. 2008. The Symbol Grounding Problem Has Been Solved. So What's Next? In M. de Vega (ed.) Symbols and Embodiment: Debates on Meaning and Cognition. Oxford: Oxford University Press.
Terrugi, D. 2007. Technology and Musique Concrète: The Technical Developments of the Groupe de Recherches Musicales and Their Implication in Musical Composition. Organised Sound 12(3): 213–231.
Todes, S. 2001. Body and World. Cambridge, MA: The MIT Press.
Tolentino, L., Birchfield, D., Megowan-Romanowicz, C., Johnson-Glenberg, M.C., Kelliher, A., Martinez, C. 2009. Teaching and Learning in the Mixed-Reality Science Classroom. Journal of Science Education and Technology 18(6): 501–507.
Uttal, W.R. 2004. Dualism: The Original Sin of Cognitivism. London: Routledge.
Varela, F.J., Thompson, E.T., Rosch, E. 1991. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: The MIT Press.
Vickers, P., Hogg, B. 2006. Sonification Abstraite/Sonification Concrète: An ‘Aesthetic Perspective Space’ for Classifying Auditory Displays in the Ars Musica Domain. In Proceedings of the 12th International Conference on Auditory Display, London, UK.
Wakkary, R., Hatala, M., Lovell, R., Droumeva, M. 2005. An Ambient Intelligence Platform for Physical Play. In Proceedings of the 13th Annual ACM International Conference on Multimedia, Singapore, 764–73.
Walker, B.N., Kramer, G. 2004. Ecological Psychoacoustics and Auditory Displays: Hearing, Grouping, and Meaning Making. In J. Neuhoff (ed.) Ecological Psychoacoustics. New York: Academic Press.
Walker, B.N., Kramer, G. 2005. Mappings and Metaphors in Auditory Displays: An Experimental Assessment. ACM Transactions on Applied Perception (TAP) 2(4): 407–12.
Walker, B.N., Nees, M.A. 2011. Theory of Sonification. In T. Hermann, A. Hunt and J.G. Neuhoff (eds.) The Sonification Handbook. Berlin: Logos.
Worrall, D. 2010. Parameter Mapping Sonic Articulation and the Perceiving Body. In Proceedings of the 16th International Conference on Auditory Display, Washington, DC, USA.
Worrall, D. 2011. A Method for Developing an Improved Mapping Model for Data Sonification. In Proceedings of the 17th International Conference on Auditory Display, Budapest, Hungary.
Worrall, D. 2013. Understanding the Need for Micro-Gestural Inflections in Parameter-Mapping Sonification. In Proceedings of the 19th International Conference on Auditory Display, Lodz, Poland.
Xenakis, I. 1985. Arts/Sciences: Alloys. New York: Pendragon Press.
Zbikowski, L.M. 2005. Conceptualizing Music: Cognitive Structure, Theory, and Analysis. Oxford: Oxford University Press.
Zbikowski, L.M. 2012. Music and Movement: A View from Cognitive Musicology. In S. Schroedter (ed.) Bewegungen zwischen Hören und Sehen: Denkbewegungen über Bewegungskünste. Würzburg: Königshausen and Neumann.