
Felt Emotion Elicited by Music: Are Sensitivities to Various Musical Features Different for Young Children and Young Adults?

Published online by Cambridge University Press:  21 May 2020

Xuqian Chen*
Affiliation:
South China Normal University (China); Key Laboratory of Mental Health and Cognitive Science of Guangdong Province (China)
Shengqiao Huang
Affiliation:
South China Normal University (China); Key Laboratory of Mental Health and Cognitive Science of Guangdong Province (China)
Xueting Hei
Affiliation:
South China Normal University (China); Key Laboratory of Mental Health and Cognitive Science of Guangdong Province (China)
Hongyuan Zeng
Affiliation:
Department of Education, Jiangmen Polytechnic (China)
Correspondence concerning this article should be addressed to Xuqian Chen. School of Psychology, Center for Studies of Psychological Application, South China Normal University. 510631 Guangzhou (China). Email: cxqpsychology@163.com

Abstract

In the present study, we extended the issue of how people access emotion through nonverbal information by testing the effects of simple (tempo) and complex (timbre) acoustic features of music on felt emotion. Three- to six-year-old children (n = 100; 48% female) and university students (n = 64; 37.5% female) took part in three experiments in which acoustic features of music were manipulated to determine whether there are links between perceived emotion and felt emotion in processing musical segments. After exposure to segments of music, participants completed a felt emotion judgment task. Chi-square tests showed significant tempo effects, ps < .001 (Exp. 1), and strong combined effects of mode and tempo on felt emotion; the strength of these effects changed with age. Moreover, these combined effects were significantly stronger under the tempo-mode consistent condition, ps < .001 (Exp. 2), than under the inconsistent condition (Exp. 3). In other words, simple acoustic features had stronger effects on felt emotion than complex features did, and sensitivity to these features, especially complex features, changed with age. These findings suggest that the felt emotion evoked by the acoustic features of a given piece of music might be affected both by innate abilities and by the strength of the mappings between acoustic features and emotion.

Type
Research Article
Copyright
© Universidad Complutense de Madrid and Colegio Oficial de Psicólogos de Madrid 2020

There is considerable research on how people access emotions through verbal information (Chen, Liu, et al., 2016; Kousta et al., 2011; Kousta et al., 2009) as well as nonverbal information such as facial expressions (Deliens et al., 2018; Pell, 2005) and music (Bowman & Yamauchi, 2016; Bresin & Friberg, 2011; Juslin & Sloboda, 2013; Kawakami et al., 2013; Zentner et al., 2008). The term “musical emotions” refers both to the perception of the emotion that a musical piece was meant to express (i.e., perceived emotion) and to the emotions induced in the person listening to the music (i.e., felt emotion) (Kawakami et al., 2013). Prior research on perceived emotion has shown that awareness of the associations between musical features and their intended emotional meaning may be affected by musical experience. In the present research on felt emotion, we examined the associations between musical features and the emotions induced in people of different ages (i.e., young children vs. university students) while listening to music. Testing age differences is important for determining whether felt emotion, like perceived emotion, is influenced by musical experience.

Perceived emotion and felt emotion are both important for music appreciation, a set of complex skills that are usually considered to be learned abilities requiring music lessons or long-lasting exposure to music (Dalla Bella et al., 2001). However, researchers have shown that the listener’s perception of the emotion the music was meant to convey does not always coincide with the felt emotion evoked by the music (Gabrielsson, 2002). Unlike most other stimuli that evoke predictable felt emotions, such as encounters with dangerous animals, threats, or negative facial expressions (Lundqvist et al., 2009), music evokes felt emotion in ways that are not obvious or direct.

One aspect of music appreciation is awareness of the different acoustic features of a piece of music; this awareness helps the listener make judgments about perceived emotion. Previous research has shown that there are multiple acoustic features of music that affect perceived emotion, including mode (major or minor keys) (Gabrielsson & Lindström, 2010; Livingstone et al., 2007), tempo (Kawakami et al., 2013; Livingstone et al., 2007), timbre (Bowman & Yamauchi, 2016; Hailstone et al., 2006), intervals (Schellenberg & Trehub, 1996), texture (Webster & Weir, 2005), and harmony and rhythm (Juslin & Sloboda, 2013). Though research has shown that infants and young children are aware of some of the same musical features that adults are, music-related perceived emotion, as an ability, increases with age. For example, with regard to tempo, children by age 5 make reliable fast-happy, slow-sad associations (Dalla Bella et al., 2001), but with regard to musical key or mode, they do not yet make major-happy, minor-sad associations (Dalla Bella et al., 2001; Gerardi & Gerken, 1995). However, people make stronger major-happy and minor-sad associations as they age (Gregory et al., 1996). Children reliably respond to mode by age 6 (Dalla Bella et al., 2001), and significant mode effects on felt emotion have been found among 7- to 8-year-olds and college students (Gerardi & Gerken, 1995; Gregory et al., 1996). There are also age-related declines in skills such as music identification (Andrews et al., 1998) and recognition (Dowling et al., 2008), which may be partially due to slowed cognitive processes (Andrews et al., 1998; Dowling et al., 2008; Hailstone et al., 2006; Krumhansl, 2002). For example, older listeners (over 60), compared with younger (18 to 30) and middle-aged (31 to 59) adults, have difficulty processing very rapid auditory patterns in music. According to this evidence, there appears to be developmental change in sensitivity to different musical features and in awareness of the links between these features and the emotions meant to be communicated by music.

Importantly, there is evidence that felt emotion is shaped in part by some of the same musical features that affect perceived emotion, including mode (Livingstone et al., 2007) and tempo (Kawakami et al., 2013; Livingstone et al., 2007). The effects of these musical features on felt emotion could be due to several factors (Gabrielsson, 2001; Husain et al., 2002). First, there may be a cognitive element. Husain et al. (2002) tested relationships among the affective arousal and mood triggered by music and performance on a spatial task. Participants completed two “mood and arousal questionnaires,” before and after the paper-folding-and-cutting (PF&C) task devised by Nantais and Schellenberg (1999). Results showed that mean scores on the PF&C task were higher for participants who heard music at a fast rather than a slow tempo, and for those who heard it in the major rather than the minor mode. Second, the same music might trigger different felt emotions in different cultures (Balkwill & Thompson, 1999; Laukka et al., 2013). In Laukka and colleagues’ (2013) research, professional bowed-string musicians from different cultures were instructed to perform short pieces of music to convey emotions and related states to listeners, and the results indicated a combination of universal and culture-specific factors in processing affective expression in music. A third factor concerns musical experience, but to date there is no evidence of age-related changes in the arousal and mood triggered by music. Tests of age differences would provide information about the extent to which musical experience (in other words, learning) changes listeners’ felt emotion when listening to music.

More importantly, felt emotion might be elicited by the mixed effects of various musical features in any given musical segment. However, few studies have directly compared the effects of two musical features on perceived emotion, and even fewer have examined their combined effects on felt emotion. Evidence with regard to perceived emotion has mainly come from research on tempo and mode (Dalla Bella et al., 2001), which are both considered simple musical features (Juslin & Sloboda, 2013). There are still open questions about differences in the felt emotion induced among people of different ages (i.e., with different levels of musical experience) when processing musical features of different complexities.

In the present study, we were interested in whether acoustic features with different polarities (e.g., tempo has two poles: fast vs. slow) and complexities have different effects on felt emotion across age. More precisely, the present study focused on the effects of tempo (a simple feature) and timbre (a complex feature) on felt emotion, when music is presented in different modes, in two age groups (i.e., children, with less musical experience, vs. adults, with more). There were three reasons for choosing tempo and timbre as the focus of the current study. Firstly, the relationships between tempo and emotion and between timbre and emotion are similar in that both features contribute to perceived emotion (Bowman & Yamauchi, 2016; Juslin & Sloboda, 2013), for example the perception that the music is meant to convey positive or negative emotions. Secondly, encoding of tempo (Husain et al., 2002) and encoding of timbre (Kraus et al., 2009) have both been shown to change with experience or age. Thirdly, and most importantly, tempo and timbre can be seen as representatives of musical features with different polarities and different complexities.

In addition to being distinguished in terms of being simple or complex, tempo and timbre can also be distinguished in terms of polarity. Whereas tempo is a bipolar feature of music, timbre is considered to be multipolar. Tempo is considered bipolar because it is perceived only in terms of fast and slow (or faster and slower). This phenomenon has been widely discussed in the literature (see the review by Webster & Weir, 2005). One perspective is that tempo, mainly divided into fast and slow, and mode, divided into major and minor keys, are both bipolar and have interactive effects on happy-sad ratings (Webster & Weir, 2005).

By contrast, timbre (along with pitch and loudness) is a complex property of a sound. An example of timbre is the distinctive sound quality of an instrument (Isaac, 2018), a quality that affects perceived emotion (Bowman & Yamauchi, 2016; Hailstone et al., 2006; Juslin & Sloboda, 2013). Listeners associate the trumpet with happiness (Bresin & Friberg, 2011) and the violin with sadness (Juslin & Laukka, 2003; Juslin & Sloboda, 2013). Therefore, whereas tempo is bipolar, timbre is multipolar in that it is expressed differently by different instruments. This distinction can also be defined in terms of acoustic complexity. Tempo, as a bipolar feature, can also be described as a simple acoustic feature. Timbre, as a multipolar feature, can also be described as a complex acoustic feature that arises from a distribution of acoustic properties rather than from one single physical dimension (Bowman & Yamauchi, 2016; Isaac, 2018). This difference in the acoustic complexity of tempo and timbre might produce different processing patterns across age groups.

One key focus of the current study was to establish links between perceived emotion (defined by the researcher) and felt emotion (reported by the participant). Therefore, it was important to have a standard by which to choose musical segments with different perceived emotions. Mode was chosen for this purpose. Juslin and Sloboda’s (2013) review noted that there are no simple one-to-one relationships between musical features and felt emotions, except in the relationships between modes and emotions. That is, a major mode almost always elicits positive emotions (i.e., happiness and tenderness), and a minor mode almost always elicits negative emotions (i.e., sadness, anger, and fear) (Juslin & Sloboda, 2013, p. 596). In addition, the mode of a specific musical segment is nearly fixed and cannot be changed as easily as other musical features (e.g., tempo and timbre). Therefore, in the current study mode was the basis on which a musical segment was chosen as representative of positive or negative perceived emotion.

Another key focus of the current study concerned the question of whether there are age differences in sensitivity to different acoustic features as markers of perceived emotions. In the literature, researchers have maintained that perceived emotion in response to music is due to innate perceptual predispositions together with learned associations that develop in childhood. Consistent with the argument that some skills are innate, there is evidence that three-year-old children are similar to adults in their judgments of the perceived emotion of a given musical segment (Nawrot, 2003). In addition, the ability to distinguish happiness, sadness, anger, and fear in music fragments has been observed in children as young as 4 years old (Cunningham & Sterling, 1988; Dolgin & Adelson, 1990). Children in this age group can also distinguish the major mode from the minor (Kastner & Crowder, 1990), although there is mixed evidence concerning how well young children specifically associate mode with emotion. One study reported that 4- to 5-year-old children had difficulty correctly judging perceived emotions by mode (Dalla Bella et al., 2001), whereas another reported that even three-year-olds mark the major mode as expressing happy affect, just as adults do (Kastner & Crowder, 1990).

However, in contrast to the literature on perceived emotion, the difference in felt emotion between children and adults has not yet been discussed. According to researchers’ assumptions about relationships among felt emotion, musical features and musical experiences, the expectation would be that children’s felt emotion is more affected by simple and bipolar musical features, whereas adults’ felt emotion is more affected by complex and multipolar musical features. In the present study, felt emotion in response to the acoustic features of a given piece of music was examined in young children, from age 3 to 6, and university students.

Among young children, even those who are around three years old have an awareness of felt emotion, at least of positive and negative emotion. In addition, in a pretest investigation before our main study, three- to six-year-old children responded generally correctly to music expressing happiness and sadness according to the mode of the musical segment (i.e., perceived emotion), and some of them, mainly from age 4 to age 6, could distinguish music expressing anger, fear, and tenderness. These findings were generally consistent with what has been reported in the literature (Cunningham & Sterling, 1988; Dolgin & Adelson, 1990; Kastner & Crowder, 1990; Nawrot, 2003). It is plausible that these early skills would become stronger with time, owing to increases in the amount and diversity of musical experiences. Young children mostly listen to happy music in their daily lives, whereas university students have more varied musical experiences, even if they have not received any musical training. Therefore, besides differing in age, the two groups of participants likely differ in the type and extent of their musical experiences.

Based on research on perceived emotion in processing music, the present study on felt emotion tested two competing hypotheses. One hypothesis was that mapping from bipolar, simpler features (i.e., mode and tempo) to felt emotion might be easier than mapping from multipolar, more complex features (e.g., timbre) to felt emotion. However, the competing hypothesis should not be ignored. Prior research has shown that tempo and polar emotions (i.e., positive and negative emotions) are not simply correlated one-to-one: Fast tempo is correlated with happiness but also with anger and fear, whereas slow tempo is correlated with sadness but also with tenderness (Juslin & Sloboda, 2013, p. 596). This is similar to the relations between timbre and polar emotions, which are also not stable, one-to-one connections (Hailstone et al., 2006; Juslin & Laukka, 2003). Therefore, mapping between tempo and felt emotion might be as difficult as mapping between timbre and felt emotion.

Three experiments were conducted. In Experiment 1, young children and young adults were asked to complete a felt emotion task after being exposed to music of a specific tempo while controlling for tempo-mode consistency. The aim of Experiment 1 was to find out whether young children and young adults report similar felt emotion in response to a piece of music, and how tempo affects this pattern across age. In Experiments 2 and 3, participants completed the same task after being exposed to music with a specific timbre while controlling for tempo-mode consistency. Experiment 2 addressed whether the timbre of a piece of music elicited similar felt emotion at different ages, while Experiment 3 addressed whether timbre and tempo together elicited similar felt emotion at different ages. These three experiments allowed us to address the questions of whether two different acoustic features, alone and together, affect the felt emotion elicited by a given piece of music, whose perceived emotion was defined by mode, and whether these effects differ across age groups.

Experiment 1

Method

Participants

There were two groups of participants: young children and university students. The children (n = 50; ages 3 to 6; see Table 1) were from the same kindergarten, which included preschool and kindergarten classrooms. Their caregivers volunteered for the children to participate, and the children were given a picture book as a token of appreciation. The university students (n = 32; ages 18 to 22) were volunteers who were given a small payment after participation. The children and university students were all native Chinese speakers, without auditory or visual problems. None of them had received formal musical training, defined as lessons on an instrument or in music theory.

Table 1. Participants’ Demographic Information

Design

Stimuli in a felt emotion task were manipulated in a repeated-measures factorial design, with the perceived emotion of the music (major-happy vs. minor-sad), felt emotion (positive vs. neutral vs. negative), and tempo of the music (fast vs. slow) as within-group factors, and the number of felt emotion selections under each condition as the dependent variable. Data from children and adults were submitted to separate analyses.

The children underwent fewer trials than the adults did. To shorten the trial list, children were first randomly divided into two groups with 25 children in each group. The different musical segments were randomly arranged into two sets, and each group listened to only one of these sets.

Materials

Fifty musical segments played on a piano, 25 in a major key and 25 in a minor key, were evaluated by thirty university students to select representatives of happy and sad music. The students, who did not take part in the main study, rated the perceived emotion (1 = very sad, 3 = neutral, 5 = very happy) and familiarity (1 = absolutely unfamiliar, 5 = very familiar) of each musical segment. Finally, 12 musical segments were selected. The 6 musical segments with the highest perceived emotion ratings (M rating of emotion = 4.4, SD = 0.49) were defined as happy music. These 6 segments were all in a major mode. The 6 musical segments with the lowest perceived emotion ratings (M rating of emotion = 1.72, SD = 0.45) were defined as sad music. These 6 segments were all in a minor mode. The raters reported low familiarity with these 12 musical segments (M familiarity rating = 1.59, SD = 0.50).
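Purely as an illustration of this selection step, the sketch below shows how the top- and bottom-rated segments might be picked from pilot ratings; the file, column names, and use of pandas are our assumptions, since the original selection procedure is not described at this level of detail.

```python
# Illustrative sketch of the stimulus-selection step; the file and column
# names are assumptions (the original selection was not scripted this way).
import pandas as pd

# ratings: one row per (segment, rater) with a 1-5 perceived-emotion score
ratings = pd.read_csv("pilot_ratings.csv")  # hypothetical file

means = ratings.groupby("segment")["emotion_rating"].mean().sort_values()

sad_segments = means.head(6)    # 6 lowest-rated -> "sad" stimuli (all minor)
happy_segments = means.tail(6)  # 6 highest-rated -> "happy" stimuli (all major)
```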

All musical segments were edited in Adobe Audition CS6 to create two versions of each, one at a fast tempo and one at a slow tempo. The average length of the fast versions (184.6 bpm) was 9.78 s (range: 7 s to 14 s, SD = 2.02), whereas the average length of the slow versions (84.8 bpm) was 24.96 s (range: 18 s to 33 s, SD = 5.63). In total, 24 musical segments (6 happy and fast; 6 happy and slow; 6 sad and fast; 6 sad and slow) were used as the target stimuli.
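The tempo editing was done in Adobe Audition CS6; as an illustration only, a comparable time-stretch manipulation could be scripted as below. The file names, the assumed 120-bpm source tempo, and the use of librosa are assumptions, not part of the authors' procedure.

```python
# Hypothetical sketch of a tempo manipulation comparable to the Adobe Audition
# editing described above; file names and the 120-bpm source tempo are assumed.
import librosa
import soundfile as sf

y, sr = librosa.load("segment.wav", sr=None)  # load one original piano segment

SOURCE_BPM = 120.0  # assumption; the originals' tempi are not reported

# rate > 1 shortens/speeds up the audio; rate < 1 lengthens/slows it down.
fast = librosa.effects.time_stretch(y, rate=184.6 / SOURCE_BPM)
slow = librosa.effects.time_stretch(y, rate=84.8 / SOURCE_BPM)

sf.write("segment_fast.wav", fast, sr)  # target: 184.6 bpm version
sf.write("segment_slow.wav", slow, sr)  # target: 84.8 bpm version
```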

Procedure

Stimulus presentation was controlled by files played on a 13.3-inch Dell notebook (Dell Ins 13MF-D1208TA), and the musical segments were played through a loudspeaker. Participants were tested individually in a soundproof room. Each child was accompanied by a researcher, who helped the child to make ratings. No participants reported having difficulty hearing the sounds. Stimuli were presented in a random order for each participant. After listening to each stimulus, participants reported their felt emotion by touching a smiling face, crying face, or neutral face displayed on the monitor. The three faces were displayed on the left (X: 25%, Y: 50%), in the middle (X: 50%, Y: 50%), and on the right (X: 75%, Y: 50%) of the monitor, in a pseudo-randomly arranged order.

Figure 1. Experimental Procedure in Experiments 1 and 2.
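As an illustration of the presentation logic just described (random segment order, pseudo-random face positions), a minimal sketch follows; the stimulus names and the commented-out helper functions are hypothetical, not the authors' software.

```python
# Hypothetical sketch of the trial logic described above; stimulus names and
# helper functions are illustrative assumptions, not the authors' software.
import random

stimuli = [f"segment_{i:02d}.wav" for i in range(1, 25)]  # 24 target segments
random.shuffle(stimuli)  # fresh random presentation order per participant

positions = [(0.25, 0.50), (0.50, 0.50), (0.75, 0.50)]  # left, middle, right

for stim in stimuli:
    faces = ["smiling", "neutral", "crying"]
    random.shuffle(faces)  # pseudo-random face-to-position mapping per trial
    layout = dict(zip(positions, faces))
    # play(stim)                      # hypothetical audio helper
    # response = wait_for_touch()     # hypothetical touch-response helper
    # record(stim, layout, response)  # log which face the participant chose
```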

Instructions for young children

Before the main test, children were asked whether they knew what emotion was represented by each of three faces: “How does each face feel?” All 50 child participants answered correctly (the smiling face felt happy, the crying face felt sad, and the neutral face felt neither) and then took part in the main test. The main test proceeded in the following steps.

Step 1: Have you seen these Emojis? Would you please tell me what kind of emotion each Emoji represents?

Step 2: Good, you can identify these three Emojis. Now, you will hear some pieces of music. We want to use these pieces of music in a new cartoon, to go along with parts of the cartoon that show happiness, sadness, or neutral emotion. Can you help us? After listening to each segment, please use one of these Emojis to tell me what part of the cartoon you think each piece of music should be in.

It should be noted that, in a pilot test, young children had difficulty understanding instructions about neutral felt emotion. Therefore, once the children had chosen whether a musical segment represented happiness or sadness, they were further asked: “Do you also feel happy/sad when listening to this piece of music?” This question provided information about the children’s felt emotion.

Instructions for adults

“Now, you will hear some musical segments. Please rate these musical segments with these Emojis, which represent happiness, sadness, or neutral emotion. Make sure that the emotion you choose is the one induced in you. If no obvious emotion is evoked, please use the neutral face.”

Results and Discussion

In the present study, young children’s reports of each musical segment’s perceived emotion as positive (i.e., happiness) or negative (i.e., sadness) coincided with their felt emotion responses. Therefore, we considered the perceived emotion responses to be indicators of felt emotion. Table 2 shows the number of selections under each condition. Figure 2 shows the mean proportions of selections of smiling and crying faces. Data from children and data from adults were submitted separately to chi-square tests.

Table 2. Numbers of Selections in Experiment 1

Note. N = No significant positive or negative felt emotion was elicited.

Figure 2. Percentages of Participants Who Chose Various Felt Emotions in Response to Given Musical Segments by (a) Young Children and (b) Young Adults, under Different Tempo-mode Consistencies in Experiment 1.

Results on Data from Children

Children’s data were first submitted to a 2 (tempo: fast/slow) * 3 (felt emotion: positive/neutral /negative) chi-square test, which indicated a significant interaction between tempo and felt emotion, χ2(2) = 77.89, p < .001. Simple effects analysis showed that children reported positive felt emotion (46.25% of participants) more often than neutral felt emotion (28.75%), χ2(1) = 16.33, p < .001, and more often than negative felt emotion (25.00%), χ2(1) = 25.35, p < .001, after listening to fast music, but the difference between neutral and negative felt emotion did not reach significance, χ2(1) = 1.05, p > .10. By contrast, children reported negative felt emotion (53.50%) more often than neutral felt emotion (24.75%), χ2(1) = 48.68, p < .001, and more often than positive felt emotion (21.75%), χ2(1) = 53.59, p < .001, after listening to slow music, but the difference between neutral and positive felt emotion did not reach significance, χ2(1) = 0.77, p > .10.
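For illustration, the following minimal sketch shows how a 2 × 3 test of this kind, plus a pairwise simple-effects follow-up, could be computed with scipy; the counts are made up for the example (the study’s selection counts are in Table 2), and the variable names are ours.

```python
# Minimal sketch of the 2 (tempo) x 3 (felt emotion) analysis, assuming scipy;
# the counts below are illustrative, not the study's data (see Table 2).
import numpy as np
from scipy.stats import chi2_contingency, chisquare

# Rows: fast, slow tempo; columns: positive, neutral, negative selections.
observed = np.array([
    [185, 115, 100],   # fast (illustrative counts)
    [ 87,  99, 214],   # slow (illustrative counts)
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"tempo x felt emotion: chi2({dof}) = {chi2:.2f}, p = {p:.4f}")

# Simple-effects follow-up, e.g., positive vs. neutral within the fast row,
# tested as a 1-df goodness-of-fit against an equal split of the two counts.
pos_fast, neu_fast = observed[0, 0], observed[0, 1]
chi2_pair, p_pair = chisquare([pos_fast, neu_fast])
print(f"fast, positive vs. neutral: chi2(1) = {chi2_pair:.2f}, p = {p_pair:.4f}")
```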

Data were then submitted to a 2 (tempo: fast/slow) * 3 (felt emotion: positive/neutral/negative) chi-square test under the happy and sad music conditions, separately. Under the happy music condition, results showed a significant interaction between tempo and felt emotion, χ2(2) = 55.33, p < .001. Simple effects analysis showed that children reported positive felt emotion (53.00% of participants) more often than neutral felt emotion (30.50%), χ2(1) = 12.13, p < .001, and more often than negative felt emotion (16.50%), χ2(1) = 38.34, p < .001, in response to fast music. In addition, the difference between neutral and negative felt emotion reached significance, χ2(1) = 8.34, p < .01. By contrast, children reported negative felt emotion (48.50%) more often than neutral felt emotion (28.50%), χ2(1) = 10.39, p < .001, and more often than positive felt emotion (23.00%), χ2(1) = 18.19, p < .001, in response to slow music. However, the difference between neutral and positive felt emotion did not reach significance, χ2(1) = 1.18, p > .10.

Under the sad music condition, the interaction between tempo and felt emotion reached significance, χ2(2) = 17.12, p < .001. Simple effects analysis showed that for fast music, children reported positive felt emotion (39.50% of participants) significantly more often than negative felt emotion (33.50%), χ2(1) = 4.70, p < .05, but there was no significant difference between positive and neutral felt emotion (27.00%), or between negative and neutral felt emotion, ps > .10. By contrast, for slow music, children reported negative felt emotion (58.50%) more often than neutral felt emotion (21.00%), χ2(1) = 35.37, p < .001, and more often than positive felt emotion (20.50%), χ2(1) = 36.56, p < .001.

Results on Data from Adults

In the adult sample, a 2 (tempo: fast/slow) * 3 (felt emotion: positive/neutral/negative) chi-square test showed a significant interaction between tempo and felt emotion, χ2(2) = 554.45, p < .001. Simple effects analysis showed that fast music elicited positive felt emotion (61.72% of participants) more often than neutral felt emotion (24.74%), χ2(1) = 121.47, p < .001, and more often than negative felt emotion (13.54%), χ2(1) = 236.85, p < .001. By contrast, slow music elicited negative felt emotion (57.94%) more often than neutral felt emotion (34.77%), χ2(1) = 44.50, p < .001, and more often than positive felt emotion (7.29%), χ2(1) = 302.04, p < .001; in addition, the difference between neutral and positive felt emotion reached significance, χ2(1) = 137.84, p < .001.

The adult data were then submitted to a 2 (tempo: fast/slow) * 3 (felt emotion: positive /neutral/negative) chi-square test under happy and sad music conditions, separately. Under the happy music condition, the interaction between tempo and felt emotion reached significance, χ2(2) = 379.97, p < .001. Simple effects analysis that focused on tempo in relation to different emotions showed that fast music elicited positive felt emotion (79.12% of participants) more often than neutral felt emotion (17.97%), χ2(1) = 148.06, p < .001, and more often than negative felt emotion (2.86%), χ2(1) = 272.54, p < .001; in addition, fast music elicited neutral felt emotion more often than negative felt emotion, χ2(1) = 42.05, p < .001. By contrast, slow music elicited neutral felt emotion (46.61%) more often than positive felt emotion (10.94%), χ2(1) = 71.42, p < .001, and elicited negative felt emotion (42.45%) more often than positive felt emotion, χ2(1) = 71.42, p < .001; however, the difference between neutral felt emotion and negative felt emotion did not reach significance, χ2(1) = 0.75, p > .10.

Under the sad music condition, there was a significant interaction between tempo and felt emotion, χ2(2) = 232.72, p < .001. Simple effects analysis that focused on tempo in relation to different felt emotions showed that fast music elicited positive felt emotion (44.27% of participants) more often than neutral felt emotion (31.51%), χ2(1) = 8.25, p < .01, and more often than negative felt emotion (24.22%), χ2(1) = 22.54, p < .001; however, the difference between neutral and negative felt emotion did not reach significance, χ2(1) = 3.66, p > .05. By contrast, slow music elicited negative felt emotion (73.44%) more often than neutral felt emotion (22.92%), χ2(1) = 101.72, p < .001, and more often than positive felt emotion (3.65%), χ2(1) = 242.65, p < .001; in addition, slow music elicited neutral felt emotion more often than positive felt emotion, χ2(1) = 53.69, p < .001.

Summary

Findings in Experiment 1 indicated that tempo affected the felt emotion of most of the participants, including young children and young adults. That is, fast musical segments more often elicited positive felt emotion, whereas slow musical segments more often elicited negative felt emotion.

Comparing findings from children and adults, the results suggest that young adults depend more than young children do on tempo when processing unfamiliar musical segments under consistent tempo-mode conditions. In addition, children, but not adults, were inconsistent in their reports of felt emotion in response to minor-mode music played at a fast tempo. By contrast, although adults had inconsistent views of major-mode music played at a slow tempo, they did not think that these musical segments represented happiness (see Figure 2).

Experiment 2

Experiment 2 tested timbre effects on felt emotion. Consistent with findings in recent research (Bresin & Friberg, 2011; Juslin & Laukka, 2003; Juslin & Sloboda, 2013), music played on a trumpet (which tends to evoke happy felt emotion) and music played on a violin (which tends to evoke sad felt emotion) were selected as the target timbres. In addition, consistency between tempo and mode was controlled. That is, happy musical segments (i.e., music in a major mode) were presented at a fast tempo, whereas sad musical segments (i.e., music in a minor mode) were presented at a slow tempo. Therefore, timbre effects on felt emotion could be examined in isolation from these other two characteristics of music.

Method

Participants

As in Experiment 1, there were two subsamples, one of children and one of university students. The children (n = 50; ages 3 to 6; see Table 1) were from the same kindergarten as those in Experiment 1. Their caregivers volunteered for them to participate, and they were given a picture book as a token of appreciation. The university students (n = 32; ages 18 to 22; from the same university as in Experiment 1) were volunteers and were given a small payment after participation. The children and university students were all native Chinese speakers, without auditory or visual problems. None of them had received formal musical training, defined as lessons on an instrument or in music theory.

Design

Stimuli in the felt emotion task were manipulated in a repeated-measures factorial design, with perceived emotion of the music (major-happy vs. minor-sad), felt emotion (positive vs. neutral vs. negative), and timbre (trumpet vs. violin) as within-group factors, and the number of felt emotion selections under each condition as the dependent variable. Data from children and adults were submitted to separate analyses.

To shorten the trial list for young participants, children were divided into two groups randomly, 25 children for each group. The different musical segments were randomly arranged into two sets, and each group listened to only one of these sets.

Materials

The same 12 original musical segments (6 happy and 6 sad) that were used in Experiment 1 were edited in Adobe Audition CS6 so that each musical segment had two versions, one played on a trumpet and one on a violin. Thus, 24 musical segments (6 happy, trumpet; 6 sad, trumpet; 6 happy, violin; 6 sad, violin) were used as the target stimuli.

Procedure and instructions

Experimental equipment, procedures and instructions were identical to those in Experiment 1.

Results and Discussion

The number of selections under each condition is shown in Table 3, and the mean proportions of participants who selected the different faces are shown in Figure 3. Chi-square tests were conducted separately on data from children and data from adults.

Table 3. Number of Selections in Experiment 2

Note. In Experiment 2, perceived emotions represented by tempo and by mode were always consistent. N = No significant positive or negative felt emotion was elicited.

Figure 3. Percentages of Different Felt Emotions Reported by (a) Young Children and by (b) Young Adults, under Different Timbre-mode Consistencies in Experiment 2

Results on Data from Children

Data from children were submitted to a 2 (timbre: trumpet/violin) * 3 (felt emotion: positive/neutral/negative) chi-square test. Results showed a significant interaction between timbre and felt emotion, χ2(2) = 34.49, p < .001. Simple effects analysis showed that musical segments played on a trumpet elicited positive felt emotion (49.00% of participants) more often than neutral felt emotion (25.00%), χ2(1) = 15.57, p < .001, and more often than negative felt emotion (26.00%), χ2(1) = 14.11, p < .001. However, the difference between neutral felt emotion and negative felt emotion did not reach significance, χ2(1) = 0.04, p > .10. By contrast, musical segments played on a violin elicited negative felt emotion (50.50%) more often than neutral felt emotion (27.00%), χ2(1) = 14.25, p < .001, and more often than positive felt emotion (22.50%), χ2(1) = 21.48, p < .001. However, the difference between positive felt emotion and neutral felt emotion did not reach significance, χ2(1) = 0.82, p > .10.

The data were then submitted to a 2 (timbre: trumpet/violin) * 3 (felt emotion: positive/neutral/negative) chi-square test under the happy and sad music conditions, separately. Under the happy music condition, the interaction between timbre and felt emotion reached significance, χ2(2) = 19.76, p < .001. Simple effects analysis showed that happy music played on a trumpet elicited positive felt emotion (66.00% of participants) more often than neutral felt emotion (27.00%), χ2(1) = 16.36, p < .001, and more often than negative felt emotion (7.00%), χ2(1) = 47.69, p < .001. In addition, happy music played on a trumpet elicited neutral felt emotion more often than negative felt emotion, χ2(1) = 11.77, p < .001. By contrast, happy music played on a violin showed no significant difference between positive (38.00%) and negative (26.00%) felt emotion, χ2(1) = 2.25, p > .10, between positive and neutral felt emotion (36.00%), χ2(1) = 0.05, p > .10, or between neutral and negative felt emotion, χ2(1) = 1.61, p > .10.

Under the sad music condition, a significant interaction existed between timbre and felt emotion, χ2(2) = 24.14, p < .001. Simple effects analysis showed no significant difference between negative (45.00% of participants) and positive (32.00%) felt emotion, χ2(1) = 2.20, p > .10, or between positive and neutral felt emotion (23.00%), χ2(1) = 1.47, p > .10, when sad music was played on a trumpet; the difference between negative and neutral felt emotion reached significance, χ2(1) = 7.12, p < .01. By contrast, sad music played on a violin elicited negative felt emotion (75.00%) more often than positive felt emotion (7.00%), χ2(1) = 56.39, p < .001, and more often than neutral felt emotion (18.00%), χ2(1) = 34.94, p < .001; in addition, the difference between neutral and positive felt emotion reached significance, χ2(1) = 4.84, p < .05.

In sum, the results indicated that trumpet music elicited positive felt emotion more often than negative felt emotion, but only under the major-mode condition, and violin music elicited negative felt emotion more often than positive felt emotion, but only under the minor-mode condition. In other words, children’s feelings in response to music were affected by timbre, which weakened the effect of mode when the mode and timbre did not represent the same emotion. However, there is also an alternative explanation: Timbre effects could also have been weakened by tempo. This alternative explanation is discussed later in the General Discussion.

Results on Data from Adults

Data from adults were submitted to a 2 (timbre: trumpet/violin) * 3 (felt emotion: positive/neutral/negative) chi-square test. Results showed a significant interaction between timbre and felt emotion, χ2(2) = 29.56, p < .001. Further simple effects analysis showed that music played on a trumpet elicited positive felt emotion (45.83% of participants) more often than negative felt emotion (28.91%), χ2(1) = 14.72, p < .001, and more often than neutral felt emotion (25.26%), χ2(1) = 22.29, p < .001, but the difference between neutral and negative felt emotion did not reach significance, χ2(1) = 0.94, p > .10. By contrast, music played on a violin elicited negative felt emotion (47.40%) more often than positive felt emotion (36.98%), χ2(1) = 4.94, p < .05, and more often than neutral felt emotion (15.63%), χ2(1) = 61.50, p < .001. In addition, music played on a violin elicited positive felt emotion more often than neutral felt emotion, χ2(1) = 33.29, p < .001.

Data were then submitted to a 2 (timbre: trumpet/violin) * 3 (felt emotion: positive/neutral/negative) chi-square test under the happy and sad music conditions, separately. Under the happy music condition, the interaction between timbre and felt emotion reached significance, χ2(2) = 10.43, p < .01. Simple effects analysis showed that happy music played on a trumpet elicited positive felt emotion (84.40% of participants) more often than neutral felt emotion (15.10%), χ2(1) = 92.61, p < .001, and more often than negative felt emotion (0.50%), χ2(1) = 159.03, p < .001; in addition, neutral felt emotion was elicited more often than negative felt emotion, χ2(1) = 26.13, p < .001. Similarly, happy music played on a violin elicited positive felt emotion (73.96%) more often than neutral felt emotion (20.83%), χ2(1) = 57.17, p < .001, and more often than negative felt emotion (5.21%), χ2(1) = 114.63, p < .001; in addition, the difference between neutral and negative felt emotion reached significance, χ2(1) = 18.00, p < .001.

Under the sad music condition, the interaction between timbre and felt emotion reached significance, χ2(2) = 53.81, p < .001. Simple effects analysis showed that sad music played on a trumpet elicited negative felt emotion (57.29% of participants) more often than neutral felt emotion (35.42%) and more often than positive felt emotion (7.29%), χ2(1) = 74.32, p < .001; the difference between neutral and positive felt emotion also reached significance, χ2(1) = 35.56, p < .001. In addition, sad music played on a violin elicited negative felt emotion (89.58%) more often than neutral felt emotion (10.42%), χ2(1) = 120.33, p < .001, and no participants reported positive felt emotion in response to such music.

Summary

With regard to tempo and mode, the findings were similar for children and adults. Consistent with the findings of Experiment 1 and with reports in the literature, there were clear fast-major and slow-minor associations in both age groups. Results from Experiment 2 indicated a strong combined effect of mode and tempo: Participants judged fast-major musical segments as happy more often than sad, and judged slow-minor musical segments as sad more often than happy. That is, the perceived emotion conveyed by tempo and the perceived emotion conveyed by mode were the same. In what follows, this finding is referred to as the consistent tempo-mode effect.

With regard to timbre and mode, the results of Experiment 2 were somewhat similar for children and adults. For children, timbre’s role in eliciting felt emotion seemed to be subsidiary to the role of mode. Firstly, a comparison of children’s results in Experiments 1 and 2 suggests that perceived emotion-consistent timbre enhanced the consistent tempo-mode effect. Secondly, the same comparison shows that perceived emotion-inconsistent timbre weakened the consistent tempo-mode effect on felt emotion. However, the effect of inconsistent timbre was not strong enough to reverse felt emotion (see Figure 2a and Figure 3a). Data from the young adults indicated the same tendency, but only under the conditions in which major-mode music was played on a trumpet and minor-mode music was played on a violin. In other words, young adults seemed to judge the emotion of the given musical segments with little or no consideration of the information provided by timbre.

However, the aim of the present research was to examine the effects of tempo and timbre on felt emotion across age. In Experiment 2, the perceived emotion based on tempo and mode was always consistent (i.e., fast-major-happiness, slow-minor-sadness), so the effects of mode and tempo could not be distinguished. Therefore, timbre effects on felt emotion were further examined under inconsistent tempo-mode conditions in Experiment 3.

Experiment 3

In Experiment 3, timbre effects on felt emotion were examined again, but under inconsistent tempo-mode conditions. That is, major-mode musical segments, representing happiness, were presented at a slow tempo, whereas minor-mode musical segments, representing sadness, were presented at a fast tempo. These conflicting manipulations were used to compare the effects of musical features of different complexities (i.e., mode, simple; timbre, complex) on felt emotion.

Method

Participants

Participants who took part in Experiment 2 also participated in Experiment 3.

Design

Stimuli in the felt emotion task were manipulated in a repeated-measures factorial design, with perceived emotion of the music (major-happy vs. minor-sad), felt emotion (positive vs. neutral vs. negative), and timbre (trumpet vs. violin) as within-group factors, and the number of selections under each condition as the dependent variable. Data from children and adults were submitted to separate analyses.

To shorten the trial list for young participants, children were divided into two groups randomly, 25 children for each group.

Materials

The 12 original musical segments (6 happy and 6 sad) that were used in Experiment 2 were edited in Adobe Audition CS6 so that each musical segment had two versions, one played on a trumpet and one on a violin. Thus, 24 musical segments were used as the target stimuli (6 happy, trumpet; 6 happy, violin; 6 sad, trumpet; 6 sad, violin). Unlike in Experiment 2, the tempo of the music in Experiment 3 was set to be inconsistent with mode: Happy (major-mode) music was presented at a slow tempo, whereas sad (minor-mode) music was presented at a fast tempo.

Procedure

Experimental equipment, procedures and instructions were identical to those in Experiment 1.

Results and Discussion

Table 4 shows the number of selections of different felt emotions under each condition. Figure 4 shows the mean proportion of participants who selected the smiling and crying faces. Data from children and data from adults were analyzed separately.

Table 4. Numbers of Selections in Experiment 3

Note. Perceived emotions represented by tempo and by mode were always inconsistent. N = No significant positive or negative felt emotion was elicited.

Figure 4. Percentages of Felt Emotion Types Elicited by the Given Musical Segments in (a) Young Children and (b) Young Adults, under Different Timbre-mode Consistencies in Experiment 3.

Results on Data from Children

Data from children were submitted to a 2 (timbre: trumpet/violin) * 3 (felt emotion: positive/neutral/negative) chi-square test. Results showed a significant interaction between timbre and felt emotion, χ2(2) = 10.16, p < .01. Simple effects analysis showed that music played on a trumpet did not trigger positive felt emotion (38.50%) more often than negative felt emotion (34.50%), χ2(1) = 0.16, p > .05, but did trigger positive felt emotion more often than neutral felt emotion (27.00%), χ2(1) = 4.04, p < .05. The difference in the percentage of participants who reported neutral and negative felt emotion did not reach significance, χ2(1) = 1.83, p > .05. By contrast, music played on a violin elicited negative (48.00%) more often than neutral (27.00%), χ2(1) = 11.76, p < .001, and more often than positive (25.00%) felt emotion, χ2(1) = 14.49, p < .001, but the difference between neutral and positive felt emotion did not reach significance, χ2(1) = 0.15, p > .10.

Data were then submitted to a 2 (timbre: trumpet/violin) * 3 (felt emotion: positive/neutral/negative) chi-square test under the slow-major and fast-minor music conditions, separately. Under the slow-major music condition, a significant interaction existed between timbre and felt emotion, χ2(2) = 6.32, p < .05. Simple effects analysis showed that for music played on a trumpet, there were no differences in the percentages of participants who endorsed positive, neutral, or negative felt emotions, χ2s(1) < 3.27, ps > .05. By contrast, for music played on a violin, participants reported negative felt emotion (54.00%) more often than neutral felt emotion (30.00%), χ2(1) = 4.26, p < .05, and more often than positive felt emotion (16.00%), χ2(1) = 20.63, p < .001, with the difference between neutral and positive felt emotion also being significant, χ2(1) = 6.85, p < .01.

Under the fast-minor music condition, a significant interaction existed between timbre and felt emotion, χ2(2) = 6.87, p < .05. Simple effects analysis showed that when music was played on a trumpet, participants reported positive felt emotion (46.00%) more often than negative felt emotion (28.00%), χ2(1) = 5.56, p < .05, and more often than neutral felt emotion (26.00%), χ2(1) = 4.38, p < .05. However, the difference between negative and neutral felt emotion did not reach significance, χ2(1) = 0.07, p > .10. By contrast, when music was played on a violin, there were no differences in the proportions of participants who endorsed positive, neutral, and negative felt emotions, χ2s(1) < 2.90, ps > .05.

Results on Data from Adults

Data from adults were submitted to a 2 (timbre: trumpet/violin) * 3 (felt emotion: positive/neutral/negative) chi-square test. Results showed a significant interaction between timbre and felt emotion, χ2(2) = 150.25, p < .001. Simple effects analysis showed that when music was played on a trumpet, participants reported positive felt emotion (44.79%) more often than negative felt emotion (16.15%), χ2(1) = 51.70, p < .001, and reported neutral felt emotion (39.06%) more often than negative felt emotion, χ2(1) = 36.53, p < .001. However, the difference between positive and neutral felt emotion did not reach significance, χ2(1) = 1.50, p > .10. By contrast, when music was played on a violin, participants reported negative felt emotion (50.52%) more often than neutral felt emotion (39.06%), χ2(1) = 5.63, p < .05, and more often than positive felt emotion (10.42%), χ2(1) = 101.35, p < .001, with the difference between neutral and positive felt emotion also reaching significance, χ2(1) = 63.68, p < .001.

Data were then submitted to a 2 (timbre: trumpet/violin) * 3 (felt emotion: positive/neutral/negative) chi-square test, separately under the slow-major and fast-minor music conditions. Under the slow-major music condition, the interaction between timbre and felt emotion reached significance, χ2(2) = 39.39, p < .001. Simple effects analysis showed that when music was played on a trumpet, participants reported neutral felt emotion (53.13%) more often than negative felt emotion (28.65%), χ2(1) = 14.07, p < .001, and more often than positive felt emotion (18.23%), χ2(1) = 32.77, p < .001. The difference between negative and positive felt emotion also reached significance, χ2(1) = 4.44, p < .05. In addition, when music was played on a violin, participants reported negative felt emotion (56.25%) more often than neutral felt emotion (40.10%), χ2(1) = 5.20, p < .05, and more often than positive felt emotion (3.65%), χ2(1) = 88.70, p < .001. The difference between neutral and positive felt emotion also reached significance, χ2(1) = 58.33, p < .001.

Under the fast-minor music condition, a significant interaction existed between timbre and felt emotion, χ2(2) = 135.90, p < .001. Simple effects analysis showed that when music was played on a trumpet, participants reported positive felt emotion (71.35%) more often than neutral felt emotion (25.00%), χ2(1) = 42.82, p < .001, and more often than negative felt emotion (3.60%), χ2(1) = 117.36, p < .001. The difference in the proportion of participants endorsing neutral and negative felt emotion was also significant, χ2(1) = 30.56, p < .001. By contrast, music played on a violin elicited negative felt emotion (44.79%) more often than positive felt emotion (17.18%), χ2(1) = 23.61, p < .001. However, no significant difference was found between negative and neutral (39.06%) felt emotion, χ2(1) = 1.06, p > .10.

Summary

Firstly, differences between the findings of Experiments 2 and 3 can be interpreted as reflecting a strong effect of tempo on the feelings elicited while listening to music. Musical segments with a slow tempo elicited negative felt emotion more often in Experiment 2 (when tempo and mode had the same perceived emotion) than in Experiment 3 (when tempo and mode had different perceived emotions), and the reverse was true for musical segments with a fast tempo (see Figures 3 and 4). The way we selected the musical segments in Experiment 3 was similar to that in Experiments 1 and 2.

Secondly, consistency between the perceived emotions of tempo and timbre (e.g., slow music played on a violin, or fast music played on a trumpet) enhanced the corresponding feelings in response to the given musical segments. By contrast, inconsistency between tempo and timbre (e.g., slow music played on a trumpet, or fast music played on a violin) hindered typical felt emotion while listening to the given musical segments, especially for young children. These findings suggest that timbre was subsidiary to tempo in eliciting felt emotion in Experiment 3. Compared with Experiment 2, in which a timbre effect was found in data from young children but not in data from young adults, Experiment 3 indicated that different information (i.e., tempo and timbre) had different effects on the felt emotion of young children and adults.

General Discussion

Researchers have shown great interest in whether children use the same properties of music as adults do when determining whether music makes them feel happy or sad (i.e., felt emotion). However, most studies have focused on tempo and mode, which are both simple acoustic features, and there has been little research on the interactions between simple and complex acoustic features in processing music at different ages. The current study addressed an unanswered question: Does the perceived emotion associated with various musical properties influence listeners’ felt emotion differently at different ages?

In the present study, simple and complex acoustic features, namely tempo and timbre, were examined in relation to the felt emotion evoked while processing music in young children and young adults. In all three experiments, mode (major or minor), as the fixed feature of each specific musical segment, was used to define the perceived emotion of the musical segments as happy or sad. By manipulating the consistency among mode, tempo (a simple acoustic feature), and timbre (a complex acoustic feature) across the three experiments, we tested how acoustic features of different complexities affected felt emotion at different ages. As expected:

  1. Tempo had a significant effect on felt emotion, and the pattern of the effect in young adults was similar to that in young children, but generally stronger (Experiment 1).

  2. There was a timbre effect, but it was negligible once the connections (i.e., consistent or inconsistent) between tempo and mode were considered (Experiment 2).

  3. When tempo and mode were inconsistent, felt emotions tended not to match the music’s perceived emotion as represented by mode (Experiment 3); the timbre effects appeared to be due to a consistent tempo-timbre effect in young children, an effect that was even stronger in young adults.

  4. Surprisingly and interestingly, young children seemed to show higher sensitivity to timbre than young adults did (Experiments 2 and 3).

Comparing the first, second, and third findings listed above, the effects of the simple acoustic features were more reliable than those of the complex feature. However, participants’ felt emotion depended more on tempo than on mode, even though both are simple acoustic features, as the third finding shows. The current findings are consistent with the view that mood states are affected by tempo (Husain et al., 2002). In addition, the third finding indicated a conditional and weak timbre effect, which is consistent with the timbre effects on felt emotion reported in the literature (Bowman & Yamauchi, 2016; Hailstone et al., 2009). Taken together, these findings reveal interactive effects, not previously documented, of acoustic information of different complexities on felt emotion at different ages.

Researchers have long been interested in how two seemingly incommensurable phenomena, music and emotion, are linked together (see the review in Juslin & Sloboda, 2013, p. 583). Davies argued, “although music may convey meaning, it does not do so in terms of a language-like semantics where every note pattern has a fixed meaning” (see Juslin & Sloboda, 2013, p. 595). The “fixed meaning” in this argument denotes a clear correspondence between an emotion and a note pattern. This idea can be generalized to a correspondence between an emotion and its corresponding events (Chen, Liu, et al., 2016), and such a correspondence can be easily accessed. For example, the word “happy” corresponds to events such as “passing an exam” or “falling in love.” As a consequence, events corresponding to the concept “happy” can be easily accessed when processing the word “happy,” provided that these events have been experienced (Chen, Liu, et al., 2016).

In the present study, “clear correspondence” and “easy access” should likewise be seen as two key terms for understanding the links between acoustic features and felt emotion. Firstly, compared with simple acoustic features, complex features, which involve multipolarity and mixed acoustic information, correspond less clearly to felt emotions. Indeed, when the mappings between emotions and acoustic features are considered, the mapping between tempo/mode and felt emotions and the mapping between timbre and felt emotions were shown to differ in the current study: The mapping between timbre and felt emotion was more complicated, and less clear, than the others (Hailstone et al., 2009; Juslin & Sloboda, 2013). Rather than treating these findings as contradicting those reported in the literature (Juslin & Sloboda, 2013), the lack of a clear timbre effect should be seen as evidence of interactions between simple and complex acoustic features in music processing. Secondly, tempo has been found to be mastered earlier than mode as a means of inferring the emotional tone conveyed by music (Dalla Bella et al., 2001), and the tempo effects documented in Experiment 2 were stronger than the mode effects documented in Experiment 3. These findings can also be explained by the mapping view of cognition: People build mappings between concepts, especially abstract concepts, so that they can learn a new concept via an old one or categorize concepts more easily (Chen, Wang, et al., 2016), and mappings between concrete concepts are more easily built than those between abstract concepts (Chen, Liu, et al., 2016). Therefore, the connection between tempo and felt emotions might be more easily accessed than the connection between mode and felt emotions.

It is surprising and interesting that, in the young children’s sample, the strong tempo effect was missing under the inconsistent timbre-mode conditions (i.e., major-mode music played on a violin and minor-mode music played on a trumpet). In both Experiments 2 and 3, more than half of the young listeners’ felt emotions were inconsistent with what the tempo would predict, indicating that their judgments depended less on tempo information than on timbre information (see Figures 3 and 4). There are two alternative explanations: (a) Perhaps young children cannot identify all characteristics of felt emotion, as young adults can; or (b) perhaps young children are more sensitive to timbre and use it as one of the indexes for identifying happy and sad music. Given the differences in response tendencies between young children and young adults, the first explanation seems less plausible than the second. If the second explanation holds, it could be taken as an indication that felt emotion in response to music is partially innate. In addition, the adults’ unequal sensitivity to simple and complex acoustic features could be seen as evidence of learning: After listening to thousands of pieces of music, mappings between simple acoustic features and emotion become much stronger than those between complex acoustic features and emotion. However, because infants were not tested in the present study, the current evidence cannot speak directly to this explanation, and it will be important to address this issue in future research.
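To make this learning account concrete, the toy simulation below is a purely illustrative sketch, not a model proposed or tested in this article; the consistency probabilities, learning rate, and number of pieces are arbitrary assumptions. It shows how a simple delta-rule learner ends up with a stronger cue-emotion association for a cue whose emotional meaning is consistent across pieces, as tempo is argued to be, than for a cue whose meaning varies with context, as timbre does.

```python
# Toy associative-learning sketch (illustrative only): a cue that predicts
# a positive emotion consistently acquires a stronger cue->emotion mapping
# than a cue whose prediction is mixed, under a simple delta-rule update.
import random

def learn(consistency: float, n_pieces: int = 2000, rate: float = 0.05) -> float:
    """Association strength after hearing n_pieces pieces in which the cue
    predicts a positive emotion with probability `consistency`."""
    strength = 0.0
    for _ in range(n_pieces):
        outcome = 1.0 if random.random() < consistency else 0.0
        strength += rate * (outcome - strength)  # delta-rule update
    return strength

random.seed(0)
print(f"consistent, tempo-like cue (95% reliable): {learn(0.95):.2f}")
print(f"mixed, timbre-like cue (60% reliable):     {learn(0.60):.2f}")
```

On this view, the asymptotic association strength simply tracks how reliably the cue predicts the emotion, which is one way of cashing out the claim that repeated exposure strengthens simple-feature mappings more than complex-feature ones.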

In summary, the results suggest that simple acoustic features have stronger effects on felt emotion than complex acoustic features do, and that sensitivity to both types of features changes with age. These results might reflect differences in the strength of the mappings between acoustic features (i.e., simple vs. complex) and felt emotion that are built up through music learning.

All adult participants, and the parents or legal guardians of all minor participants, provided written informed consent to participate in Experiments 1, 2, and 3. Before the study began, it was reviewed and approved by the Human Research Ethics Committee for Non-Clinical Faculties (the ethics committee of the School of Psychology, South China Normal University).

Footnotes

This project was supported by the China Scholarship Council (No. 201906755010), the National Natural Science Foundation of China (No. 31970983), and the Planned Project of Philosophy and Social Science in Jiangmen, 2017 (No. JM2017B12).

References

Andrews, M. W., Dowling, W. J., Bartlett, J. C., & Halpern, A. R. (1998). Identification of speeded and slowed familiar melodies by younger, middle-aged, and older musicians and nonmusicians. Psychology & Aging, 13, 462–471. https://doi.org/10.1037/0882-7974.13.3.462
Balkwill, L.-L., & Thompson, W. F. (1999). A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues. Music Perception, 17(1), 43–64. https://doi.org/10.2307/40285811
Bowman, C., & Yamauchi, T. (2016). Perceiving categorical emotion in sound: The role of timbre. Psychomusicology: Music, Mind, and Brain, 26(1), 15–25. https://doi.org/10.1037/pmu0000105
Bresin, R., & Friberg, A. (2011). Emotion rendering in music: Range and characteristic values of seven musical variables. Cortex, 47, 1068–1081. https://doi.org/10.1016/j.cortex.2011.05.009
Chen, X., Liu, B., & Lin, S. (2016). Is accessing of words affected by affective valence only? A discrete emotion view on the emotional congruency effect. Frontiers in Psychology, 7, Article 916. https://doi.org/10.3389/fpsyg.2016.00916
Chen, X., Wang, G., & Liang, Y. (2016). The common element effect of abstract-to-abstract mapping in language processing. Frontiers in Psychology, 7, Article 1623. https://doi.org/10.3389/fpsyg.2016.01623
Cunningham, J. G., & Sterling, R. S. (1988). Developmental change in the understanding of affective meaning in music. Motivation and Emotion, 12(4), 399–414. https://doi.org/10.1007/BF00992362
Dalla Bella, S., Peretz, I., Rousseau, L., & Gosselin, N. (2001). A developmental study of the affective value of tempo and mode in music. Cognition, 80, B1–B10. https://doi.org/10.1016/S0010-0277(00)00136-0
Deliens, G., Antoniou, K., Clin, E., Ostashchenko, E., & Kissine, M. (2018). Context, facial expression and prosody in irony processing. Journal of Memory and Language, 99, 35–48. https://doi.org/10.1016/j.jml.2017.10.001
Dolgin, K. G., & Adelson, E. H. (1990). Age changes in the ability to interpret affect in sung and instrumentally-presented melodies. Psychology of Music, 18, 87–98. https://doi.org/10.1177/0305735690181007
Dowling, W. J., Bartlett, J. C., Halpern, A. R., & Andrews, W. M. (2008). Melody recognition at fast and slow tempos: Effects of age, experience, and familiarity. Perception & Psychophysics, 70, 496–502. https://doi.org/10.3758/pp.70.3.496
Gabrielsson, A. (2001). Emotions in strong experiences with music. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 431–449). Oxford University Press.
Gabrielsson, A. (2002). Emotion perceived and emotion felt: Same or different? Musicae Scientiae, 5(1), 123–147. https://doi.org/10.1177/102986490601000203
Gabrielsson, A., & Lindström, E. (2010). The role of structure in the musical expression of emotions. In P. N. Juslin & J. A. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp. 367–400). Oxford University Press.
Gerardi, G. M., & Gerken, L. (1995). The development of affective response to modality and melodic contour. Music Perception, 12, 279–290. https://doi.org/10.2307/40286184
Gregory, A., Worrall, L., & Sarge, A. (1996). The development of emotional responses to music in young children. Motivation and Emotion, 20, 341–349. https://doi.org/10.1007/BF02856522
Hailstone, J. C., Omar, R., Henley, S. M. D., Frost, C., Kenward, M. G., & Warren, J. D. (2009). It’s not what you play, it’s how you play it: Timbre affects perception of emotion in music. Quarterly Journal of Experimental Psychology, 62(11), 2141–2155. https://doi.org/10.1080/17470210902765957
Husain, G., Thompson, W. F., & Schellenberg, E. G. (2002). Effects of musical tempo and mode on arousal, mood, and spatial abilities. Music Perception, 20, 151–171. https://doi.org/10.1525/mp.2002.20.2.151
Isaac, A. M. C. (2018). Prospects for timbre physicalism. Philosophical Studies, 175, 503–529. https://doi.org/10.1007/s11098-017-0880-y
Juslin, P. N., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129(5), 770–814. https://doi.org/10.1037/0033-2909.129.5.770
Juslin, P. N., & Sloboda, J. A. (2013). Music and emotion. In D. Deutsch (Ed.), The psychology of music (3rd ed., pp. 583–645). Academic Press.
Kastner, M. P., & Crowder, R. G. (1990). Perception of the major/minor distinction: IV. Emotional connotations in young children. Music Perception, 8, 189–202. https://doi.org/10.2307/40285496
Kawakami, A., Furukawa, K., Katahira, K., Kamiyama, K., & Okanoya, K. (2013). Relations between musical structures and perceived and felt emotion. Music Perception, 30(4), 407–417. https://doi.org/10.1525/MP.2013.30.4.407
Kousta, S.-T., Vigliocco, G., Vinson, D. P., Andrews, M., & Del Campo, E. (2011). The representation of abstract words: Why emotion matters. Journal of Experimental Psychology: General, 140(1), 14–34. https://doi.org/10.1037/a0021446
Kousta, S.-T., Vinson, D. P., & Vigliocco, G. (2009). Emotion words, regardless of polarity, have a processing advantage over neutral words. Cognition, 112(3), 473–481. https://doi.org/10.1016/j.cognition.2009.06.007
Kraus, N., Skoe, E., Parbery-Clark, A., & Ashley, R. (2009). Experience-induced malleability in neural encoding of pitch, timbre, and timing: Implications for language and music. Annals of the New York Academy of Sciences, 1169, 543–557. https://doi.org/10.1111/j.1749-6632.2009.04549.x
Krumhansl, C. L. (2002). Music: A link between cognition and emotion. Current Directions in Psychological Science, 11, 45–50. https://doi.org/10.1111/1467-8721.00165
Laukka, P., Eerola, T., Thingujam, N. S., Yamasaki, T., & Beller, G. (2013). Universal and culture-specific factors in the recognition and performance of musical affect expressions. Emotion, 13(3), 434–449. https://doi.org/10.1037/a0031388
Livingstone, S. R., Mühlberger, R., Brown, A. R., & Loch, A. (2007). Controlling musical emotionality: An affective computational architecture for influencing musical emotions. Digital Creativity, 18(1), 43–53. https://doi.org/10.1080/14626260701253606
Lundqvist, L.-O., Carlsson, F., Hilmersson, P., & Juslin, P. N. (2009). Emotional responses to music: Experience, expression, and physiology. Psychology of Music, 37, 61–90. https://doi.org/10.1177/0305735607086048
Nantais, K. M., & Schellenberg, E. G. (1999). The Mozart effect: An artifact of preference. Psychological Science, 10, 370–373. https://doi.org/10.1111/1467-9280.00170
Nawrot, E. S. (2003). The perception of emotional expression in music: Evidence from infants, children and adults. Psychology of Music, 31(1), 75–92. https://doi.org/10.1177/0305735603031001325
Pell, M. D. (2005). Nonverbal emotion priming: Evidence from the "facial affect decision task". Journal of Nonverbal Behavior, 29(1), 45–73. https://doi.org/10.1007/s10919-004-0889-8
Schellenberg, E. G., & Trehub, S. E. (1996). Natural musical intervals: Evidence from infant listeners. Psychological Science, 7(5), 272–277. https://doi.org/10.1111/j.1467-9280.1996.tb00373.x
Webster, G. D., & Weir, C. G. (2005). Emotional responses to music: Interactive effects of mode, texture, and tempo. Motivation and Emotion, 29(1), 19–39. https://doi.org/10.1007/s11031-005-4414-0
Zentner, M., Grandjean, D., & Scherer, K. R. (2008). Emotions evoked by the sound of music: Characterization, classification, and measurement. Emotion, 8(4), 494–521. https://doi.org/10.1037/1528-3542.8.4.494
Table 1. Participants’ Demographic Information

Figure 1. Experimental Procedure in Experiments 1 and 2.

Table 2. Numbers of Selections in Experiment 1

Figure 2. Percentages of Participants Who Chose Various Felt Emotions in Response to Given Musical Segments by (a) Young Children and (b) Young Adults, under Different Tempo-Mode Consistencies in Experiment 1. Note. N = no significant positive or negative felt emotion was elicited.

Table 3. Numbers of Selections in Experiment 2

Figure 3. Percentages of Different Felt Emotions Reported by (a) Young Children and (b) Young Adults, under Different Timbre-Mode Consistencies in Experiment 2. Note. In Experiment 2, the perceived emotions represented by tempo and by mode were always consistent. N = no significant positive or negative felt emotion was elicited.

Table 4. Numbers of Selections in Experiment 3

Figure 4. Percentages of Felt Emotion Types Elicited by the Given Musical Segments in (a) Young Children and (b) Young Adults, under Different Timbre-Mode Consistencies in Experiment 3. Note. In Experiment 3, the perceived emotions represented by tempo and by mode were always inconsistent. N = no significant positive or negative felt emotion was elicited.