The socio-relational framework of expressive behaviors (SRFB) proposes that expressed emotions are socially learned responses to external stimuli, especially to other social agents. In this view, the central function of expressed emotions is to motivate other individuals to respond to the expresser. For instance, the SRFB assumes that a smile systematically aims to motivate reactions in perceivers that will in turn enhance the smiler's fitness. Although this is undoubtedly one key function of facial expression, I am not comfortable with the strict view that expressive behaviors (among which are expressed emotions) are purely social in nature. Two important lines of research show (1) that individuals (even congenitally blind people) express emotion even in the absence of an audience, and (2) that facial expressions can also play another role in emotional life, namely serving as the grounding for the processing of emotional information (Barsalou 1999). Taken together, I propose, such findings suggest that facial expressions also constitute a cognitive support used to reflect on, or to access, the affective meaning of a given emotional situation or emotion concept.
As a first body of evidence, the social psychology literature shows that individuals express emotion even when no other individuals are present to perceive it. In other words, people express emotion for themselves. Consistent with this notion, Matsumoto and Willingham (2006) found that 72% of the coded expressions of judo athletes occurred when the athletes were not directly facing anyone (they were facing the tatami), as soon as 2.5 seconds after match completion. Of importance, too, Matsumoto and Willingham (2006) found no cultural (i.e., social) differences in the first expressions at match completion, which supports the universality of these expressions; it was instead on the podium (during the medal ceremony) that cultural differences in expression were observed. Crucially, there were also no differences between congenitally blind and sighted athletes in spontaneous expression (Matsumoto & Willingham 2009). Collectively, these findings demonstrate that spontaneous expressions of emotion do not depend solely on observational (social) learning. Matsumoto and Willingham (2006) conclude that the initial expressions were probably not displayed because of the social nature of the event but were, rather, reflections of the athletes' emotional responses to the outcome of the match. This is fully in line with a second body of evidence coming from the embodied cognition literature.
In the growing embodied or grounded cognition literature (e.g., Barsalou 1999; 2008), research has demonstrated that individuals use simulations to represent knowledge. These simulations can occur in different sensory modalities (e.g., van Dantzig et al. 2008; Vermeulen et al. 2008) and in affective systems (Niedenthal 2007; Niedenthal et al., in press; Vermeulen et al. 2007). Thus, expressed emotion (such as facial expression) might also have the function of providing a grounded support for emotional knowledge (for a review, see Niedenthal 2007). Such a view is consistent with the observation that people automatically mimic a perceived facial expression (Dimberg 1982; 1990). The embodied cognition view suggests that mimicry constitutes part of the simulation (emotional mirroring) of the perceived emotion and serves to facilitate its comprehension. Such an interpretation can account for the fact that covert experimental manipulation of facial expressions (the facial feedback hypothesis) influences emotional judgments. For instance, Strack et al. (1988) instructed their participants to hold a pen in their mouth (as if to write with it) either between the teeth (to produce a smiling face) or between the lips (to inhibit smiling) while they assessed cartoons. The findings showed that smile induction increased positive ratings of the cartoons, compared to the condition in which the smile was hampered (for further demonstrations, see also Niedenthal et al. 2001). In addition, the results of a study using electromyography (EMG) confirmed that the impact of the facial manipulation was related to muscular activity (Oberman et al. 2007).
Interestingly, recent studies show that the need to access the emotional meaning of words triggers discrete muscular activity in the face (Niedenthal et al., in press). Specifically, Niedenthal and colleagues found that their participants expressed emotion when trying to represent discrete emotional content, such as content related to disgust. For instance, when participants had to indicate whether the words slug or vomit were related to an emotion, they expressed disgust on their faces, as measured by the contraction of the levator labii (the muscle used to wrinkle one's nose). Importantly, a follow-up experiment showed that blocking facial activation (e.g., with a manipulation that requires holding a pen laterally between one's lips and teeth; Niedenthal et al. 2001) disrupted the emotional judgment. This latter finding suggests a causal (rather than merely correlational) role for the facial activation observed in emotion word processing (Niedenthal et al., in press).
Collectively, the literature reviewed above provides good evidence that perceiving and thinking about emotionally significant information involves re-experiencing (i.e., embodying) that emotion, and that this re-experience often involves the display of a facial expression of emotion.
The SRFB relies in part on findings that females and males do not express emotions in the same way. However, gender differences in expressed emotions might also reflect gender differences in the conceptual organization of emotions, which may in turn be related to previously demonstrated innate gender differences in brain structure and in brain activation during emotional situations (e.g., Aleman & Swart 2008; Gur et al. 2002). Furthermore, individual and cultural differences in emotional expression (e.g., Elfenbein & Ambady 2002) can be comfortably accounted for by theories of embodied cognition (e.g., Niedenthal & Maringer 2009). In sum, while the specifics of the appearance and timing of facial expressions are unquestionably influenced by social learning (and context), the precise developmental and functional proposals of the SRFB do not appear to me to account for all of the findings in the vast literature on the facial expression of emotion.