
Differential impairment in recognition of emotion across different media in people with severe traumatic brain injury

Published online by Cambridge University Press:  01 July 2005

SKYE MCDONALD and JENNIFER CLARE SAUNDERS
School of Psychology, University of New South Wales, Sydney, Australia

Abstract

Recent evidence suggests that there may be dissociable systems for recognizing emotional expressions from different media, including audio and visual channels, and still versus moving displays. In this study, 34 adults with severe traumatic brain injuries (TBI) and 28 adults without brain injuries were assessed for their capacity to recognize emotional expressions from dynamic audiovisual displays, conversational tone alone, moving facial displays, and still photographs. The TBI group were significantly impaired in their interpretation of both audio and audiovisual displays. In addition, eight of the 34 were significantly impaired in their capacity to recognize still facial expressions. In contrast, only one individual was impaired in the recognition of moving visual displays. Information processing speed was not found to play a significant role in producing problems with dynamic emotional expression. Instead, the results suggest that visual moving displays may enlist brain systems different from those engaged by still displays, for example, the parietal cortices. Problems with the processing of affective prosody, while present, were not clearly related to other emotion processing problems. While this may attest to the independence of the auditory affective system, it may also reflect problems with the dual demands of listening to conversational meaning and affective tone. (JINS, 2005, 11, 392–399.)

Research Article
© 2005 The International Neuropsychological Society

INTRODUCTION

Evidence has been steadily accruing that a significant proportion of people with severe traumatic brain injury (TBI) are impaired when required to judge the emotional state of others. Deficits in emotion recognition have been reported on the basis of photographs (Croker & McDonald, submitted; Green et al., 2004; Jackson & Moffat, 1987; Milders et al., 2003; Prigatano & Pribram, 1982; Spell & Frank, 2000), audiotaped remarks (Marquardt et al., 2001; McDonald & Pearce, 1996; Milders et al., 2003; Spell & Frank, 2000), and audiovisual displays (McDonald et al., 2003; McDonald & Flanagan, 2004). Such emotion processing deficits have clear implications for the social functioning of these individuals and need to be a target for rehabilitation. Despite this, relatively little research has been conducted to date to examine the neuropsychological underpinnings of these deficits in TBI in any detail.

Evidence for the neuroanatomical basis of emotion perception disorders derives mainly from studies of focal neurological populations. Earlier work focused upon the hemispheric lateralization of emotional processes (Borod, 1993; Borod et al., 1986; Cicone et al., 1980). Although asymmetry between hemispheres continues to be apparent (Adolphs, 2002; Angrilli et al., 1999; Tranel et al., 2002), more recent studies emphasize bilateral structures in the processing of emotionally significant stimuli such as faces, including the amygdalae, anterior cingulate gyri, and basal ganglia, in association with parts of the ventral and orbitomedial prefrontal cortices (e.g., Adolphs, 2002; Adolphs et al., 1996; Phillips et al., 2003). In addition, it has been argued that the right somatosensory cortices, including S-I and S-II, the insula, and the supramarginal gyrus, are critical to emotion recognition, enabling the observer to access the sensory qualities of the observed expression “as-if” the expression were their own (Adolphs et al., 1996, 2000, 2003). This theoretical model accords well with the normal adult literature on emotion processing. When observing facial expressions in others, adults experience faint movement of their own face mirroring the facial expression (McHugo & Smith, 1996).

Within neurological populations there is a preponderance in the reporting of deficits in certain emotional categories. In particular, deficits in processing negative emotions such as disgust, fear, and sadness have been the most frequently reported. This has led to speculation that the brain encompasses a number of relatively independent systems evolutionarily developed to process particular emotions (Adolphs et al., 1999; Adolphs & Tranel, 2004; Broks et al., 1998; Buchanan et al., 2004; Sprengelmeyer et al., 1996).

Traumatic brain injuries typically result in multifocal injuries concentrated in the orbitomedial frontal and temporal lobes (Adams et al., 1985; Levin et al., 1987) with attendant, diffuse, axonal damage (Adams et al., 1989). There is clearly overlap between the areas of vulnerability in TBI and limbic and associated structures implicated in neurological populations with emotion processing disorders. The case has also been made that diffuse axonal injury, especially prevalent in acute TBI, may lead to disconnection between limbic structures and the somatosensory cortices resulting in additional emotion recognition deficits (Adolphs et al., 2000; Green et al., 2004). Consequently, it should be no surprise that individuals with TBI have difficulties recognizing emotion. However, the picture is complicated by growing evidence within focal neurological populations that the media of presentation may enlist different neurological systems.

Sensitivity to affective prosody has been reported to be differentially impaired in people with focal lesions, for example, right and left temporal lobectomy (Adolphs et al., 2001) or right hemisphere cortical lesions (Ross & Mesulam, 1979; Ross et al., 1997; Wertz et al., 1998). Like facial expressions, failure to recognize affective prosody is prevalent in those with injuries affecting the temporal and frontal regions (Adolphs et al., 2002). The right frontoparietal cortex, bilateral frontal poles, and left frontal operculum have been strongly implicated in affective prosody processing with other structures such as the amygdalae and basal ganglia also possibly involved (Adolphs et al., 2002; Breitenstein et al., 1998). Similar regions are implicated in facial processing although the possibility that the left and right amygdalae contribute differentially to facial versus prosodic information has been raised (Adolphs et al., 2002). Behaviorally, evidence for a dissociation between the ability to recognize emotion in voice and face in people with brain lesions has been reported (Hornak et al., 1996) suggesting that the two depend upon distinct, if overlapping, systems (Adolphs et al., 2002).

In addition, there is good reason to consider the processes involved in the recognition of static facial expressions as separate from those engaged when processing dynamic facial expressions. The vast majority of research studies have used still, black and white photographs, typically taken from the Ekman and Friesen series (Ekman & Friesen, 1976). However, static images of facial expressions differ from real-life dynamic displays in important ways. On the one hand, they provide the viewer with a significant amount of time to contemplate the fixed expression. This does not occur in real life, where facial expressions evolve rapidly, placing significant demands upon the information processing capacity of the observer. On the other hand, the cues available from facial movement can, in themselves, provide additional information to assist with emotion identification. Indeed, in normal adults, movement of the face alone can be sufficient to recognize some emotions (Bassili, 1978). Additionally, dissociations have been found whereby individuals with focal brain lesions have difficulty with static images but not dynamic ones (Adolphs et al., 2003; Humphrey et al., 1993). On the basis of these findings it has been argued that there may be further fractionation of the neural systems underpinning emotion recognition. Adolphs and colleagues (2003) have argued that dynamic emotional expressions may be processed via parietal cortical systems whereas the limbic system and associated prefrontal structures may be essential for static images.

AIMS

These reported findings in neurological patients lead to a range of hypotheses concerning the relative performance of adults with TBI when recognizing emotions across different media. While deficits across the range of media have been reported, there has not been, to date, a systematic examination of relative competencies across visual, audio, and mixed media or an examination of relative competencies on static versus moving images. This was the main aim of this study. Consistent with previous research, it was hypothesized that people with TBI would have deficits in all media. However, their relative competencies in one medium versus another were less certain. Firstly, the purported independence of systems underlying facial and prosodic affect may lead to differential disturbances in one versus the other, although the direction of such a difference is unknown. Where comparisons have been made these have contrasted photographs to audiotapes and have reported either no differences between the two (Milders et al., 2003) or greater competence in one over the other depending on the age of the target face (Spell & Frank, 2000). Arguably, a more suitable comparison of audio and visual media would be between audio and video presentations.

Secondly, the relative competency of people with TBI when judging emotional expressions from still versus dynamic visual displays is unknown. Diffuse axonal injuries may account not only for some emotion processing deficits (Green et al., 2004) but also for reduced information processing speed (van Zomeren & Brouwer, 1987), as is frequently reported in this population (Tate et al., 1991). Should reduced information processing be a significant component in problems with emotion recognition, dynamic emotional displays might be expected to be more difficult for people with TBI than still displays. If, on the other hand, dynamic displays enlist different neural systems, in particular the parietal cortices as opposed to temporofrontal systems, then the reverse might be expected. Classical work outlining the major areas of trauma following closed head injury depicts a pattern that not only implicates the orbitomedial aspects of the temporal and frontal lobes, but spares the parietal and posterior temporal cortices (e.g., Courville, 1945).

Finally, while there is evidence that people with TBI have deficits in processing emotional states in others when these are conveyed via audiovisual displays (McDonald et al., 2003; McDonald & Flanagan, 2004), the relative contribution of problems processing the audio or visual channels to such difficulties has not been examined.

PARTICIPANTS

Thirty-four adults (nine females and 25 males) aged between 21 and 64 years (mean age 41 years) with severe traumatic brain injuries were recruited for this project from the outpatient records of three metropolitan Brain Injury Units in NSW, Australia. Participants were selected according to the following criteria: they had suffered a severe traumatic brain injury resulting in altered consciousness of one day or greater; they were at least one year postinjury, discharged from hospital, and living in the community; they were fluent English speakers; and they did not suffer aphasia and had normal sight and hearing. These participants were also involved in a related study examining the influence of emotion and mentalizing on pragmatic understanding (McDonald & Flanagan, 2004) and a subset of that data is also presented here. The clinical details of the participants with TBI are provided in Table 1.

Clinical details of TBI participants

The TBI group had a mean length of Post Traumatic Amnesia (PTA) of 76 days (S.D. = 59) which is typical, being comparable to that seen in a consecutive series of people in rehabilitation for severe TBI (Tate et al., 1989). Testing occurred on average 9.5 years postinjury (S.D. = 8). As is apparent from Table 1, the group was characterized by heterogeneity of injuries sustained in terms of pathology (contusions, hemorrhages, etc.) and location of injury within the brain. The initial evidence of cerebral pathology is only a crude measure of the nature and extent of injuries sustained with many microscopic lesions and neuronal shearing undetectable with the clinical tools available at the time. Nevertheless, the preponderance of frontal lobe lesions reported in these participants in the initial stages of their injuries is also typical for this group.

On average, the TBI participants had achieved 13 years of education (S.D. = 3.1). Two (6%) of the group had been unemployed prior to their injuries. The remainder had been employed in occupations ranging from unskilled (18%), through to skilled trade/clerical (48%) to professional/managerial (21%) or student (9%). After their injuries they experienced a significant loss of employment status. At the time of recruitment, 79% were either unemployed or working as volunteers. Only two (6%) had maintained jobs in professional or clerical areas, while three (9%) had found work as unskilled workers and two were students. This drop to approximately 20% employed postinjury accords with estimates in independent outcome studies (e.g., Brooks et al., 1987; McMordie et al., 1990; Ponsford et al., 1995; Tate et al., 1989) suggesting, once again, that this group represents a profile not dissimilar to that of severe TBI outcome generally.

A group of 28 adults (22 males and 6 females) without neurological impairment were recruited from the general community. The average age of the control group was 40.7 years (S.D. = 11.8), not dissimilar to the TBI group. On the other hand, despite efforts to recruit people with similar educational backgrounds to the TBI participants, the control group were significantly better educated (M = 15.4, S.D. = 2.1, t = −3.957, unequal variance, p < .001). Consequently, all subsequent analyses were conducted using education as a covariate.

MATERIALS

Materials comprised emotional stimuli using four different media. Each of these was developed by using stimuli from The Awareness of Social Inference Test (TASIT) (McDonald et al., 2002). Part 1: The Emotion Evaluation Test (EET) comprises 28 videoed vignettes of actors engaged in an ambiguous or neutral conversation while depicting one of six basic emotions (happy, surprised, angry, sad, fearful, disgusted) or, alternatively, a neutral state. There are four exemplars of each emotion. In order to ensure the emotions were as authentic as possible, professional actors trained in the “method” school of acting were employed for the making of the videos. Consistent with this method, each actor induced the requisite emotion in him or herself prior to enacting the script. Thus, while not completely spontaneous, the video recordings provided a reasonable approximation of a real emotional state. The EET has alternate forms which are comparable in terms of overall difficulty level and which produce high agreement in normal adults as to the depicted emotion (McDonald et al., 2003). Form A was used in its usual presentation format to test audiovisual recognition, as well as to produce the “still” photographs. Form B was modified for the dynamic (visual) and audio presentations.

  1. Audiovisual presentation. The 28 stimuli of Form A of Part 1 (EET) of TASIT were presented to test audiovisual recognition. Each vignette was shown to the participants, who were asked to choose the emotional state of the speaker from seven written category labels: “happy,” “surprised,” “sad,” “angry,” “fearful,” “disgusted,” and “neutral.”
  2. Audio only presentation. Fourteen stimuli of Form B of the EET from TASIT (2 exemplars for each emotional category) were presented with the visual channel turned off. Participants selected the emotional category from a list of seven, as above.
  3. Visual only (dynamic) presentation. The remaining 14 stimuli from Form B of TASIT: EET (2 exemplars for each emotion category) were presented with the audio channel turned off. Participants were, once again, asked to choose the emotion from a list of seven descriptors.
  4. Visual only (still) presentation. Fourteen “stills” were taken from Form A of EET. These were selected from the dynamic display to catch the actor at a point where they were clearly displaying the requisite emotion. While the same stimuli were thus used for Condition 1 (the “audiovisual” task) and Condition 4, the resemblance between the “still” presentations and the dynamic vignette from which they derived was low, especially given that the order of presentation for the stills was altered and the same actors appeared repeatedly across all tasks.

PROCEDURE

The four sets of stimuli were presented in counterbalanced order across all participants. Each participant was tested individually in one session.

RESULTS

The average number of items correctly labeled for each of the four tasks is detailed in Table 2.

Means (and standard deviations) for accuracy labeling different emotional states based on still and moving faces, voice alone and audiovisual cues combined

In order to determine the relative competencies of the two groups across the different media, a 2 × 4 ANCOVA was conducted comparing the two groups across the four presentation types, that is, “audiovisual,” “audio,” “moving facial expression,” and “still facial expression.” Education was entered as a covariate but was subsequently found to have no significant influence on task performance. The TBI group performed differently from controls across the different presentation types (presentation × group interaction: F(3,56) = 3.644, p = .018). Follow-up comparisons based on Bonferroni adjusted 95% confidence intervals indicated that there was no difference between the two groups in their ability to detect emotion from either “still” or “moving” facial expressions. On the other hand, the TBI group was significantly impaired relative to controls in their capacity to identify emotion from either prosody alone (the “audio” task) or a combination of prosody and dynamic visual information (the “audiovisual” task). Separate ANCOVAs were conducted on the different emotion categories for the “audio” and “audiovisual” tasks. There was no significant interaction between type of emotion and subgroup, that is, the TBI participants were equally poor, relative to controls, across the six emotions and the neutral category on both types of task.
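The logic of the interaction test can be sketched in code. The following is an illustration only: it tests a group × presentation interaction by comparing OLS fits with and without interaction columns, treating all observations as between-subjects for brevity (the study's actual design was repeated-measures across presentation types). The function name and the data are fabricated for the example; this is not the study's analysis code.

```python
import numpy as np

def f_interaction(score, group, medium, education):
    """F-test for the group x medium interaction via model comparison:
    OLS with and without the interaction columns. A between-subjects
    simplification of a 2 x 4 ANCOVA with education as covariate
    (the repeated-measures structure of the real design is ignored)."""
    score = np.asarray(score, float)
    group, medium = np.asarray(group), np.asarray(medium)

    def dummies(labels):
        # Drop the first level as the reference category.
        levels = sorted(set(labels))
        return np.column_stack([(labels == lv).astype(float) for lv in levels[1:]])

    G, M = dummies(group), dummies(medium)
    inter = np.column_stack([G[:, i:i + 1] * M for i in range(G.shape[1])])
    base = np.column_stack([np.ones(len(score)), G, M, np.asarray(education, float)])
    full = np.column_stack([base, inter])

    def ssr(X):  # residual sum of squares of an OLS fit
        beta, *_ = np.linalg.lstsq(X, score, rcond=None)
        return ((score - X @ beta) ** 2).sum()

    df1 = inter.shape[1]                    # interaction degrees of freedom
    df2 = len(score) - full.shape[1]        # residual degrees of freedom
    F = ((ssr(base) - ssr(full)) / df1) / (ssr(full) / df2)
    return F, (df1, df2)

# Fabricated example: 2 groups x 4 media, 3 scores per cell.
rows = [(g, m) for g in ("control", "tbi")
        for m in ("audiovisual", "audio", "moving", "still")
        for _ in range(3)]
group = np.array([r[0] for r in rows])
medium = np.array([r[1] for r in rows])
education = np.arange(24.0)
score = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8,
                  9, 7, 9, 3, 2, 3, 8, 4, 6, 2, 6, 4], float)
F, dfs = f_interaction(score, group, medium, education)
```

A significant F for the interaction term is what licenses the follow-up, Bonferroni-adjusted comparisons within each presentation type.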

The finding that the TBI participants, on average, were not poorer than controls on either facial expression recognition test is surprising in the light of recent published reports. However, given the heterogeneity of neuropathology and associated dysfunction following TBI, group averages may mask individual differences. Individual performances within each task were therefore scrutinized. Indeed, eight of the TBI participants performed at an abnormally low level relative to the control group (i.e., more than two standard deviations below the control mean) on the “still” expression task. In contrast, only one participant was similarly impaired on the “moving” task. This suggests that, as reflected in the group mean, moving images did not present an obstacle for the vast majority of TBI participants in this study. On the other hand, the static images were problematic for a significant proportion of the TBI group. Interestingly, a different pattern emerged for the TBI performance on the “audio” task. On this task, while the group was, on average, poorer than controls, this appeared to mainly reflect a relative loss of efficiency, with only three of the TBI group demonstrating significant impairment. Finally, not only was the group as a whole significantly impaired when asked to judge audiovisual displays, but 13 out of the 34 TBI participants had abnormally low scores relative to the controls. Poor performance on the emotion tasks was not related to initial evidence of side of injury or presence of anterior pathology.
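The impairment criterion used in this scrutiny (a score more than two standard deviations below the control-group mean) is simple to express in code. A minimal sketch with fabricated scores; `flag_impaired` is an illustrative helper, not part of the study's materials.

```python
import numpy as np

def flag_impaired(control_scores, tbi_scores, sd_cutoff=2.0):
    """Flag scores more than `sd_cutoff` standard deviations below the
    control-group mean -- the criterion used here to classify an
    individual participant as impaired on a task."""
    control_scores = np.asarray(control_scores, float)
    threshold = control_scores.mean() - sd_cutoff * control_scores.std(ddof=1)
    return np.asarray(tbi_scores, float) < threshold

# Fabricated accuracy scores (out of 14 items) for illustration:
controls = [12, 13, 11, 12, 14, 13, 12, 11, 13, 12]
tbi = [11, 6, 12, 5, 13, 10]
impaired = flag_impaired(controls, tbi)
```

Note the sample standard deviation (`ddof=1`) is used, since the control group is itself a sample.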

In order to determine whether poor performance on one emotion task was associated with poor performance on the others, partial correlations (controlling for education) were conducted. Using a Bonferroni corrected probability level of .008 (i.e., .05/6) to adjust for inflated error associated with multiple comparisons, it was found that scores on the two visual and the audiovisual tasks were significantly associated with each other (“still” vs. “dynamic,” r = .522, p = .002; “still” vs. “audiovisual,” r = .475, p = .005; “dynamic” vs. “audiovisual,” r = .542, p = .001). The scores on the “audio” task were not associated with any other task. In order to determine the extent to which each component skill in emotion processing contributed independently to overall competency judging the emotional expression in audiovisual displays, the three single modality tasks (“still,” “moving,” and “audio”) were entered as predictors into a simultaneous regression analysis with scores on the “audiovisual” task as the dependent variable. Subgroup was also entered in order to determine whether group membership made an independent contribution over and above these component skills. Education was also entered as a covariate. The model was significant (adjusted R square = 0.31, F(5,55) = 6.403, p < .001). The ability to recognize both still and moving faces contributed to accuracy on the audiovisual task (“still”: β = .283, p = .025; “moving”: β = .294, p = .050). As before, the ability to recognize emotional expression in voice was not related. The presence of TBI was also an independent predictor (β = .373, p = .009) suggesting that the group with TBI had difficulties interpreting the “audiovisual” task that were over and above those related to problems understanding emotion from the separate channels.
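The partial-correlation step can be sketched as follows: a partial correlation controlling for a covariate is simply the Pearson correlation of the two variables' residuals after each is regressed on the covariate. The implementation and the Bonferroni constant are illustrative only, not the study's actual analysis code.

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Partial correlation between x and y controlling for covariates:
    the Pearson correlation of the OLS residuals after regressing each
    variable on the covariates (plus an intercept)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    Z = np.column_stack([np.ones(len(x)), np.asarray(covariates, float)])

    def resid(v):
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta

    rx, ry = resid(x), resid(y)
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

# Bonferroni-corrected alpha for the six pairwise task correlations:
alpha = 0.05 / 6  # = .008, as used in the text
```

Each task-pair correlation would then be declared significant only if its p value fell below `alpha`.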

In order to consider the impact of slowed information processing upon the ability to perform emotion categorization tasks, especially the dynamic tasks (“moving” facial expressions and “audio” emotion discriminations), the role of information processing speed was evaluated. The majority of participants had undergone neuropsychological assessments for clinical purposes. Neuropsychological data for 25 of the 34 participants were available on information processing speed as indexed by the Trail Making Test Part A (psychomotor speed) and Part B (psychomotor speed and cognitive switching). Data on a measure of working memory (WAIS–III Digit Span Scaled Score) were available for 29 of the participants. Partial correlations controlling for education indicated that working memory per se (Digit Span) was uncorrelated with any emotion labelling task. Simple psychomotor speed (TMTA) was also uncorrelated. On the other hand, there was a near-significant relationship between the information processing involved in maintaining and alternating two attentional sets (i.e., TMTB) and emotion processing in each of the three separate modalities (“still” expressions, r = −.501, p = .015; “dynamic” expressions, r = −.482, p = .020; “audio,” r = −.443, p = .034; critical Bonferroni corrected p value = .05/4 = .0125). No such relationship existed for the audiovisual discriminations. Because neuropsychological data for three of the participants had been collected during the acute phase of recovery (within one year), the analyses were re-run excluding their data. The pattern of results was unchanged. In order to determine whether there was a unique relationship between information processing speed as indexed by TMTB and any one of the three single modality emotion processing tasks, a regression analysis was conducted using TMTB as the dependent variable and scores on the “still,” “dynamic,” and “audio” tasks as independent predictors. Although the overall model was significant (adjusted R square = .266, F(3,20) = 3.783, p = .027), no single task had a unique relationship with information processing speed over and above the others.
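The simultaneous (entry) regressions reported in this section, whether predicting audiovisual scores or TMTB, share one OLS pattern: all predictors entered at once, with coefficients and adjusted R² reported. A bare-bones sketch; `ols_fit` is an illustrative helper, not the study's software.

```python
import numpy as np

def ols_fit(X, y):
    """Simultaneous-entry OLS regression: returns the coefficient vector
    (intercept first) and the adjusted R^2, the statistic quoted for the
    models in the text."""
    X = np.column_stack([np.ones(len(y)), np.asarray(X, float)])
    y = np.asarray(y, float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = ((y - X @ beta) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    n, p = X.shape
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p)  # penalize predictor count
    return beta, adj_r2
```

For the TMTB model, `X` would hold the three single-modality task scores and `y` the TMTB completion times; the "unique relationship" question then turns on whether any single coefficient is significant once the others are in the model.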

DISCUSSION

The results of this study confirm that a significant proportion of people with TBI have deficits when recognizing emotional expressions in others. This was clearly the case when they were asked to recognize emotions from realistic audiovisual displays and also when asked to categorize emotional expression on the basis of conversational tone of voice. While the group, on average, was not impaired when categorizing emotions based upon visual displays, a significant proportion of the group were abnormally poor when asked to judge “still” expressions. The performance of this subgroup accords with other reports of TBI performance when judging photographs, although the lack of overall group differences does not. This may reflect the makeup of this particular TBI sample, with relatively fewer individuals with deficits in this area. It may also reflect the difference in the type of stimuli used. The majority of research to date that has reported deficits in facial recognition using still stimuli has used faces from the Ekman series (Jackson & Moffat, 1987; Milders et al., 2003; Prigatano & Pribram, 1982), although not all (e.g., Green et al., 2004; Spell & Frank, 2000). The Ekman photographs are not only black and white but visually dated. Thus, they may represent a higher level of abstraction and therefore a greater degree of difficulty for people with TBI than do contemporary, color photographs. Indeed, 22 of the participants with TBI reported here also performed an emotion labelling task based on the Ekman series (see Croker & McDonald, in press) and were found to be significantly impaired on this task, relative to an independent group of non-brain-injured control participants.

In contrast to performance on the “still” visual displays, there was little indication that the TBI participants as a group, or individually, had significant difficulty with “dynamic” visual displays in the absence of audio information. While slowed information processing as indexed by the Trail Making Test (Part B) was marginally associated with poor performance on emotion recognition, this was true for all modalities, including the “still” presentations. There was no evidence that the dynamic visual task made a unique demand on information processing speed. The absence of evidence for (1) group or individual impairment on dynamic recognition and (2) a unique relationship between performance on this task and information processing speed suggests that dynamic emotion recognition is not reliant on generic cognitive processes but, rather, represents a qualitatively distinct skill that is relatively spared in TBI. This would be expected if, as argued by Adolphs and colleagues (Adolphs et al., 2003), it enlisted a different neural system from that involved in static processing. In particular, these results are consistent with the position that dynamic facial expressions are mediated by the parietal cortices.

The TBI group's average performance on the “audio” only task was particularly poor, especially when compared to their performance on the “dynamic” visual displays. Not only this, but competency in the judgement of emotional expression on the basis of conversational tone was not associated with competency in either of the single visual modalities, or with performance on the audiovisual task. This suggests that processing of the affective quality of voice was an independent skill, not related to other efforts to judge emotional expression. While the group were, on average, poorer than controls when estimating emotion from tone of voice, only three individuals within the group were abnormally poor relative to non-brain-damaged controls. It would appear that, for the majority of the group, processing of emotional prosody was less efficient than normal rather than clearly defective.

The lack of association between judgements of emotion from the visual and auditory channels accords with recent arguments that these entail different neural systems (e.g., Adolphs et al., 2002). However, it is also possible that people with TBI failed to accurately gauge emotional tone of voice because they were focused upon the content of the conversation to the relative exclusion of its affective quality, despite the fact that, in all stimuli, the content was carefully constructed to be neutral. The executive and information processing deficits commonly exhibited by people with severe TBI are well known (Tate et al., 1991; van Zomeren & Brouwer, 1987), and the fact that these translate into a concrete, stimulus-bound information processing style is also well documented (Lezak, 1978). In particular, people with TBI are often overly literal in their interpretation of conversational remarks (e.g., McDonald & Flanagan, 2004; McDonald & Pearce, 1996; Pearce et al., 1998), failing to ignore the semantic content in order to appreciate pragmatic inference. In a similar fashion, the ability to process the semantic content of conversational remarks at the same time as monitoring the affective quality of the spoken voice may constitute a dual processing task of some complexity. In partial support of this position, there was a weak association between information processing speed on a dual processing task (TMTB) and poor performance on the “audio” task, although this was not unique. Clearly, the effect of TBI on affective prosody requires further examination. No study to date has examined affective prosody in the absence of semantic content.

Finally, this study has suggested that, despite the dynamic, colorful, contemporary, and bi-channel nature of the audiovisual displays of emotion used in this study, these were the most difficult for people with TBI to interpret. Not only was the group, on average, significantly poorer than control participants, but 13 of the 34 people with TBI were significantly impaired. Poor performance on the audiovisual task to some extent reflected the loss of competency in assessing visual information concerning expressions. However, it was not related to a failure to interpret the audio information. The reasons for this are unclear. It may be that in the situation where both channels are available, the TBI participants focused upon visual cues alone. Alternatively, it may suggest that performance on the “audio” only task called into play a set of strategies that are not normally used when both channels are available. The regression analyses also indicated that poor performance on the audiovisual task was mediated by other characteristics of the brain injury group not captured by these single modality tasks. What these features are awaits further study. What is clear, however, is that for many people with TBI, natural displays of emotion are particularly difficult to interpret relative to simple, stripped down, visual displays.

CONCLUSION

In conclusion, this study has confirmed earlier suggestions that people with TBI have difficulty recognizing emotional expressions. It has extended previous research by examining competency across different types of presentation. According to this study, audiovisual, dynamic displays such as those which occur in everyday settings are the most difficult for people with TBI to interpret accurately. Judging emotion from conversational tone is also difficult, although the reasons for this are unclear and may relate to a failure to process tone and content simultaneously. Consistent with previous research, problems were also apparent for a significant proportion of this TBI sample in the interpretation of still emotional expressions. On the other hand, moving facial expressions were processed relatively efficiently, lending support to the notion that dynamic expressions are processed separately from still expressions, via the parietal cortex.

Finally, it would appear that competency in judging dynamic audiovisual displays of emotion relies upon a variety of skills including, but not restricted to, visual processing. These results add to a growing literature suggesting that social perception deficits are not only a relatively common consequence of TBI, but may reflect a complex interplay of deficits, depending on the medium of the display and whether it is moving or still.

ACKNOWLEDGMENTS

Collection of data for this study was facilitated by a Discovery Grant (DP0218141) from the Australian Research Council. We would also like to express our gratitude to the staff of Liverpool, Ryde and Westmead Brain Injury Units for facilitating access to patients within their service. Finally, we are indebted to the people with TBI and their families as well as our community participants who gave willingly of their time to make this research possible.

Table: Clinical details of TBI participants

Figure: Means (and standard deviations) for accuracy labeling different emotional states based on still and moving faces, voice alone, and audiovisual cues combined