
Affective facial and lexical expression in aprosodic versus aphasic stroke patients

Published online by Cambridge University Press:  21 October 2005

LEE X. BLONDER
Affiliation:
Sanders-Brown Center on Aging and Department of Behavioral Science, University of Kentucky, Lexington, Kentucky; Brain Research and Rehabilitation Center (BRRC), VA Medical Center, Gainesville, Florida
KENNETH M. HEILMAN
Affiliation:
Brain Research and Rehabilitation Center (BRRC), VA Medical Center, Gainesville, Florida; Department of Neurology, University of Florida, Gainesville, Florida
TIMOTHY KETTERSON
Affiliation:
Brain Research and Rehabilitation Center (BRRC), VA Medical Center, Gainesville, Florida; Department of Clinical and Health Psychology, University of Florida, Gainesville, Florida
JOHN ROSENBEK
Affiliation:
Brain Research and Rehabilitation Center (BRRC), VA Medical Center, Gainesville, Florida; Department of Communicative Disorders, University of Florida, Gainesville, Florida
ANASTASIA RAYMER
Affiliation:
Brain Research and Rehabilitation Center (BRRC), VA Medical Center, Gainesville, Florida; Department of Early Childhood, Speech-Language Pathology and Special Education, Old Dominion University, Norfolk, Virginia
BRUCE CROSSON
Affiliation:
Brain Research and Rehabilitation Center (BRRC), VA Medical Center, Gainesville, Florida; Department of Clinical and Health Psychology, University of Florida, Gainesville, Florida
LYNN MAHER
Affiliation:
Brain Research and Rehabilitation Center (BRRC), VA Medical Center, Gainesville, Florida; Department of Physical Medicine and Rehabilitation, Baylor College of Medicine, Houston, Texas; Houston VA Medical Center, Houston, Texas
ROBERT GLUECKAUF
Affiliation:
Department of Medical Humanities and Social Sciences, Florida State University, Tallahassee, Florida
LESLIE GONZALEZ ROTHI
Affiliation:
Brain Research and Rehabilitation Center (BRRC), VA Medical Center, Gainesville, Florida; Department of Neurology, University of Florida, Gainesville, Florida

Abstract

Past research has shown that lesions in the left cerebral hemisphere often result in aphasia, while lesions in the right hemisphere frequently impair the production of emotional prosody and facial expression. At least 3 processing deficits might account for these affective symptoms: (1) failure to understand the conditions that evoke emotional response; (2) inability to experience emotions; (3) disruption in the capacity to encode non-verbal signals. To better understand these disorders and their underlying mechanisms, we investigated spontaneous affective communication in right hemisphere damaged (RHD) stroke patients with aprosody and left hemisphere damaged (LHD) stroke patients with aphasia. Nine aprosodic RHD patients and 14 aphasic LHD patients participated in a videotaped interview within a larger treatment protocol. Two naïve raters viewed segments of videotape and rated facial expressivity. Verbal affect production was tabulated using specialized software. Results indicated that RHD patients smiled and laughed significantly less than LHD patients. In contrast, RHD patients produced a greater percentage of emotion words relative to total words than did LHD patients. These findings suggest that impairments in emotional prosodic production and facial expressivity associated with RHD are not induced by affective–conceptual deficits or an inability to experience emotions. Rather, they likely represent channel-specific nonverbal encoding abnormalities. (JINS, 2005, 11, 677–685.)

Type
Research Article
Copyright
© 2005 The International Neuropsychological Society

INTRODUCTION

Disorders of emotional communication, such as loss of emotional prosody and facial expression, have long been known to accompany neurologic disease (see Heilman et al., 2003, for a review). For example, the Norwegian neurologist Monrad-Krohn (1947) coined the term aprosody to describe loss of prosody following neurologic illness, and associated it with disorders of the extrapyramidal motor system. Tucker et al. (1977) published the first experimental study documenting that patients with discrete lesions of the right hemisphere often have an inability to express affective intonation in speech. Ross and Mesulam (1979) described 2 additional patients with right hemisphere strokes who displayed a loss of ability to express affective prosody. Since these early studies, a large body of clinical research has shown reductions in emotional prosodic expression associated with damage to the right cerebral hemisphere (Blonder et al., 1995; Borod et al., 1985; Charbonneau et al., 2003; Heilman et al., 2004; Hughes et al., 1983; Ross, 1981, 1997; Weintraub et al., 1981). Acoustic analyses have substantiated listener impressions in showing abnormalities such as decreased variation in fundamental frequency in right hemisphere damaged patients' speech (e.g., Behrens, 1989; Blonder et al., 1995; Kent & Rosenbek, 1982; Pell, 1999; Ross et al., 1988).

Similarly, numerous studies have shown loss of emotional facial expressivity associated with focal lesions in the right hemisphere. In one of the earliest experimental studies of spontaneous facial expression in brain damaged patients, Buck & Duffy (1980) videotaped patients suffering from right hemisphere damage (RHD), left hemisphere damage (LHD), and Parkinson's disease while each individual watched a set of emotionally laden slides. Coders then rated participants' expressivity on a 7-point scale. Results showed that RHD and Parkinson patients were rated as significantly less expressive than LHD patients and normal controls. Borod et al. (1985, 1986) conducted several experiments that further documented diminished emotional facial expressivity in RHD patients. Blonder et al. (1993) compared RHD and LHD patients and normal controls during videotaped interviews conducted in the home. Results of observer ratings indicated that RHD patients showed reduced facial expressivity in comparison to both LHD and neurologically intact participants during spontaneous conversation. In particular, RHD patients demonstrated significantly less smiling and laughter than LHD patients and normal volunteers.

In a recent review of the literature on emotional processing deficits in patients with unilateral brain damage, Borod et al. (2002) reported that six of seven studies of spontaneous prosody and nine of 13 studies of spontaneous facial expressivity found deficits associated with RHD. Although some investigators have suggested that emotional valence (positive, negative) influences hemispheric side of processing (see Davidson, 1984; Fox, 1991; Silberman & Weingartner, 1986), Borod et al. (2001, 2002) concluded that the majority of studies support right hemisphere predominance irrespective of emotional valence.

There are several mechanisms that may account for loss of emotional prosody and facial expressivity in RHD patients. These include a failure to understand the conditions that evoke emotional response, an inability to experience emotions, and a disruption in the capacity to produce non-verbal emotional signals.

Studies of verbal affective expression in RHD patients provide some support for the hypotheses that nonverbal deficits represent either affective conceptual dysfunction or a loss of ability to experience emotions. For example, Bloom et al. (1990) found that RHD patients were judged to produce words of lower emotional intensity in response to emotionally laden slides than LHD patients and normal volunteers. In a second study, Bloom et al. (1992) found that RHD patients produced fewer emotional content elements than visual–spatial or neutral content elements on a picture story task. There were, however, no statistically significant differences in emotional content elements between RHD and LHD patients. Cimino et al. (1991) found that RHD patients produced autobiographical memories that were judged as less emotional than those of normal volunteers, but LHD patients were not assessed. Borod et al. (1996) had raters judge emotionality in monologues produced by RHD patients, LHD patients, and normal volunteers. RHD patients' monologues were rated as significantly less emotional than the monologues of normal controls. There was also a trend for RHD patients' monologues to be rated as less emotional than those of LHD patients.

These studies suggest that reductions in both non-verbal and verbal emotional expressivity are associated with RHD. If so, loss of prosody and facial expression are less likely to reflect channel-specific non-verbal encoding disorders, and more likely to signal either disruption of the patient's capacity to understand the conditions that evoke an emotional response, or a lack of ability to experience emotion. However, in a study of spontaneous expression during videotaped interviews, Langer et al. (2000) showed that the messages of RHD patients were judged to be more positive in verbal content than in facial expression. LHD patients' messages showed the opposite pattern. These results reveal a discrepancy in the emotional content of the verbal versus nonverbal message. As such, they lend support to the hypothesis that communicative impairments following RHD may reflect specific disruption of the ability to encode and communicate feelings via non-verbal channels.

The major objective of this study was to better define the mechanisms that underlie disorders of non-verbal affective expression in patients with unilateral brain damage. Many past studies of emotional communication in RHD patients used experimental paradigms to elicit verbal and/or nonverbal responses. In this study, we sought to evaluate verbal-propositional and nonverbal communication of emotion simultaneously during spontaneous dyadic interaction. Thus, the specific aims were: (1) to examine the ability of RHD patients with aprosody versus LHD patients with aphasia to express emotions using facial behavior and propositional messages during conversational discourse; and (2) to elucidate the possible mechanisms that might account for these disorders (i.e., affective–conceptual dysfunction, loss of ability to experience emotions, non-verbal encoding abnormalities).

METHODS

Research Participants

Unilateral stroke patients were recruited from the University of Florida Medical Center, the Gainesville Veterans' Administration Medical Center (VAMC), Old Dominion University, Baylor College of Medicine, and the Houston VAMC, as part of a multi-site clinical research center investigating treatments for aphasic and aprosodic communication disorders associated with hemispheric strokes (Center for Treatment of Aphasia and Related Disorders). This center consists of four subprojects and three cores. Three of the four subprojects are devoted to the treatment of aphasic disorders while the fourth subproject consists of treatments for expressive aprosody following right hemisphere stroke. The participants included in the present study represent a subset of patients who were enrolled in these four treatment protocols. Left hemisphere damaged patients who did not have aphasia and RHD patients who did not manifest expressive aprosody were not eligible for the treatment protocols and hence were not among the pool of potential participants screened for inclusion in this study.

The pool of potential participants that we screened included 70 unilateral stroke patients (55 LHD aphasics and 15 RHD aprosodics) who qualified for the treatment protocols and who participated in the Outcome Core pre-treatment video assessment. To be eligible for inclusion in the present study, patients had to meet the following criteria: (1) radiologic documentation of a unilateral cerebrovascular event involving either the right or left cerebral cortex (patients with bilateral disease or strokes limited to the basal ganglia were excluded); (2) right-handed; (3) native speaker of English; (4) minimum age of 40 years (to restrict age-related variability in cognitive function); (5) no history of substance abuse, neurological disease other than stroke, severe sensory impairment, or major medical or psychiatric co-morbidities; and (6) data regarding mood status (e.g., completion of the Geriatric Depression Scale).

Using these criteria, we identified 23 chronic stroke patients who qualified for the present study. Nine of these individuals had suffered a right hemisphere stroke accompanied by expressive aprosody (3 women, 6 men), and 14 individuals had suffered a left hemisphere stroke accompanied by aphasia (4 women, 10 men). Expressive aprosody is defined as a reduction in a speaker's ability to convey emotion via intonation and stress. Emotional intonation and stress are represented acoustically by variation in fundamental frequency, intensity, and temporal pattern over the length of an utterance (Kent & Rosenbek, 1982). Presence of expressive aprosody was determined by four experienced clinicians who listened to tape recordings of each participant's performance on the Expressive Emotional Communication Battery (see Rosenbek et al., 2004). This battery is under development at the Cognitive Neuroscience Laboratory at the University of Florida and comprises a series of tests that assess the ability to imitate emotional and syntactic prosody and produce emotional and syntactic prosody to command. If three or more clinicians independently judged the individual to have expressive aprosody, she or he was offered enrollment in the treatment protocol. All LHD aphasic patients suffered from anomia or agrammatism, as determined by performance on the Western Aphasia Battery and the Boston Naming Test. Four of the LHD patients suffered from a fluent aphasia and 10 had a non-fluent aphasia.

Table 1 gives characteristics, by group, of the final sample of 9 RHD aprosodic patients and 14 LHD aphasic patients who met these criteria. The table includes demographic data, neurologic symptomatology, and means and standard deviations on the Western Aphasia Battery aphasia quotient, the Boston Naming Test, and the Geriatric Depression Scale, a self-report measure that emphasizes ideational rather than somatic symptoms and requires simple yes/no responses (Yesavage et al., 1982). We also provide information on lesion location extracted from CT or MRI scans. Unfortunately, we do not have sufficient neuroimaging data to perform more detailed lesion mapping or to calculate lesion volume. The lack of data on lesion volume is a limitation of the study, as we are unable to examine the contribution of lesion size to the pattern of results.

Table 1. Characteristics of aprosodic RHD and aphasic LHD groups

All patients gave informed consent under institutionally approved protocols. As noted previously, the data we present were collected prior to the initiation of the treatments.

Protocol

Each patient was interviewed using a standardized protocol. In the first half of the interview, the patient's caregiver asked the patient a series of predetermined questions while they consumed a snack together. Caregivers were given cue cards with a combination of open-ended and closed-ended questions such as “Do you want some pudding?” or “What did you do today?” The second half of the interview consisted of an examiner asking the patient a series of standardized questions regarding the events or people represented in a set of photographs that included family members, famous people (e.g., Bill Clinton, Michael Jordan), and the moon landing. Both interview segments were designed to elicit conversational discourse. Each interview was videotaped from a frontal view. There were nine examiners in all (seven women and two men) across the research sites (University of Florida and the Gainesville VAMC; Old Dominion University, Norfolk, VA; and Baylor College of Medicine, Houston, TX). All examiners had obtained a minimum of a master's degree in Speech Pathology, Psychology, or Rehabilitation Science, and all were trained to follow the same protocol. Three of the nine examiners interviewed patients in both the RHD and LHD groups. One examiner interviewed 2 RHD patients only, and four additional examiners interviewed the remaining 8 LHD patients. The use of several interviewers across protocols, while introducing variability, also ensures that no systematic interviewer-introduced bias is present.

A research assistant unaware of hypotheses and diagnoses used an Apple iMac computer running iMovie software to edit each patient's videotaped interview into approximately 60 10-s clips (10 min total of ratable videotape). Each 10-s clip was separated by 8 s of blank tape labeled “Rate Clip x now.” The research assistant was instructed to extract 30 continuous 10-s clips from the beginning of the interview, starting with the first word spoken by the patient. The research assistant would then fast-forward to the beginning of the second half of the interview (the examiner showing the patient photographs) and extract a second set of 30 continuous 10-s clips, beginning with the first word spoken by the patient. The edited video clips did not include sound. In seven cases, we were not able to obtain 60 ratable clips. In 5 of the 7 cases (2 LHD and 3 RHD), this was due to transient problems with the video that obscured the facial expression. In 2 cases (both RHD patients), the interview duration was short (about 6 min), which prevented us from obtaining 60 clips. For these 7 cases, the number of ratable clips ranged from 30 to 59 (M = 47).
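The editing itself was done by hand in iMovie. Purely as an illustration of the segmentation logic, the sketch below shows how a comparable cut could be automated, assuming the moviepy library (v1 API); the file path and the start times of the patient's first word in each interview half are hypothetical inputs, not part of the original protocol.

```python
# Minimal sketch only: the study's clips were cut by hand in iMovie.
# Assumes moviepy (v1 API); path and start times are hypothetical inputs.
from moviepy.editor import VideoFileClip

CLIP_LEN = 10        # seconds per ratable clip
CLIPS_PER_HALF = 30  # continuous clips taken from each interview half

def extract_clips(path, first_word_times):
    """Cut up to 30 continuous 10-s clips from each interview half,
    starting at the patient's first spoken word in that half."""
    video = VideoFileClip(path).without_audio()  # edited clips had no sound
    clips = []
    for start in first_word_times:  # one start time per interview half
        for i in range(CLIPS_PER_HALF):
            t0 = start + i * CLIP_LEN
            if t0 + CLIP_LEN > video.duration:  # short interviews yield <60 clips
                break
            clips.append(video.subclip(t0, t0 + CLIP_LEN))
    return clips
```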

Two additional research assistants, naïve to hypotheses and diagnoses, rated facial expressivity in the videotapes using rating scales adapted from past work (see Blonder et al., 1993). Specifically, we used a Facial Expressivity Scale and a Facial Behavior Checklist. The Facial Expressivity Scale is a 5-point unidimensional Likert-type scale. In order to facilitate training, each rater was instructed to assign a rating from 1 to 5 using the following criteria: 1 = absent or minimal facial expressivity, blank stare, little eye contact; 2 = slightly restricted facial expressivity with some movement (i.e., mouth, around eyes); 3 = moderate facial expressivity (i.e., attentive, slight smile, eyebrow raise/squint/wink, crinkles nose); 4 = the participant smiles, frowns, looks sad (i.e., obvious tears or crying), or shows facial animation (e.g., eyebrow raise/squint/wink, crinkles nose); 5 = extreme facial expressivity. The Facial Behavior Checklist was used to assess the presence or absence of certain facial behaviors such as smiling, laughing, tears in the eyes, weeping, and eye contact. Raters were trained to criterion reliability (r > .75; coefficient kappa > .61) on videotapes of stroke patients and normal volunteers involved in a previous study. These research assistants then rated every videotape. Each rater rated the tapes in a different random order, and the raters alternated the order in which they used the two rating scales, such that for a given block of tapes one rater used the Facial Expressivity Scale first while the other used the Facial Behavior Checklist first. Inter-rater reliabilities were recalculated based on ratings of the study tapes (see the Data Analysis section below).

In order to examine verbal affect production, the audio portion of the interview was transcribed into orthographic English and subjected to text analysis using the Linguistic Inquiry and Word Count software program (LIWC; Pennebaker et al., 2001). The LIWC was used to calculate the percentage of total affect words, positive affect words, and negative affect words in each patient's transcript. This program analyzes text files word by word. It contains a dictionary composed of 2,300 words and word stems that are grouped into categories. Of the 615 affect words included in the LIWC emotion or affective sub-dictionaries, 261 are positive (e.g., happy, pretty, good) and 345 are negative (e.g., hate, worthless, enemy). Pennebaker et al. (2001) based the affective sub-dictionaries on words from several sources, including common emotion rating scales, such as the Positive and Negative Affect Scale (Watson et al., 1988), Roget's Thesaurus, and standard English dictionaries. We also used the LIWC to calculate the percent of unique words in the transcripts in order to obtain a general index of semantic production among patients with aprosody versus patients with aphasia.
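LIWC is commercial software, so we cannot reproduce its dictionary here; purely for illustration, a toy re-implementation of the percentage calculations it performs might look like the following. The function names are ours, and the stand-in word sets are the example words listed above, not the actual LIWC sub-dictionaries.

```python
# Illustrative sketch of LIWC-style affect-word percentages. The real LIWC
# affect dictionary has 615 words; these stand-in sets are for demonstration.
import re

POSITIVE = {"happy", "pretty", "good"}    # stand-ins for 261 positive words
NEGATIVE = {"hate", "worthless", "enemy"} # stand-ins for 345 negative words

def affect_percentages(transcript):
    """Return (percent affect, percent positive, percent negative) words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return 0.0, 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    n = len(words)
    return 100 * (pos + neg) / n, 100 * pos / n, 100 * neg / n

def percent_unique(transcript):
    """Percent of word tokens that are unique types (lexical diversity)."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return 100 * len(set(words)) / len(words) if words else 0.0

total, pos, neg = affect_percentages("I was happy but I hate hospital food")
```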

For descriptive purposes, we also wanted to characterize the non-affective aspects of discourse production in the transcripts. We used the Systematic Analysis of Language Transcripts (SALT; Miller & Chapman, 2002), a software program that was originally developed to examine language production in children. The SALT enabled us to examine production of words per turn, percent one-word responses, percent yes/no responses, percent questions, and percent maze words—words that constitute filled pauses, repetitions, or revisions. We expected that RHD patients with aprosody would produce more words per turn and fewer yes/no responses, one-word responses, and maze words than LHD patients with aphasia. We did not have specific predictions regarding the percentage of questions in RHD versus LHD patients' interviews.
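SALT relies on its own transcription conventions (e.g., explicit markup for maze words), which we do not reproduce here. As a rough sketch only, the non-maze measures could be approximated from a turn-segmented transcript as follows; the helper and its input format are illustrative and are not SALT's own.

```python
# Rough approximation of the non-maze SALT measures from a toy transcript,
# where each element is one of the patient's conversational turns.
# (SALT proper uses annotated transcripts; maze words need explicit markup.)
def discourse_measures(turns):
    tokens = [t.split() for t in turns]
    n = len(tokens)
    if n == 0:
        return {}
    return {
        "words_per_turn": sum(len(t) for t in tokens) / n,
        "pct_one_word": 100 * sum(len(t) == 1 for t in tokens) / n,
        "pct_yes_no": 100 * sum(
            bool(t) and t[0].lower().rstrip(".!?,") in ("yes", "no")
            for t in tokens) / n,
        "pct_questions": 100 * sum(t.strip().endswith("?") for t in turns) / n,
    }

print(discourse_measures(["Yes.", "I went to therapy today.", "No, not really."]))
```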

Data Analysis

Inter-rater reliability

Inter-rater reliability for the Facial Expressivity Scale was calculated using Pearson's r. The correlation between the two raters was .81 (p < .0001). The approximately 60 individual ratings made by each rater for each patient were then averaged, and this averaged rating was used as a dependent variable in the data analysis. Inter-rater reliability for the Facial Behavior Checklist was calculated using the kappa coefficient (Landis & Koch, 1977). Kappa coefficients greater than or equal to .61 (substantial) were considered reliable. Kappa coefficients revealed almost perfect agreement: smile, .89; chuckle/laugh, .92; teary-eyed, 1.0; weep, .98; eye contact, .86. Raters then reviewed each discrepantly rated clip and agreed on its coding. Because 7 subjects did not have 60 ratable clips, we computed each facial behavior dependent variable (e.g., smile, laugh) by dividing the total number of video clips in which the behavior was observed by the total number of clips. This produced a rate of occurrence (e.g., 30 smiles in 60 clips = .50, or 50% of clips contained a smile). The occurrences of tears in the eyes and weeping were extremely infrequent, and we did not use these variables in subsequent analyses.
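For illustration, the two reliability statistics and the rate-of-occurrence variable can be computed as in the sketch below, assuming scipy and scikit-learn; all rating vectors shown are hypothetical, not the study data.

```python
# Sketch of the reliability statistics and the rate-of-occurrence variable.
# Assumes scipy and scikit-learn; all rating data here are hypothetical.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Facial Expressivity Scale: Pearson r between the two raters' 1-5 ratings.
ratings_a = [2, 3, 1, 4, 2, 3, 2, 5]
ratings_b = [2, 3, 2, 4, 2, 3, 1, 5]
r, p = pearsonr(ratings_a, ratings_b)

# Facial Behavior Checklist: Cohen's kappa per behavior (1 = present in clip).
smile_a = [1, 0, 1, 1, 0, 0, 1, 0]
smile_b = [1, 0, 1, 0, 0, 0, 1, 0]
kappa = cohen_kappa_score(smile_a, smile_b)  # >= .61 treated as reliable

# After discrepancies are resolved, the dependent variable is the proportion
# of ratable clips containing the behavior (e.g., 30 smiles / 60 clips = .50).
final_smile = [1, 0, 1, 0, 0, 0, 1, 0]
smile_rate = sum(final_smile) / len(final_smile)
```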

Statistical analyses

Due to the small sample size and the fact that not all measures were normally distributed, we compared RHD and LHD patients' demographic and communicative variables using the Mann-Whitney U test (two-tailed). The primary dependent variables consisted of the overall average rating on the Facial Expressivity Scale; the mean percent of patients' video clips that contained smiling, laughing, and eye contact; and LIWC calculations of the percent of patients' total words in the interview that were affect words, including percent positive and percent negative affect words (see Tables 3 and 4). We also report between-group comparisons of non-affective conversational discourse features (Table 2). In addition, about 60% of RHD and LHD patients were taking antidepressants, and the average score of each group on the Geriatric Depression Scale was in the mildly depressed range (see Table 1). We therefore performed Spearman's rank-order correlations between the GDS scores and the affective expression measures to examine potential associations between dysphoric mood and facial and verbal affect production. Finally, Table 1 shows that a higher percentage of RHD aprosodics had subcortical involvement as compared to LHD aphasics. Given that past research has shown an association between basal ganglia disease and non-verbal communication deficits, we performed a chi-square analysis to examine whether the difference in frequency of subcortical involvement was statistically significant.
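As a sketch of the three tests named above, run with scipy on entirely hypothetical data (the arrays below are invented for illustration and do not reproduce the study's values):

```python
# Sketch of the statistical tests described above, on hypothetical data.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr, chi2_contingency

rhd_smile = np.array([.02, .00, .04, .10, .03, .01, .15, .04, .02])  # n = 9
lhd_smile = np.array([.20, .35, .05, .12, .40, .31, .08, .25,
                      .18, .33, .02, .22, .16, .30])                 # n = 14

# Between-group comparison of a communicative variable (two-tailed).
u_stat, p_u = mannwhitneyu(rhd_smile, lhd_smile, alternative="two-sided")

# Mood vs. affective expression: Spearman's rank-order correlation.
gds = np.array([12, 8, 15, 5, 10, 9, 14, 7, 11])  # hypothetical GDS scores
rho, p_rho = spearmanr(gds, rhd_smile)

# Frequency of subcortical involvement by group: 2 x 2 chi-square test.
counts = np.array([[6, 3],    # RHD: subcortical present / absent (hypothetical)
                   [5, 9]])   # LHD
chi2, p_chi, dof, expected = chi2_contingency(counts)
```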

Table 2. Discourse production in aprosodic versus aphasic patients' transcripts

Table 3. Pre-treatment comparisons between aprosodic and aphasic patients: Facial expression in interviews

Table 4. Pre-treatment comparisons between aprosodic and aphasic patients: Affective lexical communication in interviews

RESULTS

We found no statistically significant differences between RHD and LHD patients in age, education, gender distribution, months since stroke, or frequency of subcortical involvement. Furthermore, there were no statistically significant differences between RHD and LHD patients' self-ratings on the Geriatric Depression Scale. As expected, discourse analysis showed that LHD aphasic patients produced significantly fewer words per turn and a significantly higher percentage of one-word responses, yes/no responses, and maze words (repetitions, revisions, filled pauses) than RHD patients. There were no differences between patients with aphasia and patients with aprosody in the percent of utterances that were questions. RHD patients produced a significantly higher percentage of unique words in the interviews than LHD patients. To estimate the magnitude of the statistically significant effects, we used Cohen's d (Cohen, 1988). According to Cohen's (1988) suggested criteria, these effect sizes are large (i.e., Cohen's d of .80 or greater), with the exception of the discourse variable words per turn, which demonstrated a moderate effect size. The means, standard deviations, alpha levels, and effect sizes associated with statistically significant comparisons are given in Table 2. For additional discussion of the two-step process wherein effect sizes are reported for those comparisons that achieve statistical significance, see Levin and Robinson (1999).
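For reference, Cohen's d for two independent groups is the difference in group means divided by the pooled standard deviation; a minimal implementation follows (our helper, not part of the original analysis code).

```python
# Cohen's d for two independent groups, using the pooled standard deviation.
# Conventional benchmarks (Cohen, 1988): .20 small, .50 moderate, .80 large.
import numpy as np

def cohens_d(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) +
                  (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)
```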

When we compared nonverbal communication as expressed during the videotaped interviews, we found that RHD patients produced significantly less smiling and laughter than LHD patients (see Table 3). These effect sizes are large according to Cohen's (1988) criteria. There was also a trend for RHD patients to be rated as less expressive than LHD patients on the Facial Expressivity Scale (p < .10). There were no between-group differences in eye contact. As shown in Table 1, 83% of LHD patients had facial paresis whereas 33% of RHD patients did. We believe that if the presence of facial paresis had influenced the ratings, the results would have gone in the opposite direction; that is, the LHD group would have been rated as exhibiting reduced facial expressivity relative to the RHD group.

Figure 1 illustrates the distribution of smiling in RHD versus LHD patients. Five of the 9 RHD patients smiled in 0–4% of clips, while only 2 of the 14 LHD patients smiled this infrequently. In contrast, 4 of the 14 LHD patients smiled in more than 30% of clips, whereas no RHD patients smiled this often.

Figure 1. Bar graph displaying the percent of video clips that contain smiling in aprosodic versus aphasic patients' interviews. The y-axis represents the percent of patients in each group and the x-axis represents the percent of video clips during which patients smiled.

When we compared the percent of total spoken words that were classified as affective by the LIWC dictionary, we found that RHD patients produced a significantly higher percentage of affect words than did LHD patients (see Table 4). In order to ascertain whether affective words were differentially produced depending upon valence, we compared positive and negative affective words in RHD versus LHD patients' transcripts. Results indicated that RHD patients produced a significantly greater percent of both positive and negative affective words than LHD patients. The magnitudes of these effects are large according to Cohen's (1988) criteria. When we compared positive versus negative affect word production within group, both RHD and LHD patients produced significantly more positive than negative emotion words. The lexical affect production means, standard deviations, alpha levels, and effect sizes (Cohen's d) are given in Table 4.

In order to ascertain whether dysphoria might contribute to these findings, we performed Spearman's rank-order correlations between Geriatric Depression Scale scores and the results on the affective expression measures (smiling, laughing, 5-point Facial Expressivity Scale, percent total affect words, percent positive affect words, percent negative affect words). None of these correlations was significant.

DISCUSSION

This study shows that RHD patients with expressive aprosody display reduced affective facial expressivity and increased lexical expressivity during spontaneous conversation in comparison to aphasic LHD patients. In particular, these aprosodic RHD patients produced significantly less smiling and laughter during the interviews than aphasic LHD patients. Negative emotions such as crying were so infrequently expressed by patients that we were unable to discern whether the reduction in facial expressivity among RHD aprosodics is limited to positive affect or includes both positive and negative emotion. Nevertheless, decreased smiling and laughter are sufficient to account for impressions of so-called flat or blunted affect that clinicians often associate with RHD. The present design did not allow us to determine the prevalence of these non-verbal communicative disorders in RHD patients, nor are we able to determine the frequency of co-occurrence of expressive aprosody and loss of facial expressivity in RHD patients, although this study and previous work by Borod et al. (1985) suggest that they tend to co-occur. Moreover, we lack the neuroimaging data to enable us to map the specific brain regions in the right hemisphere that are associated with expressive aprosody and loss of facial expression. Nevertheless, we note that a slightly larger percentage of RHD patients had infarcts that extended subcortically as compared with LHD patients, although these differences were not statistically significant. Studies of patients with Parkinson's disease suggest that the basal ganglia contribute to both affective prosodic production and facial expressivity (Blonder et al., 1989; Cancelliere & Kertesz, 1990; Jacobs et al., 1995; Kent & Rosenbek, 1982; Smith et al., 1996). Therefore, it is possible that differential damage to subcortical regions in the RHD versus LHD groups might have influenced the pattern of results.

In contrast to several prior studies of lexical emotional expression in patients with unilateral brain damage, we found that this group of RHD patients produced a significantly higher proportion of both positive and negative affective words compared with LHD patients. These results are not consistent with experiments by Bloom et al. (1990, 1992) and Borod et al. (1996) who found diminished ability to express emotion verbally in RHD patients when compared with LHD patients and normal controls. There are several differences between our study and prior research that might account for these discrepancies. First, our LHD patients all qualified for aphasia treatment protocols and thus may have had more severe language disturbances than LHD patients in past studies. Second, we did not compare our patients' discourse to that of neurologically normal control participants. It is possible that the RHD patients in our study would have produced significantly fewer emotion words than normal controls, had we made this comparison. Third, all of our RHD patients suffered from expressive aprosody. We do not know the prevalence of expressive aprosody in the RHD patients reported in past studies, nor do we know whether the presence of aprosody moderates lexical expression of emotion by RHD patients. Fourth, we used a computer software program to determine percent of affect words in interview transcripts. Prior studies have used subjective ratings of verbal affect production in speech rather than calculations of affect words produced. Finally, we based a portion of our analyses on patients' conversational discourse, rather than descriptions of emotional pictures or autobiographical memories. It is possible that conversational interaction facilitates lexical emotional expression.

Our findings do not support prior conclusions that the verbal affect lexicon is lateralized to the right hemisphere. If this were true, then in spite of being aphasic, our LHD patients should have produced a higher proportion of emotion words than RHD patients. The majority of our aprosodic patients sustained large middle cerebral artery infarcts involving frontal, temporal, and parietal regions (see Table 1). In several patients, these lesions extended subcortically. It is unlikely that the verbal affect lexicon would be stored in some common area in the right hemisphere that was spared by these extensive lesions. A more parsimonious explanation is that the verbal affect lexicon is mediated by lexico-semantic processing systems that are located in the left hemisphere. This is indirectly supported by the finding that RHD aprosodic patients displayed greater lexical diversity in general than LHD aphasic patients, as manifested by a significantly greater percent of unique words produced during the interviews.

The finding that our RHD patients spontaneously expressed emotions using verbal–propositional speech is not consistent with the postulates that their failure to use emotional prosody or facial expression represents an affective–conceptual defect or a loss of ability to experience emotions. Rather, these results suggest that RHD disrupts patients' ability to encode or communicate emotions via non-verbal signals. This conclusion is consistent with prior findings that the right hemisphere may house a non-verbal affect lexicon, i.e., an internal representation of non-verbal communicative signals coupled with information regarding the emotional significance of these displays (see Blonder et al., 1991; Bowers et al., 1993). In past research, Blonder et al. (1991) showed that RHD patients lacked the ability to infer the emotion communicated by verbal descriptions of non-verbal expressions (e.g., “He smiled”) but had no difficulty correctly interpreting the emotional message of verbal descriptions of evocative events (e.g., “Children tracked dirt over your new white carpet”).

It is important to note that we found no effect of hemispheric side of stroke on the valence of expressed emotion words. RHD patients produced a significantly higher percentage of positive and negative affect words than LHD patients, and within-group analyses showed that both groups produced significantly more positive emotion words than negative. This may reflect the nature of the interview, in that we did not specifically focus on the elicitation of negative emotions. These findings may also be related to patients' preserved knowledge of socio-cultural values that favor the expression of positive or so-called approach emotions. It also suggests that reduced smiling and laughter among RHD patients with aprosody is channel-dependent and not reflective of a restricted capacity to experience emotion. As described in the Introduction, Langer et al. (2000) found that normal controls produced messages in which the affective content of the facial and verbal channel were judged as consistent, while unilateral stroke patients' messages were rated as inconsistent, in that one channel was judged as more positive than the other. In the case of RHD patients, the verbal message was judged as more positive than the facial message and in the case of LHD patients, the facial channel was rated as more positive than the verbal. Taken together, these results suggest that patients with hemispheric damage may use the unimpaired channels to communicate emotion. Thus, aprosodics' increased production of affect words may be in part a compensatory strategy: impairments in nonverbal communication of affect may result in enhanced expression via the unimpaired lexical channel. Patients with aphasia, on the other hand, use the preserved non-verbal channel to express emotion. Future studies that include a normal control group will help clarify the extent to which these unilateral brain damaged patients are in fact actively compensating for deficits in the impaired channel of communication. Finally, these results have implications for therapeutic intervention. While treatments aimed at increasing patients' ability to communicate via the impaired channel have proven to be efficacious, our results suggest that another important strategy may be to train patients to use the unimpaired channel to compensate for deficits. Future research will be directed toward creating and implementing such therapeutic interventions.

ACKNOWLEDGMENTS

This study was supported by NIDCD Grant P50DC03888 and Research Service VA RR&D. We thank Shannon Braden, Teri Cox, Maribel Ciampitti, Len Ewen, Graham Hunter, Michelle Jessup Lineberry, Julia Kordenbrock, Richard Kryscio, Ph.D., Susan Leon, Jane Meara, Steve Nadeau, M.D., Daniel Pekich, Gordon Roberts, Amy Rodriguez, Brenda Stidham, John Wilson, Ph.D., and the staff of the Brain Research and Rehabilitation Center, for their assistance during various stages of this research.

REFERENCES

Behrens, S.J. (1989). Characterizing sentence intonation in a right hemisphere-damaged population. Brain and Language, 36, 181–200.
Blonder, L.X., Gur, R.E., & Gur, R.C. (1989). The effects of right and left hemiparkinsonism on prosody. Brain and Language, 36, 193–207.
Blonder, L.X., Bowers, D., & Heilman, K.M. (1991). The role of the right hemisphere in emotional communication. Brain, 114, 1115–1127.
Blonder, L.X., Burns, A.F., Bowers, D., Moore, R.W., & Heilman, K.M. (1993). Right hemisphere facial expressivity during natural conversation. Brain and Cognition, 21, 44–56.
Blonder, L.X., Pickering, J.E., Heath, R.L., Smith, C.D., & Butler, S.M. (1995). Prosodic characteristics of speech pre- and post-right hemisphere stroke. Brain and Language, 51, 318–335.
Bloom, R., Borod, J.C., Obler, L., & Koff, E. (1990). A preliminary characterization of lexical emotional expression in right and left brain-damaged patients. International Journal of Neuroscience, 55, 71–80.
Bloom, R.L., Borod, J.C., Obler, L.K., & Gerstman, L.J. (1992). Impact of emotional content on discourse production in patients with unilateral brain damage. Brain and Language, 42, 153–164.
Borod, J.C., Bloom, R.L., Brickman, A.M., Nakhutina, L., & Curko, E.A. (2002). Emotional processing deficits in individuals with unilateral brain damage. Applied Neuropsychology, 9, 23–36.
Borod, J.C., Koff, E., Lorch, M.P., & Nicholas, M. (1985). Channels of emotional expression in patients with unilateral brain damage. Archives of Neurology, 42, 345–348.
Borod, J.C., Koff, E., Lorch, M.P., & Nicholas, M. (1986). Expression and perception of facial emotion in brain-damaged patients. Neuropsychologia, 24, 169–180.
Borod, J.C., Rorie, K.D., Haywood, C.S., Andelman, F., Obler, L.K., Welkowitz, J., Bloom, R.L., & Tweedy, J.R. (1996). Hemispheric specialization for discourse reports of emotional experiences: Relationships to demographic, neurological, and perceptual variables. Neuropsychologia, 34, 351–359.
Borod, J.C., Zgaljardic, D., Tabert, M., & Koff, E. (2001). Asymmetries of emotional communication in normal subjects. In F. Boller, J. Grafman, & G. Gainotti (Eds.), Handbook of neuropsychology: Emotional behavior and its disorders (pp. 181–205). Oxford, UK: Elsevier.
Bowers, D., Bauer, R.M., & Heilman, K.M. (1993). The nonverbal affect lexicon: Theoretical perspectives from neuropsychological studies of affect perception. Neuropsychology, 7, 433–444.
Buck, R. & Duffy, R.J. (1980). Nonverbal communication of affect in brain-damaged patients. Cortex, 16, 351–362.
Cancelliere, A.E. & Kertesz, A. (1990). Lesion localization in acquired deficits of emotional expression and comprehension. Brain and Cognition, 13, 133–147.
Charbonneau, S., Scherzer, B.P., Aspirot, D., & Cohen, H. (2003). Perception and production of facial and prosodic emotions by chronic CVA patients. Neuropsychologia, 41, 605–613.
Cimino, C., Verfaelie, M., Bowers, D., & Heilman, K.M. (1991). Autobiographical memory: Influence of right hemisphere damage on emotionality and specificity. Brain and Cognition, 15, 106–118.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Davidson, R.J. (1984). Affect, cognition, and hemispheric specialization. In C.E. Izard, J. Kagan, & R. Zajonc (Eds.), Emotions, cognition, and behavior (pp. 320–365). Cambridge, UK: Cambridge University Press.
Fox, N.A. (1991). If it's not left, it's right: Electroencephalograph asymmetry and the development of emotion. American Psychologist, 46, 863–872.
Heilman, K.M., Blonder, L.X., Bowers, D., & Valenstein, E. (2003). Emotional disorders associated with neurological diseases. In K.M. Heilman & E. Valenstein (Eds.), Clinical neuropsychology (4th ed., pp. 447–478). Oxford, UK: Oxford University Press.
Heilman, K.M., Leon, S.A., & Rosenbek, J.C. (2004). Affective aprosody from a medial frontal stroke. Brain and Language, 89, 411–416.
Hughes, C.P., Chan, J.L., & Su, M.S. (1983). Aprosodia in Chinese patients with right cerebral hemisphere lesions. Archives of Neurology, 40, 732–736.
Jacobs, D.H., Shuren, J., Bowers, D., & Heilman, K.M. (1995). Emotional facial imagery, perception, and expression in Parkinson's disease. Neurology, 45, 1696–1702.
Kent, R.D. & Rosenbek, J.C. (1982). Prosodic disturbance and neurologic lesion. Brain and Language, 15, 259–291.
Landis, J.R. & Koch, G.G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159–174.
Langer, S.L., Pettigrew, L.C., Wilson, J.F., & Blonder, L.X. (2000). Channel-consistency following unilateral stroke: An examination of patient communication across verbal and non-verbal domains. Neuropsychologia, 38, 337–344.
Levin, J.R. & Robinson, D.H. (1999). Further reflections on hypothesis testing and editorial policy for primary research journals. Educational Psychology Review, 11, 143–155.
Miller, J. & Chapman, R. (2002). Systematic Analysis of Language Transcripts (SALT) for Windows, Version 7.0 [Computer software]. Madison, WI: University of Wisconsin–Madison, Language Analysis Lab, Waisman Research Center.
Monrad-Krohn, G.H. (1947). The prosodic quality of speech and its disorders. Acta Psychiatrica et Neurologica Scandinavica, 22, 225–265.
Pell, M.D. (1999). Fundamental frequency encoding of linguistic and emotional prosody by right hemisphere-damaged speakers. Brain and Language, 69, 161–192.
Pennebaker, J.W., Francis, M.E., & Booth, R.J. (2001). Linguistic Inquiry and Word Count (LIWC) [Computer software]. Hillsdale, NJ: Lawrence Erlbaum Associates.
Rosenbek, J.C., Crucian, G.P., Leon, S.A., Hieber, B., Rodriguez, A.D., Holiway, B., Ketterson, T.U., Ciampitti, M., Heilman, K., & Gonzalez-Rothi, L. (2004). Novel treatments for expressive aprosodia: A phase 1 investigation of cognitive linguistic and imitative interventions. Journal of the International Neuropsychological Society, 10, 786–793.
Ross, E.D. (1981). The aprosodias: Functional–anatomic organization of the affective components of language in the right hemisphere. Archives of Neurology, 38, 561–569.
Ross, E.D. (1997). Right hemisphere syndromes and the neurology of emotion. In S.C. Schacter & O. Devinsky (Eds.), Behavioral neurology and the legacy of Norman Geschwind (pp. 183–191). Philadelphia: Lippincott-Raven.
Ross, E.D., Edmondson, J.A., Seibert, G.B., & Homan, R.W. (1988). Acoustic analysis of affective prosody during right-sided Wada test: A within-subjects verification of the right hemisphere's role in language. Brain and Language, 33, 128–145.
Ross, E.D. & Mesulam, M.M. (1979). Dominant language functions of the right hemisphere? Prosody and emotional gesturing. Archives of Neurology, 36, 144–148.
Silberman, E.K. & Weingartner, H. (1986). Hemispheric lateralization of functions related to emotion. Brain and Cognition, 5, 322–353.
Smith, M.A., Smith, M.K., & Ellgring, H. (1996). Spontaneous and posed facial expression in Parkinson's disease. Journal of the International Neuropsychological Society, 2, 383–391.
Tucker, D.M., Watson, R.T., & Heilman, K.M. (1977). Discrimination and evocation of affectively intoned speech in patients with right parietal disease. Neurology, 27, 947–950.
Watson, D., Clark, L.A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54, 1063–1070.
Weintraub, S., Mesulam, M.M., & Kramer, L. (1981). Disturbances in prosody: A right-hemisphere contribution to language. Archives of Neurology, 38, 742–744.
Yesavage, J.A., Brink, T.L., Rose, T.L., Lum, O., Huang, V., Adey, M., & Leirer, J. (1982). Development and validation of a geriatric depression screening scale: A preliminary report. Psychiatry Research, 17, 37–49.