
Emotional and nonemotional facial expressions in people with Parkinson's disease

Published online by Cambridge University Press:  01 July 2004

GWENDA SIMONS
Affiliation:
Centre for the Study of Emotion, Department of Psychology, University of Portsmouth, Portsmouth PO1 2DY, UK
MARCIA C. SMITH PASQUALINI
Affiliation:
Centre for the Study of Emotion, Department of Psychology, University of Portsmouth, Portsmouth PO1 2DY, UK
VASUDEVI REDDY
Affiliation:
Centre for the Study of Emotion, Department of Psychology, University of Portsmouth, Portsmouth PO1 2DY, UK
JULIA WOOD
Affiliation:
Centre for the Study of Emotion, Department of Psychology, University of Portsmouth, Portsmouth PO1 2DY, UK

Abstract

We investigated facial expressivity in 19 people with Parkinson's disease (PD; 14 men and 5 women) and 26 healthy controls (13 men and 13 women). Participants engaged in experimental situations that were designed to evoke emotional facial expressions, including watching video clips and holding conversations, and were asked to pose emotions and imitate nonemotional facial movements. Expressivity was measured with subjective rating scales, objective facial measurements (Facial Action Coding System), and self-report questionnaires. As expected, PD participants showed reduced spontaneous facial expressivity across experimental situations. PD participants also had more difficulty than controls posing emotional expressions and imitating nonemotional facial movements. Despite these difficulties, however, PD participants' overall level of expressivity was still tied to emotional experience and social context. (JINS, 2004, 10, 521–535.)

Type
Research Article
Copyright
2004 The International Neuropsychological Society

INTRODUCTION

Many people with Parkinson's disease (PD), a degenerative disorder that affects dopamine-producing neurons of the substantia nigra and related extrapyramidal system structures, find it difficult to use facial expressions to communicate how they feel. In an attempt to clarify the nature of the impairment of facial expressions in people with PD, we investigated spontaneous and voluntary, emotional and nonemotional expressions in people with PD and healthy controls. In addition, we studied the association between experienced and expressed emotion, and the influence of social context on facial expressivity.

Although people with PD reportedly can recognize emotional facial expressions (Adolphs et al., 1998), they may show a profound reduction in the production of spontaneous facial expressions (Buck & Duffy, 1980; Katsikitis & Pilowsky, 1988, 1991; Smith et al., 1996). As a result they are often misunderstood (Ellgring et al., 1993; Macht et al., 1999) and negatively evaluated, even by health professionals (Pentland et al., 1987, 1988). One specific problem people with PD have is smiling; when they smile spontaneously, their smiles are often perceived to be “unfelt,” because of a lack of accompanying cheek raises (Pitcairn et al., 1990).

Compared to healthy controls, people with PD have shown impaired spontaneous facial expressivity while watching video clips (Smith et al., 1996), smelling odors (Simons et al., 2003a), and viewing humorous slides (Katsikitis & Pilowsky, 1988, 1991). These difficulties with spontaneous expressions are consistent with neuroanatomical evidence that impulses for these expressions arise from the extrapyramidal motor system, which is known to be affected in PD (Rinn, 1984). Most previous studies of facial expressivity, however, have compared people with PD and controls in only one or two situations (e.g., Katsikitis & Pilowsky, 1988), ignoring the social context. Here, we investigated spontaneous expressions by varying both the task (watching an amusing video and having a conversation) and the social context (testing participants alone or with the spouse or experimenter present).

Given that levels of emotional experience are reportedly normal in PD (Madeley et al., 1995; Smith et al., 1996), reduced expressivity seems to imply that facial expression has become dissociated from emotional experience in people with PD. Because this assumption has not been systematically tested, we examined patterns of expression and self-reports of feelings to determine whether such a dissociation is present.

In contrast to research on spontaneous expressions, posed (voluntary) expressions are believed to originate from the cortical motor strip, and traditionally this system has been thought to be intact in people with PD. Although, as we reported previously, posed emotional expressions appear to be less affected by PD than spontaneous expressions (Simons et al., 2003a; Smith et al., 1996), there is evidence that these posed expressions are nevertheless significantly impaired (Jacobs et al., 1995; Madeley et al., 1995). In addition, people with PD may have difficulty making specific muscle movements, such as raising the eyebrows (Simons et al., 2003a) and executing mouth and voluntary eyelid movements (Griffin & Greene, 1994). To help resolve these inconsistent findings, we also investigated the ability to pose both emotional and nonemotional facial expressions.

We predicted that (1) spontaneous facial expressivity would be lower for the PD group than for the control group across situations; (2) the presence of another person would enhance facial expressivity for both groups (e.g., Jakobs et al., 1999); and (3) facial expressions would be less highly associated with self-reported feelings in the PD group than in the control group. We further predicted that (4) quality and intensity scores for posed emotional expressions and imitated, nonemotional, facial movements would be poorer for the PD group than for the control group, although these differences were not expected to be as great as differences in level of spontaneous expressivity. Finally, we explored the relationships between spontaneous and posed expressivity and investigated whether PD participants and healthy controls were aware of their own levels of expressivity.

METHODS

Research Participants

Nineteen people diagnosed with idiopathic PD (14 men and 5 women) and 26 healthy controls (13 men and 13 women) took part in the study. The mean age of the participants was 63.4 years (SD = 8.0). All participants took part as a couple for the video-watching and conversation tasks. For the PD couples (i.e., one spouse had PD), however, only the data from the spouse with PD were included in the present analysis. Participants were recruited from local branches of the Parkinson's Disease Society, through articles in a local newspaper and a health care magazine, and in response to an announcement on a local radio station. Each couple received £10 towards travel expenses. Exclusion criteria for both groups included age less than 49 years, neurological diseases other than PD, and possible cognitive decline, as suggested by scores of 21 or lower on the Mini Mental Status Examination (MMSE; Folstein et al., 1975).

PD and control groups were compared on demographic factors and screening tests. The results for age, cognitive status and depression by group and gender can be found in Table 1. A statistical comparison between the genders for each of the demographic factors was not feasible due to the low number of female PD participants.

Demographic factors and health status reported by group and gender

All couples were married (length of marriage, PD: M = 36.39 years, SD = 15.48; control: M = 31.73 years, SD = 11.31) and living together. More control than PD participants had completed university (including postgraduate studies). Eight controls (31%) and 2 of the PD participants (10%) were employed at the time of this research.

PD symptoms

Table 1 also shows the number of participants for each of the Hoehn and Yahr stages as rated by the experimenters. All PD participants reported experiencing slowness of movement. Both tremor and rigidity were prominent symptoms for 15 of the 19 PD participants (in various degrees and for various limbs and sides of the body); 2 reported tremor alone and 1 reported rigidity as the main symptom. Sixteen PD participants reported a slight to moderate reduction of their facial expression. Five reported a tremor in their face and 10 participants reported slight to moderate rigidity of the face.

Medication

Table 2 shows an overview of the medication used by the participants. All PD participants were tested while optimally medicated. In addition to PD medication, 13 PD participants were taking medication for other conditions such as high blood pressure, heart problems, diabetes, arthritis and asthma. Eight of the control participants were receiving medication for similar health problems. Five participants with PD, but none of the controls, were receiving anti-depressant medication.

Medication use

Questionnaires

Demographic and health questionnaire

This questionnaire was designed for the present study and consisted of general questions about age, gender, occupation, marital status, and living situation, as well as questions about health-related subjects such as medication use and, in the case of the PD participants, the main PD symptoms.

Berkeley Expressivity Questionnaire (BEQ)

This is a 16-item self-report scale measuring emotional expressivity (Gross & John, 1995, 1997). Participants specified to what degree each item was true for them on a 7-point scale ranging from strongly disagree (1) to strongly agree (7). The scale can be divided into three subscales: Impulse Strength, Negative Expressivity, and Positive Expressivity. The BEQ has been reported to be a reliable and valid measure of emotional expressivity (Gross & John, 1995, 1997).

Self-Rating Depression Scale (SDS)

This scale consists of 20 statements concerning psychological and physical well-being (Zung, 1965). Participants rated their present state on a 4-point scale. A rating of 40 or more on the scale is suggestive of depression.

Emotion Rating Scale (ERS)

This self-report rating scale lists 12 different emotions and emotional feelings each accompanied by a 100-point Likert scale (based on Jakobs et al., 1996; Simons et al., 2003b). Participants were asked to indicate the extent to which they were experiencing each emotion (0 = not at all, 100 = very strongly). Participants completed this scale at the beginning of the experiment and following several of the interventions.

Interaction Rating Scale (IRS)

This questionnaire was designed for the present study and consists of eleven 7-point scales to rate each of the two conversations that formed part of the experiment. The most important scales for the present analysis were a scale on which participants rated how satisfied they were with the conversation from not at all satisfied (1) to very satisfied (7), and two scales on which participants rated how expressive they were compared to normal (for them) and compared to the average person, from not at all expressive (1) to very expressive (7).

Hoehn and Yahr scale

This five-stage scale (Hoehn & Yahr, 1967) gives an indication of the severity of PD (Stage I = unilateral disease and mild symptoms to Stage V = severe disability, either bed or wheelchair confined). Scores in this study were based on observations by the experimenters.

Mini-Mental Status Examination (MMSE)

This is a nine-part screening test for dementia (Folstein et al., 1975). A score below 21 indicates possible cognitive impairments.

Stimuli

Three different video clips were used to elicit emotional and facial reactions. Two amusing video clips each featured parts of an episode from Fawlty Towers (The Hotel Inspectors; Cleese et al., 1998). In one clip the hotel owner has a dispute about pens with his wife followed by another dispute with a customer; in the second clip the hotel owner discovers the true identity of one of the guests and acts nastily until corrected by his wife. The third amusing clip featured Rowan Atkinson posing as an actor in a classical play wearing tights (Pink tights and plenty of drops, Rowan Atkinson Live; Ptaszynski & Schlamme, 1992). The clips were each approximately 2 min long. Each of the video clips was selected from a larger sample of clips on the basis of self-reported ratings of amusement and on the magnitude of facial reactions they elicited in several pilot samples ranging from 8 to 20 participants, who varied in age between 18 and 55 years.

Procedure

The participants came to the laboratory as a couple and both spouses completed all parts of the study, as shown by the timeline in Table 3.

Timeline for the experiment

Once the participants were seated in the laboratory, a short overview of the experiment was given. Participants signed the informed consent form and indicated their willingness for the whole procedure to be videotaped. At that moment two wall-mounted cameras in each room were switched on. Participants were then introduced to the first part of the study and filled out the ERS to record their present feelings. One of the spouses was then taken to an adjoining room by one of the experimenters. The following tasks were administered:

1. Watching video clips alone, together with the experimenter and together with the spouse: Participants were shown three amusing video clips. The order in which the video clips were shown was counterbalanced across participants. They watched one clip while alone in a room, one together with an experimenter and one together with their spouse. After each clip participants were asked to rate their emotional reactions during the video clip on the ERS.

2. Social interaction with the spouse: After the clips, both spouses were seated opposite each other at a table and were asked to have a conversation for about 5 min. They were given topics of conversation of a pleasant content (e.g., holidays, pets), but were told that they could talk about any topic that was enjoyable to both participants. At the end of the conversation participants were asked to fill out the IRS.

3. Social interaction with a stranger: After a short break, one of the spouses was introduced to a “stranger,” a woman he or she did not know (eight different strangers were used; all were employees of the university) and was asked to have a 5 min conversation with this person. Both interactants were given topics similar to those in the spouse conversation, and completed the IRS after the conversation. During this time the other spouse filled out the SDS and was screened for dementia with the MMSE. Then the sequence was reversed and the other spouse spoke to the same stranger.

Once both spouses had completed their conversation with the stranger, one spouse proceeded to the posed expression tasks.

1. Posed expression tasks: Participants were asked to pose a face toward the camera as if they were experiencing a certain emotion (e.g., “Please look now as if you are happy”). This was repeated for anger, fear, surprise, disgust and sadness, always in that order. Participants were then trained to make specific facial muscle movements with the help of verbal descriptions, an instruction videotape, verbal feedback and a mirror. These movements, or Action Units (AUs), were based on the Facial Action Coding System (FACS; Ekman & Friesen, 1978) and the instructions were adapted from the Requested Facial Action Test (Ekman et al., 1980). Participants received instructions such as, “Please raise your eyebrows,” and “Please wrinkle your nose.” They saw the facial movement performed by one of the experimenters (a certified FACS coder) on videotape and were asked to make the movement. They then received feedback on their performance, after which they were asked to try again with the help of a mirror. Once the participants performed the movement to the best of their ability, the experimenter moved on to the next facial movement. The participants were trained to make nine different muscle movements.1

1The following AUs and AU combinations were imitated: Brow lowerer (AU 4), nose wrinkler (AU 9), upper lip raiser (AU 10 + 25), dimpler (AU 14), lip stretcher (AU 20), lip presser (AU 24), jaw drop (AU 26), inner and outer eyebrow raiser (AU 1 + 2), and cheek raiser and lip corner puller (AU 6 + 12).

After the training of facial movements, participants watched a short excerpt from one of the video clips shown at the beginning of the experiment. Participants were asked to convince the experimenter by their facial expressions that they were disgusted with the clip, in spite of its amusing content.

2. Questionnaires: While one spouse completed the posed emotion tasks, the other spouse filled out the BEQ and was interviewed by the experimenter. The roles were then reversed. At the end of the experiment the spouses were debriefed together and asked to sign a data consent form.

Measurement of Variables

Video clip watching situations

Segments of videotaped facial behavior for each participant, for each experimental situation, were selected by an “editor.” For each of the video watching situations the editor selected the 10 s during which the participant was most expressive (most changes, strongest expression or both). In those cases where no facial expression was shown during the whole video clip, 10 s were captured around a trigger scene. For the shortened video clip used for the incongruent posing situations, one segment of 10 s was selected around a predefined trigger scene. To assess the editor's reliability, a second person also selected segments for 5 participants (11% of total). Both editors selected the same clip (at least 80% overlap in time; the same major event) in 75% of the cases.

A trained rater subsequently scored the participants' facial expressions on videotape, by rating specific emotional reactions and overall facial expressivity for each participant in each segment selected on 100-point Likert Scales. Table 4 gives an overview of all rating scales used to rate the expressive behavior in the different experimental situations. The rater was blind to the content of the video clips that the participants were watching and did not know that in one segment the participants were actually not spontaneously reacting to a video clip, but rather posing an incongruent expression.

Rating scales used to code facial behavior and duration of segment coded

Social interactions

For each conversation (with spouse and with stranger) a 45 s period was selected toward the end of the conversation; this period was chosen assuming that participants would be more relaxed and the conversation would be more natural towards the end. If these last 45 s were not suitable (e.g., only one interactant was speaking; one of the interactants had moved out of view; there were long pauses where neither talked) another 45 s were selected as near as possible towards the end of the 5 min period. The rater scored these segments of conversation for valence of the expressions (the extent to which the participant displayed negative and positive expressions), and overall expressivity on 100-point Likert Scales.

Posed expressions

Each posed emotional expression was edited as a single event, which lasted up to 3 s. Four groups of 10–12 naive raters (undergraduate psychology students) each coded 25% of the total of 264 posed emotional expressions for “emotion expressed,” on a forced-choice rating scale that listed the six target emotions. The percentage of raters who identified the posed emotion correctly was calculated. In a second round of scoring, the identity of the emotion that each participant had been asked to pose was disclosed to a trained rater, who then scored the intensity and the quality of all 264 posed emotions on 100-point Likert Scales.

The imitated facial movements, both before and after experimenter feedback, were coded with the FACS (Ekman & Friesen, 1978) by a certified FACS coder. The coder compared the participant's AU pattern with the requested AU pattern and then gave each imitation a score from 1 to 6, taking into account quality, intensity and length of the expression. A score of 1 meant a perfect imitation of the facial movement and 6 meant no visible movements.
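For readers who want a concrete picture of the request/response comparison described above, the schematic Python sketch below encodes the nine requested movements from footnote 1 and checks whether a produced AU pattern contains the requested AUs. In the study itself this comparison and the 1–6 quality score were made by a certified human FACS coder; the dictionary, function name, and example patterns here are illustrative assumptions only.

```python
# Schematic representation of the nine requested movements from footnote 1.
# In the study these were scored by a certified human FACS coder; this
# dictionary and the containment check merely illustrate the comparison
# between requested and produced AU patterns; they are not an automated
# substitute for FACS coding.
REQUESTED_MOVEMENTS = {
    "brow lowerer":                     frozenset({4}),
    "nose wrinkler":                    frozenset({9}),
    "upper lip raiser":                 frozenset({10, 25}),
    "dimpler":                          frozenset({14}),
    "lip stretcher":                    frozenset({20}),
    "lip presser":                      frozenset({24}),
    "jaw drop":                         frozenset({26}),
    "inner and outer brow raiser":      frozenset({1, 2}),
    "cheek raiser + lip corner puller": frozenset({6, 12}),
}

def contains_requested(produced_aus: set, movement: str) -> bool:
    """True if the produced AU pattern includes every requested AU."""
    return REQUESTED_MOVEMENTS[movement] <= produced_aus

# Example: a participant asked to raise the eyebrows (AU 1 + 2) who produces
# only AU 2 has not fully matched the requested pattern.
print(contains_requested({1, 2}, "inner and outer brow raiser"))  # True
print(contains_requested({2}, "inner and outer brow raiser"))     # False
```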

Intercoder Reliability

For the videotape watching situations, the conversations and the posed emotions, a second trained rater rated 12% of the segments. Intercoder reliability was calculated as the correlations between the scores given by both coders. With one exception, these correlation coefficients ranged between .62 and .95 with a mean of .77. The exception was the rating scale for negative emotions displayed during the conversations. Because the correlation coefficient was only .36, the ratings on this scale were excluded from further analysis.

For the imitations of facial movements, a second certified FACS coder scored the facial movements of 6 participants (3 PD participants and 3 controls). The intercoder agreement ratio on the scores was 41%, with κ = .274, which is considered an agreement of fair strength (Landis & Koch, 1977, as cited in Everitt, 1996). Further inspection of the data showed that the second coder was consistently stricter in his allocation of scores (e.g., where the first coder gave a score of 1, the second coder gave a 2). The scores given by each coder, therefore, correlated highly for each trial (Trial 1, r = .67, p < .001; Trial 2, r = .71, p < .001). These correlations suggest that the first coder had scored in a consistent way. With the exception of the second FACS coder, all editors, coders, and raters used in this study were female.
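As an illustration of the reliability statistics reported in this section, the sketch below computes a Pearson correlation between two raters' continuous ratings and a percentage agreement plus Cohen's kappa for the categorical FACS quality scores. The rating values and array names are invented for illustration; they are not the study's data.

```python
# Minimal sketch of the intercoder reliability checks described above.
# The rating values are hypothetical; only the statistics (Pearson r,
# percentage agreement, Cohen's kappa) mirror those used in the text.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Continuous 100-point expressivity ratings from two trained raters
rater1 = np.array([35, 60, 10, 80, 45, 20, 55, 70])
rater2 = np.array([40, 55, 15, 75, 50, 25, 60, 65])
r, p = pearsonr(rater1, rater2)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Categorical 1-6 FACS imitation quality scores from two certified coders
coder1 = np.array([1, 2, 1, 3, 2, 4, 1, 2, 5, 3])
coder2 = np.array([2, 2, 2, 4, 3, 4, 2, 3, 5, 4])
agreement = np.mean(coder1 == coder2)          # raw agreement ratio
kappa = cohen_kappa_score(coder1, coder2)      # chance-corrected agreement
print(f"Agreement = {agreement:.0%}, kappa = {kappa:.3f}")
```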

RESULTS

Baseline Emotion Ratings

At the start of the experiment, PD and control groups rated themselves as almost equally happy (PD: M = 47.37, SD = 27.47; control: M = 54.93, SD = 23.19), excited (PD: M = 36.53, SD = 25.57; control: M = 37.46, SD = 20.79), and amused (PD: M = 28.21, SD = 32.36; control: M = 18.73, SD = 19.89) on the ERS. PD participants rated themselves as significantly more surprised than controls [PD: M = 22.63, SD = 34.69; control: M = 5.19, SD = 7.61; t(19.27) = 2.154, p = .044].

Spontaneous Facial Expressivity for Video-Watching Situations

Expression of emotions during video clips

Observer ratings of expressed amusement correlated highly with observer ratings of overall expressivity during the video watching situations for both groups (rs > .63 and p < .01, for both groups and for all three situations). We therefore report only expressivity ratings, as these ratings were obtained across situations.

Facial expressivity during video clips

Figure 1 gives an overview of the expressivity ratings for the video watching situations for both groups. When the expressivity ratings for the video watching situations were entered in a 2 × 3 (Group × Situation) mixed ANOVA, significant main effects were found for group [F(1,43) = 11.66, p = .001] and situation [F(2,86) = 4.89, p = .010] and for the Group × Situation interaction [F(2,86) = 5.51, p = .006]. Controls had significantly higher expressivity ratings than the PD group. The situation during which participants watched a video clip together with the experimenter had the highest mean expressivity rating. Pairwise comparisons with Sidak's t test showed that expressivity while watching a video clip with the experimenter was significantly higher than expressivity while watching the video clip with the spouse (p = .011). This difference was due entirely to simple effects of situation for the PD group, however; levels for the controls were consistent across situations.

Mean facial expressivity scores for each spontaneous situation plus mean self-rated amusement for the video watching situations. Note: VA = watching clip alone; VT = watching clip together with experimenter; VS = watching clip together with spouse.
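For readers who wish to run a comparable analysis, the sketch below sets up a 2 (Group) × 3 (Situation) mixed ANOVA with Sidak-adjusted pairwise comparisons on simulated long-format data, using the pingouin library. The column names, group labels, and simulated scores are assumptions for illustration and do not reproduce the study's data or exact procedure.

```python
# Hedged sketch of a 2 (Group) x 3 (Situation) mixed ANOVA with
# Sidak-adjusted pairwise comparisons, run on simulated data.
# Requires pandas, numpy, and pingouin (pip install pingouin).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for pid in range(45):                      # 19 PD + 26 controls, as in the study
    group = "PD" if pid < 19 else "control"
    base = 30 if group == "PD" else 55     # arbitrary illustrative means
    for situation in ["alone", "experimenter", "spouse"]:
        boost = 15 if (group == "PD" and situation == "experimenter") else 0
        rows.append({"participant": pid,
                     "group": group,
                     "situation": situation,
                     "expressivity": base + boost + rng.normal(0, 10)})
df = pd.DataFrame(rows)

# Mixed ANOVA: between-subjects factor = group, within-subjects factor = situation
aov = pg.mixed_anova(data=df, dv="expressivity", within="situation",
                     subject="participant", between="group")
print(aov[["Source", "F", "p-unc"]])

# Sidak-corrected pairwise comparisons across situations
# (older pingouin versions name this function pairwise_ttests)
posthoc = pg.pairwise_tests(data=df, dv="expressivity", within="situation",
                            subject="participant", padjust="sidak")
print(posthoc[["A", "B", "T", "p-corr"]])
```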

Subjective emotional reactions to video clips

As shown in Figure 1, while watching a video clip alone, the mean rating for feelings of amusement on the ERS was 59.58 (SD = 38.76) for the PD group and 77.08 (SD = 20.92) for the control group. When they watched a video clip with the experimenter, the mean rating for feelings of amusement was 73.26 (SD = 27.69) for the PD group and 75.69 (SD = 24.98) for the control group. When watching a video clip with the spouse, the mean rating was 56.47 (SD = 30.68) for the PD group and 72.46 (SD = 25.82) for the control group. When amusement scores for each video clip were entered in a 2 × 3 (Group × Situation) mixed design ANOVA, a significant main effect was found for situation [F(2,86) = 4.32, p = .016] and the Group × Situation interaction approached significance [F(2,86) = 2.92, p = .060]. No significant main effect was found for group [F(1,43) = 2.58, p = .116]. Pair-wise comparisons with Sidak's t test showed that ratings of amusement while watching the video with the experimenter were significantly higher than the ratings in the alone condition or while watching the clip with the spouse. Again, this was due almost entirely to higher mean amusement ratings for the PD participants, whereas the controls' amusement ratings were fairly consistent across situations. In addition to amusement, most participants reported feeling happy while watching the video clips in the various situations. No other emotions were consistently reported.

The self-report amusement ratings for both groups correlated highly with the expressivity ratings in the video watching situations (rs varied between .38 and .59; ps varied between .007 and .057), except in one condition: when participants watched a clip alone, correlations were low for both groups (PD, r = .25, p = .305; control, r = .03, p = .871).

Spontaneous Facial Expressivity for Conversations

Expression of emotion during conversations

A 2 × 2 (Group × Conversation) ANOVA was conducted on the extent of positive expression shown during the conversations with the spouse and with a stranger. A significant main effect for conversation was found [F(1,39) = 17.42, p < .001] and a Group × Conversation interaction approached significance [F(1,39) = 3.32, p = .076], with no significant main effect for group [F(1,39) = 2.56, p = .118]. During the conversation with the stranger more positive expression was shown than during the conversation with the spouse. The control group showed more positive expression than the PD group during the conversation with the stranger (control: M = 49.17, SD = 17.92; PD: M = 33.16, SD = 20.83) but not during the conversation with the spouse (control: M = 18.54, SD = 18.21; PD: M = 21.58, SD = 23.69).

Overall facial expressivity during conversations

A 2 × 2 (Group × Conversation) ANOVA was conducted on the expressivity ratings for the conversations with the spouse and the stranger. Significant main effects were found for conversation [F(1,39) = 22.20, p < .001] and group [F(1,39) = 24.59, p < .001]. The Group × Conversation interaction was not significant [F(1,39) = .030, p = .863]. The control group showed more facial expressivity than the PD group during the conversations and both groups displayed more expressivity during the conversation with the stranger than during the conversation with the spouse (see also Figure 1).

Subjective rating of satisfaction with conversations

A 2 × 2 (Group × Conversation) mixed design ANOVA was conducted on ratings of satisfaction with the conversation (IRS). No main effects were found; however, there was a significant Group × Conversation interaction [F(1,43) = 4.28, p = .045]. PD participants rated the spouse conversation as less satisfactory (M = 5.32, SD = 1.42) than did controls (M = 6.15, SD = 1.16). Ratings of satisfaction for the conversation with the stranger were almost identical for the two groups (PD: M = 6.00, SD = 1.05; control: M = 6.01, SD = 1.05). The correlations between the satisfaction ratings and the expressivity scores were low and non-significant for both groups (rs varied between −.19 and .18; ps between .374 and .827).

Posed Expressions

Posed incongruent expressions

When asked to pose the incongruent expression of disgust while watching an amusing video clip, 7 PD participants showed only amusement and no disgust, 3 displayed both disgust and amusement, 2 displayed neither, 6 displayed only disgust, and the data for 1 PD participant were not available. In contrast, 3 controls showed amusement instead of disgust, 3 displayed both, and one displayed neither. The data for 3 controls were not available but the rest (n = 19) displayed disgust only. A chi-square test with Yates correction conducted on the eight cells of this 2 (group) × 4 (possible expression combination) design yielded a value of 7.01 (approximate p = .075), just under the critical value of 7.82 at the .05 level.
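The group-by-expression-combination test above can be illustrated with a standard chi-square test of independence; the sketch below uses SciPy on the cell counts reported in the preceding paragraph, taken at face value. Note that SciPy applies the Yates continuity correction only to 2 × 2 tables, so on this 2 × 4 table the statistic is uncorrected; the example is illustrative rather than an exact reproduction of the reported value.

```python
# Sketch of the group x expression-combination chi-square test described above.
# scipy applies the Yates correction only to 2 x 2 tables, so this 2 x 4 table
# yields the uncorrected statistic; cell counts follow the paragraph above and
# are included purely for illustration.
import numpy as np
from scipy.stats import chi2_contingency
from scipy.stats import chi2 as chi2_dist

#                amusement only, both, neither, disgust only
table = np.array([[7, 3, 2, 6],     # PD participants with usable data
                  [3, 3, 1, 19]])   # control participants (as reported)

chi2_stat, p, dof, expected = chi2_contingency(table, correction=True)
print(f"chi2({dof}) = {chi2_stat:.2f}, p = {p:.3f}")

# Critical value for alpha = .05 with 3 degrees of freedom (7.82 in the text)
print(f"critical value = {chi2_dist.ppf(0.95, dof):.2f}")
```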

Observer ratings of expressivity were significantly higher for the control group (M = 55.43, SD = 13.89) compared to the PD group [M = 37.78, SD = 16.11; t(39) = −3.77, p = .001] for the posing of incongruent disgust.

Posed emotional expressions

Table 5 gives an overview of the mean percentage of naive raters who correctly identified each posed emotional expression, as well as the mean intensity and quality ratings by the trained rater. Given six choices, the probability of a correct identification by chance alone was 16.7%. The recognition rate for fear was not significantly better than chance for either group; therefore, fear expressions were excluded from further analyses. The posed surprise and disgust expressions were identified significantly more often for the control group than for the PD group. The intensity and quality ratings for all posed emotions were significantly better for the control group than for the PD group, with the exception of the quality rating of anger.

Posed emotional expressions

Imitations

For the two imitation trials, the overall mean quality scores were calculated for the nine facial movements together. Because of missing values (face not visible) the mean quality scores could not be calculated for all participants. An ANOVA for repeated measures carried out on the mean scores for the two trials with group as a between-subject factor showed main effects for trial [F(1,37) = 113.10, p < .001] and group [F(1,37) = 11.75, p = .002], but no significant interaction. The mean scores for the second trial were better (lower)2 than those for the first trial. The control group had a better overall quality score on both trials (Trial 1: M = 1.98, SD = 0.36; Trial 2: M = 1.44, SD = 0.29) compared to the PD group (Trial 1: M = 2.48, SD = 0.46; Trial 2: M = 1.85, SD = 0.52). Participants with PD had particular problems with AU 4 (brow lowerer), AU 9 (nose wrinkler), AU 10 (upper lip raiser), AU 1 + 2 (brow raiser) and AU 6 + 12 (cheek raise plus lip corner pull) on one or both trials.

2A lower score is better on the scale, which runs from 1 (very good imitation) to 6 (no movement).

In order to evaluate the contribution of intensity of movement to these results, we looked at the specific FACS coding for the imitations in the first trial. All PD participants had difficulty performing the requested movement with sufficient intensity for at least one of the nine movements. Seven of the 18 PD participants (one complete data set missing) had insufficient intensity on four or more of the movements. Of the 26 controls, 23 had insufficient intensity on at least one movement; however, only 1 control participant had difficulty with as many as four movements.

Relationship Between Spontaneous and Posed Facial Expressivity

To investigate the relationship between spontaneous and posed facial expressivity, we correlated expressivity ratings for the spontaneous situations with intensity ratings for the posed emotions, expressivity ratings for the incongruent expressions and quality scores for the imitations, by group, as shown in Table 6. In both groups, for the three video watching situations, all but one expressivity rating correlated significantly (or correlations approached significance) with the expressivity ratings for posed incongruent disgust (rs varied between .41 and .61; ps varied between .090 and .007); the exception was expressivity ratings for the PD group while watching a video with the spouse. For the PD group, the expressivity ratings for watching a clip with the experimenter and with the spouse correlated significantly with the intensity ratings for posed anger, and the expressivity score in the alone video watching situation correlated significantly with the intensity rating for disgust.

Pearson correlations between intensity and quality ratings for the posed expressions and expressivity ratings for the spontaneous situations

Additional significant or nearly significant correlations for the PD group were found between the expressivity score for the stranger conversation and the intensity scores for posed surprise and disgust, and the quality scores for the first imitation trial. In contrast, the only nearly significant correlation for the control group was between the expressivity score for the conversation with a stranger and the intensity score for posed sadness.

Self-Reported Emotional Facial Expressivity

Berkeley Expressivity Questionnaire

The mean score on the BEQ (Gross & John, 1995, 1997) was 69.39 for the PD group (range = 45–95, SD = 12.95, n = 18) and 74.87 for the control group (range = 41–103, SD = 13.52, n = 23), with no significant difference between groups.

Table 7 gives the correlations between the expressivity scores for the spontaneous situations and the BEQ sum scores and subscale scores for both groups. In addition, for the PD group, the expressivity during the posed incongruent disgust condition correlated significantly with the BEQ sum score (r = .57, p = .018) and with the BEQ subscale Impulse Strength (r = .57, p = .016). The correlation with the BEQ subscale Negative Expressivity approached significance (r = .42, p = .093). The BEQ subscale Positive Expressivity correlated significantly with the intensity score for posed anger (r = .71, p = .001).

Pearson correlations between expressivity ratings for the spontaneous situations and scores on the BEQ

For the controls, the correlation between the BEQ subscale Negative Expressivity and the expressivity score for posed incongruent disgust approached significance (r = .40, p = .077), and the BEQ subscale Positive Expressivity correlated significantly with the intensity score for posed anger (r = .48, p = .013).

IRS self-rating of expressivity

Two 2 × 2 (Group × Conversation) mixed design ANOVAs on the self-ratings of “how expressive your face was compared to how you usually are” and “how expressive your face was compared to the average person” were performed. No significant effects were found for the former. There was a main effect of conversation on the latter with higher ratings for the conversation with the stranger (M = 4.31, SD = 1.14) compared to the conversation with spouse (M = 3.69, SD = 1.26), but no significant effect of group and no interaction.

Possible Confounding Factors

Order of presentation of video clips

One-way ANOVAs with order of video clips as an independent variable were carried out on both the expressivity scores and the self-ratings of amusement for the three different videotape watching situations for both groups. The order in which the video clips were shown had a significant effect only on the expressivity scores for the alone video-watching situation for the PD group [F(3,18) = 5.89, p = .007]. Post-hoc testing was not feasible, as data from only one participant were available for one of the four possible orders. However, an investigation of the means showed that it was one particular order of video clips that gave a low expressivity score in the alone condition (which for the participants with PD was always the first situation). Because there were no other significant main effects, the order of the video clips was not taken into account for any of the analyses.

Because of practical constraints half of the control participants watched an amusing video clip together with the experimenter before watching a clip alone, whereas all other participants watched a video clip alone before watching one with the experimenter. No significant differences in expressivity scores were found between the 13 controls who watched a video clip alone first and the 13 who watched a clip alone after watching a clip together with the experimenter first.

Gender

ANOVAs with gender and group as between-subject factors showed that gender did not have a significant main effect or interaction effect on facial behaviors in any of the situations or on the self ratings of amusement for the three video watching situations.

Similar ANOVAs conducted on the BEQ sum scores and subscale scores showed no overall main effect for group or gender, but did show significant Group × Gender interactions for the BEQ sum score [F(1,37) = 9.052, p = .005]; the subscale Negative Expressivity [F(1,37) = 5.402, p = .026] and for the subscale Positive Expressivity [F(1,37) = 9.442, p = .004]. Male PD participants had lower scores on each subscale than male controls and female PD participants had higher scores than female controls.

Age

The influence of age on expressivity during spontaneous situations and on quality and intensity of the voluntary expressions was investigated by looking at the correlation matrices for both groups. No significant correlations were found between any of these measures and age for the PD group. For the control group, however, age correlated negatively with expressivity ratings during the incongruent posing of disgust (r = −.50, p = .015), suggesting that the older the participants, the lower the expressivity in this situation. The correlation for the controls between age and the overall quality score of the imitations during the first trial was nearly significant (r = .38, p = .053): the older the participants, the less well they tended to do on the imitations; however, this was only the case on the first trial.

Depression

To check for the influence of depression on measures of spontaneous and posed expressivity we performed mixed ANOVAs with the SDS sum score as covariate. No differences in the pattern of results were found by controlling for depression in this way.

Medication

Due to the small number of participants who used anti-depressants, it was not possible to establish whether this medication had a statistically significant influence on facial expressivity. On the basis of descriptive statistics, however, we do not believe the anti-depressants had a large effect. It was not possible to assess the effect of anti-Parkinson medications on facial expressivity, as all but 2 PD participants were taking them.

Non-independence of spouse data

For the controls, the data of both spouses were analyzed, meaning that their data were potentially not independent, especially for the spouse conversation. To check for this possibility we correlated the expressivity scores of one spouse with the scores from the other spouse for each of the spontaneous situations. All correlations were low and not statistically significant (rs varied between −.05 and .33; ps varied between .312 and .931).

Stranger effect

Because different volunteers were used for the stranger conversation, it was possible that individual differences among these strangers could have led to differences in expressivity in the participants. A one-way ANOVA with identity of the stranger as covariate, however, revealed no significant effect of the stranger on the expressivity ratings.

DISCUSSION

Link Between Experienced and Expressed Emotion

As expected, people with PD showed less spontaneous facial expressivity than healthy controls, consistent with prior research. Nevertheless, facial expressivity during the video watching situations mirrored self-ratings of amusement for both groups. The relationship between measures of facial expressivity and self-ratings of emotions for the PD participants suggests that despite reduced overall expressivity, the fundamental link between expression and feelings can still be found, at least in situations similar to the video-watching conditions. These findings, however, might be restricted to people with mildly to moderately severe PD.

Influence of Social Context

Main effects on expressivity were found for situation (reflecting differences in social context), for the video watching conditions and for the conversations. For video watching, these effects of social context were greater for the PD participants than for the controls, who seemed to have reached a ceiling across conditions. The expressivity of the PD group while watching a video clip together with an experimenter was closer to that of the control group than in any of the other situations.

For the conversations, both groups showed higher levels of overall expressivity and of positive expressivity while talking with the stranger than while talking with the spouse. Ratings of positive expressivity were lower than ratings of overall expressivity, suggesting that expressions of a negative valence were also shown. We could not interpret the data from the negative expressivity scale, however, due to low inter-rater reliability.

Given that PD participants reached relatively high levels of expressivity while watching a video with the experimenter, and while talking to a stranger, it is possible that the presence of a person they wanted to please or for whom they felt they needed to be expressive in order to conform, led them to voluntarily move their muscles into appropriate facial expressions. Wagner and Lee (1999) suggested that the conformity to social rules might be stronger when the other person present in a situation is observing or is the experimenter.

If we accept the explanation that higher levels of expressivity in these conditions were due at least in part to participants' voluntary movement of facial muscles, however, it is difficult to explain the contradictory findings of the PD group for posed movements. The ability to pose emotional and nonemotional expressions was markedly impaired in the PD group compared to controls, in contrast to what some researchers have previously suggested (e.g., Rinn, 1984). Of particular relevance were our findings that PD participants' posed expressions of happiness tended to be recognized less often than those of controls, and intensity and quality ratings of posed happiness were significantly lower. PD participants also had difficulty imitating a smile plus cheek raise on both imitation trials. These findings suggest that it is unlikely that PD participants would have been able to intentionally increase their smiling. A more likely explanation, therefore, may be that the increased levels of experienced emotions in the presence of an attentive and friendly person (e.g., feelings of amusement in the presence of the experimenter while watching the video) “drew out” greater levels of expressivity for the PD participants, as will be discussed in more detail under treatment implications.

Effects of PD on Voluntary Facial Expressions

The impairments of voluntary facial expression for PD participants were not limited to their difficulties with smiling and cheek raises. The posed expressions of surprise and disgust were not as easily recognized in PD participants as they were in controls. Further, the intensity ratings for all six posed emotional expressions and the quality ratings for all posed emotional expressions except anger were significantly lower for PD participants compared to the controls. It was not the case that quality scores of participants with PD for anger were high; rather those of the controls were low. Neither group could pose a recognizable expression of fear.

The PD group also had significantly worse overall quality scores than controls on both imitation trials. Particular problems with raising the eyebrows (AU 1 + 2) and raising the upper lip (AU 10) were consistent with difficulties we have found previously (Simons et al., 2003a). Additionally, many PD participants found it difficult to mask spontaneous expressions of amusement with a disgusted expression. In our previous study we showed that people with PD had difficulty masking a negative facial expression with a positive expression (Simons et al., 2003a); the present results suggest that such masking is disturbed in PD independent of valence.

Our data showing reduced recognition rates of posed disgust expressions and posed incongruent disgust expressions, along with difficulties imitating AU 10 (upper lip raise) in people with PD are particularly interesting in light of research suggesting that the basal ganglia are involved with the emotion of disgust (Calder et al., 2003). For instance, people with Huntington's disease, a genetic disorder that affects the striatal regions of the basal ganglia, show impairment in the recognition of the disgust expression (Gray et al., 1997). Our data suggest the involvement of the basal ganglia in the production of disgust expressions as well.

Relationship Between Spontaneous and Posed Facial Expressivity

When the spontaneous expressivity ratings, intensity scores for posed emotions and quality scores for imitated facial movements were correlated for each group, various large and significant correlations were found, especially between the expressivity score for the incongruent posed reaction to a video clip and spontaneous reactions to video clips. These results are consistent with previous studies in healthy participants, which found a positive relationship between spontaneous facial expressivity and voluntary facial expressivity (Berenbaum & Rotter, 1992; Friedman et al., 1980; Tucker & Riggio, 1988; Zuckerman et al., 1976). The fact that both spontaneous and posed facial expressivity were impaired in PD might also indicate such a relationship. This possibility is explored in greater detail in the next section.

Differences Between Impairments of Spontaneous and Voluntary Facial Expressivity in Parkinson's Disease

The nature of the impairment of posed facial expressivity in PD appeared to be somewhat different from the impairment of spontaneous facial expressivity. While watching video clips and talking to another person, PD participants showed less facial expression than controls and reduced intensity of the movements they did make. With posed emotional expressions they again showed reduced intensity, but also appeared to have problems with the control of the movements, which affected the quality and recognition rate for the movements. From observations it was clear that many PD participants, but not controls, tended to make several short movement attempts before displaying the full extent of the expression for both posed emotions and imitated movements. These observations are consistent with other motor impairments found in PD, such as problems with handwriting, in which movements can become smaller than normal and non-ballistic in character, so that the person with PD makes more (small) movements to compensate (Smith & Fucetola, 1995). If we assume a strict dichotomy between posed movements performed via the cortical motor system, and spontaneous movements performed via the extrapyramidal motor system, we could conclude that the cortical motor system must also be affected in many people with PD. Another possibility, however, is that posing facial expressions in the manner we (and previous researchers) have requested, requires input from both the cortical and the extrapyramidal motor systems in order for the movements to be performed fluently. If so, damage to the extrapyramidal system would lead to impairments of posed expressions as well as spontaneous expressions. The fact that most controls appeared to perform the posed emotion tasks and most of the imitated movement tasks in a “ballistic” manner supports this notion.

This idea is also consistent with Buck's (1984) distinction between spontaneous expressivity on one hand and three different types of voluntary expressions on the other: (1) voluntary expression initiation based on the activation of midbrain mechanisms by a motivational/emotional state (when you imagine yourself experiencing the emotion); (2) voluntary expression initiation based on direct influences upon midbrain mechanisms; and (3) voluntary expression formation analogous to the construction of verbal expression (putting the different elements of the expression together). In the current study it is likely that at least some of the posed expressions were attempted using direct activation of midbrain mechanisms, as participants in both groups took very little time to start trying to show the requested emotional expression. PD participants, however, then appeared to resort to “voluntary expression formation” when initial attempts failed. Studies of people with PD, including the present study, have not measured or manipulated the use of particular strategies that participants used, so it is difficult to compare the results. Nevertheless, our data on posed emotional expressions are similar to those of Jacobs et al. (1995) who found that PD participants' expressions were less easily recognized and of lower intensity than those of controls. Jacobs et al. did not include a nonemotional expression task, but our findings that PD participants had difficulty making nonemotional facial movements suggest that the primary problem is motoric rather than emotional. Further research is needed to clarify this issue.

Self-Report Measures of Expressivity

We found several strong correlations between judges' measures of spontaneous expressivity and self-ratings of global facial expressivity as measured by the BEQ. Similar to Gross and John (1997), we found a significant relationship between expressivity shown while watching a video and the overall BEQ score. The BEQ seems to be a less valid predictor of expressivity in more naturalistic situations, however, as only one of the conversation expressivity scores correlated significantly with the BEQ. The comparison of IRS expressivity ratings showed that PD participants did not perceive themselves to be less expressive than the controls.

Confounding Factors

Although we tried to assure that only persons with idiopathic PD took part in the study, it is possible that some people in fact suffered from another, similar condition. Some research suggests that an accuracy of 90% may be the highest that can be expected when diagnosing PD (Hughes et al., 2001) and misdiagnosis is therefore possible. Further, when we were selecting our participants, we used the MMSE to screen for dementia. The use of the MMSE might be problematic with people with PD because some of the tasks are relatively difficult to perform by people with motor impairments (e.g., the drawing task). None of the participants in our study was a ‘borderline’ case, however, so we are confident that cognitive functioning was sufficiently intact in all our participants to have produced valid results.

Possible confounding effects of age, gender, and depression were ruled out by statistical analysis, with the exception of gender effects on BEQ scores, which were different for the PD and control group. (It is important to note, however, that there were only five female PD participants.) Further research on possible effects of antidepressants and anti-Parkinson medication is needed as we did not systematically control for these variables.

Despite the overall group differences, there was considerable variance within each group, not only with regard to overall levels of expressivity for individuals, but also for how each individual's expressivity scores varied across experimental situations. In addition, our PD participants had mild to moderate PD and our results might not generalize to people with severe PD. Smith et al. (1996) reported differences in facial expressivity between people with mild PD and those with moderate PD, suggesting that expression does progressively decline. Due to the small number of participants for each stage, such a comparison was not possible for our present study. More research is needed to establish how PD symptoms influence facial expression and how the impairments in both posed and spontaneous expression change as the disease progresses.

Limitations of Stimuli and Self-Report Scales

Although self-ratings of amusement during the video clips were slightly lower for the PD participants than for controls, both while watching with the spouse and while watching alone, we do not believe this to be the result of a general reduction in the experience of emotions in PD participants. On the basis of comments made by the participants we believe that certain individuals of the somewhat older PD population simply did not enjoy our video clip(s) as much as controls did. The clips were piloted in a younger population than the actual experimental population and although the clips worked very well for these pilot samples, they might not have been as effective for some of the older participants.

In addition, there are inherent difficulties in the use of self-report scales for experienced emotion. Some people may have more difficulty than others in identifying how they are feeling, or in using the scales to rate those feelings. Further, participants in our study might have felt experimental demands to report “amusement” as that was obviously the emotion the experimenters wanted to induce.

Treatment Implications

The present findings have important implications for treatment of PD. First, because expressivity in PD, while reduced, appears to still be tied to emotional experience, it may be possible for family members, friends, and carers to learn to identify those expressivity cues that are still present in people with PD, rather than focusing on the usual cues that they may have relied on in the past.

Second, if the ability to pose expressions is affected, it would be difficult for people with PD to learn to compensate for reduced spontaneous expressivity by voluntarily making those expressions. It might, however, be beneficial if people with PD were able to increase their awareness of their lack of facial expressivity. Our data suggested that not all PD participants were aware of the extent of the impairment of their facial expression. Awareness of this lack of communication via the face might prompt them to learn to use alternative communicative channels such as the voice, or when the voice is monotone, by stating what they are feeling.

Third, it appears that certain people or certain situations are able to draw out higher levels of expressivity in people with PD. As noted, interacting with an unfamiliar but attentive person in our study seems to have had this effect. In contrast, interacting with a very familiar person seems to reduce expressivity, as our findings (for both groups) from the spouse conversation suggest. If spouses of people with PD were aware of this tendency, they might be able to help their PD spouse by becoming more animated and interactive. This factor could be beneficial in therapeutic contexts as well, for example, if PD carers or therapists could learn to draw out the expressivity in the people they are caring for or working with by being very expressive themselves. Research exploring these possibilities is clearly needed.

In sum, our results suggest that although most people with PD have reduced spontaneous facial expressions, their overall level of expressivity is still tied to emotional experience and the influence of social context. People with PD are also impaired in posing emotional expressions, imitating nonemotional movements, and masking spontaneous facial expressions. These difficulties are similar in character to other voluntary motor impairments found in PD.

ACKNOWLEDGMENTS

We wish to thank the members of the Parkinson's Disease Society and our other volunteers for participating in our study. We further thank Georges Schenk, Krystina Kinkade, Rebecca Ormond and Emma Loveridge for the coding and rating of the video material. This research was conducted with support from the Nuffield Foundation to Marcia Smith Pasqualini (No URB/00393/G). Parts of this research have been presented at the XIIth conference of the International Society for Research on Emotion in Cuenca, Spain (July, 2002) and the Xth conference on Facial Expression: Measurement and Meaning in Rimini, Italy (September, 2003).

REFERENCES

Adolphs, R., Schul, R., & Tranel, D. (1998). Intact recognition of facial emotion in Parkinson's disease. Neuropsychology, 12, 253–258.
Berenbaum, H. & Rotter, A. (1992). The relationship between spontaneous facial expressions of emotion and voluntary control of facial muscles. Journal of Nonverbal Behavior, 16, 179–190.
Buck, R. (1984). The communication of emotion. New York: Guilford Press.
Buck, R. & Duffy, R.J. (1980). Nonverbal communication of affect in brain-damaged patients. Cortex, 16, 351–362.
Calder, A.J., Keane, J., Manly, T., Sprengelmeyer, R., Scott, S., Nimmo-Smith, I., & Young, A.W. (2003). Facial expression recognition across the adult life span. Neuropsychologia, 41, 159–202.
Cleese, J. & Booth, C. (Writers), Davies, J.H., Argent, D., & Spiers, B. (Directors). (1998). The hotel inspectors [Television episode on video]. In J.H. Davies, D. Argent, & B. Spiers (Producers), Fawlty Towers. London: BBC Worldwide.
Ekman, P. & Friesen, W.V. (1978). Facial Action Coding System: A technique for the measurement of facial movement. Palo Alto, CA: Consulting Psychologists Press.
Ekman, P., Roper, G., & Hager, J.C. (1980). Deliberate facial movement. Child Development, 51, 886–891.
Ellgring, H., Seiler, S., Perleth, B., Frings, W., Gasser, T., & Oertel, W. (1993). Psychosocial aspects of Parkinson's disease. Neurology, 43 (Suppl. 6), S41–S44.
Everitt, B.S. (1996). Making sense of statistics in psychology: A second-level course. Oxford, UK: Oxford University Press.
Folstein, M.F., Folstein, S.E., & McHugh, P.R. (1975). “Mini Mental State”: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–198.
Friedman, H.S., Prince, L.M., Riggio, R.E., & DiMatteo, M.R. (1980). Understanding and assessing nonverbal expressiveness: The Affective Communication Test. Journal of Personality and Social Psychology, 39, 333–351.
Gray, J.M., Young, A.W., Barker, W.A., Curtis, A., & Gibson, D. (1997). Impaired recognition of disgust in Huntington's disease gene carriers. Brain, 120, 2029–2038.
Griffin, W.A. & Greene, S.M. (1994). Social interaction and symptom sequences: A case study of orofacial bradykinesia exacerbation in Parkinson's disease during negative marital interaction. Psychiatry, 57, 269–274.
Gross, J.J. & John, O.P. (1995). Facets of emotional expressivity: Three self-report factors and their correlates. Personality and Individual Differences, 19, 555–568.
Gross, J.J. & John, O.P. (1997). Revealing feelings: Facets of emotional expressivity in self-reports, peer ratings, and behavior. Journal of Personality and Social Psychology, 72, 435–448.
Hoehn, M.M. & Yahr, M.D. (1967). Parkinsonism: Onset, progression, and mortality. Neurology, 17, 427–442.
Hughes, A.J., Daniel, S.E., & Lees, A.J. (2001). Improved accuracy of clinical diagnosis of Lewy body Parkinson's disease. Neurology, 57, 1497–1499.
Jacobs, D.H., Shuren, J., Bowers, D., & Heilman, K.M. (1995). Emotional facial imagery, perception, and expression in Parkinson's disease. Neurology, 45, 1696–1702.
Jakobs, E., Goederee, P., Manstead, A.S.R., & Fischer, A.H. (1996). De invloed van stimulusintensiteit en sociale context op lachen: Twee meetmethoden vergeleken [The influence of stimulus intensity and social context on smiling: A comparison of two methods of measurement]. In N.K. de Vries, C.K.W. de Dreu, W. Stroebe, & R. Vonk (Eds.), Fundamentele Sociale Psychologie Deel 10 (pp. 89–97). Tilburg, The Netherlands: Tilburg University Press.
Jakobs, E., Manstead, A.S.R., & Fischer, A.H. (1999). Social motives and subjective feelings as determinants of facial displays: The case of smiling. Personality and Social Psychology Bulletin, 25, 424–435.
Katsikitis, M. & Pilowsky, I. (1988). A study of facial expression in Parkinson's disease using a novel microcomputer-based method. Journal of Neurology, Neurosurgery, and Psychiatry, 51, 362–366.
Katsikitis, M. & Pilowsky, I. (1991). A controlled quantitative study of facial expression in Parkinson's disease and depression. Journal of Nervous and Mental Disease, 179, 683–688.
Macht, M., Schwarz, R., & Ellgring, H. (1999). Psychosoziale Belastungen bei Parkinson-Patienten: Ausgewählte Ergebnisse einer bundesweiten Befragung [Psychosocial problems in Parkinson's disease patients: Selected results from a national survey]. Sexualmedizin für den Arzt, 2, 18–21.
Madeley, P., Ellis, A.W., & Mindham, R.H.S. (1995). Facial expressions and Parkinson's disease. Behavioral Neurology, 8, 115–119.
Pentland, B., Gray, J.M., Riddle, W.J.R., & Pitcairn, T.K. (1988). The effects of reduced non-verbal communication in Parkinson's disease. British Journal of Disorders of Communication, 23, 31–34.
Pentland, B., Pitcairn, T.K., Gray, J.M., & Riddle, W.J.R. (1987). The effects of reduced expression in Parkinson's disease on impression formation by health professionals. Clinical Rehabilitation, 1, 307–313.
Pitcairn, T.K., Clemie, S., Gray, J.M., & Pentland, B. (1990). Non-verbal cues in the self-presentation of Parkinsonian patients. British Journal of Clinical Psychology, 29, 177–184.
Ptaszynski, A. (Producer) & Schlamme, T. (Director). (1992). Rowan Atkinson Live [Motion picture]. UK: Tiger Television Limited.
Rinn, W.E. (1984). The neuropsychology of facial expression: A review of the neurological and psychological mechanisms for producing facial expressions. Psychological Bulletin, 95, 52–77.
Simons, G., Ellgring, H., & Smith Pasqualini, M.C. (2003a). Disturbance of spontaneous and posed facial expressions in Parkinson's disease. Cognition and Emotion, 17, 759–778.
Simons, G., Smith Pasqualini, M.C., & Reddy, V. (2003b). Is facial expressivity a trait? Emotional facial expressivity in a student sample. Manuscript in preparation, Department of Psychology, University of Portsmouth, Portsmouth, UK.
Smith, M.C. & Fucetola, R. (1995). Effects of delayed visual feedback on handwriting in Parkinson's disease. Human Movement Science, 14, 109–123.
Smith, M.C., Smith, M.K., & Ellgring, H. (1996). Spontaneous and posed facial expression in Parkinson's disease. Journal of the International Neuropsychological Society, 2, 383–391.
Tucker, J.S. & Riggio, R.E. (1988). The role of social skills in encoding posed and spontaneous facial expressions. Journal of Nonverbal Behavior, 12, 87–97.
Wagner, H. & Lee, V. (1999). Facial behavior alone and in the presence of others. In P. Philippot, R.S. Feldman, & E.J. Coats (Eds.), The social context of nonverbal behavior. Cambridge, UK: Cambridge University Press.
Zuckerman, M., Hall, J.A., DeFrank, R.S., & Rosenthal, R. (1976). Encoding and decoding of spontaneous and posed facial expressions. Journal of Personality and Social Psychology, 34, 966–977.
Zung, W.W.K. (1965). A self-rating depression scale. Archives of General Psychiatry, 12, 63–70.
Figure 0. Demographic factors and health status reported by group and gender.
Figure 1. Medication use.
Figure 2. Timeline for the experiment.
Figure 3. Rating scales used to code facial behavior and duration of segment coded.
Figure 4. Mean facial expressivity scores for each spontaneous situation plus mean self-rated amusement for the video-watching situations. Note: VA = watching clip alone; VT = watching clip together with experimenter; VS = watching clip together with spouse.
Figure 5. Posed emotional expressions.
Figure 6. Pearson correlations between intensity and quality ratings for the posed expressions and expressivity ratings for the spontaneous situations.
Figure 7. Pearson correlations between expressivity ratings for the spontaneous situations and scores on the BEQ.