INTRODUCTION
Facial expressions are complex signals produced by rapid muscular changes that last only a few seconds. Typically, these signals occur within an interpersonal context and communicate information about intention, motivation, and emotional states (Darwin, 1872; Ekman & Friesen, 1971; Fridlund, 1994; Horstmann, 2003). In humans, a variety of neurologic and psychiatric conditions alter the propensity to use facial signals. One such disorder is Parkinson's disease (PD), a dopaminergic depletion disorder affecting frontostriatal circuitry (for review, see Fahn, 2003). Among the hallmark features of PD, along with tremor, rigidity, and slowness of movement, is a “masked,” expressionless face.
Monrad-Krohn (1924) was perhaps the first to propose a distinction in the neural circuitry for spontaneous (emotional) versus voluntarily initiated facial expressions. He described five patients who displayed abnormalities of facial expression. Four patients had a unilateral facial weakness (paresis) that became particularly pronounced when they were asked to smile or show their teeth. Their hemifacial weakness completely disappeared when they spontaneously laughed or smiled. In each of these four cases, a unilateral hemispheric lesion involving the frontal motor cortex and/or underlying white matter was presumed to be present. A fifth case showed the exact opposite behavioral pattern and was thought to have a pallidal lesion (basal ganglia) secondary to postencephalitic Parkinson's disease. From these clinical observations, Monrad-Krohn concluded that emotional and voluntary facial expressions depend on separable circuitry. Namely, unilateral lesions of the frontal motor cortex and classic pyramidal pathways disrupt voluntary movements of the contralateral lower face (i.e., a facial hemiparesis) while leaving spontaneous smiles and other facial emotions unaffected. Conversely, subcortical lesions, including those of the basal ganglia, diminish spontaneous displays of facial emotion.
Sixty years later, Rinn (1984) in a now classic review article on facial expression, continued this tradition and described Parkinson's disease as the “model system” for subcortical, basal ganglia influences on facial expression. According to Rinn (1984), patients with Parkinson's disease (PD) have little difficulty posing facial emotions when explicitly told to do so. They just fail to do so spontaneously, giving rise to the impression of the “masked face” of Parkinson's disease.
Research bearing on Rinn's proposal regarding a dissociation between spontaneous (impaired) and voluntary (normal) expressions in PD has been limited. To date, most neuropsychological studies of facial expression in PD have focused on spontaneous emotional displays during informal interviews or exposure to brief movies or vignettes. Results from these studies have consistently found that PD patients produce spontaneous facial expressions that are less intense and/or less frequent than those of healthy peers (Buck & Duffy, 1980; Katsikitis & Pilowsky, 1988; Pitcairn et al., 1990; Simons et al., 2004; Smith et al., 1996). In contrast, studies of posed or voluntary expressions in PD patients have yielded conflicting results. Some researchers find no differences between PD patients and normal peers (Borod et al., 1990; Smith et al., 1996). Others, however, report that voluntary expressions are less intense in PD patients (Jacobs et al., 1995; Simons et al., 2004). The basis for the discrepant findings across studies of voluntary emotions is unclear but may relate to methodologic factors such as differences in how facial expressions are rated (i.e., intensity ratings vs. accuracy in showing a particular emotion).
In the present study, we tested the hypothesis that voluntary facial expressions are abnormal in PD in much the same way that other voluntary movements are affected. Specifically, we predicted that facial expressions would be slowed and would involve less movement in PD patients relative to peers. Of particular relevance to voluntary facial expressions is the “motor” circuit of Alexander and colleagues (1986), centering on the supplementary motor area (SMA) and motor regions of the frontal lobes. It is these regions that are particularly involved in the initiation and modulation of intentional movement and that seem particularly sensitive to dopamine depletion (Berardelli et al., 2001; Dick et al., 1989; Jenkins et al., 2000).
To test this hypothesis of slowness and amplitude of facial movement, we used a novel computer imaging methodology that enabled us to quantify dynamic movement changes over the face. This technique, originally developed by Leonard and colleagues (Leonard et al., 1991; also see Richardson et al., 2000), is based on the premise that changes in light reflectance patterns naturally occur over the moving face. In turn, these reflectance changes can be quantified by computing differences in pixel intensity over successive video images and summing these differences over time (see Figure 1). Working from this premise, we videotaped patients with Parkinson's disease and normal controls while they produced “posed” emotional facial expressions at the request of the examiner. These video images were then digitized, frame by frame, and analyzed offline for temporal changes in pixel intensity that occurred from a resting state to a peak facial expression. A quantitative index of movement change, called entropy (Leonard et al., 1991), was used as the index of movement. This method enabled us to examine not only the amount of movement change (entropy) that occurred during a particular expression, but also the time it took to reach a peak expression.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170408002933-81764-mediumThumb-S135561770606111Xfig001g.jpg?pub-status=live)
Original images, subtracted images, and entropy values during the emergence of a smile. Figure 1a shows the original frames of an individual moving from a neutral (baseline) expression through a smile. Figure 1b shows the subtraction images which are derived by subtracting corresponding pixel intensities of adjacent images. Figure 1c depicts a plot of the summed pixel difference changes (or entropy) as the expression unfolds over time.
Thus, the purpose of the present study was to determine whether Parkinson's disease affected voluntary expression of facial emotions by using a digital imaging methodology that enabled us to quantify the amount of movement change and timing parameters of dynamic facial movements. Two predictions were made. First, we expected that the amount of movement change (entropy) during voluntary facial expressions (happy, sad, fear, anger, disgust, surprise) would be diminished in PD patients compared with age- and gender-matched peers. Second, we predicted that the latency to reach a peak expression would be longer in PD patients than controls, due to generalized slowness (i.e., bradykinesia). As such, it would take PD patients longer to produce a maximal facial expression, which itself would be less robust than that of normal controls.
METHODS
Participants
Participants included 12 patients with idiopathic Parkinson's disease recruited from the University of Florida Movement Disorders Center and 12 healthy controls. We specifically recruited PD patients who were in the early-middle stages of the disease, did not display motor dyskinesias or on–off fluctuations, and were not demented or clinically depressed. The normal/healthy control group was recruited from the Gainesville community area and was age, education, and gender-matched to the PD group. Informed consent was obtained from all participants in accordance with University and Federal guidelines and the Declaration of Helsinki.
Demographic and other information about the two groups is shown in Table 1. The Parkinson group was predominantly male (9 males and 3 females), well-educated (X = 15.4 years; range, 9 to 22 years), and between the ages of 47 and 85 years (X = 67.7 years). All were on dopamine replacement medications and were in the middle stages of the disease based on the Hoehn–Yahr classification (Stage 2–3; Hoehn & Yahr, 1967) and ratings from a modified Unified Parkinson's Disease Rating Scale (UPDRS; Fahn et al., 1987) that had been administered by a neurologist. The UPDRS and Hoehn–Yahr staging took place within three months of the face protocol while the patient was “on” dopaminergic medication. The duration of Parkinson's disease, from the time of initial diagnosis, ranged from two to seven years (X = 5 years; SD 2.5). Of the 12 PD patients, 10 were tremor-predominant in presentation and 2 were akinetic–rigid. None of the PD patients met criteria for dementia based on neuropsychological screening (Dementia Rating Scale) that had been completed by another investigator within the preceding 3 months. On the day of the face expression study, all participants were further screened for cognitive and mood status using the Mini-Mental State Examination (Folstein et al., 1975) and the Geriatric Depression Scale (GDS; Yesavage et al., 1983). The PD patients scored in the nondemented range on the Mini-Mental State Examination (X = 28.3; range, 27–30) and, as a group, in the nondepressed range on the GDS (X = 5.8; SD 3.2; range, 1–12). One PD patient obtained a score that was mildly elevated, whereas the remaining participants fell below the clinical cutoff for depression.
Demographic and clinical characteristics of subject groups
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170408002933-07708-mediumThumb-S135561770606111Xtbl001.jpg?pub-status=live)
The Control group consisted of 12 individuals who, like the PD group, were well-educated (X = 15.75 years; range, 12–20 years) and ranged in age from 55 to 84 years (X = 65.5). All performed in the normal, nondemented range on the Mini-Mental State Examination (X = 29.4; range, 27–30) and attained scores in the nondepressed range on the GDS (X = 2.5; range, 0–8). As shown in Table 1, the PD and Control groups did not differ in terms of age, education, or cognitive screening status. However, they did differ in terms of their scores on the GDS (t = 2.23; p < .04).
Procedures
The overall method for evaluating dynamic facial expressions involved three steps: (1) videotaping participants while they made facial expressions; (2) digitizing individual expression sequences using computer software; and (3) analyzing the digitized images using custom software.
Videotaping facial expressions
Testing took place in a quiet room within the Cognitive Neuroscience Laboratory of the McKnight Brain Institute. Participants were told that they were participating in a study of facial expressions and that the examiner would videotape them. They were asked to pose six different emotional expressions (happy, disgust, fear, sad, angry, and surprise) and to display each expression so that others would know how they felt. A black and white Pulnix camera (TM-7CM), Sony video recorder (SLV R1000), and Panasonic TV monitor were used for video recording. Subjects sat comfortably in a chair, with the camera on a tripod approximately 5 feet in front of them. The participant's head was positioned in an adjustable head-restraining device that restricted out-of-plane movement. Indirect lighting was produced by reflecting two 150-watt tungsten light bulbs into two white photography umbrellas positioned approximately 3 feet from the face. Lighting on each side of the face was balanced to within 1 lux of brightness according to a Polaris light meter. Subjects wore a Velcro headband across the top of their forehead. Attached to the headband were two light-emitting diodes that were synchronized with a buzzer that signaled the participant to make the target facial expression. Because sound was not recorded on the videotape, onset of the light diodes provided an index of trial onset during subsequent image processing.
For each facial expression, participants were instructed, “Without moving your head, show me the most intense expression of (e.g., anger) that you can make when you hear the buzzer.” They were asked not to blink while making the expressions and to look straight into the camera. At the beginning of each trial, the participant was told the “target” emotion (e.g., anger, happiness) but was instructed not to produce it until hearing an auditory cue (i.e., the buzzer) that would occur 2 to 4 s later. After making the target expression, participants closed their eyes and relaxed their face for approximately 10 seconds. Each of the six emotional expressions was produced twice to optimize the possibility that one expression would be free of eyeblinks or movement artifact. The order of emotions was randomized and counterbalanced across the two subject groups.
Capturing and digitizing facial expressions
The individual facial expressions were digitized using a Sony video player, a personal computer with an Iscan-PCI video card, and EYEVIEW software (Imaging Technology). Because two exemplars of each target expression (e.g., two sads, two fears, etc.) were produced, the videotapes were initially reviewed and one exemplar was selected based on the absence of eyeblinks or head movement. If both exemplars were equivalent, the first expression was selected for digitizing. For each expression, the videotape was advanced, frame by frame, until the onset of the light diodes, which indicated the beginning of a trial. Beginning with light onset, a minimum of 30 videoframes (30.75 ms per frame; approximately 900 ms) was captured, digitized, and saved to the hard drive of the computer using the EYEVIEW software. It was never necessary to capture more than 30 frames for any of the control subjects' expressions. However, 16% of the trials for the PD patients required the capture of up to 45 frames.
Analyzing facial expressions
Custom software programs written in PV-Wave by one of the authors (D.G.) computed the changes in pixel intensity that occurred, on a frame by frame basis, during the course of the expression (i.e., entropy). These programs were menu driven and involved placing landmarks on the face, extracting the face image, and computing movement change.
Landmarking and regions of interest. The face area was extracted from the videoframe by custom software that relied on a subset of 20 anatomical facial landmarks (see Figure 2a). The selection of landmarks was based on pilot data from 50 unique faces, taking into account different face shapes and sizes (Hiatt & Gartner, 1987; Ras et al., 1995). These anatomic landmarks were placed on the initial digitized image of each expression sequence using a computer mouse (see Figure 2). Once landmarking was completed, our software program used this information to extract the face area from the video frame and then automatically applied these boundaries to all the face images in a particular expression sequence. In the present study, the face region of interest was the entire face.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170408002933-08242-mediumThumb-S135561770606111Xfig002g.jpg?pub-status=live)
Landmarking the face to compute geographic facial region of interest for computing entropy.
Computing movement changes (entropy). Each expression sequence included a series of digitized images, each consisting of a 640 × 480 pixel array at 256 levels of gray scale. A quantitative measure of expression change was computed by subtracting the values of corresponding pixel intensities between adjacent frames, summing the absolute differences (i.e., Σi Σj |Pij(k−1) − Pijk|, where i = horizontal pixel location, j = vertical pixel location, and k = frame number), and dividing by the number of pixels used. This computation was repeated over each pair of successive frames. The total sum of these mean values, divided by the number of frames, is referred to as the entropy score. Thus, entropy is a measure of the pixel intensity change that occurred over the face as it moved during the course of the expression. Because the light sources introduce a uniform intensity distribution over the face and the sums are normalized by pixel count, entropy yields values that are comparable across faces of varying sizes.
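The frame-differencing computation described above can be sketched in a few lines of Python. This is an illustration only, not the authors' PV-Wave code; the function name, the use of absolute differences, and averaging over frame pairs rather than frames are our assumptions.

```python
import numpy as np

def movement_score(frames):
    """Mean absolute pixel-intensity change across an expression sequence.

    frames: sequence of 2-D uint8 arrays (grayscale frames of the
    extracted face region), all the same shape.
    """
    # Promote to a signed type so uint8 subtraction cannot wrap around
    frames = [f.astype(np.int16) for f in frames]
    per_pair = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        # Sum of absolute pixel differences, normalized by pixel count
        per_pair.append(np.abs(cur - prev).sum() / cur.size)
    # Average the per-pair means over the sequence
    return sum(per_pair) / len(per_pair)
```

In practice such a score is meaningful only if head position and lighting are held constant, which is why the protocol used a head restraint and balanced illumination.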
Figure 1 depicts an expression sequence by showing the original frames, the subtraction or “difference” images, and a plot of entropy over time. The formula for deriving entropy is as follows: Ei(t) = −Σj [nj(t)/Ni] log[nj(t)/Ni], where i = 1, 2, …, 12 is the index associated with the face region of interest; j = 0, 1, …, 255 is the index associated with individual gray-level intensities; Ni = total number of pixels in the face region; nj(t) = number of pixels with gray-level intensity j in the image obtained by subtracting frame t − 1 from frame t; and Ei(t) = entropy of the face region at time t.
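The entropy formula itself can be sketched as follows. This is an illustrative reconstruction: the paper does not specify how negative pixel differences in the subtraction image are binned into the 256 gray levels, so here the raw difference values are histogrammed directly.

```python
import numpy as np

def region_entropy(prev_frame, frame):
    """Shannon entropy of the gray-level histogram of the difference image.

    Implements E(t) = -sum_j (n_j / N) * log(n_j / N) over gray levels j,
    where the difference image is frame t minus frame t - 1.
    """
    # Signed subtraction image (int16 avoids uint8 wraparound)
    diff = frame.astype(np.int16) - prev_frame.astype(np.int16)
    # n_j: count of pixels at each gray-level value present in the image
    _, counts = np.unique(diff, return_counts=True)
    p = counts / counts.sum()  # n_j / N
    return float(-(p * np.log(p)).sum())
```

Note that a perfectly still face yields a constant difference image and hence zero entropy, consistent with the flat baseline segments in Figure 1c.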
Entropy was automatically computed for each of the six emotional expressions by our software. Also computed was the time (in ms) it took an expression to reach its peak entropy value from the onset of each trial as signaled by the buzzer.
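Extracting the time-to-peak measure from a per-frame entropy series is then straightforward. The sketch below assumes the series begins at the frame in which the cue light came on (trial onset) and uses the 30.75-ms frame duration reported in the Methods; the function name is ours.

```python
import numpy as np

FRAME_MS = 30.75  # per-frame duration reported in the Methods

def peak_latency_ms(entropy_series):
    """Time (ms) from trial onset to the frame with peak entropy.

    entropy_series: per-frame entropy values, with index 0 corresponding
    to the frame in which the cue light (trial onset) appeared.
    """
    peak_frame = int(np.argmax(entropy_series))
    return peak_frame * FRAME_MS
```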
RESULTS
The dependent variables included: (1) the amount of facial change that occurred between the onset of each expression trial (baseline) and the peak entropy value; and (2) the latency to reach a peak expression. Each of these variables (entropy, latency) was analyzed independently with a repeated measures analysis of variance using SPSS software. For each analysis, Group (Parkinson, Control) was the between-subject variable and Expression Type (happy, sad, anger, fear, disgust, surprise) was the within-subject variable.
The results of the entropy analysis revealed a significant main effect for Group (F(1,22) = 37.2; p < .0001; ηp2 = .628). Regardless of the expression, the PD patients displayed significantly less movement, as indexed by entropy (X = .097; SD .061), during voluntary facial expressions than the controls (X = .367; SD .141). This finding was present for each of the six emotional expressions (see Figure 3). A second finding was that certain emotions were associated with more movement (entropy) than others (F(5,110) = 6.84; p < .001; ηp2 = .237). Post hoc comparisons (least significant difference, LSD) indicated that the expressions of surprise and anger induced the most facial movement and resulted in significantly more entropy (p < .05) than all the remaining emotions [Surprise = .315 (SD .287); Anger = .295 (SD .275); Fear = .227 (SD .181); Happy = .217 (SD .172); Disgust = .213 (SD .189); Sad = .126 (SD .108)]. By contrast, the expression of sadness resulted in significantly less entropy than all the remaining expressions (p < .05). The three remaining expressions (fear, happy, disgust) did not differ from each other, but all induced significantly more movement than sad (p < .05) and significantly less movement than anger or surprise (p < .05).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170408002933-70916-mediumThumb-S135561770606111Xfig003g.jpg?pub-status=live)
Facial movement during voluntary expression of specific emotions by Parkinson's disease patients and normal controls.
Finally, the Group × Expression interaction was also significant [F(5,110) = 3.09; p = .012; ηp2 = .123]. Post hoc comparisons (LSD) indicated that, while facial movement was significantly diminished for all expressions in the PD group, the difference between the two groups was less robust for sadness (p < .05).
The results of the Latency analysis also indicated a significant main effect for Group [F(1,22) = 8.75; p < .01; ηp2 = .315]. Overall, the PD patients were significantly slower in reaching a peak facial expression from the onset of the buzzer cue than the Controls [PD = 669.0 ms (210.8 SD), Controls = 440.6 ms (128.8 SD)]. Again, there was a significant main effect for Expression Type [F(5,110) = 4.57; p < .001; ηp2 = .194], such that certain facial expressions were more rapidly produced than others. Post hoc comparisons (LSD) indicated that expressions of fear (X = 404.6 ms; SD 269.2) reached peak entropy significantly more rapidly than all the remaining emotions [Happy X = 491 ms (SD 187.4); Surprise = 595 ms (SD 345.8), Anger = 588 ms (SD 231.7), Sad = 612 ms (SD 204.3), Disgust = 669 ms (SD 312.9); p < .05]. The Group × Emotion interaction was not significant [F(5,110) = 1.17; p = .33; ηp2 = .058].
Neither the PD group nor the Control group was clinically depressed based on clinical cutoff scores on the GDS; nevertheless, the GDS scores of the PD group (X = 5.88) were significantly higher than those of the Controls (X = 2.75; see Table 1). Because depression has been associated with diminished emotional reactivity (Schwartz et al., 1976; Sloan et al., 2002), we carried out correlational analyses to examine the relationship between scores on the depression scale (GDS) and the entropy and latency measures for each emotion. No significant correlations were found between mood and facial expressivity for any of the emotions (Happy r = .047, p = .86; Sad r = −.304, p = .25; Fear r = .050, p = .85; Surprise r = −.241, p = .369; Anger r = .357, p = .17; Disgust r = .169, p = .53).
Finally, because of our small sample size, we were not able to examine for potential sex differences or for differences between the tremor-predominant versus the akinetic–rigid subtypes of Parkinson's disease.
DISCUSSION
In this study, we wanted to learn whether voluntary expressions of facial emotion were affected in patients with Parkinson's disease. The underlying impetus derived from Rinn's (1984) original proposal that Parkinson's disease represents the model system for impairing spontaneous facial emotions, while leaving voluntary (i.e., cortical) expressions of facial emotion intact. By contrast, we argued that intentional movements of the face should be influenced by Parkinson's disease in much the same way that intentional movements of the limbs are affected. As such, we proposed that voluntary facial expressions would be slower and involve less movement in PD. To test this prediction, we used a sophisticated computer imaging methodology that enabled us to quantify dynamic facial expressions among PD patients who were asked to pose various emotions (e.g., fear, anger, happy). A measure of movement change, called entropy, was computed by examining dynamic changes in pixel intensity over the moving face during the course of a voluntary facial expression. Using this measure, our findings were significant and robust. Very simply, less movement (entropy) occurred over the face when PD patients were asked to make a target expression relative to controls. Furthermore, the time it took to achieve a peak pose was longer for PD patients than controls. Thus, PD patients had reduced facial mobility (microexpressivity) and were significantly slower in reaching a peak expression (bradykinesia) than controls. These parameters correspond to other aspects of motor behavior associated with PD, such as micrographia, hypometria, and bradykinesia.
Bradykinesia and reduced facial mobility were present across all the various expressions (e.g., anger, fear, sadness, happy, disgust) in the PD group. Although some emotions, like sadness, were associated with less overall facial movement than other emotions, this was observed in both PD patients and controls. Other emotions, such as fear, were associated with a more rapid rise time for reaching a peak expression than other expressions (such as smiling or frowning). Again, this emotion-specific difference was present to the same extent in both PD patients and controls. Thus, Parkinson's disease did not appear to differentially disrupt the voluntary expression of certain emotions, such as fear or disgust, more so than others. Instead, all emotional expressions were dampened and took longer to execute. Moreover, depressed mood among the PD patients did not appear to play an important role in these findings, as there was no correlation between scores on a standardized mood measure (GDS) and the facial expression/entropy measures.
These findings raise the question as to where the burden of impairment lies. The dampening and slowing of facial expressivity observed in PD patients could arise from impairment at the level of general facial movement execution. Alternatively, the defect might be at the level of modulating or “fine-tuning” facial expressions. We will consider each of these potential explanations in turn, although they are not mutually exclusive.
First, the ability to make facial expressions requires preservation of motor engrams or programs for executing particular movement patterns over the face. The genesis of these “motor programs” is unknown, although they likely originate subcortically given observations that anencephalic infants display simple grimaces and smiles. Ultimately, these motor engrams are responsible for initiating and activating specific muscle groups on the face. This begins at the level of “face representation areas” in the cortex and limbic region (cingulate) and continues downstream via corticobulbar white matter tracts through the internal capsule on to the facial nucleus (cranial nerve VII) in the brainstem. From there, the right and left facial nerves exit the brainstem, and various branches of this nerve activate specific muscle groups of the upper (e.g., frontalis, corrugator, orbicularis oculi) and lower face (e.g., zygomatic, orbicularis oralis, risorius, etc.). Several lines of research, including recent neuroanatomic mapping studies in rhesus monkeys, have identified five cortical “face representation areas” within each hemisphere (Morecraft & Van Hoesen, 1998; Morecraft et al., 2001). Two are located in the cingulate region and three in the mesial and lateral frontal cortex. Each of these face representation areas broadly corresponds to a somatotopic motor map of the face, and electrical stimulation of these regions, either directly or via transcranial magnetic stimulation, can elicit motor responses from distinct muscle groups on the face (Liscic & Zidar, 1998; Triggs et al., 2005; Urban et al., 1997). Of particular importance to our study, the two cingulate face areas have been implicated in emotional or spontaneous facial behavior, whereas the frontal face areas (motor, premotor, SMA) relate to voluntary expressions (Morecraft et al., 2001).
It is the frontal regions within the lateral and dorsomedial frontal cortex (e.g., motor, premotor, and SMA) that are relevant to the intentional expression of voluntary facial emotions. Thus, commands to produce specific target emotions on the face, as done in the present study, would involve activation of these frontocortical brain regions (see Gosain et al., 2001). Research suggests that it is these same frontal motor areas that are also affected in Parkinson's disease (Alexander et al., 1986).
A second possibility is that the neural mechanisms involved in motor execution of facial expressions are intact, but the neural circuitry involved in modulating facial expressions is impaired in patients with PD. There are various modulatory parameters that influence the “strength” or intensity of facial expressions, such as amplitude of facial movement, temporal characteristics of individual expressions (i.e., duration of movement, time to initiate a movement), and the frequency of movements over time. How these various parameters are precisely modulated at the neural level is unclear, but they likely involve complex interactions between subcortical and cortical pathways. Recently, Nambu (2005) introduced a dynamic model of basal ganglia functioning that may be helpful in explaining disrupted movement modulation in PD. According to this model, a voluntary movement involves various cortico-striatal-thalamic loops, which control the activity of the thalamus and cortex so that only the selected motor program is released at the selected time. All competing, unselected motor programs are suppressed. Nambu suggested that, due to reduced levels of dopamine in PD, when a voluntary movement is about to be initiated by cortical mechanisms, signals through the “hyperdirect” pathway [i.e., cortico-subthalamic nucleus (STN)-globus pallidus internus (GPi)/substantia nigra pars reticulata (SNr)] and the indirect pathway (i.e., cortico-striato-globus pallidus externus-STN-GPi/SNr) increase and suppress larger areas of the thalamus and cortex than in the normal state. This process leads to a reduction in signal through the direct (cortico-striato-GPi/SNr) pathway. The net result is that both unselected and selected motor programs are suppressed. Thus, the timing of the release of selected movements is, in effect, suppressed, leading to the bradykinesia or akinesia characteristic of PD.
This mechanism is one potential explanation for the bradykinesia we observed in the time to reach peak facial expression in our study. Amplitude or intensity of facial expression may be affected in an analogous way, that is, intensity is suppressed, or “dampened,” just as timing of movement is dampened.
Our findings do not enable us to distinguish between these two possible levels of impairment: a more general defect at the level of the motor engrams (i.e., facial movement execution) versus a defect in the “fine tuning” or modulation of facial expression movements. Although we prefer the “modulatory” explanation, the distinction becomes theoretically murky because abnormal modulation could reflect corruption of some aspect of a motor program or its execution. All that we can strictly infer is that our findings are consistent with the view that the basal ganglia play a role in altering intentional, voluntary facial movements of expression. Perhaps this occurs because of diminished efficiency and/or activation of face representation areas in the frontal cortical regions (i.e., motor, premotor, and SMA) or because of movement based suppression as suggested by Nambu (2005).
Our study has several limitations. First and foremost, our sample size was small and limited to a homogeneous group of Parkinson patients who were moderately affected by their disease (i.e., Hoehn–Yahr scores between 2 and 3). All PD patients were tested on their medication, and they did not have “on–off” fluctuations in response to dopamine treatment. Because we did not test PD patients at Hoehn–Yahr stage 1, it remains unknown whether subtle changes in amplitude and slowness of facial movement also exist among PD patients in the very earliest stages of the disease. At the other end of the spectrum, it would be difficult, using our face digitizing system, to examine PD patients with drug-related facial dyskinesias due to the sensitivity of the entropy measure to movement artifact. Second, most of the patients in the present study were male (both in the PD group and the Control group), and it is unknown whether potential sex differences might exist for male versus female PD patients. Desai et al. (2001) evaluated normal college students and found no differences between males and females in overall entropy (i.e., movement) during voluntary facial emotions. However, the possibility of sex differences in the course and severity of “masked facies” among PD patients has not been systematically examined. Third, the microexpressivity and bradykinesia of facial movements in our PD group occurred when these patients were taking their normal dopamine-augmenting medications. We did not test patients when they were “off” medication. However, one would expect that facial movement difficulties would be further exacerbated when dopaminergic medications were removed. Finally, the focus of the present study was on voluntary expression of facial emotion and the extent to which it might be influenced by PD.
We suspect that the bradykinesia and diminished entropy observed in the present study are not specific to “emotional expressions”, but extend to nonemotional facial movements as well (e.g., open mouth, raise brow). This is an empirical question that could be examined in future studies by comparing emotional and nonemotional facial movements.
Taken together, our findings add to the current literature on facial expressivity in several ways. First, the “masked facies” of Parkinson's disease is not limited to spontaneous facial emotions, as suggested by previous researchers, but involves voluntary or posed facial expressions as well. Second, the use of PD as a model system for the neuroanatomic dissociation between voluntary and spontaneous expressions may be unjustified. Our findings clearly suggest that the expression of voluntary facial emotions is detrimentally affected by Parkinson's disease in terms of slowness and diminished amplitude. This finding may occur because of diminished efficiency and/or activation of face representation areas or because of movement-based suppression. We suspect that spontaneous facial expressions are similarly affected, and direct comparisons of posed and spontaneous expressions could be examined in future studies. Finally, the use of techniques such as the one described in this study may prove particularly useful in treatment and other outcome studies that require more precise quantification of facial movement.
ACKNOWLEDGMENTS
This work was funded in part by grants from NIH (R01-MH62639, R01-NS50633) and from the University of Florida Opportunity Fund. We are grateful to our patients, to Dr. Laura Grande for her assistance with subject recruitment, and to Dr. Catherine Price for her comments on an earlier draft of this paper. Preliminary data from this study were presented at the 2003 annual meeting of the International Neuropsychological Society.