
The recognition of emotional expression in prosopagnosia: Decoding whole and part faces

Published online by Cambridge University Press:  25 October 2006

BLOSSOM CHRISTA MAREE STEPHAN
Affiliation:
School of Psychology, University of Sydney, Sydney, Australia
NORA BREEN
Affiliation:
Neuropsychology Unit, Royal Prince Alfred Hospital, Sydney, Australia
DIANA CAINE
Affiliation:
School of Psychological Sciences, University of Manchester, United Kingdom

Abstract

Prosopagnosia is currently viewed within the constraints of two competing theories of face recognition, one highlighting the analysis of features, the other focusing on configural processing of the whole face. This study investigated the role of feature analysis versus whole-face configural processing in the recognition of facial expression. A prosopagnosic patient, SC, made expression decisions from whole and incomplete (eyes-only and mouth-only) faces in which features had been obscured. SC was impaired at recognizing some (e.g., anger, sadness, and fear), but not all (e.g., happiness), emotional expressions from the whole face. Analyses of his performance on incomplete faces indicated that his recognition of some expressions actually improved relative to his performance in the whole face condition. We argue that in SC interference from damaged configural processes seems to override an intact ability to utilize part-based or local feature cues. (JINS, 2006, 12, 884–895.)

Type
NEUROBEHAVIORAL GRAND ROUNDS
Copyright
© 2006 The International Neuropsychological Society

INTRODUCTION

Current theories of face recognition typically distinguish between the processing of individual face features and the processing of configural information (i.e., the unique spatial relations among the internal features), with both kinds of information contributing to the structural descriptions of seen faces (Bartlett & Searcy, 1993; Cabeza & Kato, 2000; Martelli et al., 2005; Maurer et al., 2002; Sergent, 1984a,b). Prosopagnosia, a rare neurological condition characterized by the inability to recognize facial identity, is typically described as an impairment in configural processing with compensatory reliance on feature processing strategies (Barton et al., 2002, 2003; Boutsen & Humphreys, 2002; Farah et al., 1998; Joubert et al., 2003; Levine & Calvanio, 1989; Marotta et al., 2001; Saumier et al., 2001). Recent studies have shown that the configural deficit can manifest in at least two ways: as a complete loss of face configural effects, or as paradoxical configural effects, where the loss of configural processing is incomplete and actively interferes with the application of compensatory part-based processing strategies (de Gelder et al., 1998; de Gelder & Rouw, 2000a, 2000b; Farah et al., 1995a; Rouw & de Gelder, 2002).

Whereas the impact of prosopagnosia on information processing has been extensively studied in identity recognition, relatively little is known about the effect on expression recognition. To what extent are configural or feature-based processes involved in the recognition of emotional facial expression, and how does prosopagnosia impact on the recognition of emotional facial expression? Here we report the results of a series of experiments investigating information processing in face expression recognition in a patient with relatively isolated prosopagnosia.

There is considerable evidence that face recognition and the recognition of facial expression are doubly dissociable and mediated by separate mechanisms (Bruce & Young, 1986; Campbell et al., 1996; Ellis & Young, 1990; Hasselmo et al., 1989; Hoffman & Haxby, 2000; Phillips et al., 1998). Some prosopagnosic patients have been able to interpret facial expression satisfactorily in spite of their inability to recognize familiar faces (Bruyer et al., 1983; Duchaine et al., 2003; Nunn et al., 2001; Shuttleworth et al., 1982; Tranel et al., 1988), whereas other patients have conversely been shown to have difficulty interpreting expression while still being capable of identifying faces correctly (Adolphs et al., 1994; Anderson et al., 2000; Kurucz & Feldmar, 1979; Young et al., 1995, 1993). The implication that the identification of facial expression and facial identity are mediated by separate brain regions has been confirmed by electrophysiological studies in primates and functional neuroimaging in humans (Allison et al., 1999; Haxby et al., 2000; Heywood & Cowey, 1992; Munte et al., 1998; Rolls, 1992; Sergent et al., 1994). Face processing and recognition have been linked to ventral occipitotemporal areas (Kanwisher et al., 1997), with a core system involving the inferior occipital gyrus, fusiform gyrus, and the superior temporal sulcus (Haxby et al., 2000, 2002). Further, face-selective activation in the left fusiform gyrus has been associated with feature-based processing of faces, and in the right with holistic/configural processing (Rapcsak et al., 1994; Rossion et al., 2000). Emotion processing and recognition have been linked to a network of limbic structures including the amygdala and insula (Adolphs et al., 2005; Blair et al., 1999; Calder et al., 2001).

Yet, these processes may not be completely independent (Baudouin et al., 2000a,b; Dolan et al., 1996; Schweinberger & Soukup, 1998). Whereas neuroimaging studies have shown that different regions can be activated for identity versus expression recognition, other regions are activated in both identity and expression tasks (Ganel et al., 2005; Haxby et al., 2002; Sergent et al., 1994). Differences in the extent and location of lesions may therefore give rise to variations in performance by different patients on particular tasks.

To what extent are configural or feature-based processes involved in recognition of facial expression? Working with two prototype expressions (happy and angry), defined in terms of just two facial features, the eyebrows and the corners of the mouth, Ellison and Massaro (1997) found that subjects' responses to whole-face images could be reliably predicted from responses to half-face images, eyebrows or mouth, suggesting a primary role for features in expression recognition. In contrast, Calder et al. (2000) used composites of same or different expressions and found that subjects were slower to identify the expression in either half of the composite aligned images compared with the control condition in which the two halves were misaligned. These authors concluded that configural information was strongly implicated in the recognition of facial expression, but that this did not necessarily preclude a secondary role for feature-based processing (see also, McKelvie, 1973; Oster et al., 1989; Wallbott & Ricci-Bitti, 1993; White, 1999, 2000, 2001).

Which information source predominates may be a function of the emotion in question. Isolated features may be meaningful if uniquely associated with an emotion: an upturned mouth/smile indicates happiness, whereas wide-open eyes or a wide-open mouth may indicate either surprise or fear. For the latter, information about the conjunctive association between the brow and the mouth may be necessary to facilitate recognition (see also, Smith et al., 2005; Smith & Scott, 1997). Which components of a face carry salient information in expression recognition, particularly in terms of facial regions, is therefore still not completely understood. Here we examine the role of part-based (i.e., eyes and mouth) and configural-based information in the recognition of different emotional expressions.

To our knowledge, although accurate expression recognition in the absence of identity recognition has been observed, the information processing strategies involved in recognition of emotional facial expression have not previously been addressed in the context of prosopagnosia. In this study we present a prosopagnosic patient, SC, whose ability to discriminate and match individual face features was intact but who was incapable of identifying even very familiar faces. He also had difficulty recognizing some, but not all, emotional expressions. Basic-level object recognition was normal. SC was asked to identify different emotional expressions from whole and incomplete faces. The aim was twofold: first, to determine the information requirements necessary to support accurate recognition of emotional expression, and second, to report the impact of prosopagnosia on the information processing strategies involved in recognizing expressive faces. If SC's processing deficit is related to an inability to encode and process configural information common to both identity and expression recognition, and he instead relies on feature processing strategies, then he should fail to benefit from whole-face expressions and perform no differently across whole and incomplete face conditions. In contrast, if impaired configural processes are disruptive to part processing, then we might expect better performance in the incomplete face conditions, since configural information is reduced when parts are presented in isolation. Alternatively, it might be that SC's impairment arises not from a configural deficit per se but rather from having to process multiple features together. In this case, expression recognition performance should increase as less of the stimulus is available.
However, any effects observed are expected to be contingent on the degree to which featural information is uniquely associated with each expression, such that part processing may be adequate for the recognition of some expressions (e.g., happiness by an upturned mouth), but not others (e.g., wide open eyes alone may signal either fear or surprise).

METHODS

Participants

Case history

At the time of this investigation SC was a 38-year-old man with a complicated medical history. He had been admitted to hospital in November 1984, at the age of 22 years, having been involved in a motor vehicle accident in which he sustained a significant head injury. Plain skull x-rays demonstrated a fracture of the right parieto-occipital bone. CT scan of the brain showed a hemorrhagic contusion in the left anterior temporal lobe associated with a left sided acute subdural hematoma, and ischemia of the left parietal and occipital lobes. Following evacuation of the subdural hematoma the brain CT scan showed dilatation of the posterior horn of the left lateral ventricle with low attenuation also present in the posterior aspect of the right occipital lobe. He therefore sustained extensive damage to the posterior cerebral cortex with both hemispheres affected.

In consequence of these injuries he had a right homonymous hemianopia, as well as weakness in the left limbs associated with loss of dexterity. Examination by a speech therapist a month after the event found severe difficulty recognizing details in the environment, including faces, from sight as well as impaired memory. Over succeeding months he progressed from being able to discriminate shapes only to being able to cope with fine detail including small print. In August of 1985 he returned to therapy in relation to a complaint of persisting prosopagnosia.

SC had been an average student at school, which he left at the age of 15 years. Estimated pre-morbid IQ was believed to have been in the low average range. After leaving school he had a few relatively unskilled occupations but had also worked as a motor mechanic.

At the time of this investigation his only visual complaint was his inability to recognize familiar faces. Copying, drawing, reading, and writing were normal. He was unimpaired on tasks of working memory, including the digit span, logical memory (immediate and delayed) and visual reproduction (immediate and delayed) subtests of the Wechsler Memory Scale: Third Edition (WMS–III) (Wechsler, 1997). Uncorrected visual acuity at the time of the neuropsychological examination was 6/6 in the right eye and 6/9 in the left. On the Farnsworth Munsell 100 Hue test (Farnsworth, 1957) he was shown to have an acquired color vision defect in both eyes: total error score of 476 (right eye), 406 (left eye), (>270 considered to be low color discrimination).

Ethical approval was obtained from the University Human Ethics Committee, The University of Sydney, Sydney, Australia.

Face versus Object Recognition

SC showed a mild deficit in within-category recognition of non-face objects when exemplars were from homogeneous categories. He performed below age- and sex-matched controls on tests of naming fruits and vegetables (66% correct) [>2 standard deviations (SD) below the mean], and cars (41% correct). His score was within 1 SD of the mean on the latter task but this was a surprising result because cars were a particular interest of his, whereas that was not necessarily true of the controls. However, in contrast to this, when asked to recognize a mixed set of objects from their prototypical view (Humphreys & Riddoch, 1984) he scored 19/20 correct. Similar within category object deficits have been reported in other prosopagnosic patients where it has been concluded that the dissociation between part-based and configural-based processing for objects versus faces is not complete; discrimination of within category object exemplars may invoke some measure of configural processing used in face and non-face (subordinate) object recognition, although more so in the former than the latter (Farah, 1990).

SC performed well on standard neuropsychological tests of visual processing (Table 1). His drawing to copy was excellent indicating good acuity. He passed all components of the Visual Object and Space Perception Battery (VOSP) (Warrington & James, 1991). He had no difficulty naming objects (living and man-made) from prototypical or unusual views (Humphreys & Riddoch, 1984), and his performance on the Alberts test for visual neglect (Alberts, 1973) was normal, notwithstanding a persisting right homonymous hemianopia.

A summary of SC's performance on tests of object and face perception and recognition

In contrast, SC was severely prosopagnosic. When presented with a series of black and white photographs of immediate family members, famous faces, and faces of persons previously unknown to him (matched in age/sex with the previously known faces) and asked which were familiar, he failed to identify a single photograph, claiming that all were unfamiliar. Using the same stimuli in different test sessions, SC was asked to estimate the age and gender of each person, both of which he could do reliably.

When presented with an array of four black and white photographs, one of which was of a famous face and the others of unfamiliar faces, and asked to select the famous face, he scored at chance (26.7% correct). He did a little better on a two-alternative forced choice (2AFC) version of the task (66.7% correct). He could sort only 4 of 12 well-known faces to occupation from four alternatives, and 6 of the 12 to name. All six errors on the Face-Name matching task were semantic foils. He was able to sort the same 12 famous names by occupation without error, confirming that the famous identities were familiar to him.
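As context for the claim that 26.7% correct on a four-alternative task is "at chance" (chance = 25%), an exact binomial calculation makes the point concrete. The sketch below is illustrative only: the paper does not report the trial count, so the 15-trial figure (4/15 ≈ 26.7%) is a hypothetical assumption chosen to match the reported percentage.

```python
from math import comb

def binomial_p_at_least(k, n, p):
    """Exact probability of observing k or more successes in n trials,
    each with success probability p (an exact one-tailed binomial test)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 4/15 correct (~26.7%) on a 4AFC task, chance p = 0.25.
p_value = binomial_p_at_least(4, 15, 0.25)
print(round(p_value, 3))  # ≈ 0.539, i.e., indistinguishable from chance
```

A p-value this large means a score of 26.7% provides no evidence of performance above the 25% guessing rate.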

He scored in the normal range on the Benton Face Recognition Test (Benton et al., 1994) (43/54) which requires face matching across changes in viewpoint and lighting. On feature matching tasks he could accurately match pairs of features and feature combinations (including whole faces, 100% correct). However, his performance on each of these tasks was exceptionally slow, both his behavior and spontaneous verbalizations suggesting that he was reliant on an idiosyncratic part-based processing strategy.

Tests of Facial Expression Recognition

Recognition of expression from full faces

The first experiment investigated SC's ability to recognize facial expression from whole faces. SC was presented with faces depicting happiness, sadness, fear, surprise, anger, and disgust drawn from Ekman and Friesen's (1976) “Pictures of Facial Affect,” together with the six relevant emotion category labels. There were 10 items per expression. Faces were presented in a random sequence. SC was asked to match each face to a label (e.g., Calder et al., 1996). The norms used were those derived by Ekman and Friesen (1976).

Face matching across expression

Here, SC's ability to match faces across expressions was tested. Ekman and Friesen (1976) faces (depicting happiness, sadness, surprise, fear, anger, and disgust) were now presented in pairs in which either the face or the expression might differ. The three conditions for this experiment were: (1) neutral expression (same face/different faces); (2) same expression (same face/different faces); and (3) different expression (same face/different faces). Twenty pairs were constructed for each of the three conditions. With unlimited exposure SC was required to say whether the faces were the “same” or “different,” regardless of expression.

Recognition of expression from incomplete faces

In order to explore the basis of SC's expression judgments, the expressions that he identified most successfully in the first experiment (happiness, surprise) and least successfully (fear, anger) were used to assess his ability to identify expression from incomplete faces. The incomplete-face conditions were constructed using Adobe Photoshop by pasting skin pixels over the masked region of the face (eyes or mouth) leaving only the remaining features exposed. This was done to ensure that the stimuli were sufficiently face-like to engage face-processing strategies. An example of the stimuli is shown in Figure 1. There were 10 photographs for each expression. SC was given the same emotion category labels as in the first experiment, and again was asked to match a label to each face.
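The masking operation described above was done by hand in Photoshop; programmatically, it amounts to overwriting one rectangular region of the image with a uniform skin-tone value. The sketch below illustrates this on a toy grayscale image represented as nested lists; the coordinates, image size, and fill value 180 are all illustrative, not taken from the paper.

```python
def mask_region(image, top, left, height, width, fill):
    """Return a copy of a grayscale image (list of rows of ints) with one
    rectangular region -- e.g., the eyes or the mouth -- overwritten by a
    uniform skin-tone value, leaving the rest of the face intact."""
    masked = [row[:] for row in image]  # copy rows so the original is untouched
    for r in range(top, top + height):
        for c in range(left, left + width):
            masked[r][c] = fill
    return masked

# Toy 6x6 "face" with distinct pixel values; 180 stands in for a skin tone.
face = [[i * 6 + j for j in range(6)] for i in range(6)]
eyes_only = mask_region(face, top=4, left=0, height=2, width=6, fill=180)   # mouth rows hidden
mouth_only = mask_region(face, top=0, left=0, height=2, width=6, fill=180)  # eye rows hidden
```

In practice the same operation would be applied to photographic stimuli with an image library, but the logic is identical.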

Example of expression stimuli: whole face and part eyes and mouth faces.

In addition to SC, 11 (5 men and 6 women) staff and students from the School of Psychology at The University of Sydney agreed to participate. All were naïve as to the purpose of the experiment. Their age ranged from 18 to 49 years (mean, 23.9 years).

Test of Facial Identity Recognition

Recognition memory for identity from incomplete neutral faces

Here we investigated SC's feature based face recognition in a recognition memory task using neutral faces. The aim was to assess whether the disruption to information processing strategies would affect both facial expression and facial identity recognition.

In this task SC was presented with 12 unfamiliar neutral faces for learning. He was later required to discriminate old from new faces, in whole and incomplete conditions. There were two phases: learning and test. At learning the 12 target faces (6 men and 6 women) were presented in a random sequence, individually for four seconds. SC was told to remember the faces for subsequent testing. Training verification immediately followed exposure with a criterion of 11/12 correct required. Testing did not begin until this was met. To ensure learning, SC was presented with a 2AFC recognition test. Targets were paired with a distracter of the same gender. SC was required to pick one of the two faces presented as previously seen at study and respond by pressing the keyboard key (left or right) corresponding to the position of his choice on screen. The target face appeared equally often on the left and right side and all stimuli remained on screen until a response was made. Faces continued to be displayed and tested using this procedure until no fewer than 11 faces could be correctly recognized. Each test utilized a different set of distracter faces.

At test SC was presented with pairs of faces under 7 experimental conditions: (1) eyes-nose-mouth (whole-face; E-N-M); (2) eyes-nose (E-N); (3) eyes-mouth (E-M); (4) nose-mouth (N-M); (5) eyes-only (E-ONLY); (6) nose-only (N-ONLY); and (7) mouth-only (M-ONLY) faces. An example of the test stimuli is shown in Figure 2. In a 2AFC memory test similar to that which followed learning, the paired stimuli were presented simultaneously and SC was required to decide which of the two had been presented previously. Trials involving each of the feature deletion conditions were run in separate blocks. The order of the blocks and the presentation of the target-distracter pairings within each block were determined randomly. There was no time limit, but speed and accuracy were stressed. For further details see Stephan and Caine (submitted).
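The blocked 2AFC test structure described above (random block order, randomized target-distracter pairings within blocks, targets appearing equally often on each side) can be sketched as follows. The stimulus labels and seeding scheme are illustrative assumptions, not details from the paper.

```python
import random

CONDITIONS = ["E-N-M", "E-N", "E-M", "N-M", "E-ONLY", "N-ONLY", "M-ONLY"]

def build_test_blocks(targets, distracters, seed=0):
    """One block per feature-deletion condition; within each block every target
    is paired with a distracter, pairing order is shuffled, and the target
    appears on the left and right equally often."""
    rng = random.Random(seed)
    order = CONDITIONS[:]
    rng.shuffle(order)                       # block order determined randomly
    blocks = []
    for condition in order:
        pairs = list(zip(targets, distracters))
        rng.shuffle(pairs)                   # pairing order randomized within block
        n = len(pairs)
        sides = ["left"] * (n // 2) + ["right"] * (n - n // 2)
        rng.shuffle(sides)                   # balanced left/right target positions
        blocks.append([{"condition": condition, "target": t,
                        "distracter": d, "target_side": s}
                       for (t, d), s in zip(pairs, sides)])
    return blocks

blocks = build_test_blocks(targets=[f"T{i}" for i in range(12)],
                           distracters=[f"D{i}" for i in range(12)])
```

With 12 targets this yields 7 blocks of 12 trials, 6 left-side and 6 right-side targets per block.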

Example of face memory task stimuli (female). On the left is the whole face condition (E-N-M). On the right are the 6 feature deletion conditions.

Data Analysis

Where published norms were not available SC's scores were compared to the 95-percent confidence interval calculated from the comparison group data for the appropriate condition for each task. This is equivalent to a hypothesis testing procedure with alpha at .05.
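As a rough illustration of this procedure, the sketch below computes a 95% confidence interval from a small comparison sample and flags a patient score falling below it. The control scores are invented (only the sample size, n = 11, matches the comparison group here), and 2.228 is the standard two-tailed .05 critical t value for df = 10.

```python
from statistics import mean, stdev

def ci95_of_mean(scores, t_crit):
    """95% confidence interval for the mean of a small sample,
    using the supplied two-tailed critical t value."""
    n = len(scores)
    m = mean(scores)
    half_width = t_crit * stdev(scores) / n ** 0.5
    return m - half_width, m + half_width

# Invented control scores; 2.228 = two-tailed .05 critical t for df = 10.
controls = [9, 10, 8, 9, 10, 9, 8, 10, 9, 9, 10]
low, high = ci95_of_mean(controls, t_crit=2.228)
patient_score = 4
impaired = patient_score < low  # below the interval -> "impaired" at alpha .05
```

Note this treats the CI of the control mean as the decision criterion, mirroring the procedure the authors describe.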

RESULTS

Recognition of Expression from Full Faces

In addition to the difficulty SC experienced identifying familiar faces, he was also quite impaired in his ability to recognize some, but not all, facial expressions (Table 2). He had no difficulty identifying happiness, was a little less certain identifying surprise but was impaired at recognizing sadness, disgust, fear, and anger.

Accuracy, % correct from the picture expression-name matching task (whole faces), for SC

Face matching across expression

SC was usually able to match faces across expression (55/60 correct); however, his performance was abnormally slow. He would look back and forth between the faces repeatedly, suggesting a serial (i.e., feature-by-feature) search strategy, before committing to a decision. Thus, although SC could perform the task, with a limited exposure duration his performance might well have been more impaired.

Recognition of Expression from Incomplete Faces

To evaluate the cues SC may be relying on to interpret facial expression, we compared his ability to identify expressions from whole faces with his ability to do so from incomplete faces. These data are presented in Figure 3. Note that the pattern of results for the comparison group is quite consistent, suggesting that overall whole-face expressions were more accurately recognized than part-face expressions. For SC there was no reliable pattern.

Expression recognition accuracy rates (+1 SD) for (A) SC and (B) the comparison group (n = 11) in the whole and incomplete face conditions.

For the comparison group, changes in information availability affected recognition accuracy for only some expressions. Repeated-measures analyses revealed that happiness and surprise could be reliably recognized from both whole and incomplete (eyes and mouth) faces (F(2,20) = 3.10, p > .05; F(2,20) = 1.47, p > .05, respectively). However, compared to performance in the whole face condition, neither fear nor anger could be accurately recognized from the mouth alone (F(1,10) = 35.20, p < .001; F(1,10) = 16.50, p < .005). There was no difference in recognition accuracy between the whole-face and eyes-alone conditions for either of these expressions (all p > .05).

To assess SC's recognition performance we compared his results to the 95-percent confidence interval for each condition calculated from the comparison data (Table 3). As shown in Fig. 3 and Table 3, compared to the comparison group SC's performance on this task was quite erratic. He was able to recognize happiness and surprise from the whole face, and anger and surprise from the eyes, with performance on the latter actually better than controls. He was much worse than the comparison group at recognizing happiness or fear from the eyes, and anger from the mouth. He did as well as the comparison group in recognizing surprise and fear from the mouth, and at recognizing surprise and anger but not happiness or fear from the eyes.

Number correct for each condition for SC and the 95-percent confidence interval of the mean correct for each condition for the comparison group (n = 11)

In order to assess any decision biases arising from incomplete faces, the percentages for each incorrect label usage were tallied for each condition. These are presented in Table 4 for both: (A) SC; and (B) the comparison group. For both SC and the comparison group, covering the mouth in fearful faces biased participants' responses to labeling them as surprised. This reflects common confusions typically seen in expression recognition (Young et al., 1997). Indeed, in SC's case all errors to fearful eye part faces arose because of incorrect use of the surprise label. Other errors also followed the confusability pattern as proposed by Young et al. (1997); anger part faces were incorrectly labeled as disgust and fearful faces as surprised.

Errors, % of incorrect expression labeling as a function of test condition: (A) SC; and, (B) the comparison group

Recognition Memory for Identity from Incomplete Neutral Faces

It took SC 9 learning trials to reach the performance criterion, whereas the comparison group required an average of 1.4. Some preservation of the ability to process and discriminate faces from their internal features must therefore be inferred. However, given the large number of trials, SC's performance is impaired: incomplete or impaired representations of faces may be elaborated through repeated exposure, leading to improved memory performance. This contrasts with his impaired performance on the Faces I and II tasks of the WMS-III, where there is a single learning trial and test session (see Table 1).

In order to assess SC's identity recognition performance from incomplete faces, his scores were compared to the 95-percent confidence interval for each condition, calculated from the comparison group. As shown in Figure 4, SC performed better than the comparison group in identifying nose-only faces and much more poorly in identifying faces from all other feature-deletion conditions. Unlike the eyes-available advantage (in particular in the eyes-mouth condition) seen for the comparison group, SC showed a mouth-available disadvantage. His response times were approximately twice as slow as the comparison group's.

Number correct for each condition for SC and the 95-percent confidence interval of the mean correct for each condition from the comparison group on the face recognition memory task.

DISCUSSION

The goal of this study was to explore facial affect decoding in a prosopagnosic patient. Whereas the relative importance of configural versus featural information has been the subject of much controversy and debate in the area of identity recognition, much less is known about the role of each type of information in expression recognition. The critical questions addressed were what processing strategies are involved in the recognition of emotional expressions, and whether the damaged configural processes observed in prosopagnosia can be linked to impaired expression recognition.

Assessment of Use of Visual Information: Comparison Group

The results from the expression tasks show that expression recognition is sensitive to modifications in available information. Data from normal participants demonstrated that some expressions are more easily identified from the whole face, whereas others can be easily recognized from either whole or part information. For happiness and surprise, information from the eyes or mouth alone was sufficient for accurate recognition, whereas for anger and fear, information from the whole face or eyes was necessary. For the latter, a highly specified change in the eyes uniquely signaled each expression, unlike the mouth, where the change was less specific (e.g., overlap between surprise and fear). The eyes were more informative than the mouth (see also, Adolphs et al., 2005; Baron-Cohen et al., 1997) for all expressions tested.

Whereas configural processes have been implicated in expression recognition, these results suggest that features alone can carry an effective message. Which kind of information is important depends on the emotion in question, and appears to relate to the unique informational change associated with each expression: some facial movements retain their meaning in isolation as well as when presented in combination with other facial actions (e.g., an upturned mouth or smile to indicate happiness), whereas in other cases isolated facial feature movements do not convey any specific emotional meaning (e.g., the taut mouth of an angry expression). Here the context of other feature movements is required to form a recognizable expression.

Taken together, the results from the expression tasks suggest that feature based processing of emotional expression may be more salient in the recognition of happiness, whereas configural based processing is more useful in the recognition of fear and anger.

With regard to identity recognition, accuracy was differentially affected by information availability. Consistent with previous findings, and unsurprisingly, availability of the whole face maximized recognition performance (Farah et al., 1998; Moscovitch & Moscovitch, 2000). With regard to features the eyes were the most informative, especially in combination with the mouth (see also, Haig, 1985; O'Donnell & Bruce, 2001; Sadr et al. 2003; Shepherd et al., 1981). The eyes and mouth may together form the key elements of configural processing, but they may also form a meaningful unit due to their concurrent role in speech processing (lip reading and direction of eye gaze) and expression identification (through a combination of brow and mouth movement).

Assessment of Use of Visual Information: SC

In the expression recognition task SC was sometimes able to use individual features in incomplete faces to identify facial expression (e.g., surprise from the eyes, happiness and surprise from the mouth) but was, on the whole, less successful than normal participants at doing so. More striking, however, was that his performance on the whole face was actually worse than his performance in the incomplete face conditions, for all expressions except happiness. SC thus fails to make normal use of facial information when recognizing emotional expressions of surprise, fear, and anger. Surprisingly, the pattern of SC's errors, especially for whole faces, was very similar to that of the comparison group: surprise was typically mistaken for fear (and vice versa) and anger for disgust. This suggests that there may be some qualitative overlap in the information extracted by SC and the comparison group. Errors in the incomplete face conditions also showed similar misattributions, especially of eye information, for the comparison group and SC.

In the test of face recognition, unlike the controls, SC did not show a benefit to recognition accuracy when cued by whole or eyes-available faces. Instead, he showed a detrimental effect of having the mouth cue available, either alone or in combination with other featural information. These results, together with the expression recognition findings support the conclusion that SC does not simply rely on a feature or part-based processing strategy for recognition. Indeed, if SC had completely disregarded the whole face, and recognition had depended entirely on part-based analysis, similar performance would be expected for whole and incomplete faces in both the identity and expression tasks. This challenges the notion that the ability to process configural information is simply lost in prosopagnosia and compensated by reliance on a part-based processing strategy.

Rather, SC's results are reminiscent of the finding of Farah et al. (1995a), whose prosopagnosic patient performed better at matching inverted faces than upright faces. They argued that this was evidence that configural processing of faces may control face-viewing behavior even when impaired (see also, Boutsen & Humphreys, 2002; de Gelder & Rouw, 2000a, b; Farah et al., 1995b; Rouw & de Gelder, 2002, for findings of paradoxical configural effects in prosopagnosia). The present results provide further support for the notion that configural processing may be so prepotent for face perception that even when it is profoundly impaired, as in the present case, it predominates whenever a stimulus would ordinarily recruit this form of processing, such that the information conveyed by isolated features becomes unavailable. SC is unable to make use of diagnostic featural information when it is presented in the whole face configuration. The paradoxical configural effect is here replicated in another case of prosopagnosia, and extended for the first time to expression recognition and new face learning/memory.

Alternatively, one might argue that SC's poor performance on whole faces arises from a deficit in attention allocation (Adolphs et al., 2005; Bukach et al., 2006). For instance, if SC's viewing is constrained to the eyes and/or nose when presented with whole faces, with limited processing of the mouth (or vice versa), then the addition of an unattended cue could act as a distractor, thereby inhibiting recognition. However, this account is unlikely to fully explain the present findings: SC recognized happiness equally well from the whole face and from the mouth alone, but not from the eyes alone, suggesting that the mouth must have been processed in the whole face condition. His impairment may instead arise from a combination of both factors. Assessing whether attentional cuing enhances face recognition and possibly fosters holistic or featural processing, or alternatively recording SC's eye movements during a series of face recognition tasks, would evaluate this possibility (cf. Bloom & Mudd, 1991).

Unfortunately, because the neuroanatomical damage sustained by SC was so extensive, specific conclusions regarding the underlying neuroanatomy cannot be drawn. However, the findings reported here substantiate a consistent theme in prosopagnosia research: the disorder can be linked to impaired configural processing. Importantly, it is this information that is necessary both for accurate recognition of facial identity and for the recognition of some emotional expressions (especially fear and anger). Although SC's configural processing is impaired, it is not completely inoperative: the presence of the full face interfered with part recognition. Significantly, this dramatic reversal of the normal pattern highlights the automaticity with which configural processing mechanisms are engaged, even when defective, whenever the visual system is presented with a face-like stimulus.

CONCLUSION

Taken together, although face parts may carry useful information, the recognition of most facial expressions and the recognition of facial identity require the capacity to process configural information available from the whole face stimulus. This challenges standard models of face processing, which hold that different aspects of a face (i.e., identity, expression, and gender) are processed independently using unique information. Identity and expression seem to share a common processing stage linked to configural information. In prosopagnosia, the deficit in identity recognition, expression recognition, and new face learning may be due not to a complete absence of configural processing, but rather to impaired configural processing overriding an otherwise intact ability to utilize other processing routes.

ACKNOWLEDGMENTS

The authors sincerely thank SC for his time. This manuscript is original, has not been previously published, and is not under concurrent consideration elsewhere. All authors have consented to submission. This work was approved by the Human Research Ethics Committee at the University of Sydney, Australia. There are no conflicts of interest.

REFERENCES

Adolphs, R., Gosselin, F., Buchanan, T.W., Tranel, D., Schyns, P., & Damasio, A.R. (2005). A mechanism for impaired fear recognition after amygdala damage. Nature, 433, 68–72.
Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372, 669–672.
Albert, M.L. (1973). A simple test of visual neglect. Neurology, 23, 658–664.
Allison, T., Puce, A., Spencer, D.D., & McCarthy, G. (1999). Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cerebral Cortex, 9, 415–430.
Anderson, A.K., Spencer, D.D., Fulbright, R.K., & Phelps, E.A. (2000). Contribution of the anteromedial temporal lobes to the evaluation of facial emotion. Neuropsychology, 14, 526–536.
Baron-Cohen, S., Wheelwright, S., & Jolliffe, T. (1997). Is there a "language of the eyes"? Evidence from normal adults, and adults with autism or Asperger syndrome. Visual Cognition, 4, 311–331.
Bartlett, J.C. & Searcy, J. (1993). Inversion and configuration of faces. Cognitive Psychology, 25, 281–316.
Barton, J.J., Press, D.Z., Keenan, J.P., & O'Connor, M. (2002). Lesions of the fusiform face area impair perception of facial configuration in prosopagnosia. Neurology, 58, 71–78.
Barton, J.J., Zhao, J., & Keenan, J.P. (2003). Perception of global facial geometry in the inversion effect and prosopagnosia. Neuropsychologia, 41(12), 1703–1711.
Baudouin, J.Y., Gilibert, D., Sansone, S., & Tiberghien, G. (2000a). When the smile is a cue to familiarity. Memory, 8, 285–292.
Baudouin, J.Y., Sansone, S., & Tiberghien, G. (2000b). Recognizing expression from familiar and unfamiliar faces. Pragmatics and Cognition, 8, 123–146.
Benton, A.L., Sivan, A.B., Hamsher, K., Varney, N.R., & Spreen, O. (1994). Contributions to Neuropsychological Assessment. New York: Oxford University Press.
Blair, R.J.R., Morris, J.S., Frith, C.D., Perrett, D.I., & Dolan, R. (1999). Dissociable neural responses to facial expressions of sadness and anger. Brain, 122, 883–893.
Bloom, L.C. & Mudd, S.A. (1991). Depth of processing approach to face recognition: A test of two theories. Journal of Experimental Psychology: Learning, Memory, & Cognition, 17, 556–565.
Boutsen, L. & Humphreys, G.W. (2002). Face context interferes with local part processing in a prosopagnosic patient. Neuropsychologia, 40, 2305–2313.
Bruce, V. & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305–327.
Bruyer, R., Laterre, C., Seron, X., Feyereisen, P., Strypstein, E., Pierrard, E., & Rectem, D. (1983). A case of prosopagnosia with some covert remembrance of familiar faces. Brain and Cognition, 2, 257–284.
Bukach, C.M., Bub, D.N., Gauthier, I., & Tarr, M.J. (2006). Perceptual expertise effects are not all or none: Spatially limited perceptual expertise for faces in a case of prosopagnosia. Journal of Cognitive Neuroscience, 18, 48–63.
Cabeza, R. & Kato, T. (2000). Features are also important: Contributions of featural and configural processing to face recognition. Psychological Science, 11, 429–433.
Calder, A.J., Lawrence, A.D., & Young, A.W. (2001). Neuropsychology of fear and loathing. Nature Reviews Neuroscience, 2, 352–363.
Calder, A.J., Young, A.W., Keane, J., & Dean, M. (2000). Configural information in facial expression perception. Journal of Experimental Psychology: Human Perception & Performance, 26, 527–551.
Calder, A.J., Young, A.W., Rowland, D., Perrett, D.I., Hodges, J.R., & Etcoff, N.L. (1996). Face perception after bilateral amygdala damage: Differentially severe impairment of fear. Cognitive Neuropsychology, 13, 699–745.
Campbell, R., Brooks, B., de Haan, E., & Roberts, T. (1996). Dissociating face processing skills: Decisions about lip-read speech, expression, and identity. Quarterly Journal of Experimental Psychology A, 49, 295–314.
de Gelder, B., Bachoud-Levi, A.C., & Degos, J.D. (1998). Inversion superiority in visual agnosia may be common to a variety of orientation polarised objects besides faces. Vision Research, 38, 2855–2861.
de Gelder, B. & Rouw, R. (2000a). Configural face processes in acquired and developmental prosopagnosia: Evidence for two separate face systems? Neuroreport, 11, 3145–3150.
de Gelder, B. & Rouw, R. (2000b). Paradoxical inversion effect for faces and objects in prosopagnosia. Neuropsychologia, 38, 1271–1279.
Dolan, R.J., Fletcher, P., Morris, J., Kapur, N., Deakin, J.F., & Frith, C.D. (1996). Neural activation during covert processing of positive emotional facial expressions. Neuroimage, 4, 194–200.
Duchaine, B.C., Parker, H., & Nakayama, K. (2003). Normal recognition of emotion in a prosopagnosic. Perception, 32, 827–838.
Ekman, P. & Friesen, W.V. (1976). Pictures of Facial Affect. Palo Alto, CA: Consulting Psychologists Press.
Ellis, H.D. & Young, A.W. (1990). Accounting for delusional misidentifications. The British Journal of Psychiatry, 157, 239–248.
Ellison, J.W. & Massaro, D.W. (1997). Featural evaluation, integration, and judgement of facial affect. Journal of Experimental Psychology: Human Perception & Performance, 23, 213–226.
Farah, M.J. (1990). Visual agnosia: Disorders of object recognition and what they tell us about normal vision. Cambridge, MA: MIT Press.
Farah, M.J., Levinson, K.L., & Klein, K.L. (1995a). Face perception and within-category discrimination in prosopagnosia. Neuropsychologia, 33, 661–674.
Farah, M.J., Wilson, K.D., Drain, H.M., & Tanaka, J.R. (1995b). The inverted face inversion effect in prosopagnosia: Evidence for mandatory, face-specific perceptual mechanisms. Vision Research, 35, 2089–2093.
Farah, M.J., Wilson, K.D., Drain, M., & Tanaka, J.N. (1998). What is "special" about face perception? Psychological Review, 105, 482–498.
Farnsworth, D. (1957). The Farnsworth–Munsell 100-Hue Test for the Examination of Color Vision. Baltimore, MD: Munsell Color Company.
Ganel, T., Goshen-Gottstein, Y., & Goodale, M.A. (2005). Interactions between the processing of gaze direction and facial expression. Vision Research, 45, 1191–1200.
Haig, N.D. (1985). How faces differ: A new comparative technique. Perception, 14, 601–615.
Hasselmo, M.E., Rolls, E.T., & Baylis, G.C. (1989). The role of expression and identity in the face-selective responses of neurons in the temporal visual cortex of the monkey. Behavioural Brain Research, 32, 203–218.
Haxby, J.V., Hoffman, E.A., & Gobbini, M.I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233.
Haxby, J.V., Hoffman, E.A., & Gobbini, M.I. (2002). Human neural systems for face recognition and social communication. Biological Psychiatry, 51, 59–67.
Heywood, C.A. & Cowey, A. (1992). The role of the "face-cell" area in the discrimination and recognition of faces by monkeys. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 335, 31–38.
Hoffman, E.A. & Haxby, J.V. (2000). Distinct representations of eye gaze and identity in the distributed human neural system for face perception. Nature Neuroscience, 3, 80–84.
Humphreys, G.W. & Riddoch, M.J. (1984). Routes to object constancy: Implications from neurological impairments of object constancy. Quarterly Journal of Experimental Psychology, A37, 493–495.
Joubert, S., Felician, O., Barbeau, E., Sontheimer, A., Barton, J.J., Ceccaldi, M., & Poncet, M. (2003). Impaired configurational processing in a case of progressive prosopagnosia associated with predominant right temporal lobe atrophy. Brain, 126, 2537–2550.
Kanwisher, N., McDermott, J., & Chun, M.M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.
Kurucz, J. & Feldmar, G. (1979). Prosop-affective agnosia as a symptom of cerebral organic disease. Journal of the American Geriatrics Society, 27, 91–95.
Levine, D.N. & Calvanio, R. (1989). Prosopagnosia: A defect in visual configural processing. Brain and Cognition, 10, 149–170.
Marotta, J.J., Genovese, C.R., & Behrmann, M. (2001). A functional MRI study of face recognition in patients with prosopagnosia. Neuroreport, 12, 1581–1587.
Martelli, M., Majaj, N.J., & Pelli, D.G. (2005). Are faces processed like words? A diagnostic test for recognition by parts. Journal of Vision, 5, 58–70.
Maurer, D., Le Grand, R., & Mondloch, C.J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260.
McKelvie, S.J. (1973). Meaningfulness and meaning of schematic faces. Perception and Psychophysics, 14, 343–348.
Moscovitch, M. & Moscovitch, D.A. (2000). Super face-inversion effects for isolated internal or external features, and for fractured faces. Cognitive Neuropsychology, 17, 201–219.
Munte, T.F., Brack, M., Grootheer, O., Wieringa, B.M., Matzke, M., & Johannes, S. (1998). Brain potentials reveal the timing of face identity and expression judgments. Neuroscience Research, 30, 25–34.
Nelson, H.E. & Willison, J. (1991). The National Adult Reading Test (NART) Manual (2nd ed.). Windsor, UK: NFER-Nelson.
Nunn, J.A., Postma, P., & Pearson, R. (2001). Developmental prosopagnosia: Should it be taken at face value? Neurocase, 7, 15–27.
O'Donnell, C. & Bruce, V. (2001). Familiarisation with faces selectively enhances sensitivity to changes made to the eyes. Perception, 30, 755–764.
Oster, H., Daily, L., & Goldenthal, P. (1989). Processing facial affect. In A.W. Young & H.D. Ellis (Eds.), Handbook of Research on Face Processing. Amsterdam: North-Holland/Elsevier Science Publishers.
Phillips, M.L., Bullmore, E.T., Howard, R., Woodruff, P.W., Wright, I.C., Williams, S.C., Simmons, A., Andrew, C., Brammer, M., & David, A.S. (1998). Investigation of facial recognition memory and happy and sad facial expression perception: An fMRI study. Psychiatry Research, 83, 127–138.
Rapcsak, S.Z., Polster, M.R., Comer, J.F., & Rubens, A.B. (1994). False recognition and misidentification of faces following right hemisphere damage. Cortex, 30, 565–583.
Rolls, E.T. (1992). Neurophysiological mechanisms underlying face processing within and beyond the temporal cortical visual areas. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 335, 11–21.
Rossion, B., Gauthier, I., Tarr, M.J., Despland, P.A., Bruyer, R., Linotte, S., & Crommelinck, M. (2000). The N170 occipito-temporal component is enhanced and delayed to inverted faces but not to inverted objects: An electrophysiological account of face-specific processes in the human brain. NeuroReport, 11, 69–74.
Rouw, R. & de Gelder, B. (2002). Impaired face recognition does not preclude intact whole face perception. Visual Cognition, 9, 689–718.
Sadr, J., Jarudi, I., & Sinha, P. (2003). The role of eyebrows in face recognition. Perception, 32, 285–293.
Saumier, D., Arguin, M., & Lassonde, M. (2001). Prosopagnosia: A case study involving problems in processing configural information. Brain and Cognition, 46, 255–259.
Schweinberger, S.R. & Soukup, G.R. (1998). Asymmetric relationships among perceptions of facial identity, emotion, and facial speech. Journal of Experimental Psychology: Human Perception & Performance, 24, 1748–1765.
Sergent, J. (1984a). Inferences from unilateral brain damage about normal hemispheric functions in visual pattern recognition. Psychological Bulletin, 96, 99–115.
Sergent, J. (1984b). An investigation into component and configural processes underlying face perception. British Journal of Psychology, 75, 221–242.
Sergent, J., Ohta, S., MacDonald, B., & Zuck, E. (1994). Segregated processing of facial identity and emotion in the human brain: A PET study. In V. Bruce & G.W. Humphreys (Eds.), Object and Face Recognition. Special issue of Visual Cognition (Vol. 1, pp. 349–369). Hillsdale, NJ: Lawrence Erlbaum Associates.
Shepherd, J.W., Davies, G.M., & Ellis, A.W. (1981). Studies of cue saliency. In G. Davies, H. Ellis, & J. Shepherd (Eds.), Perceiving and Remembering Faces (pp. 105–131). New York: Academic Press.
Shuttleworth, E.C., Syring, V., & Allen, N. (1982). Further observations on the nature of prosopagnosia. Brain and Cognition, 1, 302–332.
Smith, C.A. & Scott, H.S. (1997). A componential approach to the meaning of facial expressions. In J.A. Russell & J.M. Fernandez-Dols (Eds.), The Psychology of Facial Expression (Studies in Emotion and Social Interaction, 2nd series, pp. 229–254). New York: Cambridge University Press.
Smith, M.L., Cottrell, G.W., Gosselin, F., & Schyns, P.G. (2005). Transmitting and decoding facial expressions. Psychological Science, 16, 184–189.
Stephan, B.C.M. & Caine, D. (submitted). The role of featural and configural information in the recognition of unfamiliar neutral and expressive faces.
Tranel, D., Damasio, A.R., & Damasio, H. (1988). Intact recognition of facial expression, gender, and age in patients with impaired recognition of facial identity. Neurology, 38, 690–696.
Wallbott, H.G. & Ricci-Bitti, P. (1993). Decoders' processing of emotional facial expression: A top-down or bottom-up mechanism? European Journal of Social Psychology, 23, 427–443.
Warrington, E.K. (1984). Manual for the Recognition Memory Test for Words and Faces. Windsor, UK: NFER-Nelson.
Warrington, E.K. & James, M. (1991). Visual Object and Space Perception Battery (VOSP). Bury St. Edmunds, UK: Thames Valley Test Company.
Wechsler, D. (1997). Wechsler Memory Scale–III. San Antonio, TX: The Psychological Corporation.
White, M. (1999). Representation of facial expressions of emotion. American Journal of Psychology, 112, 371–381.
White, M. (2000). Parts and wholes in expression recognition. Cognition and Emotion, 14, 39–60.
White, M. (2001). Effect of photographic negation on matching the expressions and identities of faces. Perception, 30, 969–981.
Young, A.W., Aggleton, J.P., Hellawell, D.J., Johnson, M., Broks, P., & Hanley, J.R. (1995). Face processing impairments after amygdalotomy. Brain, 118, 15–24.
Young, A.W., Newcombe, F., de Haan, E.H.F., Small, S., & Hay, D.C. (1993). Face perception after brain injury: Selective impairments affecting identity and expression. Brain, 116, 941–959.
Young, A.W., Rowland, D., Calder, A.J., Etcoff, N.L., Seth, A., & Perrett, D.I. (1997). Facial expression megamix. Cognition, 63, 271–313.
Figure 0. A summary of SC's performance on tests of object and face perception and recognition.

Figure 1. Example of expression stimuli: whole face and part (eyes and mouth) faces.

Figure 2. Example of face memory task stimuli (female). On the left is the whole face condition (E-N-M). On the right are the six feature-deletion conditions.

Figure 3. Accuracy (% correct) on the picture expression–name matching task (whole faces) for SC.

Figure 4. Expression recognition accuracy rates (+1 SD) for (A) SC and (B) the comparison group (n = 11) in the whole and incomplete face conditions.

Figure 5. Number correct in each condition for SC, with the 95% confidence interval of the mean correct in each condition for the comparison group (n = 11).

Figure 6. Errors (% of incorrect expression labeling) as a function of test condition for (A) SC and (B) the comparison group.

Figure 7. Number correct in each condition for SC, with the 95% confidence interval of the mean correct in each condition for the comparison group, on the face recognition memory task.