INTRODUCTION
Current theories of face recognition typically distinguish between the processing of individual face features and the processing of configural information (i.e., the unique spatial relations among the internal features), with both kinds of information contributing to the structural descriptions of seen faces (Bartlett & Searcy, 1993; Cabeza & Kato, 2000; Martelli et al., 2005; Maurer et al., 2002; Sergent, 1984a,b). Prosopagnosia, a rare neurological condition characterized by the inability to recognize facial identity, is typically described as an impairment of configural processing with compensatory reliance on feature-processing strategies (Barton et al., 2002, 2003; Boutsen & Humphreys, 2002; Farah et al., 1998; Joubert et al., 2003; Levine & Calvanio, 1989; Marotta et al., 2001; Saumier et al., 2001). Recent studies have shown that the configural deficit can manifest in at least two ways: as a complete loss of face configural effects, or as paradoxical configural effects, in which the loss of configural processing is incomplete and actively interferes with the application of compensatory part-based processing strategies (de Gelder et al., 1998; de Gelder & Rouw, 2000a, 2000b; Farah et al., 1995a; Rouw & de Gelder, 2002).
Whereas the impact of prosopagnosia on information processing has been extensively studied in identity recognition, relatively little is known about its effect on expression recognition. To what extent are configural or feature-based processes involved in the recognition of emotional facial expression, and how does prosopagnosia affect that recognition? Here we report the results of a series of experiments investigating information processing in facial expression recognition in a patient with relatively isolated prosopagnosia.
There is considerable evidence that face recognition and the recognition of facial expression are doubly dissociable and mediated by separate mechanisms (Bruce & Young, 1986; Campbell et al., 1996; Ellis & Young, 1990; Hasselmo et al., 1989; Hoffman & Haxby, 2000; Phillips et al., 1998). Some prosopagnosic patients have been able to interpret facial expression satisfactorily in spite of their inability to recognize familiar faces (Bruyer et al., 1983; Duchaine et al., 2003; Nunn et al., 2001; Shuttleworth et al., 1982; Tranel et al., 1988), whereas other patients have conversely been shown to have difficulty interpreting expression while still being capable of identifying faces correctly (Adolphs et al., 1994; Anderson et al., 2000; Kurucz & Feldmar, 1979; Young et al., 1995, 1993). The implication that the identification of facial expression and facial identity are mediated by separate brain regions has been confirmed by electrophysiological studies in primates and functional neuroimaging in humans (Allison et al., 1999; Haxby et al., 2000; Heywood & Cowey, 1992; Munte et al., 1998; Rolls, 1992; Sergent et al., 1994). Face processing and recognition have been linked to ventral occipitotemporal areas (Kanwisher et al., 1997), with a core system involving the inferior occipital gyrus, fusiform gyrus, and the superior temporal sulcus (Haxby et al., 2000, 2002). Further, face-selective activation in the left fusiform gyrus is thought to reflect feature-based processing of faces, and that in the right, holistic/configural processing (Rapcsak et al., 1994; Rossion et al., 2000). Emotion processing and recognition have been linked to a network of limbic structures including the amygdala and insula (Adolphs et al., 2005; Blair et al., 1999; Calder et al., 2001).
Yet these processes may not be completely independent (Baudouin et al., 2000a,b; Dolan et al., 1996; Schweinberger & Soukup, 1998). Whereas neuroimaging studies have shown that distinct regions can be activated for identity versus expression recognition, other regions are activated in both identity and expression tasks (Ganel et al., 2005; Haxby et al., 2002; Sergent et al., 1994). Differences in the extent and location of lesions may therefore give rise to variations in performance by different patients on particular tasks.
To what extent are configural or feature-based processes involved in recognition of facial expression? Working with two prototype expressions (happy and angry), defined in terms of just two facial features, the eyebrows and the corners of the mouth, Ellison and Massaro (1997) found that subjects' responses to whole-face images could be reliably predicted from responses to half-face images (eyebrows or mouth), suggesting a primary role for features in expression recognition. In contrast, Calder et al. (2000) used composite faces whose halves showed the same or different expressions and found that subjects were slower to identify the expression in either half of the aligned composites compared with a control condition in which the two halves were misaligned. These authors concluded that configural information was strongly implicated in the recognition of facial expression, but that this did not necessarily preclude a secondary role for feature-based processing (see also, McKelvie, 1973; Oster et al., 1989; Wallbott & Ricci-Bitti, 1993; White, 1999, 2000, 2001).
Which information source predominates may be a function of the emotion in question. Isolated features may be meaningful if uniquely associated with an emotion: an upturned mouth/smile indicates happiness, whereas wide-open eyes or a wide-open mouth may indicate either surprise or fear. For the latter, information about the conjunctive association between the brow and the mouth may be necessary to facilitate recognition (see also, Smith et al., 2005; Smith & Scott, 1997). Which components of a face carry salient information for expression recognition, particularly in terms of facial regions, is therefore still not completely understood. Here we examine the role of part-based (i.e., eyes and mouth) and configural-based information in the recognition of different emotional expressions.
To our knowledge, although accurate expression recognition in the absence of identity recognition has been observed, the information processing strategies involved in recognizing emotional facial expression have not previously been addressed in the context of prosopagnosia. In this study we present a prosopagnosic patient, SC, whose ability to discriminate and match individual face features was intact but who was incapable of identifying even very familiar faces. He also had difficulty recognizing some, but not all, emotional expressions. Basic-level object recognition was normal. SC was asked to identify different emotional expressions from whole and incomplete faces. The aim was twofold: first, to determine the information requirements necessary to support accurate recognition of emotional expression, and second, to report the impact of prosopagnosia on the information processing strategies involved in recognizing expressive faces. If SC's processing deficit is related to an inability to encode and process configural information common to both identity and expression recognition, and he instead relies on feature processing strategies, then he should fail to benefit from whole-face expressions and perform no differently across whole and incomplete face conditions. In contrast, if impaired configural processes are disruptive to part processing, then we might expect better performance in the incomplete face conditions, since configural information is reduced when parts are presented in isolation. Alternatively, it might be that SC's impairment arises not from a configural deficit per se but rather from having to process multiple features together. In this case, expression recognition performance should increase as less of the stimulus is available. However, any effects observed are expected to be contingent on the degree to which featural information is uniquely associated with each expression, such that part processing may be adequate for the recognition of some expressions (e.g., happiness signaled by an upturned mouth) but not others (e.g., wide-open eyes alone may signal either fear or surprise).
METHODS
Participants
Case history
At the time of this investigation SC was a 38-year-old man with a complicated medical history. He had been admitted to hospital in November 1984, at the age of 22 years, having been involved in a motor vehicle accident in which he sustained a significant head injury. Plain skull x-rays demonstrated a fracture of the right parieto-occipital bone. CT scan of the brain showed a hemorrhagic contusion in the left anterior temporal lobe associated with a left-sided acute subdural hematoma, and ischemia of the left parietal and occipital lobes. Following evacuation of the subdural hematoma, brain CT showed dilatation of the posterior horn of the left lateral ventricle, with low attenuation also present in the posterior aspect of the right occipital lobe. He therefore sustained extensive damage to the posterior cerebral cortex, with both hemispheres affected.
In consequence of these injuries he had a right homonymous hemianopia, as well as weakness in the left limbs associated with loss of dexterity. Examination by a speech therapist a month after the event revealed severe difficulty recognizing details in the environment by sight, including faces, as well as impaired memory. Over succeeding months he progressed from being able to discriminate only shapes to being able to cope with fine detail, including small print. In August 1985 he returned to therapy with a complaint of persisting prosopagnosia.
SC had been an average student at school, which he left at the age of 15 years. Estimated pre-morbid IQ was believed to have been in the low average range. After leaving school he held a number of relatively unskilled jobs but had also worked as a motor mechanic.
At the time of this investigation his only visual complaint was his inability to recognize familiar faces. Copying, drawing, reading, and writing were normal. He was unimpaired on tasks of working memory, including the digit span, logical memory (immediate and delayed), and visual reproduction (immediate and delayed) subtests of the Wechsler Memory Scale: Third Edition (WMS–III) (Wechsler, 1997). Uncorrected visual acuity at the time of the neuropsychological examination was 6/6 in the right eye and 6/9 in the left. On the Farnsworth Munsell 100 Hue test (Farnsworth, 1957) he was shown to have an acquired color vision defect in both eyes: total error scores of 476 (right eye) and 406 (left eye) (scores >270 are considered to indicate low color discrimination).
Ethical approval was obtained from the University Human Ethics Committee, The University of Sydney, Sydney, Australia.
Face versus Object Recognition
SC showed a mild deficit in within-category recognition of non-face objects when exemplars were drawn from homogeneous categories. He performed more than 2 standard deviations (SD) below the mean of age- and sex-matched controls on a test of naming fruits and vegetables (66% correct). On a test of naming cars his score (41% correct) fell within 1 SD of the mean, but this was nonetheless a surprising result because cars were a particular interest of his, whereas that was not necessarily true of the controls. In contrast, when asked to recognize a mixed set of objects from their prototypical view (Humphreys & Riddoch, 1984) he scored 19/20 correct. Similar within-category object deficits have been reported in other prosopagnosic patients, where it has been concluded that the dissociation between part-based and configural-based processing for objects versus faces is not complete; discrimination of within-category object exemplars may invoke some measure of the configural processing used in face and non-face (subordinate-level) object recognition, although more so in the former than the latter (Farah, 1990).
SC performed well on standard neuropsychological tests of visual processing (Table 1). His drawing to copy was excellent, indicating good acuity. He passed all components of the Visual Object and Space Perception Battery (VOSP) (Warrington & James, 1991). He had no difficulty naming objects (living and man-made) from prototypical or unusual views (Humphreys & Riddoch, 1984), and his performance on Albert's test for visual neglect (Albert, 1973) was normal, notwithstanding a persisting right homonymous hemianopia.
In contrast, SC was severely prosopagnosic. When presented with a series of black and white photographs of immediate family members, famous faces, and faces of persons previously unknown to him (matched in age/sex with the previously known faces) and asked which were familiar, he failed to identify a single photograph, claiming all were unfamiliar. Using the same stimuli in different test sessions, SC was asked to estimate the age and gender of each person, both of which he could do reliably.
When presented with an array of four black and white photographs, one of a famous face and the others of unfamiliar faces, and asked to select the famous face, he scored at chance (26.7% correct). He did a little better on a two-alternative forced choice (2AFC) version of the task (66.7% correct). He could sort only 4 of 12 well-known faces to occupation from four alternatives, and 6 of the 12 to name. All six errors on the Face-Name matching task were semantic foils. He was able to sort the same 12 famous names by occupation without error, confirming that the famous identities were familiar to him.
He scored in the normal range on the Benton Face Recognition Test (Benton et al., 1994) (43/54), which requires face matching across changes in viewpoint and lighting. On feature matching tasks he could accurately match pairs of features and feature combinations (including whole faces; 100% correct). However, his performance on each of these tasks was exceptionally slow, with both his behavior and his spontaneous verbalizations suggesting that he relied on an idiosyncratic part-based processing strategy.
Tests of Facial Expression Recognition
Recognition of expression from full faces
The first experiment investigated SC's ability to recognize facial expression from whole faces. SC was presented with faces depicting happiness, sadness, fear, surprise, anger, and disgust, drawn from Ekman and Friesen's (1976) "Pictures of Facial Affect," together with the six relevant emotion category labels. There were 10 items per expression. Faces were presented in a random sequence, and SC was asked to match each face to a label (e.g., Calder et al., 1996). The norms used were those derived by Ekman and Friesen (1976).
Face matching across expression
Here, SC's ability to match faces across expressions was tested. Ekman and Friesen (1976) faces (happy, sad, surprised, fearful, angry, and disgusted) were presented in pairs in which the face, the expression, or both might differ. The three conditions were: (1) neutral expression (same or different faces); (2) same expression (same or different faces); and (3) different expressions (same or different faces). Twenty pairs were constructed for each condition. With unlimited exposure, SC was required to say whether the faces were the "same" or "different," regardless of expression.
Recognition of expression from incomplete faces
In order to explore the basis of SC's expression judgments, the expressions that he identified most successfully in the first experiment (happiness, surprise) and least successfully (fear, anger) were used to assess his ability to identify expression from incomplete faces. The incomplete-face conditions were constructed using Adobe Photoshop by pasting skin pixels over the masked region of the face (eyes or mouth), leaving only the remaining features exposed. This was done to ensure that the stimuli were sufficiently face-like to engage face-processing strategies. An example of the stimuli is shown in Figure 1. There were 10 photographs for each expression. SC was given the same emotion category labels as in the first experiment, and again was asked to match a label to each face.
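To make the construction concrete, the following is a minimal sketch of the masking step in Python, assuming grayscale source images and rectangular feature regions; the file names, coordinates, and the mask_feature helper are hypothetical illustrations rather than the exact Photoshop workflow used in the study.

    from PIL import Image

    def mask_feature(face_path, region_box, skin_sample_box, out_path):
        # Cover one facial region (e.g., the eyes) with skin pixels sampled
        # from elsewhere in the same face, leaving the remaining features
        # exposed so the stimulus stays face-like.
        face = Image.open(face_path).convert("L")   # Ekman faces are grayscale
        skin = face.crop(skin_sample_box)           # e.g., a patch of forehead
        skin = skin.resize((region_box[2] - region_box[0],
                            region_box[3] - region_box[1]))
        face.paste(skin, region_box)                # paste over the masked region
        face.save(out_path)

    # Hypothetical example: mask the eye region of one happy face.
    mask_feature("happy_01.png", region_box=(60, 90, 196, 130),
                 skin_sample_box=(100, 20, 156, 60),
                 out_path="happy_01_mouth_only.png")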
In addition to SC, 11 staff and students (5 men and 6 women) from the School of Psychology at The University of Sydney agreed to participate. All were naïve as to the purpose of the experiment. Their ages ranged from 18 to 49 years (mean, 23.9 years).
Test of Facial Identity Recognition
Recognition memory for identity from incomplete neutral faces
Here we investigated SC's feature-based face recognition in a recognition memory task using neutral faces. The aim was to assess whether the disruption to information processing strategies would affect both facial expression recognition and facial identity recognition.
In this task SC was presented with 12 unfamiliar neutral faces for learning. He was later required to discriminate old from new faces in whole and incomplete conditions. There were two phases: learning and test. At learning, the 12 target faces (6 men and 6 women) were presented individually, in a random sequence, for four seconds each. SC was told to remember the faces for subsequent testing. Learning was verified immediately after exposure using a 2AFC recognition test, with a criterion of 11/12 correct required before testing began. In the verification test, each target was paired with a distracter of the same gender. SC was required to pick whichever of the two faces he had seen at study, responding by pressing the keyboard key (left or right) corresponding to the position of his choice on screen. The target face appeared equally often on the left and right sides, and all stimuli remained on screen until a response was made. Faces continued to be displayed and tested using this procedure until no fewer than 11 faces could be correctly recognized; each verification test used a different set of distracter faces.
At test SC was presented with pairs of faces under 7 experimental conditions: (1) eyes-nose-mouth (whole-face; E-N-M); (2) eyes-nose (E-N); (3) eyes-mouth (E-M); (4) nose-mouth (N-M); (5) eyes-only (E-ONLY); (6) nose-only (N-ONLY); and (7) mouth-only (M-ONLY) faces. An example of the test stimuli is shown in Figure 2. In a 2AFC memory test similar to that which followed learning, the paired stimuli were presented simultaneously and SC was required to decide which of the two had been presented previously. Trials involving each of the feature-deletion conditions were run in separate blocks. The order of the blocks and the presentation of the target-distracter pairings within each block were determined randomly. There was no time limit, but speed and accuracy were stressed. For further details see Stephan and Caine (submitted).
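The blocked 2AFC test logic can be summarized in the short sketch below. The stimulus lists and the present_pair_and_wait display routine are hypothetical placeholders (a real implementation would use an experiment package such as PsychoPy), but the block randomization and left/right counterbalancing follow the procedure just described.

    import random

    # The seven feature-availability conditions described above.
    CONDITIONS = ["E-N-M", "E-N", "E-M", "N-M", "E-ONLY", "N-ONLY", "M-ONLY"]

    def run_test_phase(targets, distracters):
        # targets/distracters: dicts mapping each condition to a list of 12
        # stimuli; returns one (condition, target, correct) record per trial.
        results = []
        blocks = CONDITIONS[:]
        random.shuffle(blocks)                        # block order determined randomly
        for condition in blocks:
            pairs = list(zip(targets[condition], distracters[condition]))
            random.shuffle(pairs)                     # pairings randomized within block
            sides = [True, False] * (len(pairs) // 2) # target equally often left/right
            random.shuffle(sides)
            for (target, distracter), target_left in zip(pairs, sides):
                left, right = (target, distracter) if target_left else (distracter, target)
                # Hypothetical display call: shows both faces until a keypress
                # (no time limit) and returns "left" or "right".
                choice = present_pair_and_wait(left, right)
                results.append((condition, target, (choice == "left") == target_left))
        return results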
Data Analysis
Where published norms were not available, SC's scores were compared to the 95-percent confidence interval calculated from the comparison group data for the appropriate condition of each task. This is equivalent to a hypothesis testing procedure with alpha set at .05.
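As an illustration, this check can be implemented as below, assuming a normal-approximation confidence interval on the comparison group mean (mean ± 1.96 standard errors); the scores shown are invented for illustration and are not the study data.

    import statistics

    def outside_95ci(case_score, group_scores):
        # True if the case score falls outside the group's 95% CI
        # (mean +/- 1.96 SE), i.e., differs from controls at alpha = .05.
        mean = statistics.mean(group_scores)
        se = statistics.stdev(group_scores) / len(group_scores) ** 0.5
        return not (mean - 1.96 * se <= case_score <= mean + 1.96 * se)

    # Invented example: accuracy (%) of 11 comparison participants in one condition.
    group = [90, 80, 100, 90, 80, 90, 100, 70, 90, 80, 90]
    print(outside_95ci(40, group))   # True: the case score is reliably lower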
RESULTS
Recognition of Expression from Full Faces
In addition to the difficulty SC experienced identifying familiar faces, he was also quite impaired in his ability to recognize some, but not all, facial expressions (Table 2). He had no difficulty identifying happiness and was a little less certain identifying surprise, but he was impaired at recognizing sadness, disgust, fear, and anger.
Face Matching Across Expression
SC was usually able to match faces across expression (55/60 correct), but his performance was abnormally slow. He would look back and forth between the faces repeatedly before committing to a decision, suggesting a serial (i.e., feature-by-feature) search strategy. Thus, although SC could perform the task, his performance might well have been more impaired had exposure duration been limited.
Recognition of Expression from Incomplete Faces
To evaluate the cues SC may rely on to interpret facial expression, we compared his ability to identify expressions from whole faces with his ability to do so from incomplete faces. These data are presented in Figure 3. Note that the pattern of results for the comparison group is quite consistent, suggesting that, overall, whole-face expressions were more accurately recognized than part-face expressions. For SC there was no reliable pattern.
For the comparison group, changes in information availability affected recognition accuracy for only some expressions. Repeated-measures comparisons revealed that happiness and surprise could be reliably recognized from whole and incomplete (eyes-only and mouth-only) faces alike (F(2,20) = 3.10, p > .05; F(2,20) = 1.47, p > .05, respectively). However, compared to performance in the whole-face condition, neither fear nor anger could be accurately recognized from the mouth alone (F(1,10) = 35.20, p < .001; F(1,10) = 16.50, p < .005). There was no difference in recognition accuracy between whole faces and eyes-only faces for either of these expressions (all p > .05).
To assess SC's recognition performance we compared his results to the 95-percent confidence interval for each condition calculated from the comparison data (Table 3). As shown in Figure 3 and Table 3, SC's performance on this task was quite erratic relative to the comparison group. He was able to recognize happiness and surprise from the whole face, and anger and surprise from the eyes, with performance on the latter actually better than controls. He did as well as the comparison group at recognizing surprise and fear from the mouth, but was much worse at recognizing happiness or fear from the eyes, and anger from the mouth.
In order to assess any decision biases arising from incomplete faces, the percentage of each incorrect label usage was tallied for each condition. These are presented in Table 4 for (A) SC and (B) the comparison group. For both SC and the comparison group, covering the mouth in fearful faces biased responses toward the surprise label. This reflects common confusions typically seen in expression recognition (Young et al., 1997). Indeed, in SC's case all errors on fearful eyes-only faces arose from incorrect use of the surprise label. Other errors also followed the confusability pattern proposed by Young et al. (1997): angry part faces were incorrectly labeled as disgusted, and fearful faces as surprised.
Recognition Memory for Identity from Incomplete Neutral Faces
It took SC 9 learning trials to reach the performance criterion, whereas the comparison group required an average of 1.4. Some preservation of the ability to process and discriminate faces from their internal features must therefore be inferred. However, the large number of trials required indicates that SC's performance was nonetheless impaired: incomplete or degraded representations of faces may be elaborated through repeated exposure, thus leading to improved memory performance. This contrasts with his impaired performance on the Faces I and II tasks of the WMS-III, which involve a single learning trial and test session (see Table 1).
In order to assess SC's identity recognition performance from incomplete faces, his scores were compared to the 95-percent confidence interval for each condition, calculated from the comparison group. As shown in Figure 4, SC performed better than the comparison group in identifying nose-only faces and much more poorly in identifying faces in all other feature-deletion conditions. Unlike the comparison group, who improved in the eyes-available conditions (in particular the eyes-mouth condition), SC showed a mouth-available disadvantage. His response times were approximately twice those of the comparison group.
DISCUSSION
The goal of this study was to explore facial affect decoding in a prosopagnosic patient. Whereas the relative importance of configural versus featural information has been the subject of much controversy and debate in the area of identity recognition, much less is known about the role of each type of information in expression recognition. The critical questions addressed were what processing strategies are involved in the recognition of emotional expressions, and whether the damaged configural processes observed in prosopagnosia can be linked to impaired expression recognition.
Assessment of Use of Visual Information: Comparison Group
The results from the expression tasks show that expression recognition is sensitive to modifications in the available information. Data from normal participants demonstrated that some expressions are more easily identified from the whole face, whereas others can be easily recognized from either whole or part information. For happiness and surprise, information from the eyes or mouth alone was sufficient for accurate recognition, whereas for anger and fear, information from the whole face or the eyes was necessary. For these latter expressions, a highly specified change in the eyes uniquely signaled each expression, unlike the mouth, where the change was less specific (e.g., overlapping between surprise and fear). The eyes were more informative than the mouth for all expressions tested (see also, Adolphs et al., 2005; Baron-Cohen et al., 1997).
Whereas configural processes have been implicated in the recognition of expression, these results suggest that features alone can carry an effective message. Which kind of information is important depends on the emotion in question, and appears to relate to the unique informational change associated with each expression: some facial movements retain their meaning in isolation as well as in combination with other facial actions (e.g., an upturned mouth or smile indicating happiness), whereas other isolated feature movements convey no specific emotional meaning (e.g., a taut mouth as part of an angry expression). In the latter case, the context of other feature movements is required to form a recognizable expression.
Taken together, the results from the expression tasks suggest that feature-based processing of emotional expression may be more salient in the recognition of happiness, whereas configural-based processing is more useful in the recognition of fear and anger.
With regard to identity recognition, accuracy was differentially affected by information availability. Consistent with previous findings, and unsurprisingly, availability of the whole face maximized recognition performance (Farah et al., 1998; Moscovitch & Moscovitch, 2000). Among the features, the eyes were the most informative, especially in combination with the mouth (see also, Haig, 1985; O'Donnell & Bruce, 2001; Sadr et al., 2003; Shepherd et al., 1981). The eyes and mouth may together form the key elements of configural processing, but they may also form a meaningful unit owing to their concurrent roles in speech processing (lip reading and direction of eye gaze) and expression identification (through a combination of brow and mouth movement).
Assessment of Use of Visual Information: SC
In the expression recognition task SC was sometimes able to use individual features in incomplete faces to identify facial expression (e.g., surprise from the eyes; happiness and surprise from the mouth) but was, on the whole, less successful than normal participants at doing so. More striking, however, was that his performance on whole faces was actually worse than his performance in the incomplete face conditions for all expressions except happiness. SC failed to make normal use of facial information when recognizing emotional expressions of surprise, fear, and anger. Surprisingly, the pattern of SC's errors, especially for whole faces, was very similar to that of the comparison group: surprise was typically mistaken for fear (and vice versa), and anger for disgust. This suggests that there may be some qualitative overlap in the information extracted by SC and the comparison group. Errors in the incomplete face conditions also showed similar misattributions, especially of eye information, in the comparison group and SC.
In the test of face recognition, unlike the controls, SC did not show a recognition accuracy benefit when cued by whole faces or eyes-available faces. Instead, he showed a detrimental effect of having the mouth cue available, either alone or in combination with other featural information. These results, together with the expression recognition findings, support the conclusion that SC does not simply rely on a feature- or part-based processing strategy for recognition. Indeed, if SC had completely disregarded the whole face, and recognition had depended entirely on part-based analysis, similar performance would be expected for whole and incomplete faces in both the identity and expression tasks. This challenges the notion that the ability to process configural information is simply lost in prosopagnosia and compensated for by reliance on a part-based processing strategy.
Rather, SC's results are reminiscent of the findings of Farah et al. (1995a), whose prosopagnosic patient performed better at matching inverted faces than upright faces. They argued that this was evidence that configural processing of faces may control face-viewing behavior even when impaired (see also, Boutsen & Humphreys, 2002; de Gelder & Rouw, 2000a, 2000b; Farah et al., 1995b; Rouw & de Gelder, 2002, for findings of paradoxical configural effects in prosopagnosia). The present results provide further support for the notion that configural processing is so prepotent in face perception that even when it is profoundly impaired, as in the present case, stimuli that would ordinarily recruit this form of processing continue to engage it, such that the information conveyed by isolated features becomes unavailable. SC is unable to make use of diagnostic featural information when it is presented in the whole-face configuration. The paradoxical configural effect is here replicated in another case of prosopagnosia, and extended for the first time to expression recognition and new face learning/memory.
Alternatively, one might argue that SC's poor performance on whole faces arises from a deficit in attention allocation (Adolphs et al., 2005; Bukach et al., 2006). For instance, if SC's viewing is constrained to the eyes and/or nose when presented with whole faces, with limited processing of the mouth (or vice versa), then the addition of an unattended cue could act as a distracter, thereby inhibiting recognition. However, this explanation is unlikely to completely account for the present findings: SC was able to recognize happiness equally well from the whole face and from the mouth, but not from the eyes, suggesting that the mouth must have been processed in the whole face condition. His impairment may perhaps arise from a combination of both factors. Assessing whether attentional cuing enhances face recognition (and possibly fosters holistic or featural processing), or alternatively recording SC's eye movements during a series of face recognition tasks, would help evaluate this possibility (cf. Bloom & Mudd, 1991).
Unfortunately, because the neuroanatomical damage sustained by SC was so extensive, specific conclusions regarding the underlying neuroanatomy cannot be drawn. However, the findings reported here substantiate a consistent theme in prosopagnosia research, namely that the disorder can be linked to impaired configural processing. Importantly, it is this information that is necessary both for accurate face identity recognition and for the recognition of some emotional expressions (especially fear and anger). Although SC's configural processing is impaired, it was demonstrated not to be completely inoperative: the presence of the full face interfered with part recognition. Significantly, this dramatic reversal of the normal pattern highlights the automaticity with which configural processing mechanisms are engaged, even when defective, whenever the visual system is presented with a face-like stimulus.
CONCLUSION
Taken together, these findings suggest that although face parts may carry useful information, the recognition of most facial expressions and the recognition of facial identity require the capacity to process configural information available from the whole-face stimulus. This challenges standard models of face processing, which are based on the notion that different aspects of a face (i.e., identity, expression, and gender) are processed independently using unique information. Identity and expression seem to share a common processing stage linked to configural information. In prosopagnosia, the deficit in identity recognition, expression recognition, and new face learning may be due not to a complete absence of configural processing but rather to impaired configural processing overriding an otherwise intact ability to utilize other processing routes.
ACKNOWLEDGMENTS
The authors sincerely thank SC for his time. This work was approved by the Human Research Ethics Committee at the University of Sydney, Australia. There are no conflicts of interest.