
Neural substrates of object identification: Functional magnetic resonance imaging evidence that category and visual attribute contribute to semantic knowledge

Published online by Cambridge University Press:  01 March 2009

CHRISTINA E. WIERENGA*
Affiliation:
Department of Veterans Affairs Rehabilitation Research and Development, Brain Rehabilitation Research Center at the Malcom Randall VA Medical Center, Gainesville, Florida; Department of Clinical and Health Psychology, University of Florida, Gainesville, Florida; McKnight Brain Institute, University of Florida, Gainesville, Florida
WILLIAM M. PERLSTEIN
Affiliation:
Department of Clinical and Health Psychology, University of Florida, Gainesville, Florida; McKnight Brain Institute, University of Florida, Gainesville, Florida
MICHELLE BENJAMIN
Affiliation:
Department of Clinical and Health Psychology, University of Florida, Gainesville, Florida; McKnight Brain Institute, University of Florida, Gainesville, Florida
CHRISTIANA M. LEONARD
Affiliation:
McKnight Brain Institute, University of Florida, Gainesville, Florida; Department of Neuroscience, University of Florida, Gainesville, Florida
LESLIE GONZALEZ ROTHI
Affiliation:
Department of Veterans Affairs Rehabilitation Research and Development, Brain Rehabilitation Research Center at the Malcom Randall VA Medical Center, Gainesville, Florida; Department of Clinical and Health Psychology, University of Florida, Gainesville, Florida; McKnight Brain Institute, University of Florida, Gainesville, Florida; Department of Neurology, University of Florida, Gainesville, Florida
TIM CONWAY
Affiliation:
Department of Veterans Affairs Rehabilitation Research and Development, Brain Rehabilitation Research Center at the Malcom Randall VA Medical Center, Gainesville, Florida; Department of Clinical and Health Psychology, University of Florida, Gainesville, Florida; McKnight Brain Institute, University of Florida, Gainesville, Florida
M. ALLISON CATO
Affiliation:
Division of Neurology, Nemours Children’s Clinic, Jacksonville, Florida
KAUNDINYA GOPINATH
Affiliation:
Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, Florida; Department of Radiology, University of Texas Southwestern Medical School, Dallas, Texas
RICHARD BRIGGS
Affiliation:
Department of Radiology, University of Texas Southwestern Medical School, Dallas, Texas; Department of Radiology, University of Florida, Gainesville, Florida
BRUCE CROSSON
Affiliation:
Department of Veterans Affairs Rehabilitation Research and Development, Brain Rehabilitation Research Center at the Malcom Randall VA Medical Center, Gainesville, Florida; Department of Clinical and Health Psychology, University of Florida, Gainesville, Florida; McKnight Brain Institute, University of Florida, Gainesville, Florida
*Correspondence and reprint requests to: Christina E. Wierenga, UCSD Department of Psychiatry, VA San Diego Healthcare System, Psychology Service (151B), 3350 La Jolla Village Drive, San Diego, California 92161. E-mail: cwierenga@ucsd.edu

Abstract

Recent findings suggest that neural representations of semantic knowledge contain information about category, modality, and attributes. Although an object’s category is defined according to shared attributes that uniquely distinguish it from other category members, a clear dissociation between visual attribute and category representation has not yet been reported. We investigated the contribution of category (living and nonliving) and visual attribute (global form and local details) to semantic representation in the fusiform gyrus. During functional magnetic resonance imaging (fMRI), 40 adults named pictures of animals, tools, and vehicles. In a preliminary study, identification of objects in these categories was differentially dependent on global versus local visual feature processing. fMRI findings indicate that activation in the lateral and medial regions of the fusiform gyrus distinguished stimuli according to category, that is, living versus nonliving, respectively. In contrast, visual attributes of global form (animals) were associated with higher activity in the right fusiform gyrus, while local details (tools) were associated with higher activity in the left fusiform gyrus. When both global and local attributes were relevant to processing (vehicles), cortex in both left and right medial fusiform gyri was more active than for other categories. Taken together, results support distinctions in the role of visual attributes and category in semantic representation. (JINS, 2009, 15, 169–181.)

Type
Research Articles
Copyright
Copyright © INS 2009

INTRODUCTION

The organization of semantic knowledge has been inferred from category-specific impairments involving a selective deficit of knowledge for a particular category of information, such as living or nonliving objects (Damasio et al., 1996; De Renzi & Lucchelli, 1994; Hillis & Caramazza, 1991; Warrington & Shallice, 1984), animals (Barry & McHattie, 1995; Laws et al., 1995), tools (Hillis et al., 1990; Sacchett & Humphreys, 1992), fruits and vegetables (Farah & Wallace, 1992), body parts (Goodglass & Wingfield, 1993), and medical items/instruments (Crosson et al., 1997). Although category-specific deficits have traditionally been interpreted to suggest that semantic knowledge is represented primarily according to category, recent research suggests that semantic organization is far more complicated. An integration of case study and neuroimaging findings suggests that semantic processing occurs in a topographically distributed neural network involving attributes relevant to identifying objects (e.g., function, shape, color, emotional connotation), the semantic category of objects (e.g., living vs. nonliving), and the modality in which information is processed (i.e., visual vs. verbal) (Boronat et al., 2005; Chao et al., 1999; Crosson et al., 2000; Hart & Gordon, 1992; Ishai et al., 1999; Vandenberghe et al., 1996; Warrington & Shallice, 1984).

An object’s category is thought to be defined according to shared attributes that uniquely identify and distinguish it from other category members (see Taylor et al., 2007, for review). Anatomically, this concept suggests that one highly important topographic location for processing a specific category will be the area where information about the shared attributes converges. Evidence for the role of attributes in the semantic system comes from case studies of patients with impaired knowledge of object features, such as functional knowledge (Laws et al., 1995; Sheridan & Humphreys, 1993) and visual perceptual features (Coltheart et al., 1998; De Renzi & Lucchelli, 1994; Gainotti & Silveri, 1996; Hart & Gordon, 1992; Silveri & Gainotti, 1988), independent of category or modality of presentation. It is generally thought that knowledge of sensory or motor attributes is stored in cortical regions proximal to the primary cortical areas involved in the perception or mediation of these attributes (Boronat et al., 2005; Crosson et al., 2000); for instance, stored knowledge of object use has been localized to the ventral premotor area (Beauchamp et al., 2003; Boronat et al., 2005; Chao & Martin, 2000; Chao et al., 1999; Devlin et al., 2002; Gerlach et al., 2000, 2002; Martin et al., 1995, 1996), and knowledge of action words that reference a body part follows the topographical organization of the motor cortex (Hauk et al., 2004; Pulvermüller, 2001; Pulvermüller et al., 2001). Several studies have reported a convergence of activity in the premotor cortex (BA 6/44) for living and nonliving categories (e.g., tools and fruits) that share an attribute (e.g., manipulability), suggesting that activation was not attributable solely to a category effect but rather to action-related knowledge (Gerlach et al., 2002; Kellenbach et al., 2003; Kraut et al., 2002). Although it has been hypothesized that living and nonliving objects differ primarily in terms of perceptual and functional attributes, respectively (Warrington & Shallice, 1984), a clear dissociation between visual attribute and category representation has not yet been reported.
The distinction between category and attribute in semantic representation is not only conceptually significant for models of semantic memory but also may inform our ability to assess semantic function in aging and in disease states affecting semantic memory, such as Alzheimer’s disease (AD) or the semantic dementias. In fact, it is highly debated whether the semantic memory deficit in AD reflects the loss of semantic knowledge for particular categories and concepts or the loss of knowledge of perceptual features and attributes (e.g., physical features, function) (Alathari et al., 2004; Done & Hajilou, 2005; Harley & Grant, 2004).

Recent clinical and experimental findings indicate that the inferior temporal lobe, especially the fusiform gyrus, is one focal point for convergence and integration of visual semantic information. Evidence indicates a reliable difference along the medial–lateral dimension of the fusiform gyrus, bilaterally, for the categorical distinction between nonliving and living things, respectively (Chao et al., 1999; Ishai et al., 1999; Weisberg et al., 2007; Whatmough et al., 2002). For example, pictures of animals activate the lateral fusiform gyrus to a greater degree than pictures of tools, whereas tools activate the medial fusiform gyrus more than animals. Paradoxically, however, the results of these categorical manipulations have been interpreted as reflecting differences in the processing of visual attributes, because previous reports suggest that processing of categories is unlikely to be spatially distinct from processing of attributes. Testing this assumption requires dissociating the processing of category from the processing of visual attribute.

The distinction between global and local visual features allows a direct test of such a dissociation of category and attribute in the fusiform gyrus. Global visual attributes include descriptors such as the basic shape of an object (i.e., an object’s silhouette), while local visual attributes include the specific details of an object’s visual features. For example, an airplane can be recognized by its outline or global form, but airplanes also have local features such as windows, engines, and so forth. Global visual features are predominantly processed in the ventral visual stream of the right hemisphere, and local visual features are predominantly processed in the ventral visual stream of the left hemisphere (Delis et al., 1992; Doyon & Milner, 1991). Recent research demonstrates that animals can be more easily distinguished in the absence of local detail (i.e., on the basis of global shape), while the presence of local detail is more important for identifying tools (Vannucci et al., 2001). However, dissociating object category from visual attribute requires either a nonliving category whose items are processed largely by global visual form (i.e., sharing a visual attribute with a living category) or a living category whose items are processed mostly by local visual detail. We predicted that vehicles would meet the former set of requirements.

Hence, the purpose of this research was twofold: to examine the role of global and local visual attributes in object identification and to determine whether brain structures employed in processing semantic categories (animals, tools, and vehicles) could be dissociated from those employed in processing visual features (global visual form and local visual features). Two experiments were performed: Experiment 1 was a preliminary study to determine the degree to which vehicles could be identified in the absence of visual detail as compared to animals and tools. Experiment 2 was a functional magnetic resonance imaging (fMRI) study to determine the fusiform gyrus areas differentially involved in processing these categories and attributes. In Experiment 1, visual detail was manipulated in a parametric fashion by filtering out varying degrees of high spatial frequency information (i.e., local details) while preserving the global form of objects in the three categories of interest: animals, tools, and vehicles. This technique permitted us to investigate the interaction of category and visual attributes in semantic organization. We hypothesized that both animals and vehicles are identified by low spatial frequency information (global form or contour), whereas tools require more high spatial frequency information (local details) for identification.

EXPERIMENT 1

Methods

Participants

Twenty (5 male and 15 female) healthy young adults (age range = 21–30 years, mean age = 24.6 years; mean education = 16.9 years) with normal or corrected-to-normal vision participated in the pilot study. All participants were native English speakers recruited from the University of Florida faculty, staff, and students and from the Gainesville, Florida, community. All participants were strongly right-handed (Edinburgh Handedness Inventory; Oldfield, 1971). Informed consent was obtained from participants according to guidelines established by the Health Science Center Institutional Review Board at the University of Florida in accordance with the ethical standards set forth in the 1964 Declaration of Helsinki.

Stimuli, design, and task

Seventy-five grayscale photographs of real-life objects from the categories of animals, tools, and vehicles (25 photographs/category) were included. Each photograph was filtered at nine levels by removing high spatial frequency content (i.e., details) from the original image using the Gaussian Blur filter (Adobe Photoshop 7.0) in increments of 3 pixels; for example, the third level of filtering corresponds to a blur radius of 9 pixels, and the ninth level to a blur radius of 27 pixels. Each photograph was presented for 200 ms at each of 10 resolution levels (the nine filtered versions plus the original), shown consecutively in order of increasing high spatial frequency content (Figure 1). Participants were instructed to press one of two buttons according to whether or not they could identify the object, thus providing an object recognition response time (RT). After the identification response, participants were asked to name the object and were given feedback on naming accuracy for each object. The RT for the object recognition response at the resolution level at which the object was first correctly named was included in the reaction time analyses.
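To make the filtering manipulation concrete, the following short script reproduces the nine-level blurring scheme. This is an illustrative sketch rather than the authors’ procedure: it assumes Pillow, whose GaussianBlur radius only approximates the Photoshop 7.0 Gaussian Blur setting, and the file path is hypothetical.

```python
# Sketch of the nine-level spatial filtering step, assuming Pillow.
from PIL import Image, ImageFilter

BLUR_STEP_PX = 3   # blur radius grows in 3-pixel increments
N_LEVELS = 9       # nine filtered versions per photograph

def make_filter_series(path):
    """Return ten images ordered from most filtered (radius 27 px)
    to the unfiltered original, matching the presentation order."""
    original = Image.open(path).convert("L")   # grayscale
    series = []
    for level in range(N_LEVELS, 0, -1):       # level 9 down to level 1
        radius = level * BLUR_STEP_PX          # e.g., level 3 -> 9 px
        series.append(original.filter(ImageFilter.GaussianBlur(radius)))
    series.append(original)                    # tenth, unfiltered image
    return series

stimuli = make_filter_series("animals/horse.png")  # hypothetical file
```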

Fig. 1. Schemata of an ascending sequence (decreasing spatial filtering, increasing high spatial frequency content) of nine filtering levels of a stimulus in each category for Experiment 1. Photographs were equated for size. Items in the three categories were balanced for familiarity, concreteness, and frequency in English (Coltheart, 1981; Kucera & Francis, 1967). Each stimulus was presented in canonical orientation at the center of the computer monitor. The presentation order of the items was randomized for each participant. A training phase, in which different photographs were used, preceded the experimental phase.

Results and Discussion

For each participant, a threshold value was established for each object according to the level of spatial filtering at which the item was initially identified. After removing one outlier from the tool category, a within-subjects repeated-measures analysis of variance (ANOVA) conducted on mean accuracy rates at the original resolution level for animals (97.2% correct, SD = 5.37, range = 84–100%), tools (99.4% correct, SD = 1.53, range = 92–100%), and vehicles (98.6% correct, SD = 2.68, range = 92–100%) indicated no significant effect for category [F(2,38) = 3.57, p = .055]. A repeated-measures ANOVA was conducted on threshold values for correctly identified objects in the three categories (animals, tools, and vehicles). A significant effect for category was found [F(2,38) = 81.54, p < .001]. Planned contrasts revealed that each category differed significantly from the other two categories, indicating that identification of the three categories is differentially affected by the amount of local detail (i.e., high spatial frequency content). Identification of animals required the least high spatial frequency information (mean threshold = 4.84, range = 3.35–6.60), tools required the most (mean threshold = 3.09, range = 2.25–3.96), and vehicles were intermediate between animals and tools (mean threshold = 3.72, range = 2.08–5.24) [animals vs. tools: t(19) = 10.52, p < .001; animals vs. vehicles: t(19) = 8.37, p < .001; vehicles vs. tools: t(19) = 5.71, p < .001] (Figure 2). In contrast, a repeated-measures ANOVA revealed that RT for the initial correct identification of objects did not differ significantly [F(2,38) = 0.642, p = .532] between categories (animals: mean RT = 1295 ms, SD = 697 ms; vehicles: mean RT = 1210 ms, SD = 428 ms; tools: mean RT = 1177 ms, SD = 470 ms). Based on reaction time, results suggest that the three categories did not differ in difficulty of identification.
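A minimal re-analysis sketch of this threshold comparison is shown below. It assumes the Experiment 1 thresholds are available in a long-format table with hypothetical columns subject, category, and threshold (one mean threshold per subject per category); statsmodels’ AnovaRM and SciPy’s paired t test stand in for whatever statistics package the authors used.

```python
# Sketch: within-subjects ANOVA and planned contrasts on thresholds.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Hypothetical file: one row per subject x category mean threshold.
df = pd.read_csv("thresholds.csv")

# Repeated-measures ANOVA: does identification threshold vary by category?
anova = AnovaRM(df, depvar="threshold", subject="subject",
                within=["category"]).fit()
print(anova)

# Planned pairwise contrasts (paired t tests), mirroring the reported comparisons.
wide = df.pivot(index="subject", columns="category", values="threshold")
for a, b in [("animals", "tools"), ("animals", "vehicles"), ("vehicles", "tools")]:
    t, p = stats.ttest_rel(wide[a], wide[b])
    print(f"{a} vs. {b}: t({len(wide) - 1}) = {t:.2f}, p = {p:.3f}")
```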

Fig. 2. Object identification by category as a function of spatial frequency content for Experiment 1, showing filtering levels at which objects were recognized.

These findings indicate that the contribution of global form versus local detail to the identification of objects differs by semantic category. Animals required the least visual detail for identification, and tools required the most, while vehicles were intermediate in the amount of detail required. Current results form a basis from which to examine the neural substrates of semantic information to investigate whether the dimensions of category and visual attribute can be dissociated at the neural level of semantic representation. Based on hemispheric differences in processing global versus local features, we anticipate that objects that can be identified from global form (e.g., animals) may evoke stronger activity in the right fusiform gyrus, while objects that require greater local detail for identification (e.g., tools) may evoke stronger activity in the left fusiform gyrus. Objects that require both global form and local details for identification (e.g., vehicles) are anticipated to activate the fusiform gyrus bilaterally. A categorical distinction is also expected. As living things, animals are expected to exhibit greater activity in the lateral fusiform gyrus than tools, and as nonliving things, tools and vehicles are expected to exhibit greater activity in the medial fusiform gyrus than animals.

EXPERIMENT 2

Methods

Participants

Twenty neurologically normal young adults (10 male and 10 female; age = 20–34 years, mean = 25.1 years; mean education = 15.85 years) and 20 normal older adults (10 male and 10 female; age = 68–84 years, mean = 74.9 years; mean education = 15.65 years) participated as part of a larger study assessing age-related changes in word retrieval and the neural substrates of semantic knowledge. Older and younger participants did not differ significantly according to level of education [t(38) = −0.28, p = .779]. All participants were native English speakers recruited from the University of Florida faculty, staff, and students; the Malcom Randall VAMC; and the Gainesville, Florida, community. All participants were strongly right-handed (Edinburgh Handedness Inventory; Oldfield, 1971). Potential participants were excluded if they reported a history of neurological disease, dementia or mild cognitive impairment, cardiovascular disease, uncontrolled hypertension, a Diagnostic and Statistical Manual of Mental Disorders-IV Axis I diagnosis, or poor visual acuity after correction. Additionally, participants were excluded if they had metal in their body or if they were taking psychoactive prescription medications. Participants were also excluded if they obtained less than 27/30 on the Mini-Mental State Exam (Folstein et al., 1975) or less than 15 on total recall over three trials of the Hopkins Verbal Learning Test (Shapiro et al., 1999). Female participants were excluded if they were pregnant or trying to become pregnant. Participants were instructed to abstain from caffeine on the day of the scan. Informed consent was obtained from participants according to guidelines established by the Health Science Center Institutional Review Board at the University of Florida in accordance with the ethical standards set forth in the 1964 Declaration of Helsinki.

fMRI naming task

Participants alternated between an overt picture naming task and a passive viewing task during eight functional imaging runs. During the visual naming task, 20 grayscale photographs of animals, 20 grayscale photographs of tools or implements, and 20 grayscale photographs of vehicles selected from Experiment 1 were presented. Each photograph was presented in its original resolution twice, for a total of 120 naming trials during the scanning session. Photographs were chosen based on the Experiment 1 results to further experimentally manipulate category and visual attribute: photographs of animals and vehicles were equated for the spatial frequency content necessary for object identification [t(38) = 0.19, p = .850] by eliminating the five items from each of these categories with the lowest threshold values (i.e., requiring the most local detail for identification). In contrast, the four items with the highest threshold values (i.e., requiring the least local detail for identification) and the one outlier based on accuracy performance were eliminated from the tool category. Subsequently, both animals and vehicles differed significantly from tools, such that the chosen tools required significantly greater high spatial frequency content than animals [t(38) = 4.26, p < .001] or vehicles [t(38) = 4.10, p < .001] for identification. Two sets of imaging runs (runs 1–4 and runs 5–8) were created. Presentation order of pictures was pseudorandomized but remained fixed within a run (15 pictures per run), and all 60 pictures were presented once in each imaging set. Imaging set and run order within each set were counterbalanced, and all 60 items were presented once before being repeated. Thus, repetition distance of items varied pseudorandomly.
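The item-trimming logic can be illustrated as follows. This sketch is not the authors’ code: per-item identification thresholds are simulated with plausible means from Experiment 1, the item names are hypothetical, and the independent-samples t test mirrors the reported check across the 20 remaining items per category.

```python
# Sketch of selecting stimuli so that animals and vehicles are matched on
# the spatial-frequency threshold needed for identification.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated per-item thresholds (hypothetical items; means from Experiment 1).
animal_thresholds = {f"animal_{i}": t for i, t in enumerate(rng.normal(4.8, 0.8, 25))}
vehicle_thresholds = {f"vehicle_{i}": t for i, t in enumerate(rng.normal(3.7, 0.8, 25))}

def drop_extreme_items(thresholds, n_drop, drop_lowest=True):
    """Remove the n_drop items at one extreme of the threshold distribution."""
    ranked = sorted(thresholds, key=thresholds.get, reverse=not drop_lowest)
    return {item: thresholds[item] for item in ranked[n_drop:]}

# Animals and vehicles: drop the five lowest-threshold items
# (those requiring the most local detail for identification).
animals = drop_extreme_items(animal_thresholds, 5, drop_lowest=True)
vehicles = drop_extreme_items(vehicle_thresholds, 5, drop_lowest=True)

# Check the match across the 20 remaining items per category (df = 38).
t, p = stats.ttest_ind(list(animals.values()), list(vehicles.values()))
print(f"animals vs. vehicles after trimming: t = {t:.2f}, p = {p:.3f}")
```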

Pictures were presented one at a time for 3400 ms each, in an event-related format, with participants naming the picture aloud (Figure 3). An event-related design was chosen to allow for overt responding so that performance accuracy and RT could be assessed. Between trials, participants were instructed not to think any words to themselves, to rest quietly, and to look at abstract patterns derived by pixelating photographs from the three categories using Adobe PhotoShop 7.0. Overt verbal responses were monitored using a bidirectional dual microphone to passively cancel some of the background scanner noise (Commander XG, Resonance Technology, Inc., Northridge, California). Microphone output was run through the penetration panel and connected to a Solo 2500 LS Laptop Computer (Gateway, Inc., Irvine, California) in the scanner control room, where Cool Edit software recorded verbal responses from each scanning run. These responses were scored for accuracy and reaction time off-line.

Fig. 3. Schemata of the overt naming task design for Experiment 2. Photographs were equated for size and resolution, and members of each category were matched for frequency of occurrence in English (Kucera & Francis, 1967). Inter-trial intervals were pseudorandomly varied between 13,600 ms (8 images), 15,300 ms (9 images), 17,000 ms (10 images), and 18,700 ms (11 images) to mitigate effects of periodic or quasiperiodic physiological noise and to allow the hemodynamic response (HDR) to return to baseline before the participant spoke again, preventing contamination of the latter part of the HDR by movement during the subsequent response. Experimental runs began and ended with a rest interval. There were 15 trials in an experimental run (five trials from each category), and eight runs were administered. Each 15-trial run was 323 s in length and acquired 188 functional images for each slice. Stimuli were projected onto a translucent screen above the participant’s head via the Integrated Functional Imaging System using E-Prime Version 1 software.
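A sketch of how such a jittered inter-trial-interval schedule might be generated is shown below. Only the TR and the four ITI durations are taken from the design above; the cycling scheme for balancing the four durations within a run is an assumption.

```python
# Sketch of a pseudorandom inter-trial-interval (ITI) schedule.
import random

TR_MS = 1700                       # repetition time
ITI_IN_IMAGES = [8, 9, 10, 11]     # 13600, 15300, 17000, 18700 ms
N_TRIALS_PER_RUN = 15

def make_iti_schedule(seed=None):
    """Return a shuffled list of ITIs in ms, one per trial in a run."""
    rng = random.Random(seed)
    # Cycle through the four durations so each appears roughly equally often,
    # then shuffle so the order is unpredictable to the participant.
    itis = [ITI_IN_IMAGES[i % len(ITI_IN_IMAGES)] for i in range(N_TRIALS_PER_RUN)]
    rng.shuffle(itis)
    return [n_images * TR_MS for n_images in itis]

print(make_iti_schedule(seed=1))
```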

Image acquisition

Images were acquired on a 3T Siemens Allegra head-only scanner using an eight-element birdcage radio frequency coil (MRI Devices, Inc., Pewaukee, WI) in quadrature mode. Functional images were obtained with a single-shot gradient-echo echo planar imaging (EPI) sequence: 24 cm field of view (FOV), 64 × 64 matrix, 3.75 × 3.75 mm in-plane resolution, repetition time (TR) = 1700 ms, echo time (TE) = 30 ms, flip angle (FA) = 70°. Twenty-eight 4- to 5-mm-thick sagittal slices covering the whole brain were acquired. A high-resolution T1-weighted three-dimensional (3D) Magnetization Prepared RApid Gradient Echo (MP-RAGE) scan (TE = 4.13 ms, TR = 2000 ms, number of excitations (NEX) = 1, FOV = 24 cm, FA = 8°, matrix size = 256 × 256, one hundred and twenty-eight 1.3-mm slices) was obtained for anatomic reference. Head motion was minimized using foam padding.

Data analysis

Behavioral data. Performance on the overt naming task during fMRI was compared for the three categories using 2-group (young, old) × 3-category (animals, tools, vehicles) ANOVAs. Follow-up paired samples t tests were also performed for accuracy and response latency.

Neuroimaging data. fMRI data were analyzed and overlaid onto structural images with the Analysis of Functional NeuroImages (AFNI) program (National Institutes of Health; Cox, 1996). The first six images of all functional runs were discarded to ensure attainment of steady state. To minimize effects of head motion, time-series images were spatially registered in 3D space. Images were visually inspected for gross artifacts, and quality control procedures were applied to the data to detect residual motion or susceptibility artifact. All imaging runs were normalized to account for differences in gross signal, detrended of low-frequency signal drifts (Birn et al., 2001), and concatenated into a single time series. For each voxel, the observed fMRI intensity time series for each category was modeled as the convolution of the experimental stimulus vector (comprising 40 picture presentations) and the estimated best-fit 12-lag impulse response, allowing the hemodynamic response (HDR) to return to baseline. Area under the curve (AUC) of the deconvolved HDR was the dependent variable for analyses. AUC was calculated by adding the deconvolved image intensity at each time point of the impulse response, with the exception of the first two images. The first two images following stimulus presentation, during which the participant responded overtly, were excluded to eliminate stimulus-correlated signal artifact (Carter et al., 2000), since the vast majority of responses occurred within the first 3.4 s following stimulus presentation. The T1-weighted anatomic images and the AUC functional activation maps were warped to the coordinates of the Talairach and Tournoux (1988) atlas and resampled at 1 mm3 resolution. Subsequently, functional images were spatially smoothed (3 mm full width at half-maximum Gaussian kernel).
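The deconvolution and AUC computation can be illustrated with simulated data. This sketch implements a generic 12-lag finite impulse response (FIR) model by least squares rather than AFNI’s own deconvolution tools; the stimulus timing and the voxel time series here are simulated stand-ins.

```python
# Sketch of FIR "deconvolution" of the HDR and the AUC measure.
import numpy as np

N_TR, N_LAGS = 400, 12
rng = np.random.default_rng(0)

# Hypothetical stimulus vector: 1 at TRs where a picture of one category appeared.
stim = np.zeros(N_TR)
stim[rng.choice(N_TR - N_LAGS, size=40, replace=False)] = 1.0

# Lagged design matrix: column k holds the stimulus vector shifted by k TRs.
X = np.column_stack([np.roll(stim, k) for k in range(N_LAGS)])
for k in range(N_LAGS):
    X[:k, k] = 0.0                    # zero out the wrap-around from np.roll

y = rng.normal(size=N_TR)             # stand-in for a detrended voxel time series
hdr, *_ = np.linalg.lstsq(X, y, rcond=None)   # best-fit 12-point impulse response

# AUC: sum the deconvolved HDR, dropping the first two images to avoid
# speech-related, stimulus-correlated motion artifact (as described above).
auc = hdr[2:].sum()
print(f"AUC of deconvolved HDR: {auc:.3f}")
```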

A voxelwise 2-group (young, old) × 3-category (animals, tools, vehicles) ANOVA with subjects as a random factor and AUC of the HDR as the dependent variable was the base analysis. Follow-up voxelwise paired t tests comparing functional activity between each pair of categories were also performed. A cluster thresholding technique was used to determine which areas of activation on the F-maps and t-maps were significant after multiple comparisons correction, by thresholding at a single-voxel p value of .001 for the repeated-measures ANOVA and at .005 for subsequent a priori contrasts. The cluster size was predetermined as a region of at least 200 contiguous voxels (i.e., 200 mm3), which protected for a whole-brain p value of <.001 in Monte Carlo simulations (AlphaSim, AFNI) at single-voxel statistical thresholds of p < .005 (a priori contrasts) and p < .001 (repeated-measures ANOVA). To demonstrate how the temporal domain contributed to generating the significant blood oxygen-level dependent (BOLD) differences between categories, follow-up functional region of interest time-series analyses were conducted by extracting the HDR from each relevant cluster showing differences in AUC BOLD activity between categories. The voxelwise HDR, based on raw signal change from baseline, was derived for each participant and then averaged across voxels within each significant cluster. The averaged HDRs for each subject and cluster were entered into a repeated-measures ANOVA to determine where in the HDR time course significant differences in activation between categories occurred for each cluster. As in the base AUC ANOVAs, the first two images were dropped from the time series to reduce motion artifact.
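The cluster-extent step can be sketched as follows, assuming a voxelwise p map on a 1-mm grid. The connected-components approach via SciPy stands in for AFNI’s clustering tools, and the simulated map is illustrative only (a uniform-random map will rarely produce surviving clusters).

```python
# Sketch of cluster-extent thresholding of a voxelwise p map.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
p_map = rng.uniform(size=(64, 64, 64))   # stand-in for a voxelwise p map

VOXEL_P = 0.005          # single-voxel threshold for a priori contrasts
MIN_CLUSTER_MM3 = 200    # minimum extent from the Monte Carlo simulations

suprathreshold = p_map < VOXEL_P
labels, n = ndimage.label(suprathreshold)         # 3D connected components
sizes = ndimage.sum(suprathreshold, labels, index=range(1, n + 1))

# Keep only clusters of at least 200 contiguous 1-mm^3 voxels.
surviving = {i + 1 for i, s in enumerate(sizes) if s >= MIN_CLUSTER_MM3}
mask = np.isin(labels, list(surviving))
print(f"{len(surviving)} clusters survive the extent threshold")
```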

Results

Behavioral results

Accuracy and reaction time performance on the fMRI naming task for younger and older adults are presented in Table 1. A significant Age × Category interaction for naming accuracy was found [F(2,76) = 4.10, p = .020]. Follow-up pairwise t tests revealed that older adults responded less accurately for animals than younger adults [t(38) = 3.25, p = .002]. However, mean accuracy rates for animals (89.1% correct, SD = 8.3), tools (92.1% correct, SD = 6.5), and vehicles (91.3% correct, SD = 6.6) did not differ when performance was collapsed across participants [F(2,76) = 2.74, p = .071]. Additionally, there was a significant Age × Category interaction for RT [F(2,76) = 7.45, p = .001]. Follow-up pairwise t tests revealed that older adults responded more slowly to animals [t(38) = −4.15, p < .001] and vehicles [t(38) = −3.68, p = .001] than younger adults. Similarly, RT for correctly identified objects differed significantly between categories (animals: mean RT = 1611 ms, SD = 293 ms; vehicles: mean RT = 1563 ms, SD = 264 ms; tools: mean RT = 1458 ms, SD = 282 ms) when performance was collapsed across subjects [F(2,76) = 19.00, p < .001]. Participants responded more quickly to tools than to vehicles [t(38) = 3.49, p = .001] or animals [t(38) = 5.07, p < .001] and more quickly to vehicles than to animals [t(38) = 2.86, p = .007]. No outliers were identified for these comparisons, so all subjects and items were included in these analyses. Since naming accuracy did not differ for animals, tools, or vehicles collapsed across subjects, level of naming performance is unlikely to have influenced fMRI comparisons of semantic category. Moreover, despite the interaction of age and category on naming performance, this interaction was not evident in the fMRI BOLD response, which is likely due to the restricted range of accuracy rates (i.e., performance was above 85% accurate). Similarly, although RTs collapsed across participants differed significantly between categories, the differences were so small (i.e., less than 160 ms) that they are unlikely to have much bearing on the fMRI category comparisons.

Table 1. Performance on the fMRI naming task for younger and older adults

fMRI results

Interaction of Age × Category. Results of the 2 (group) × 3 (category) repeated-measures ANOVA did not yield a significant interaction effect at a statistical threshold of p < .001 and a cluster volume of 200 mm3. The effects of aging are therefore discussed in a separate paper (Wierenga et al., 2008). Given the absence of a significant Age × Category interaction, we collapsed across groups for follow-up semantic comparisons. Results regarding the a priori hypotheses are presented first, followed by post hoc analyses based on the category main effect.

A priori comparisons in the fusiform gyrus. As noted above, planned comparisons between the three categories were performed, collapsed across age, to further elucidate the roles of visual attribute and semantic category in the representation of semantic information. Within-subjects paired t tests for animals versus tools revealed a significant cluster of activity for animals greater than tools in the lateral fusiform gyrus of the right hemisphere (Table 2). A cluster of significant activity for tools greater than animals was lateralized to the left medial fusiform gyrus (Figure 4). For the comparison of tools versus vehicles, clusters of significant activity for vehicles greater than tools were found bilaterally in the medial fusiform gyrus, with two large clusters extending from the parahippocampal gyrus along the collateral sulcus through the fusiform gyrus bilaterally (Figure 5) and a smaller cluster located in the posterior medial fusiform gyrus of the left hemisphere. There were no significant clusters of activity for tools greater than vehicles. For animals versus vehicles, there were no significant clusters of activity for animals greater than vehicles in the inferior temporal cortex. Clusters of significant activity for vehicles greater than animals extended along the medial fusiform gyrus bilaterally, including the parahippocampal gyrus and collateral sulcus (Figure 5).

Fig. 4. Lateral and medial fusiform gyri activated for the within-subject comparison of animals and tools (red: p < .005, yellow: p < .001 for animals greater than tools; dark blue: p < .005, light blue: p < .001 for tools greater than animals), along with corresponding HDR functions based on raw signal change for animals and tools in these areas. Note that the first two images are excluded from the HDR to reduce motion artifact. The 2 (category) × 11 (time) interaction was significant for the right lateral fusiform cluster, in which animals showed a larger AUC than tools [F(10,390) = 10.57, p < .001], and for the left medial fusiform cluster, in which tools showed a greater AUC than animals [F(10,390) = 2.78, p = .003]. Asterisks indicate image numbers at which signal intensity differed significantly (p < .05) between categories.

Fig. 5. Activity in the fusiform gyrus bilaterally for the within-subject comparison of (A) vehicles and tools (dark blue: p < .005, light blue: p < .001 for vehicles greater than tools; red: p < .005, yellow: p < .001 for tools greater than vehicles), along with corresponding HDR functions based on raw signal change for vehicles and tools in these areas, and (B) vehicles and animals (dark blue: p < .005, light blue: p < .001 for vehicles greater than animals), along with corresponding HDR functions based on raw signal change for vehicles and animals in these areas. Note that the first two images are excluded from the HDR to reduce motion artifact. The 2 (category) × 11 (time) interaction was significant for the left posterior medial fusiform cluster [F(10,390) = 10.68, p < .001], for the medial fusiform clusters bilaterally extending anteriorly to the collateral sulcus and parahippocampal gyri, in which vehicles showed a greater AUC than tools [left: F(10,390) = 17.01, p < .001; right: F(10,390) = 28.02, p < .001], and for the medial fusiform clusters bilaterally in which vehicles showed a greater AUC than animals [left: F(10,390) = 16.07, p < .001; right: F(10,390) = 14.89, p < .001]. Asterisks indicate image numbers at which signal intensity differed significantly (p < .05) between categories.

Table 2. Within-subject regions of significant brain response for each category (animals, tools, and vehicles) in the inferior temporal cortex during overt picture naming in adults

Note.

A, animals; T, tools; V, vehicles; L, left; R, right. Clusters shown survived our cluster threshold procedures, p < .005, volume > 200 mm3.

A visual examination of the time course for each category in the seven clusters identified by the pairwise comparisons in the fusiform gyrus reveals similar shapes across categories; the BOLD differences were instead driven by differences in amplitude between categories. Significant Category × Time interactions were found in each cluster of activity (Figures 4 and 5).
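The functional ROI time-series analysis described in the Methods can be sketched as follows. The arrays are simulated stand-ins for the deconvolved HDRs, and the per-image paired t tests mirror the asterisked comparisons in Figures 4 and 5 rather than the full repeated-measures ANOVA; the image numbering in the printout is illustrative.

```python
# Sketch: average each subject's deconvolved HDR across the voxels of a
# significant cluster, then compare categories at each image number.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_SUBJ, N_VOXELS, N_TIME = 40, 300, 11   # 11 time points after dropping 2 images

# hdr[subject, voxel, time] for one cluster, one category (simulated).
hdr_animals = rng.normal(size=(N_SUBJ, N_VOXELS, N_TIME))
hdr_tools = rng.normal(size=(N_SUBJ, N_VOXELS, N_TIME))

# Average across voxels within the cluster: one HDR per subject per category.
roi_animals = hdr_animals.mean(axis=1)
roi_tools = hdr_tools.mean(axis=1)

# Paired comparison at each time point (image number offset by the 2 dropped images).
for t in range(N_TIME):
    tval, p = stats.ttest_rel(roi_animals[:, t], roi_tools[:, t])
    print(f"image {t + 2}: t({N_SUBJ - 1}) = {tval:.2f}, p = {p:.3f}")
```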

Taken together, a medial–lateral differentiation for tools and animals, respectively, was found in the fusiform gyrus, consistent with previous findings (Chao et al., 1999; Ishai et al., 1999). When animals and tools were compared, animals showed greater right lateral fusiform activity, whereas tools showed greater left medial fusiform activity, as predicted. Notably, this medial–lateral distinction was lateralized according to whether the items are identified on the basis of global form (right hemisphere for animals) or local details (left hemisphere for tools). However, this apparent double dissociation between semantic category and visual attribute does not hold for vehicles: activity for vehicles dominates the medial fusiform gyrus in both the left and right hemispheres when compared to either animals or tools. It is also important to note that the activity differences are a matter of degree rather than presence versus absence of activity, as robust HDRs are shown for all category pairs in these clusters.

Main effect of category. Analysis of the category main effect revealed several significant activity clusters (p < .001 and volume > 200 mm3) throughout the brain, including regions of the frontal lobe (right inferior frontal sulcus, right anterior cingulate, left middle frontal gyrus, and cingulate gyrus), sensory–motor cortex (left precentral gyrus and postcentral gyrus bilaterally), parietal lobe (left angular gyrus and inferior parietal lobe bilaterally), limbic region (right posterior cingulate and left insula), temporal lobe (left middle temporal gyrus and right inferior temporal gyrus), occipital lobe (middle occipital gyrus bilaterally, right lingual gyrus, and right cuneus), and basal ganglia (left caudate nucleus) (Table 3).

Table 3. Areas of significant activity for the main effect of category during overt picture naming in adults

Note.

A, animals; T, tools; V, vehicles; L, left; R, right.

a Clusters shown survived our cluster threshold procedures, p < .001, volume > 200 mm3.

b Clusters shown survived our cluster threshold procedures, p < .0001, volume > 200 mm3.

To analyze differences in HDR signal amplitude across time between the three categories, an even more stringent p value (p < .0001) was chosen to reduce the number of activated clusters for analysis from 21 (at p < .001) to a more manageable number. The remaining clusters included regions of the left inferior parietal lobe and angular gyrus, left posterior middle temporal gyrus, right middle occipital gyrus, and right inferior frontal sulcus. Examination of the time course for each category in each cluster revealed similar shapes across categories, although amplitudes differed significantly between categories. Significant Category × Time interactions were found in each cluster of activity. Tools showed a greater amplitude of activity than vehicles and animals in the left inferior parietal lobe [F(20,780) = 6.62, p < .001] and left middle temporal gyrus [F(20,780) = 7.67, p < .001]. Animals and vehicles elicited a greater amplitude of activity than tools in the right inferior frontal sulcus [F(20,780) = 2.38, p = .001]. The amplitude of activity for vehicles was greater than that for animals and tools in the right middle occipital gyrus [F(20,780) = 9.75, p < .001] (Table 3).

Discussion

Experiment 2 endeavored to address fine-grained distinctions in the neural substrates of semantic processes across the adult life span because of their importance to our understanding of how information is represented in the brain. Participants were initially divided into two groups on the basis of age, given concerns that the neural underpinnings of semantic knowledge might change with age and thereby contribute to age-related changes in word retrieval. However, even with a relatively large number of participants in each group, none of the differences in semantic category interacted with age. Thus, findings suggest that the neural substrates of primary semantic functions do not deteriorate during normal aging.

To examine the role of visual attribute and category in semantic representation, three categories (animals, tools, and vehicles) were selected for which processing of specific visual features (global form vs. local detail) could be dissociated from processing of semantic category. Previous research indicated that nonliving things (tools and vehicles) and living things (animals) are processed in the medial and lateral fusiform gyrus, respectively (Chao et al., 1999; Ishai et al., 1999). Thus, if previous research is correct, then category should drive a medial versus lateral fusiform distinction in processing. On the other hand, global form and local detail are visual features processed in the right and left hemispheres, respectively (Delis et al., 1992; Doyon & Milner, 1991). Our preliminary study indicated that adults rely more on global form for identifying animals and on local details for identifying tools but rely on both global form and local detail for identifying vehicles. Thus, visual attributes should drive a left versus right fusiform distinction in processing.

Our functional neuroimaging results indicate that the brain respects both semantic category and visual attribute. Compared to tools, identifying animals evoked greater right lateral fusiform activity, and compared to animals, identifying tools evoked greater left medial fusiform activity. Notably, this medial–lateral distinction was lateralized according to whether the items are identified on the basis of global form (right for animals) or local details (left for tools). Compared to either animals or tools, identification of vehicles evoked greater activity in both the left and right medial fusiform gyri, despite our attempt to match animals and vehicles according to global visual attributes. In fact, the functional imaging data are consistent with our preliminary data suggesting that vehicles require both global form and local details for identification.

Previous literature has suggested that nonliving and living items are processed in different locations because different attributes are important for distinguishing among category members, and it has assumed that processing of semantic categories is unlikely to be spatially distinct from processing of attributes. However, the current study indicates that the visual attributes used to process members of different categories can be distinguished from the processing of the categories themselves. Hart and Gordon (1992) provided evidence that brain lesions could dissociate category, attribute, and modality from one another. However, since that time, investigators have been hesitant to conclude that category-specific deficits implicate spatially distinct processors in the brain for different categories.

Crosson et al. (2000) provided a plausible explanation for spatially distinct processors for different categories. Items within categories are distinguished by multiple attributes with spatially distributed representations in the brain. Pattern associators within the inferior temporal lobe receive information from these spatially distributed processors and, from this input, resolve object identity. For example, tools require information not only about local visual detail but also about use (praxis) and movement to resolve the semantic concept. The point at which the critical information for object identification within a category converges (in this case, in the inferior temporal lobe) determines the location of the pattern associator for that category, as reflected in the locus of functional activity. This concept is generally consistent with the recent Conceptual Structure Account of semantic memory put forth by Taylor et al. (2007), which posits a distributed connectionist semantic system in which “processing of a concept corresponds to overlapping patterns of activation across units representing the concepts.” Research suggests that the ventral temporal cortex processes visual information in a hierarchical fashion, whereby more posterior sites process basic visual features while integration and conjunction of visual features into complex, meaningful objects occur in more anterior regions (Ungerleider & Mishkin, 1982). Graded differences in brain activity in the fusiform gyrus during object naming likely represent the instantiation and convergence of processing of the shared visual features that define a category. In other words, category distinctions may emerge as a result of differences in co-occurring or shared features. Thus, clusters of activity in the ventral visual cortex that emerge from category comparisons (e.g., processing the concept of animals vs. tools) likely reflect processing of shared features within a category rather than representing a modular store of category knowledge per se. In fact, no differences between animals and tools were found in the lateral and medial fusiform gyrus when stimuli were presented as words rather than pictures (Tyler et al., 2003), suggesting that these categorical differences arise from a distributed process of visual integration. Our ability to dissociate visual attribute (e.g., local detail vs. global form) from category provides further support for the importance of visual attributes and suggests that while category distinctions remain, they may be due to the convergence of other shared attributes not specifically examined.

Indeed, the fact that 21 separate clusters of significant activity outside the fusiform gyrus emerged in the post hoc analysis of categories indicates the complexity of distributed semantic processing. Of these clusters, only those meeting the most stringent significance criterion were selected for discussion and interpretation. Among these clusters, tools activated the left inferior parietal lobe compared to vehicles and animals. Dominance of activity for tools in the left inferior parietal lobe likely represents knowledge of the skilled motor movements needed to use such objects, and lesions to this area often result in apraxia (Heilman & Valenstein, 1993). Animals and vehicles activated the right inferior frontal sulcus more than tools, suggesting that processing of global form may extend as far anterior as the inferior frontal lobe, likely due to input from the ventral visual stream, given that this area has been implicated in object working memory (Belger et al., 1998; Wilson et al., 1993). Taken together, these findings suggest that lateralized processing of local detail and global form may extend beyond the inferior temporal cortex responsible for primary visual semantic processing.

In summary, results suggest that category, that is, nonliving versus living, mediates whether medial or lateral fusiform cortex is more involved in resolving semantic identity, whereas visual attribute, that is, global form versus local detail, mediates whether the left or right fusiform gyrus is more implicated. Similar distinctions between category and attribute have previously been reported for knowledge of body motion (Hauk et al., 2004; Pulvermüller, 2001; Pulvermüller et al., 2001) and object manipulability (Kraut et al., 2002), but this represents the first study to address the independent contribution of visual attributes to semantic representation. Thus, these findings indicate that semantic substrates can be fractionated along the lines of both category and attribute and demonstrate that visual attributes are an important dimension of how information is represented in the human brain. Although object recognition has traditionally been explained as a result of hierarchical processing (Humphreys et al., 1988) consistent with sequential modular models, current results raise the possibility that processing within the fusiform gyrus occurs in a distributed manner involving coactivation of multiple components of semantic knowledge, such as visual attribute and category, to derive the identity of the object. Of note, selectivity in the fusiform gyrus for animals, tools, and vehicles reflected differences in the degree of activity between categories rather than all-or-none responses. In other words, regions selective for one category were activated to a lesser degree by the other categories. Consistent with previous reports (Ishai et al., 1999), this provides further evidence that object knowledge is distributed within the fusiform gyrus. Although the relative activity levels for the different categories in the right and left fusiform gyri (Experiment 2) conform well to the thresholds of high–spatial frequency information needed to identify the objects from the three categories (Experiment 1), it could be argued that the current fMRI study did not completely dissociate visual attributes because both high– and low–spatial frequency information were available in the standard pictures that were named. It would be useful in future studies to present pictures in which the high–spatial frequency information has been minimized, as well as pictures in which the low–spatial frequency information has been minimized, to address this issue. Because picture naming involves both visual and verbal processing, future studies will also be needed to more precisely determine how the neural substrates for category and attribute dissociate within the verbal and visual modalities. Furthermore, the dissociation between category and attribute may inform our understanding of the neural substrates of semantic memory impairment in AD and dementia risk.

ACKNOWLEDGMENTS

This work was supported by grants from the Evelyn F. & William L. McKnight Brain Institute and the Department of Veterans Affairs Office of Academic Affiliations and Rehabilitation Research and Development Service (Pre-Doctoral Associated Health Rehabilitation Research Fellowship to C.E.W.; Research Career Scientist Award to B.C.). We have disclosed any and all financial or other relationships that could be interpreted as a conflict of interest affecting this manuscript.

REFERENCES

Alathari, L., Trinh Ngo, C., & Dopkins, S. (2004). Loss of distinctive features and a broader pattern of priming in Alzheimer's disease. Neuropsychology, 18, 603–612.
Barry, C. & McHattie, J.V. (1995). Problems naming animals: Category-specific anomia or a misnomer? In Campbell, R. & Conway, M.A. (Eds.), Broken memories: Case studies in memory impairment (pp. 237–248). New York: Blackwell.
Beauchamp, M.S., Lee, K.E., Haxby, J.V., & Martin, A. (2003). fMRI responses to video and point-light displays of moving humans and manipulable objects. Journal of Cognitive Neuroscience, 15, 991–1001.
Belger, A., Puce, A., Krystal, J.H., Gore, J.C., Goldman-Rakic, P., & McCarthy, G. (1998). Dissociation of mnemonic and perceptual processes during spatial and nonspatial working memory using fMRI. Human Brain Mapping, 6, 14–32.
Birn, R.M., Saad, Z.S., & Bandettini, P.A. (2001). Spatial heterogeneity of the nonlinear dynamics in the fMRI BOLD response. NeuroImage, 14, 817–826.
Boronat, C.B., Buxbaum, L.J., Coslett, H.B., Tang, K., Saffran, E.M., Kimberg, D.Y., & Detre, J.A. (2005). Distinctions between manipulation and function knowledge of objects: Evidence from functional magnetic resonance imaging. Cognitive Brain Research, 23, 361–373.
Carter, C., Macdonald, A.M., Botvinick, M., Ross, L., Stenger, V.A., Noll, D., & Cohen, J.D. (2000). Parsing executive processes: Strategic vs. evaluative functions of the anterior cingulate cortex. Proceedings of the National Academy of Sciences of the United States of America, 97, 1944–1948.
Chao, L., Haxby, J., & Martin, A. (1999). Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nature Neuroscience, 2, 913–919.
Chao, L. & Martin, A. (2000). Representation of manipulable man-made objects in the dorsal stream. NeuroImage, 12, 478–484.
Coltheart, M. (1981). The MRC psycholinguistic database. Quarterly Journal of Experimental Psychology, 33A, 497–505.
Coltheart, M., Inglis, L., Cupples, L., Michie, P., & Budd, W. (1998). A semantic subsystem of visual attributes. Neurocase, 4, 353–370.
Cox, R.W. (1996). AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research, 29, 162–173.
Crosson, B., Cato, M.A., Sadek, J.R., & Lu, L. (2000). Organization of semantic knowledge in the human brain: Toward a resolution in the next millennium. Brain and Cognition, 42, 146–148.
Crosson, B., Moberg, P.J., Boone, J.R., Gonzalez Rothi, L.J., & Raymer, A. (1997). Category-specific naming deficit for medical terms after dominant thalamic/capsular hemorrhage. Brain and Language, 60, 407–442.
Damasio, H., Grabowski, T.J., Tranel, D., Hichwa, R.D., & Damasio, A.R. (1996). A neural basis for lexical retrieval. Nature, 380, 499–505.
De Renzi, E. & Lucchelli, F. (1994). Are semantic systems separately represented in the brain? The case of living category impairment. Cortex, 30, 3–25.
Delis, D., Massman, P., Butters, N., Salmon, D., Shear, P., Demadura, T., & Filoteo, J. (1992). Spatial cognition in Alzheimer's disease: Subtypes of global-local impairment. Journal of Clinical and Experimental Neuropsychology, 14, 463–477.
Devlin, J.T., Moore, C.J., Mummery, C.J., Gorno-Tempini, M.L., Phillips, J.A., Noppeney, U., Frackowiak, R.S., Friston, K.J., & Price, C.J. (2002). Anatomic constraints on cognitive theories of category specificity. NeuroImage, 15, 675–685.
Done, D.J. & Hajilou, B.B. (2005). Loss of high-level perceptual knowledge of object structure in DAT. Neuropsychologia, 43, 60–68.
Doyon, J. & Milner, B. (1991). Right temporal-lobe contribution to global visual processing. Neuropsychologia, 29, 343–360.
Farah, M.J. & Wallace, M.A. (1992). Semantically-bounded anomia: Implications for the neural implementation of naming. Neuropsychologia, 30, 609–621.
Folstein, M.F., Folstein, S.E., & McHugh, P.R. (1975). 'Mini-mental state'. A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–198.
Gainotti, G. & Silveri, M.C. (1996). Cognitive and anatomical locus of lesion in a patient with a category-specific semantic impairment for living beings. Cognitive Neuropsychology, 13, 357–389.
Gerlach, C., Law, I., Gade, A., & Paulson, O.B. (2000). Categorization and category effects in normal object recognition: A PET study. Neuropsychologia, 38, 1693–1703.
Gerlach, C., Law, I., Gade, A., & Paulson, O.B. (2002). The role of action knowledge in the comprehension of artifacts: A PET study. NeuroImage, 15, 143–152.
Goodglass, H. & Wingfield, A. (1993). Selective preservation of a lexical category in aphasia: Dissociations in comprehension of body parts and geographical place names following focal brain lesion. Memory, 1, 313–328.
Harley, T.A. & Grant, F. (2004). The role of functional and perceptual attributes: Evidence from picture naming in dementia. Brain and Language, 91, 223–234.
Hart, J. & Gordon, B. (1992). Neural subsystems for object knowledge. Nature, 359, 60–64.
Hauk, O., Johnsrude, I., & Pulvermüller, F. (2004). Somatotopic representation of action words in human motor and premotor cortex. Neuron, 41, 301–307.
Heilman, K.M. & Valenstein, E. (1993). Clinical neuropsychology (3rd ed.). New York: Oxford University Press.
Hillis, A.E. & Caramazza, A. (1991). Category-specific naming and comprehension impairment: A double dissociation. Brain, 114, 2081–2094.
Hillis, A.E., Rapp, B., Romani, C., & Caramazza, A. (1990). Selective impairments of semantics in lexical processing. Cognitive Neuropsychology, 7, 191–243.
Humphreys, G.W., Riddoch, M.J., & Quinlan, P.T. (1988). Cascade processes in picture identification. Cognitive Neuropsychology, 5, 67–103.
Ishai, A., Ungerleider, L., Martin, A., Schouten, J., & Haxby, J. (1999). Distributed representation of objects in the human ventral visual pathway. Proceedings of the National Academy of Sciences of the United States of America, 96, 9379–9384.
Kellenbach, M.L., Brett, M., & Patterson, K. (2003). Actions speak louder than functions: The importance of manipulability and action in tool representation. Journal of Cognitive Neuroscience, 15, 30–46.
Kraut, M.A., Moo, L.R., Segal, J.B., & Hart, J. (2002). Neural activation during an explicit categorization task: Category- or feature-specific effects? Cognitive Brain Research, 13, 213–220.
Kucera, H. & Francis, W. (1967). Computational analysis of present-day American English. Providence, RI: Brown University Press.
Laws, K., Evans, J., Hodges, J., & McCarthy, R. (1995). Naming without knowing and appearance without associations: Evidence for constructive processes in semantic memory? Memory, 3, 409–433.
Martin, A., Haxby, J., Lalonde, F., Wiggs, C.L., & Ungerleider, L. (1995). Discrete cortical regions associated with knowledge of color and knowledge of action. Science, 270, 102–105.
Martin, A., Wiggs, C.L., Ungerleider, L.G., & Haxby, J.V. (1996). Neural correlates of category-specific knowledge. Nature, 379, 649–652.
Oldfield, R.C. (1971). The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia, 9, 97–113.
Pulvermüller, F. (2001). Brain reflections of words and their meaning. Trends in Cognitive Sciences, 5, 517–524.
Pulvermüller, F., Harle, M., & Hummel, F. (2001). Walking or talking? Behavioral and neurophysiological correlates of action verb processing. Brain and Language, 78, 143–168.
Sacchett, C. & Humphreys, G. (1992). Calling a squirrel a squirrel, but a canoe a wigwam: A category-specific deficit for artefactual objects and body parts. Cognitive Neuropsychology, 9, 73–86.
Shapiro, A.M., Benedict, R.H., Schretlen, D., & Brandt, J. (1999). Construct and concurrent validity of the Hopkins Verbal Learning Test-Revised. The Clinical Neuropsychologist, 13, 348–358.
Sheridan, J. & Humphreys, G. (1993). A verbal-semantic category-specific recognition impairment. Cognitive Neuropsychology, 10, 143–184.
Silveri, M. & Gainotti, G. (1988). Interaction between vision and language in category-specific semantic impairment. Cognitive Neuropsychology, 5, 677–709.
Talairach, J. & Tournoux, P. (1988). Co-planar stereotaxic atlas of the human brain. New York: Thieme Medical Publishers.
Taylor, K.I., Moss, H.E., & Tyler, L.K. (2007). Cognitive model of semantic memory. In Hart, J., Jr. & Kraut, M.A. (Eds.), Neural basis of semantic memory (pp. 265–301). New York: Cambridge University Press.
Tyler, L.K., Stamatakis, E.A., Dick, E., Bright, P., Fletcher, P., & Moss, H. (2003). Objects and their actions: Evidence for a neurally distributed semantic system. NeuroImage, 18, 542–557.
Ungerleider, L.G. & Mishkin, M. (1982). Two cortical visual systems. In Ingle, D.J., Goodale, M.A., & Mansfield, R.J.W. (Eds.), Analysis of visual behavior (pp. 549–586). Cambridge, MA: MIT Press.
Vandenberghe, R., Price, C., Wise, R., Josephs, O., & Frackowiak, R. (1996). Functional anatomy of a common semantic system for words and pictures. Nature, 383, 254–256.
Vannucci, M., Viggiano, M.P., & Argenti, F. (2001). Identification of spatially filtered stimuli as function of the semantic category. Cognitive Brain Research, 12, 475–478.
Warrington, E. & Shallice, T. (1984). Category specific semantic impairments. Brain, 107, 829–854.
Weisberg, J., van Turennout, M., & Martin, A. (2007). A neural system for learning about object function. Cerebral Cortex, 17, 513–521.
Whatmough, C., Chertkow, H., Murtha, S., & Hanratty, K. (2002). Dissociable brain regions process object meaning and object structure during picture naming. Neuropsychologia, 40, 174–186.
Wierenga, C.E., Benjamin, M., Gopinath, K., Perlstein, W.M., Leonard, C.M., Gonzalez Rothi, L.J., Conway, T., Cato, M.A., Briggs, R., & Crosson, B. (2008). Age-related changes in word retrieval: Role of bilateral frontal and subcortical networks. Neurobiology of Aging, 29, 436–451.
Wilson, F.A., Scalaidhe, S.P., & Goldman-Rakic, P.S. (1993). Dissociation of object and spatial processing domains in primate prefrontal cortex. Science, 260, 1955–1958.

Fig. 1. Schemata of an ascending sequence (decreasing spatial filtering, increasing high spatial frequency content) of nine filtering levels of a stimulus in each category for Experiment 1. Photographs were equated for size. Items in the three categories were balanced for familiarity, concreteness, and frequency in English (Coltheart, 1981; Kucera & Francis, 1967). Each stimulus was presented in canonical orientation at the center of the computer monitor. The presentation order of the items was randomized for each participant. A training phase preceded the experimental phase in which different photographs were used.
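For concreteness, here is one way an ascending series like the one in Fig. 1 could be produced: successively weaker Gaussian low-pass filtering, so that each level carries more high–spatial frequency content than the last. This is a sketch under stated assumptions; the sigma schedule and the `filtering_series` helper are hypothetical, and the paper's actual filtering parameters are not reproduced here.

```python
# Generate a nine-level ascending filtering series from one stimulus image.
import numpy as np
from scipy.ndimage import gaussian_filter

def filtering_series(img: np.ndarray, n_levels: int = 9):
    """Level 1 is the most heavily blurred; the final level is the intact photo."""
    # Larger sigma = stronger blur = less high-spatial-frequency content.
    sigmas = np.geomspace(16.0, 0.5, n_levels - 1)  # assumed blur schedule
    series = [gaussian_filter(img, sigma=s) for s in sigmas]
    series.append(img.copy())                       # last level: unfiltered
    return series

levels = filtering_series(np.random.rand(256, 256))  # stand-in stimulus
```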


Fig. 2. Object identification by category as a function of spatial frequency content for Experiment 1, showing filtering levels at which objects were recognized.
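A hedged sketch of how the measure plotted in Fig. 2 could be scored from ascending-series responses: the threshold is the first (most heavily filtered) level at which the participant names the object correctly. The data layout and the `recognition_threshold` helper are illustrative, not the authors' scoring code.

```python
# Score the filtering level at which an object is first identified.
def recognition_threshold(correct_by_level: list[bool]) -> int | None:
    """1-based index of the first filtering level named correctly."""
    for level, correct in enumerate(correct_by_level, start=1):
        if correct:
            return level
    return None  # never identified, even in the unfiltered photograph

# Example: first correct response at level 6 of the nine-level sequence.
print(recognition_threshold([False] * 5 + [True] * 4))  # -> 6
```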


Fig. 3. Schemata of the overt naming task design for Experiment 2. Photographs were equated for size and resolution, and members of each category were matched for frequency of occurrence in English (Kucera & Francis, 1967). Inter-trial intervals were pseudorandomly varied among 13,600 ms (8 images), 15,300 ms (9 images), 17,000 ms (10 images), and 18,700 ms (11 images) to mitigate effects of periodic or quasiperiodic physiological noise and to allow the hemodynamic response (HDR) to return to baseline before the participant spoke again, preventing contamination of the latter part of the HDR by movement during the subsequent response. Experimental runs began and ended with a rest interval. There were 15 trials in an experimental run (five trials from each category), and eight runs were administered. Each 15-trial run was 323 s in length and acquired 188 functional images for each slice. Stimuli were projected onto a translucent screen above the participant's head via the Integrated Functional Imaging System using E-Prime Version 1 software.
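The trial timing just described lends itself to a simple schedule generator. The sketch below draws 15 trials (five per category) per run and assigns the four listed inter-trial intervals pseudorandomly; the category labels, seed, and counterbalancing scheme are assumptions, since the paper does not specify its randomization code.

```python
# Build one pseudorandomized 15-trial naming run with jittered ITIs.
import random

CATEGORIES = ["animal", "tool", "vehicle"]
ITIS_MS = [13600, 15300, 17000, 18700]  # 8, 9, 10, and 11 images, per Fig. 3

def build_run(seed: int = 0):
    rng = random.Random(seed)
    trials = CATEGORIES * 5          # five trials from each category
    rng.shuffle(trials)
    itis = (ITIS_MS * 4)[:len(trials)]  # each ITI recurs roughly equally
    rng.shuffle(itis)
    return list(zip(trials, itis))

for category, iti in build_run():
    print(f"{category:8s} next onset in {iti} ms")
```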


Table 1. Performance on the fMRI naming task for younger and older adults


Fig. 4. Lateral and medial fusiform gyri activated for the within-subject comparison of animals and tools (red: p < .005, yellow: p < .001 for animals greater than tools; dark blue: p < .005, light blue: p < .001 for tools greater than animals), along with corresponding HDR functions based on raw signal change for animals and tools in these areas. Note that the first two images are excluded from the HDR to reduce motion artifact. The 2 (category) × 11 (time) interaction was significant for the right lateral fusiform cluster, in which animals showed a larger AUC than tools [F(10,390) = 10.57, p < .001], and for the left medial fusiform cluster, in which tools showed a greater AUC than animals [F(10,390) = 2.78, p = .003]. Asterisks mark image numbers at which signal intensity differed significantly (p < .05) between categories.
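The area-under-the-curve (AUC) comparisons in this figure and the next can be illustrated with a short routine: discard the first two images, then integrate the remaining 11 time points of the trial-averaged HDR. Trapezoidal integration and the 1.7-s sampling interval (implied by the image-multiple ITIs in Fig. 3) are assumptions; the authors' exact AUC computation may differ.

```python
# AUC of a cluster-averaged HDR time course, excluding early motion artifact.
import numpy as np

def hdr_auc(signal: np.ndarray, tr_s: float = 1.7, n_drop: int = 2) -> float:
    """Integrate a trial-averaged HDR (percent signal change) over time,
    after dropping the first `n_drop` images."""
    usable = signal[n_drop:]            # the 11 images entering the analysis
    return float(np.trapz(usable, dx=tr_s))

# Fabricated 13-image time courses for one cluster, for illustration only.
animals_hdr = np.array([0, 0, .1, .4, .8, 1.0, .9, .6, .3, .1, 0, 0, 0])
tools_hdr = 0.5 * animals_hdr
print(hdr_auc(animals_hdr) > hdr_auc(tools_hdr))  # animals > tools, as in Fig. 4
```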


Fig. 5. Activity in the fusiform gyrus bilaterally for the within-subject comparison of (A) vehicles and tools (dark blue: p < .005, light blue: p < .001 for vehicles greater than tools; red: p < .005, yellow: p < .001 for tools greater than vehicles), along with corresponding HDR functions based on raw signal change for vehicles and tools in these areas, and (B) vehicles and animals (dark blue: p < .005, light blue: p < .001 for vehicles greater than animals), along with corresponding HDR functions based on raw signal change for vehicles and animals in these areas. Note that the first two images are excluded from the HDR to reduce motion artifact. The 2 (category) × 11 (time) interaction was significant for the left posterior medial fusiform cluster [F(10,390) = 10.68, p < .001], for the medial fusiform clusters bilaterally extending anteriorly to the collateral sulcus and parahippocampal gyri, in which vehicles showed a greater AUC than tools [left: F(10,390) = 17.01, p < .001; right: F(10,390) = 28.02, p < .001], and for the medial fusiform clusters bilaterally in which vehicles showed a greater AUC than animals [left: F(10,390) = 16.07, p < .001; right: F(10,390) = 14.89, p < .001]. Asterisks mark image numbers at which signal intensity differed significantly (p < .05) between categories.
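For the 2 (category) × 11 (time) interaction tests reported here and in Fig. 4, a repeated-measures ANOVA over the cluster-averaged time courses is one plausible formulation; with 40 participants it yields the F(10, 390) statistics reported above [(2−1)(11−1) = 10 numerator, 39 × 10 = 390 denominator degrees of freedom]. The sketch uses statsmodels' AnovaRM as a stand-in, with hypothetical long-format column names; it is not necessarily the software the authors used.

```python
# Fit a category x time repeated-measures ANOVA for one fusiform cluster.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def category_by_time_anova(df: pd.DataFrame):
    """df columns (assumed): subject, category (2 levels), time (11 images),
    signal (percent signal change averaged over the cluster)."""
    model = AnovaRM(df, depvar="signal", subject="subject",
                    within=["category", "time"])
    return model.fit()  # .anova_table holds F and p for each effect

# Usage with long-format data for 40 participants (fabricated elsewhere):
# res = category_by_time_anova(long_df)
# print(res.anova_table.loc["category:time"])  # the interaction test
```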


Table 2. Within-subject regions of significant brain response for each category (animals, tools, and vehicles) in the inferior temporal cortex during overt picture naming in adults


Table 3. Areas of significant activity for the main effect of category during overt picture naming in adults