
A registration problem for functional fingerprinting

Published online by Cambridge University Press: 30 June 2016

David M. Kaplan
Affiliation:
Department of Cognitive Science, Macquarie University, Sydney NSW 2109, Australia. david.kaplan@mq.edu.au, http://www.davidmichaelkaplan.org/
Carl F. Craver
Affiliation:
Philosophy-Neuroscience-Psychology Program, Washington University in St. Louis, St. Louis, MO 63105. ccraver@wustl.edu, https://pages.wustl.edu/cfcraver

Abstract

Functional fingerprints aggregate over heterogeneous tasks, protocols, and controls. The appearance of functional diversity might be explained by task heterogeneity and conceptual imprecision.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2016 

Anderson (2014) promises to move neuroscience beyond phrenology by rejecting strict functional localization, the idea that the brain is composed of highly selective and functionally specialized areas connected along developmentally and evolutionarily dedicated pathways. Anderson proposes a competitor idealization, the neural reuse hypothesis, according to which the activities of different brain regions flexibly recombine to support performance across many different task domains. Anderson supports this hypothesis in part by appeal to functional fingerprints, a novel methodological contribution for representing and analyzing functional diversity in the brain.

Functional fingerprinting is a data-driven tool that relies on meta-analyses of neuroimaging studies to characterize which task domains preferentially engage a given brain region. Anderson borrows his task domains from the BrainMap database (Fox et al. 2005; Laird et al. 2005). They are defined by two features: (a) a cognitive construct (such as working memory) and (b) a collection of tasks (and, more specifically, a set of studies) unified by the fact that they are commonly accepted ways of studying that construct. Domains include several emotions, action, attention, working memory, reasoning, vision, and others (see Fig. 2 in the target article). Fingerprints are designed to capture the functional diversity of a given brain region or network. For a brain area to be functionally involved in a task domain (for a given construct) is for it to be active during tasks that neuroscientists accept as valid for studying that construct. The functional fingerprint for that brain area is a polar plot in which vertices represent different task domains. Distances along each vertex represent the number of activations at a given site for a particular task domain expressed as a percentage of the total activations reported at that site across all sampled task domains. Anderson extends this idea to explore the functional diversity of brain networks, but this extension relies fundamentally on the more basic project of constructing the fingerprint itself.
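The fingerprint construction just described amounts to a simple normalization of activation counts. The sketch below is illustrative only: the function name and the activation counts are hypothetical, and the real pipeline draws its counts from BrainMap meta-analyses rather than hand-entered dictionaries.

```python
# Sketch of a functional fingerprint for one brain region: each task
# domain's score is its activation count at the site, expressed as a
# percentage of total activations reported there across all sampled
# domains. Counts below are hypothetical illustration, not BrainMap data.

def functional_fingerprint(activation_counts):
    """Map {task_domain: activation count} to {task_domain: percent}."""
    total = sum(activation_counts.values())
    if total == 0:
        return {domain: 0.0 for domain in activation_counts}
    return {domain: 100.0 * n / total
            for domain, n in activation_counts.items()}

# Hypothetical counts for a single region across four BrainMap-style domains.
counts = {"working memory": 30, "attention": 10, "action": 5, "emotion": 5}
fingerprint = functional_fingerprint(counts)
# The distance along each vertex of the polar plot would be one of these
# percentages; a broad "fan" across domains reads as functional diversity.
```

Note that any activation counted toward a domain, specific to the construct or not, inflates that vertex; nothing in the normalization itself distinguishes the two.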

This method is prone to the problem of functional registration. Anderson's fingerprinting method aggregates findings obtained in fMRI studies using diverse experimental task conditions, distinct subtraction conditions (controls), and distinct experimental protocols. Given the diversity of tasks, controls, and protocols, one would expect to observe activation in regions that are nonspecific to the domain-defining psychological construct under investigation. Performance across different experimental task and control conditions will often rely on different cognitive capacities and will therefore recruit different underlying neural mechanisms, leading to differences detectable in neuroimaging experiments (e.g., Owen et al. 2005; Price et al. 2005). Fed into Anderson's method, such nonspecific activations will functionally implicate a region in a task domain simply because it was not controlled for in the task in question. As a result, failures to register differences between tasks, controls, and protocols within a given task domain will contaminate one's measurements of functional diversity with extraneous and ancillary activations tied to aspects of the comparison that were either irrelevant or simply uncontrolled for in the context of the original studies. Our suspicion is that Anderson's method glosses over heterogeneity in task and control conditions to a degree that could explain the functional diversity he reports.

To illustrate this, suppose for the moment (as Anderson does) that we accept the BrainMap taxonomy as a more or less correct taxonomy of cognitive capacities or functions. Anderson does not characterize precisely how task-relevant activations are sorted from task-irrelevant activations, but it is difficult to envision how this could be done systematically for all studies subsumed within a given meta-analysis in a way that avoids the perils of simply associating activations with tasks and tasks with constructs. Consider the task domain of working memory, for example.

Owen et al.'s (2005) recent meta-analysis of working memory activations focuses specifically on 24 studies employing the so-called n-back task (just one type of task associated with the working memory task domain in BrainMap). Although all of these studies nominally employ the same task, Owen et al.'s (2005) systematic cataloging of different parameters used in the n-back task reveals considerable task diversity. In particular, they identify four major categories of n-back task (location monitoring, identity monitoring, verbal stimuli, and nonverbal stimuli), which can be further subdivided along a number of finer-grained dimensions including how many trials back subjects are matching (n = 1-, 2-, 3-back). These n-back studies also differ substantially in the chosen contrast (i.e., the control condition used). For example, a task subtraction might subtract activation observed in the n = 3 condition from the activation observed in the n = 2 condition, it might subtract activation in n = 2 from that in n = 0, it might subtract activation during matching of Korean words from that of English words, it might subtract activation in response to letters from that in response to shapes, or it might reflect monotonic increases in task difficulty.
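The worry about contrast choice can be made concrete with a toy subtraction. The region names and voxel values below are invented for illustration; the point is only that different control conditions subtract out different ancillary processes, so two contrasts filed under the same "working memory" label can implicate different regions.

```python
import numpy as np

# Toy activation levels over five hypothetical regions for three n-back
# conditions (values invented for illustration). Each subtraction is meant
# to isolate "working memory," but different controls leave different
# residual maps.
regions = ["dlPFC", "parietal", "visual", "motor", "auditory"]
n0 = np.array([1.0, 1.0, 2.0, 1.5, 0.5])  # 0-back (vigilance baseline)
n2 = np.array([3.0, 2.5, 2.0, 1.5, 0.5])  # 2-back
n3 = np.array([4.0, 3.0, 2.5, 1.5, 0.5])  # 3-back (difficulty also drives
                                          # visual attention in this toy)

contrast_2_vs_0 = n2 - n0  # one study's "working memory" map
contrast_3_vs_2 = n3 - n2  # another study's "working memory" map

# Above a common threshold, the two contrasts disagree about which regions
# are "working memory" regions, yet a meta-analysis pools them as one domain.
threshold = 0.4
active_a = {r for r, v in zip(regions, contrast_2_vs_0) if v > threshold}
active_b = {r for r, v in zip(regions, contrast_3_vs_2) if v > threshold}
```

In this toy case the second contrast picks up a visual region that the first subtracts away; fingerprinting would nonetheless count both activation sets toward the same domain.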

Surprisingly, Owen et al. (2005) report that despite this task diversity, some frontal and parietal activations are consistent across these different task conditions. This result is surprising and valuable precisely because it reveals the signal in the noise. One does not expect such tidy results emerging from such a motley collection of experimental paradigms. Yet, critically, Owen et al. (2005) also show that there are differences in activations depending on whether the material is presented visually or aurally and on whether the task involves identity or location monitoring. No task is “pure” in the sense that it requires all and only the mechanisms responsible for a given task domain. When one pools data across different tasks that are “impure” in different ways, one is likely to aggregate over ancillary activations resulting from aspects of the task not specific to the construct in question: in other words, the false appearance of functional diversity. And this is the primary point: there will be many regions showing nonspecific activation that do not overlap between these task presentations. Although these diverse regions of nonoverlap are not the focus in Owen et al.'s (2005) meta-analysis, they are central to interpreting Anderson's findings because they are the data points for his functional fingerprints. The appearance of functional diversity could hence result from the incautious pooling of data from heterogeneous tasks and protocols employing distinct control conditions.

Anderson's fingerprints are a kind of aggregate “reverse inference” (from activation during a task to functional involvement in the construct/task domain), but without the careful attention to task construction and control required in each case to make the reverse inference convincing. Traditional problems with reverse inference in neuroimaging (such as the existence of nonspecific activations during task performance) are thus both multiplied and obscured in Anderson's functional fingerprints. Indeed, given the diversity of protocols with which the analysis begins, one would expect evidence of functional diversity even if localization were broadly true. The challenge going forward is to devise methods that can successfully establish functional diversity as a real feature of brain organization rather than as a reflection of the heterogeneity and imprecision in our methods.

Performing an informative meta-analysis about the functional diversity of a brain region will require precisely the kind of work that should have been, and in some quarters has been, driving task-based fMRI all along: to devise task-control pairs in such a way that they isolate the areas involved in the construct under investigation independently of other ancillary activations. Anderson does not explain how tasks and controls are chosen, related to one another, or grouped into task domains in his meta-analytic method. Without this information, attempts to read off “functional involvement” directly from activation profiles involve a separate, incautious reverse inference for each activating task, hidden behind the veil of a meta-analysis.

The problem of functional registration is just a specific application of a more general challenge facing any meta-analytic approach to functional diversity such as Anderson's – to distinguish the signal of functional diversity from the inevitable and expected noise produced by experimental heterogeneity. Variability in task and control conditions is just the tip of the iceberg. Other sources of experimental “noise” in fMRI meta-analyses include differences in subject population, spatial normalization, scanner strength, and essentially any other uncontrolled variables capable of affecting experimental outcomes (for further discussion, see Brett et al. 2002; Costafreda 2009). Within the localizationist framework, the rules are clear: search for a task (or task domain) that preferentially drives the area in question. In the context of neuroimaging meta-analyses, the primary objective is to identify the consistently activated regions (if any exist) across a set of studies that are assumed to probe the same psychological state or capacity using similar or identical experimental tasks (Fox et al. 2014).
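That meta-analytic objective can be caricatured as a simple tally: count a region as consistently activated if it is reported in at least some fraction of the pooled studies. This is a deliberately simplified stand-in for methods such as activation likelihood estimation, which model spatial uncertainty; the study data below are invented for illustration.

```python
from collections import Counter

def consistent_regions(studies, min_fraction=0.8):
    """Regions reported in at least min_fraction of the pooled studies.

    studies: list of sets of activated region labels (one set per study).
    """
    tally = Counter(region for study in studies for region in study)
    cutoff = min_fraction * len(studies)
    return {region for region, n in tally.items() if n >= cutoff}

# Five hypothetical n-back studies: frontal and parietal activations recur,
# while ancillary activations (visual, auditory) vary with stimulus modality.
studies = [
    {"dlPFC", "parietal", "visual"},
    {"dlPFC", "parietal", "auditory"},
    {"dlPFC", "parietal", "visual"},
    {"dlPFC", "parietal"},
    {"dlPFC", "parietal", "visual"},
]
core = consistent_regions(studies)  # the "signal in the noise"
```

Lowering the consistency threshold admits the modality-specific, nonspecific activations into the result, which is precisely the material a fingerprint sweeps up as apparent diversity.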

Anderson urges us to abandon (or at least, relax) these localizationist assumptions and to think instead of brain regions multitasking and recombining across different task domains. Anderson's framework predicts that brain activation patterns will tend not to show sharp functional specialization, but will instead fan out broadly across the polar graph. One limit of this framework, as it is currently developed, is that it makes no specific predictions (comparable to those made by localization), except that one will not see the functional specialization predicted by the localizationist. But if functional diversity is the expected outcome when pooling fMRI data across different experimental tasks (regardless of whether the hypothesis of localization, reuse, or some other hypothesis is correct), then the data reported in functional fingerprints fail to decide between localization and reuse. Anderson's proposed method currently lacks a principled way to sort the noise introduced by experimental heterogeneity from the signal reflecting real functional diversity in the brain. Perhaps more specific, risky predictions about the kinds of diversity one is or is not likely to see would be more compelling.

Despite these criticisms, we think that Anderson's critical perspective on classical localization is commendable. The very idea of functional diversity enjoins us to think more broadly about how functions might be localized in the brain. However, we do not think that Anderson has succeeded entirely in sketching a way to do cognitive neuroscience “without the analysis, decomposition, and localization of component cognitive operations” (Anderson 2014, p. 117). In the first place, Anderson relies on the BrainMap taxonomy of task domains and so simply embraces the dominant ideas in contemporary cognitive science concerning how brain systems should be functionally analyzed and decomposed. (Notably, Gall [1835], one of the original phrenologists, promoted radical revision in our taxonomy of cognitive functions.) Whether a given brain region turns out to have a narrow or broad orientation around the polar graph is highly sensitive to how the vertices of the graph are defined. What appears as functional diversity through the lens of one particular taxonomy of task domains could appear as functional unity through the lens of another.

The fact that Anderson's method implicitly reifies the task domains of BrainMap brings to mind a warning issued long ago by Petersen and Fiez (1993). They counsel against assuming that the function of a brain region can be identified with the tasks used to activate it; as they prosaically remark, there is no tennis forehand area in the human brain. There is no such area, first, because the tennis forehand likely involves contributions from many distinct and dissociable cognitive processes (i.e., recruits many different task domains). Again, this is why the problem of functional registration is a difficult one to solve. Second, there is no such area because any particular experimental task (including performing a tennis forehand) is at best a proxy for or representative of some broader class of behavioral or cognitive phenomena that is the real target of explanation. The functions that ultimately get localized in the brain might therefore be very distant from the tasks that are paradigmatically used in our experimental investigations. The general lesson here is that the conceptual relationships between tasks, task domains, and cognitive constructs are complex and dynamic, and cannot be taken for granted without costs.

With the above points taken into consideration, Anderson's neural reuse hypothesis might be understood, not as a complete rejection of localization, but rather as a form of localization consistent with dominant attitudes in the contemporary neuroimaging community (Petersen & Fiez 1993). According to this approach, elementary operations, not tasks, are functionally localized to brain regions. Recent work on so-called canonical neural computations – i.e., standard computational operations applied across different brain areas – reinforces this idea (Carandini & Heeger 2012). According to this view, elementary operations might be rather task-general and might be flexibly recombined in many different task domains. The picture is still localizationist, but the localized functions are conceptually distant from traditional task domains and psychological constructs. These areas will be functionally diverse from the point of view of the BrainMap task domains, but functionally unitary once the correct elementary operation has been identified. Regardless, we will continue to face the challenge of separating diversity in the brain from messiness in our cognitive categories and from imprecision and heterogeneity in our experimental tasks.

References

Anderson, M. L. (2014) After phrenology: Neural reuse and the interactive brain. MIT Press.
Brett, M., Johnsrude, I. S. & Owen, A. M. (2002) The problem of functional localization in the human brain. Nature Reviews Neuroscience 3(3):243–49.
Carandini, M. & Heeger, D. J. (2012) Normalization as a canonical neural computation. Nature Reviews Neuroscience 13(1):51–62.
Costafreda, S. G. (2009) Pooling fMRI data: Meta-analysis, mega-analysis and multi-center studies. Frontiers in Neuroinformatics 3:33.
Fox, P. T., Laird, A. R., Fox, S. P., Fox, P. M., Uecker, A. M., Crank, M., Koenig, S. F. & Lancaster, J. (2005) BrainMap taxonomy of experimental design: Description and evaluation. Human Brain Mapping 25(1):185–98.
Fox, P. T., Lancaster, J. L., Laird, A. R. & Eickhoff, S. B. (2014) Meta-analysis in human neuroimaging: Computational modeling of large-scale databases. Annual Review of Neuroscience 37:409–34.
Gall, F. J. (1835) On the functions of the brain and of each of its parts: With observations on the possibility of determining the instincts, propensities, and talents, or the moral and intellectual dispositions of men and animals, by the configuration of the brain and head. Marsh, Capen & Lyon.
Laird, A. R., Lancaster, J. L. & Fox, P. T. (2005) BrainMap: The social evolution of a human brain mapping database. Neuroinformatics 3(1):65–78.
Owen, A. M., McMillan, K. M., Laird, A. R. & Bullmore, E. (2005) N-back working memory paradigm: A meta-analysis of normative functional neuroimaging studies. Human Brain Mapping 25(1):46–59.
Petersen, S. E. & Fiez, J. A. (1993) The processing of single words studied with positron emission tomography. Annual Review of Neuroscience 16:509–30.
Price, C. J., Devlin, J. T., Moore, C. J., Morton, C. & Laird, A. R. (2005) Meta-analyses of object naming: Effect of baseline. Human Brain Mapping 25(1):70–82.