
Neuropsychology 3.0: Evidence-Based Science and Practice

Published online by Cambridge University Press:  19 November 2010

Robert M. Bilder*
Affiliation:
Jane and Terry Semel Institute for Neuroscience & Human Behavior at UCLA, Los Angeles, California; Department of Psychiatry & Biobehavioral Sciences, David Geffen School of Medicine at UCLA, Los Angeles, California; Department of Psychology, UCLA College of Letters & Science, Los Angeles, California
* Correspondence and reprint requests to: Robert M. Bilder, PhD, Semel Institute at UCLA, 740 Westwood Plaza, Room C8-849, Los Angeles, CA 90095. E-mail: rbilder@mednet.ucla.edu

Abstract

Neuropsychology is poised for transformations of its concepts and methods, leveraging advances in neuroimaging, the human genome project, psychometric theory, and information technologies. It is argued that a paradigm shift toward evidence-based science and practice can be enabled by innovations, including (1) formal definition of neuropsychological concepts and tasks in cognitive ontologies; (2) creation of collaborative neuropsychological knowledgebases; and (3) design of Web-based assessment methods that permit free development, large-sample implementation, and dynamic refinement of neuropsychological tests and the constructs these aim to assess. This article considers these opportunities, highlights selected obstacles, and offers suggestions for stepwise progress toward these goals. (JINS, 2011, 17, 000–000)

Type: Short Reviews
Copyright © The International Neuropsychological Society 2010

Three Generations of Neuropsychology

Neuropsychology is a relatively young discipline that already has undergone significant transformations. Without intention to offer a comprehensive historical review, it is suggested that the field has experienced two distinctive periods already, and is poised for an exciting third phase.

Neuropsychology 1.0 (1950–1979)

The idea that behavior is related to brain function has roots traced at least to Pythagoras (circa 550 BC), but systematic study of brain-behavior relations did not begin until the 19th century. Neuropsychology was recognized as a discipline distinct from applied areas of psychology or neurology only in the 1960s, as signified by first use of the term “neuropsychology” in the English biomedical literature (Kløve, 1963) and establishment of the International Neuropsychological Society (1967). Early practitioners of clinical neuropsychology tended to work in neurology clinics and focus on functional impairments associated with discrete brain lesions (reviewed in classic texts such as Heilman & Valenstein, 1993). During this period, clinical assessment often relied on interpretation without extensive normative data, and many tests had significant psychometric limitations. While some neuropsychological batteries were formalized, many practitioners used tests flexibly for neuropsychological diagnosis.

Neuropsychology 2.0 (1980–present)

Widespread availability of neuroimaging reduced the utility of clinical neuropsychology as a tool for localizing lesions and answering certain differential diagnostic questions, and revolutionized research on brain–behavior relations, spawning the new discipline of cognitive neuroscience. The late 1970s witnessed the establishment of formal training programs in neuropsychology, then specialty board certification (1981), and ultimately the Houston Conference to codify training guidelines (Hannay, 1998). More attention was paid to classical psychometrics, with newer tests incorporating more elaborate standardization and co-norming to enable actuarial interpretation of score discrepancies. Paralleling this was an increase in forensic neuropsychology and rapid growth of symptom validity testing. Clinical neuropsychology increasingly focused on characterizing cognitive strengths and weaknesses rather than differential diagnosis, and gained traction in research on psychiatric syndromes.

Neuropsychology 3.0

This brief review suggests that neuropsychology is poised for a paradigm shift that leverages its position at the interface of basic biology and clinical science, integrates information science, and co-evolves with healthcare reform. A brief summary of contributing factors is followed by a description of opportunities for transformative change.

Forces Promoting Change in Neuropsychology

Neuroimaging

Technological advances in visualizing brain structure and function already have revolutionized neuropsychology (see Neuropsychology 2.0, above) and will likely continue to do so: we remain far from exhausting the known technical limits on the spatial and temporal resolution of individual imaging modalities, much less the quality of information that may become available by integrating across modalities and applying newer neuroinformatics strategies (Van Horn et al., 2008). Structural, diffusion, and resting-state functional brain image databases are already being assembled on a large scale, supporting, for example, probabilistic atlases that allow us to apply to the quantification of brain structure the same kinds of actuarial approaches that are customary in our inspection of neuropsychological test scores (Shattuck et al., 2008).

Functional neuroimaging has experienced particularly striking growth and has unique conceptual impact on neuropsychology. PubMed now contains more than 19,000 articles on “fMRI,” which is striking given that the list begins in 1988. Despite controversy about how much fMRI has advanced understanding of brain–behavior relations, there has been a clear shift to thinking about brain activation as a “dependent variable” that responds to cognitive manipulations, in contrast to the classical neuropsychology perspective focused on effects of lesions. Newer analytic strategies are revealing patterns of regional co-activation, suggesting functional networks different from those identified by lesion studies, and knowledge of connectional anatomy is increasing rapidly and will progress further under the aegis of the NIH Human Connectome Project (Biswal et al., 2010; Glahn et al., 2010). To leverage these advances will demand new theories of brain functional organization and novel designs less biased by prior theories that may be wrong (Poldrack, Halchenko, & Hanson, 2009). These developments hold promise that current concepts about cognitive processes as “emergent functions” of brain activity will be supplanted by mechanistic models relating specific brain activation states to specific behavioral states, addressing empirically the “mind-brain” dilemma.

The Human Genome Project

The completion of the human genome project, dramatic cost reductions for genotyping and whole genome sequencing, and increased capacity to create transgenic models have revolutionized virtually all areas of biomedicine. Particularly given that most well-characterized behavioral traits have heritability of approximately 50%, it is a matter of when, not if, we will find genomic associations for many individual differences in behavior. Recent research suggests, however, that these relations are even more complex than anticipated, and developing mechanistic models of how genetic variation yields behavioral variation will require work of unprecedented scope (Bilder, 2008), prompting a call for The Human Phenome Project to begin assembling myriad assays of phenotypic expression from molecule to mind (Freimer & Sabatti, 2003). This led to establishment of projects such as the Consortium for Neuropsychiatric Phenomics (CNP), which highlights neuropsychological function as a critical central ground to help span the chasm between molecular biology and more complex behavior (Bilder, Sabb, Cannon, et al., 2009). The CNP, supported by the NIH Roadmap Initiative under the aegis of its theme “research teams of the future,” is focusing on strategies that aim to advance neuropsychiatric diagnosis beyond its current atheoretical taxonomy as expressed by the most recent edition of the Diagnostic and Statistical Manual (American Psychiatric Association, 1994), by defining neuropsychological phenotypes that possess mechanistic relations to underlying neural systems, that are important across the conventionally defined diagnostic syndromes, and that are tractable targets for basic research across species. This research agenda, which demands integration of neuropsychological science with expertise in genomics, molecular and cellular biology, systems biology, neuroimaging, psychometrics, and information sciences, is already being prioritized as part of the NIH Blueprint for Neuroscience and the NIMH Strategic Plan (see particularly the Research Domain Criteria [RDoC] initiative; Insel et al., 2010; Insel & Cuthbert, 2009).

Information Science

Even if we are not on the verge of a technological “singularity” when non-biological knowledge will outstrip all biological knowledge (Kurzweil, 2005), there is little doubt that dramatic change in the representation and use of human knowledge has been triggered by growth of the Internet. More than two billion people use the Internet (∼29% of the world population), with higher usage in North America (77%), Oceania/Australia (61%), and Europe (58%) (Miniwatts Marketing Group, 2010). “Web 3.0” emphasizes more intelligent personalized search and retrieval, with “semantic Web” features for structuring and efficient mining of content. Online biomedical knowledge includes PubMed with ∼20 million citations and PubMed Central with ∼1 million full-text articles. Bioinformatics resources include knowledgebases spanning genomics, gene expression, proteomics, and molecular and cellular processes. Individual case-level genomic data are being centralized in national repositories, with phenotype data to follow. The “wisdom of crowds” was once considered an oxymoron, but Wikipedia has more than 3.3 million content pages with quality comparable to the best encyclopedias (Giles, 2005). Facebook claims 500 million active users who spend 700 billion minutes per month accessing this site, and ∼100,000 each month update their Five Factor Personality scores. Despite limited validity data so far, a Google search for “brain training” yields more than 10 million hits, with some sites claiming millions of users despite subscription rates of $15/month. Implications of these developments for neuropsychology are vast, and include opportunities to (1) share knowledge both within our professional community, and with the public, on a massive scale; (2) collaboratively assemble knowledge about brain and behavior; (3) engage large numbers of research participants; and (4) provide educational and clinical services in ways not previously imaginable. While systematic research and clinical application of these strategies are currently germinal, recent scholarly work (Jagaroo, 2009) and the birth of the Society for Neuroinformatics in Neuropsychology (http://www.scnn.org/) may mark the beginning of a new era.

Healthcare Revolution

Our healthcare system faces unprecedented crises, while support for research and training in neuropsychology, and the viability of neuropsychological services, are threatened by financial uncertainties afflicting governments and other institutions. Evidence-based medicine (EBM) is increasingly seen as critical to providing healthcare resources in clinically and cost-effective ways, and this already is impacting clinical neuropsychology, which is not always viewed as “medically necessary.” Given the widespread and rapid deployment of electronic medical records, driven in part by federal mandate, there is enormous potential to assemble relevant clinical data enabling objective evaluation of neuropsychological services alongside other diagnostic and treatment alternatives. This will be done with or without the participation of specialists in neuropsychology. Beyond this is the promise of personalized medicine strategies, which will ultimately be augmented by genome-wide sequence data and personal health records, including lifetime diagnostic and treatment data for every individual.

Agenda for Neuropsychology 3.0

To achieve a paradigm shift in neuropsychology capitalizing on these developments will require decades of commitment, but we possess today multiple actionable options to accelerate change and prepare for the future (see Figure 1).

Fig. 1 A possible agenda for Neuropsychology 3.0 involves partially overlapping stages from ontology development, through collaborative knowledge aggregation, to Web-based adaptive test development.

Formalizing Neuropsychological Concepts and Measurements

Increasing shared knowledge about neuropsychology and enabling its use across disciplines requires operational definition of key concepts and their inter-relations. Formal descriptions of content domains, or ontologies, are rapidly revolutionizing other biomedical disciplines, and more than 2000 bioinformatics resources are now available online. Neuropsychology requires similar developments for its concepts to be represented, mined, and connected to the structure and function of underlying neural circuits, cellular systems, signaling pathways, molecular biology, and genomics.

Challenges in the creation of neuropsychological ontologies include fuzzy concepts, semantic disambiguation of terms, instability, and lack of consensus about concept labels. One large advantage is that abstract neuropsychological constructs are measured by objective test scores, just as latent constructs are validated with respect to observable indicators in structural equation modeling. By linking neuropsychological concepts to specific measurement methods, it is possible to define families of tests and objectively evaluate the degree to which these measure overlapping or non-overlapping constructs (for further discussion, see Bilder, Sabb, Parker, et al., 2009).
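To make this concrete, the following minimal Python sketch (with invented construct names and task labels, not drawn from any published ontology) shows one way a concept-to-indicator mapping could be represented, and how overlap between two constructs might be quantified from their shared task indicators:

```python
# Minimal sketch: link cognitive constructs to the tasks (indicators) that
# measure them, then quantify construct overlap by shared indicators.
# Construct and task names here are illustrative only.

construct_indicators = {
    "working_memory": {"digit_span_backward", "n_back", "spatial_span"},
    "response_inhibition": {"stop_signal", "go_no_go", "stroop"},
    "cognitive_control": {"stroop", "n_back", "task_switching", "stop_signal"},
}

def construct_overlap(a: str, b: str) -> float:
    """Jaccard overlap of the indicator sets for two constructs."""
    set_a, set_b = construct_indicators[a], construct_indicators[b]
    return len(set_a & set_b) / len(set_a | set_b)

if __name__ == "__main__":
    for pair in [("working_memory", "cognitive_control"),
                 ("response_inhibition", "cognitive_control")]:
        print(pair, round(construct_overlap(*pair), 2))
```

In a real ontology the indicator sets would come from curated, citable annotations, and overlap could be weighted by psychometric evidence rather than simple set membership.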

Figure 2 illustrates how it is possible to begin formalizing hypotheses about complex neuropsychological concepts and the evidence that is used to support or refute these. Starting with an assertion, we can identify evidence that includes cognitive task indicators linked to specific functional processes (cognitive constructs), and measurements of brain function and structure that converge on neuroanatomic circuits. The cellular elements of this circuit model can be linked to other bioinformatics resources (including signaling pathways, molecular expression data, and gene networks; not shown).

Fig. 2 This schematic representation of a neuropsychological hypothesis includes an assertion (about “motor response inhibition”) and associated evidence. The evidence is derived from a particular publication, which used a specific cognitive task (the Stop Signal Reaction Time test) to measure a specific functional process (which in this example is the cognitive concept “response suppression” according to one author [Poldrack]). The hypothesis suggests that this process is dependent on functioning of a specific corticostriatal pathway, and this circuit (the “indirect pathway”) is linked to a graphical representation of the relevant connectional anatomy. The evidence also includes neuroimaging data, including functional MRI (fMRI) and diffusion tensor imaging (DTI) as supporting links to implicate the neuroanatomic circuit components that are putatively involved in the behavioral process.

Figure 2 also illustrates how conflicting hypotheses can be represented. For example, Poldrack and Chambers disagree about how best to describe functions of hyperdirect and indirect pathways; the model can be augmented by evidence to resolve conflicting interpretations. Furthermore, quantitative annotation can enable automated meta-analysis. This strategy was used to estimate the heritability of “cognitive control” even though no study had assessed this directly; nevertheless, it was possible to define cognitive control through other associated concepts and draw conclusions using indirect evidence (Sabb et al., 2008). Methods for meta-analytic structural equation modeling can be applied to these data, enabling tests of goodness of fit for competing hypotheses (Furlow & Beretvas, 2005; Riley, Simmonds, & Look, 2007).
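As an illustration of the kind of quantitative aggregation such annotations permit, the sketch below pools correlations reported across studies using fixed-effect, inverse-variance weighting of Fisher-z transformed values; the study values are invented, and a full meta-analytic SEM treatment would go on to fit structural models to pooled correlation matrices (cf. Furlow & Beretvas, 2005):

```python
import math

# Sketch of inverse-variance pooling of correlations on the Fisher z scale,
# the kind of step that precedes meta-analytic structural equation modeling.
# The (r, n) pairs below are invented for illustration only.

studies = [(0.42, 120), (0.35, 85), (0.50, 60)]  # (correlation, sample size)

def pool_correlations(studies):
    """Fixed-effect pooled correlation via Fisher's z transform."""
    num, den = 0.0, 0.0
    for r, n in studies:
        z = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z transform
        w = n - 3                                # inverse-variance weight
        num += w * z
        den += w
    z_bar = num / den
    return math.tanh(z_bar)                      # back-transform to r

print(f"Pooled r = {pool_correlations(studies):.3f}")
```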

No integrated resource addresses all of these issues but some relevant applications are under development. The Consortium for Neuropsychiatric Phenomics (www.phenomics.ucla.edu) includes a Hypothesis Web project offering free resources for designing multilevel graphical hypotheses, searching relevant literature, and recording qualitative and quantitative annotations particularly about cognitive concepts and measurements (see: PubGraph, PubAtlas, PubBrain, Phenomining, and Phenowiki). An affiliated project focuses specifically on cognitive concepts, cognitive tasks, and their inter-relations (see www.cognitiveatlas.org). Further development of these tools can help represent and work with neuropsychological concepts and link these to other repositories of biomedical knowledge, thereby enabling evidence-based science. Similar tools can serve evidence-based practice by formalizing hypotheses about assessment necessary to optimize differential diagnosis or select among different treatment options.

Collaborative Knowledge Building for Neuropsychology

Shared definitions of neuropsychological constructs and measurements enable systematic aggregation of neuropsychological knowledge. So far there are no large repositories for neuropsychological data despite relatively high consistency in data types and substantial homogeneity of specific variables that are collected. Neuropsychological evidence comprises primarily group data and individual case data. Group data exist primarily in research publications or proprietary manuals from test publishers. These sources are intrinsically static (once published, results do not change). Group data dissemination in clinical neuropsychology typically involves two stages: (1) a test is released by its publisher, with a manual including normative and validity data from selected clinical studies; and (2) subsequent publications describe results of studies applying the test in new samples. Updating of tests occurs only in stage 1, and a typical cycle time for revision is 10 years. Test interpretation often relies solely on data from the original manual. Some users complement this with information from subsequent publications, but absent organized repositories, this is left to the initiative of the researcher or clinician. Individual case data today are mostly in private computer databases or file cabinets and not accessible outside the locations where the data were collected.

Dramatic improvements in open access to both group and individual case data are feasible using existing technology. The neuropsychology community can immediately assemble databases that summarize results of published studies. Just as meta-analytic results are compiled by authors of systematic review papers, we can collaboratively assemble published data about specific tests for online access. An example can be found at www.neuropsychnorms.com, which enables users to input individual test scores and receive immediate reports comparing these to published findings (Mitrushina, Boone, Razani, & D’Elia, 2005). With community engagement the scope of this work could be expanded greatly, probably covering most relevant published literature within a few years. Since many papers include healthy groups, meta-analytic normative databases could rapidly rival many standardization samples, and accrued data on new clinical samples, treatment effects, and predictive validity could grow dynamically—as fast as the studies are published.
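A minimal sketch, using hypothetical published sample statistics for a single test score, of how such a meta-analytic normative database might be used: aggregate a sample-size-weighted mean and a pooled within-study standard deviation, then express an individual's raw score as a z-score (between-study heterogeneity is ignored here for simplicity):

```python
import math

# Sketch: aggregate published healthy-sample statistics for one test score
# into a weighted normative mean/SD, then express an individual's raw score
# as a z-score. Study values are hypothetical placeholders.

published_samples = [
    {"n": 150, "mean": 48.2, "sd": 9.5},
    {"n": 80,  "mean": 50.1, "sd": 10.2},
    {"n": 220, "mean": 47.0, "sd": 8.8},
]

def pooled_norms(samples):
    total_n = sum(s["n"] for s in samples)
    mean = sum(s["n"] * s["mean"] for s in samples) / total_n
    # pooled within-study variance (ignores between-study mean differences)
    var = sum((s["n"] - 1) * s["sd"] ** 2 for s in samples) / (total_n - len(samples))
    return mean, math.sqrt(var)

mean, sd = pooled_norms(published_samples)
raw_score = 39
print(f"z = {(raw_score - mean) / sd:.2f} (norm mean={mean:.1f}, sd={sd:.1f})")
```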

It is assumed that individual case data can never be released without careful consideration of informed consent and privacy protections; these issues are extremely important and complex, but because there is insufficient space to elaborate, they are not discussed further here. The primary sources of individual case data are the original test publishers, independent researchers, and clinics. Publishers tend to maintain individual case data as proprietary but release such data under certain circumstances. Researchers tend to keep data secure at least until they have published findings, and often longer, but might release data if there were a national repository that appropriately credited contributions.

An exciting possibility is that clinics and clinicians could contribute data from every examined patient in real time. If this were done, the clinical validity data for major neuropsychological tests would grow very rapidly and provide opportunities to compare any individual patient examined to customized reference groups stratified by demographic characteristics, or by scores on other cognitive tests. Users could be provided tools to filter effectively on diagnostic characteristics, and given that there might be variability in the credibility of different sources, users could further filter on the characteristics of the clinics providing data. A national bank for neuropsychological data could revolutionize both research and assessment practices, enabling rapid aggregation of information about understudied populations and supporting the evidence-based effectiveness studies that will be critical for research and public healthcare decision making.
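The sketch below illustrates, with invented case-level records and field names, how such a repository could support construction of a customized reference group by demographic filtering and return a patient's standing relative to that group:

```python
from statistics import mean, stdev

# Sketch: build a customized reference group from hypothetical case-level
# records by filtering on demographics, then locate a new patient's score.
# Field names and records are invented for illustration.

records = [
    {"age": 71, "education": 12, "dx": "healthy", "memory_score": 44},
    {"age": 68, "education": 16, "dx": "healthy", "memory_score": 52},
    {"age": 74, "education": 14, "dx": "healthy", "memory_score": 47},
    {"age": 70, "education": 12, "dx": "mci",     "memory_score": 38},
]

def reference_group(records, age_range, dx="healthy"):
    lo, hi = age_range
    return [r["memory_score"] for r in records
            if lo <= r["age"] <= hi and r["dx"] == dx]

scores = reference_group(records, age_range=(65, 75))
patient_score = 46
pct = 100 * sum(s < patient_score for s in scores) / len(scores)
print(f"Reference n={len(scores)}, mean={mean(scores):.1f}, "
      f"sd={stdev(scores):.1f}, patient percentile ~{pct:.0f}")
```

In practice such filters would also include education bands, diagnostic codes, and data-source credibility ratings, as described above.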

If individual case data are assembled at the item level, it will be possible to analyze data using modern psychometric theory, leading to new and improved assessment methods. Community consortia could conduct not-for-profit normative and validation studies. Assuming there are ∼5000 neuropsychologists in the United States (based on memberships in INS, APA Division 40, The American Academy of Clinical Neuropsychology, and the National Academy of Neuropsychology), it is exciting to imagine progress that could be made if each examined even one person per year as part of a national consortium.

Assessment Innovation

The most widely used assessment strategies in neuropsychology have undergone little fundamental change over the past century, despite breakthroughs in cognitive neuroscience, neuroimaging, psychometric theory, and human-machine interfaces. Test revisions using traditional print publishing can also have unintended consequences. For example, the WAIS-IV/WMS-IV revisions have been criticized for failing to consider back-compatibility issues that may invalidate clinical interpretations (Loring & Bauer, 2010). Promising experimental paradigms typically languish for decades in the lab before use in clinics. Meanwhile, Web-based acquisition strategies enable rapid collection of data from widely distributed populations using adaptive testing strategies likely to at least double efficiency in construct measurement; when constructs are correlated (as is true of most cognitive constructs), efficiency gains may be higher. One study found a 95% average reduction in items administered using a computerized adaptive test relative to administering all items on the original scales (Gibbons et al., 2008). Furthermore, use of modern psychometric theory enables preservation of robust back-compatibility with prior test versions, while simultaneously enabling introduction of new content and new constructs after these are validated (addressing the primary critique of Loring & Bauer, 2010).
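For readers unfamiliar with how adaptive testing achieves these efficiency gains, the following simplified sketch implements the core loop of a computerized adaptive test under a two-parameter logistic IRT model: select the unadministered item with maximum Fisher information at the current ability estimate, score the response, and update the estimate. The item bank, the grid-search estimator, and the deterministic simulated respondent are illustrative simplifications, not any operational algorithm:

```python
import math

# Sketch of computerized adaptive testing with a 2PL IRT model: pick the
# unadministered item with maximum Fisher information at the current ability
# estimate, score the response, and re-estimate theta by coarse grid search.
# Item parameters and the simulated respondent are hypothetical.

ITEM_BANK = [  # (discrimination a, difficulty b)
    (1.2, -1.0), (0.9, -0.5), (1.5, 0.0), (1.1, 0.5), (1.4, 1.0),
]

def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    p = p_correct(theta, a, b)
    return a * a * p * (1 - p)

def estimate_theta(responses):
    """Crude maximum-likelihood estimate over a grid (for illustration)."""
    grid = [g / 10 for g in range(-30, 31)]
    def loglik(t):
        return sum(math.log(p_correct(t, a, b)) if u
                   else math.log(1 - p_correct(t, a, b))
                   for (a, b), u in responses)
    return max(grid, key=loglik)

def run_cat(true_theta=0.3, n_items=3):
    administered, responses, theta = set(), [], 0.0
    for _ in range(n_items):
        # select the most informative remaining item at the current theta
        idx = max((i for i in range(len(ITEM_BANK)) if i not in administered),
                  key=lambda i: item_information(theta, *ITEM_BANK[i]))
        administered.add(idx)
        a, b = ITEM_BANK[idx]
        u = p_correct(true_theta, a, b) > 0.5      # deterministic "response"
        responses.append(((a, b), u))
        theta = estimate_theta(responses)
    return theta

print(f"Final theta estimate: {run_cat():.1f}")
```

Operational CATs use more refined estimators (e.g., expected a posteriori) and item-exposure controls, but the same basic selection logic underlies the item savings reported in studies such as Gibbons et al. (2008).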

Neuropsychological test development can move forward rapidly if we embrace modern technology, adopt modern psychometric theory, and collaborate. First, neuropsychology needs to embrace computerized assessment. Some express fear that computer tests will somehow replace clinicians, or miss important observations. But the computer is just a tool enabling presentation of certain stimuli and collection of certain responses, and, properly used, can clearly outperform a human examiner in precision and in the rapid implementation of adaptive algorithms. One clear advantage of computer timing precision is that it enables implementation of methods from cognitive neuroscience that rely on more subtle task manipulations and trial-by-trial analyses, which can be more sensitive and specific to individual differences in neural system function. To the extent that future computer logic may provide prompts for differential diagnosis, test selection, or test interpretation, this would only supplement and enhance clinical decision making.
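As one example of the trial-by-trial analyses that precise computerized timing makes practical, the sketch below estimates stop-signal reaction time (SSRT), the index behind the task featured in Figure 2, using the commonly described integration method; the go reaction times and stop trials are invented for illustration:

```python
# Sketch: estimate stop-signal reaction time (SSRT) with the integration
# method: take the nth fastest go RT (n = number of go trials * p(respond |
# stop signal)) and subtract the mean stop-signal delay. Data are invented.

go_rts = [412, 389, 455, 501, 367, 433, 478, 395, 420, 447]  # ms, go trials
stop_trials = [  # (stop-signal delay in ms, responded despite stop signal?)
    (150, False), (200, False), (250, True), (300, True), (200, False),
    (250, False), (300, True), (150, False), (250, True), (200, True),
]

def ssrt_integration(go_rts, stop_trials):
    p_respond = sum(responded for _, responded in stop_trials) / len(stop_trials)
    mean_ssd = sum(ssd for ssd, _ in stop_trials) / len(stop_trials)
    sorted_go = sorted(go_rts)
    n = max(int(round(p_respond * len(sorted_go))), 1)  # nth fastest go RT
    return sorted_go[n - 1] - mean_ssd

print(f"Estimated SSRT ~ {ssrt_integration(go_rts, stop_trials):.0f} ms")
```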

A second, bolder step will involve Web assessment. This idea often triggers the same anxieties raised about computerized assessment, plus concerns that examiners cannot adequately control conditions of testing, be confident that test-takers are performing tasks as instructed, or even be sure about the identities of test-takers. There are further concerns about individual differences in computer literacy, and the “digital divide” that prevents equal access to the Internet. The first class of problems has technological solutions including embedded validity indicators, on-line video surveillance, and anthropometric identifiers. But elaborate surveillance strategies are not necessary for some research and even select clinical applications. There are many people who will try their best, will follow instructions, and will generate valid results, without such interventions. This is a particularly important point for psychometric test development and specific research questions, particularly genetic studies that require large samples. In contrast to conventional test development efforts involving hundreds of participants over years, Web-based protocols can acquire hundreds of thousands of participants in months. Given algorithms for item-level response monitoring and automated consistency checks, there is much greater opportunity than in most current tests to detect outlying response patterns of uncertain validity. Because “brain testing” and “brain training” applications are already proliferating without quality control, there is a pressing need for neuropsychologists to participate, establish guidelines, and ensure the responsible use of such applications.
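A sketch of the kind of item-level monitoring referred to here, with arbitrary thresholds and an invented response record; real implementations would calibrate such rules empirically and combine them with embedded validity items:

```python
# Sketch: simple automated checks for response patterns of uncertain validity
# in Web-collected item data. Thresholds and the example record are arbitrary.

def flag_response_record(response_times_ms, choices, repeated_item_pairs):
    flags = []
    # 1. Implausibly fast responding (e.g., many sub-250 ms item responses)
    fast = sum(rt < 250 for rt in response_times_ms)
    if fast / len(response_times_ms) > 0.10:
        flags.append("excess_fast_responses")
    # 2. Long runs of identical choices ("straight-lining")
    run, longest = 1, 1
    for prev, cur in zip(choices, choices[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    if longest >= 8:
        flags.append("long_identical_run")
    # 3. Inconsistency across repeated or near-duplicate items
    mismatches = sum(choices[i] != choices[j] for i, j in repeated_item_pairs)
    if repeated_item_pairs and mismatches / len(repeated_item_pairs) > 0.5:
        flags.append("inconsistent_repeats")
    return flags

example = flag_response_record(
    response_times_ms=[180, 220, 950, 870, 150, 200, 1100, 230, 640, 720],
    choices=[2, 2, 2, 2, 2, 2, 2, 2, 1, 3],
    repeated_item_pairs=[(0, 8), (1, 9)],
)
print(example)
```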

Soon many individuals will be completing Web-based tests of brain function in the privacy of their homes using a wide range of Web-enabled devices. Rich longitudinal behavioral data will be stored in repositories, along with electronic medical records, complete genome sequences, and automatically aggregated information about environmental exposures based on individual life history. Clinicians will need to develop competencies in the use of data mining tools to effectively manage and interpret torrents of information. The neuropsychologist of the future will synthesize these data and then determine what needs to be done in lab, office, or clinic, and how to direct patients toward optimal therapeutic options.

Overcoming Obstacles

These rosy visions of the future depend on multiple changes, some of which are fundamental to both neuropsychological research and clinical practice. The most critical current bottleneck is achieving consensus frameworks for describing neuropsychological concepts and their measurement. Agreement on terms may seem difficult, but platforms to achieve this aim already exist (see www.cognitiveatlas.org), and engagement in such collaborative efforts may be an achievable goal for neuropsychological membership organizations. Even after we agree on terms, we will still face obstacles in knowledge aggregation, because existing data vary widely in how they are currently maintained and in the quality with which they were originally acquired. In the longer term, it is likely that publication of research findings will be increasingly structured and data will be “deposited” in a case-wise manner, fostering capacity for group analysis but raising additional challenges and possibly threats to academic innovation (i.e., will scientists be supported to pursue directions that deviate markedly from “standardized” data frameworks?). In the shorter term, there are opportunities for aggregation of clinical data, but standards for quality control need to be developed, implemented, and monitored. This aggregation of an adequate knowledgebase is critical to foster acceptance of new methods for assessment, because the responsible researcher or clinician understandably desires to use the best validated methods available. The final stage, development of novel methods, may appear the most daunting but is facilitated by rapid development of relevant technologies; indeed, a true revolution in current assessment methods is achievable with existing technology. The greater obstacles may be financial, given that current funding for test development depends largely on a relatively small “niche” print publishing market. To overcome this, we may need to encourage broader public interest in brain function, while simultaneously developing frameworks to ensure the responsible deployment of methods that are being widely disseminated.

In summary, dramatic changes in science, technology, and society now offer us great opportunities and grand challenges to advance our shared mission as neuropsychologists; it is hoped that, by working collaboratively, we can make Neuropsychology 3.0 a ground-breaking success in biomedicine and pave the road to Neuropsychology 4.0.

Acknowledgments

Dr. Bilder has received consulting fees or honoraria from Roche, CHDI, Johnson & Johnson, Pfizer, Cypress Pharmaceutical, and Dainippon Sumitomo within the past 12 months and is a shareholder in Cogtest Inc, part of the Cognition Group; none of these relationships poses conflicts of interest with the material presented in this article. This work was supported by the Consortium for Neuropsychiatric Phenomics (NIH Roadmap for Medical Research grants UL1-DE019580, RL1LM009833), the Cognitive Atlas project (R01MH082795), and the Michael E. Tennenbaum Family Center for the Biology of Creativity.

References


American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (DSM-IV). Washington, DC: American Psychiatric Association.
Bilder, R.M. (2008). Phenomics: Building scaffolds for biological hypotheses in the post-genomic era. Biological Psychiatry, 63(5), 439–440. doi:10.1016/j.biopsych.2007.11.013
Bilder, R.M., Sabb, F.W., Cannon, T.D., London, E.D., Jentsch, J.D., Parker, D.S. (2009). Phenomics: The systematic study of phenotypes on a genome-wide scale. Neuroscience, 164(1), 30–42. doi:10.1016/j.neuroscience.2009.01.027
Bilder, R.M., Sabb, F.W., Parker, D.S., Kalar, D., Chu, W.W., Fox, J. (2009). Cognitive ontologies for neuropsychiatric phenomics research. Cognitive Neuropsychiatry, 14(4–5), 419–450. doi:10.1080/13546800902787180
Biswal, B.B., Mennes, M., Zuo, X.N., Gohel, S., Kelly, C., Smith, S.M. (2010). Toward discovery science of human brain function. Proceedings of the National Academy of Sciences of the United States of America, 107(10), 4734–4739. doi:10.1073/pnas.0911855107
Freimer, N., Sabatti, C. (2003). The human phenome project. Nature Genetics, 34(1), 15–21.
Furlow, C.F., Beretvas, S.N. (2005). Meta-analytic methods of pooling correlation matrices for structural equation modeling under different patterns of missing data. Psychological Methods, 10(2), 227–254. doi:10.1037/1082-989X.10.2.227
Gibbons, R.D., Weiss, D.J., Kupfer, D.J., Frank, E., Fagiolini, A., Grochocinski, V.J. (2008). Using computerized adaptive testing to reduce the burden of mental health assessment. Psychiatric Services, 59(4), 361–368. doi:10.1176/appi.ps.59.4.361
Giles, J. (2005). Internet encyclopaedias go head to head. Nature, 438(7070), 900–901. doi:10.1038/438900a
Glahn, D.C., Winkler, A.M., Kochunov, P., Almasy, L., Duggirala, R., Carless, M.A. (2010). Genetic control over the resting brain. Proceedings of the National Academy of Sciences of the United States of America, 107(3), 1223–1228. doi:10.1073/pnas.0909969107
Hannay, H.J. (1998). Proceedings of the Houston Conference on Specialty Education and Training in Clinical Neuropsychology, September 3–7, 1997, University of Houston Hilton and Conference Center. Archives of Clinical Neuropsychology, 13(2), 157–250.
Heilman, K.M., Valenstein, E. (1993). Clinical neuropsychology (3rd ed.). New York: Oxford University Press.
Insel, T.R., Cuthbert, B.N. (2009). Endophenotypes: Bridging genomic complexity and disorder heterogeneity. Biological Psychiatry, 66(11), 988–989. doi:10.1016/j.biopsych.2009.10.008
Insel, T., Cuthbert, B., Garvey, M., Heinssen, R., Pine, D.S., Quinn, K. (2010). Research domain criteria (RDoC): Toward a new classification framework for research on mental disorders. American Journal of Psychiatry, 167(7), 748–751. doi:10.1176/appi.ajp.2010.09091379
Jagaroo, V. (2009). Obstacles and aids to neuroinformatics in neuropsychology. In Neuroinformatics for neuropsychology (pp. 85–93). New York: Springer.
Kløve, H. (1963). Clinical neuropsychology. The Medical Clinics of North America, 47, 1647–1658.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Viking.
Loring, D.W., Bauer, R.M. (2010). Testing the limits: Cautions and concerns regarding the new Wechsler IQ and Memory scales. Neurology, 74(8), 685–690. doi:10.1212/WNL.0b013e3181d0cd12
Miniwatts Marketing Group. (2010). Internet world stats usage and population statistics. Retrieved from http://www.internetworldstats.com/
Mitrushina, M.N., Boone, K.B., Razani, J., D’Elia, L.F. (2005). Handbook of normative data for neuropsychological assessment. New York: Oxford University Press.
Poldrack, R.A., Halchenko, Y.O., Hanson, S.J. (2009). Decoding the large-scale structure of brain function by classifying mental states across individuals. Psychological Science, 20(11), 1364–1372. doi:10.1111/j.1467-9280.2009.02460.x
Riley, R.D., Simmonds, M.C., Look, M.P. (2007). Evidence synthesis combining individual patient data and aggregate data: A systematic review identified current practice and possible methods. Journal of Clinical Epidemiology, 60(5), 431–439. doi:10.1016/j.jclinepi.2006.09.009
Sabb, F.W., Bearden, C.E., Glahn, D.C., Parker, D.S., Freimer, N., Bilder, R.M. (2008). A collaborative knowledge base for cognitive phenomics. Molecular Psychiatry, 13(4), 350–360. doi:10.1038/sj.mp.4002124
Shattuck, D.W., Mirza, M., Adisetiyo, V., Hojatkashani, C., Salamon, G., Narr, K.L. (2008). Construction of a 3D probabilistic atlas of human cortical structures. Neuroimage, 39(3), 1064–1080. doi:10.1016/j.neuroimage.2007.09.031
Van Horn, J.D., Bandettini, P.A., Cheng, K., Egan, G.F., Stenger, V.A., Strother, S., Toga, A.W. (2008). New horizons for the next era of human brain imaging, cognitive, and behavioral research: Pacific Rim Interactivity. Brain Imaging and Behavior, 2(4), 227–231.