
Rationale and Design of the National Neuropsychology Network

Published online by Cambridge University Press:  04 March 2021

David W. Loring*
Affiliation:
Department of Neurology, Emory University School of Medicine, Atlanta, GA 30329, USA; Department of Pediatrics, Emory University School of Medicine, Atlanta, GA 30322, USA
Russell M. Bauer
Affiliation:
Department of Clinical and Health Psychology, University of Florida, Gainesville, FL 32610, USA; Brain Rehabilitation Research Center, Malcom Randall VAMC, Gainesville, FL 32610, USA
Lucia Cavanagh
Affiliation:
Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, CA 90024, USA
Daniel L. Drane
Affiliation:
Department of Neurology, Emory University School of Medicine, Atlanta, GA 30329, USA; Department of Pediatrics, Emory University School of Medicine, Atlanta, GA 30322, USA
Kristen D. Enriquez
Affiliation:
Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, CA 90024, USA
Steven P. Reise
Affiliation:
Department of Psychology, University of California, Los Angeles, CA 90095, USA
KuoChung Shih
Affiliation:
Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, CA 90024, USA
Laura Glass Umfleet
Affiliation:
Department of Neurology, Medical College of Wisconsin, Milwaukee, WI 53226, USA
Dustin Wahlstrom
Affiliation:
Pearson Clinical Assessment, San Antonio, TX 78259, USA
Fiona Whelan
Affiliation:
Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, CA 90024, USA
Keith F. Widaman
Affiliation:
Graduate School of Education, University of California, Riverside, CA 92521, USA
Robert M. Bilder
Affiliation:
Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, CA 90024, USA
*
Correspondence and reprint requests to: D.W. Loring, Emory Brain Health Center, 12 Executive Park, Atlanta, GA 30329, USA. E-mail: dloring@emory.edu

Abstract

Objective.

The National Neuropsychology Network (NNN) is a multicenter clinical research initiative funded by the National Institute of Mental Health (NIMH; R01 MH118514) to facilitate neuropsychology’s transition to contemporary psychometric assessment methods with resultant improvement in test validation and assessment efficiency.

Method:

The NNN includes four clinical research sites (Emory University; Medical College of Wisconsin; University of California, Los Angeles (UCLA); University of Florida) and Pearson Clinical Assessment. Pearson Q-interactive (Q-i) is used for data capture for Pearson published tests; web-based data capture tools programmed by UCLA, which serves as the Coordinating Center, are employed for remaining measures.

Results:

NNN is acquiring item-level data from 500–10,000 patients across 47 widely used Neuropsychology (NP) tests and sharing these data via the NIMH Data Archive. Modern psychometric methods (e.g., item response theory) will specify the constructs measured by different tests and determine their positive/negative predictive power regarding diagnostic outcomes and relationships to other clinical, historical, and demographic factors. The Structured History Protocol for NP (SHiP-NP) helps standardize acquisition of relevant history and self-report data.

Conclusions:

NNN is a proof-of-principle collaboration: by addressing logistical challenges, NNN aims to engage other clinics to create a national and ultimately an international network. The mature NNN will provide mechanisms for data aggregation enabling shared analysis and collaborative research. NNN promises ultimately to enable robust diagnostic inferences about neuropsychological test patterns and to promote the validation of novel adaptive assessment strategies that will be more efficient, more precise, and more sensitive to clinical contexts and individual/cultural differences.

Type
Regular Research
Copyright
Copyright © INS. Published by Cambridge University Press, 2021

INTRODUCTION

Neuropsychological practice typically involves manual administration of paper and pencil tests using methods and techniques developed during the mid-20th century, with some tests having historical roots in the 19th century (Bilder & Reise, Reference Bilder and Reise2019; Boake, Reference Boake2000). Transition to newer methods that leverage the multiple advantages of computer-assisted testing has been limited despite a recognized need for method modernization (Marcopulos & Lojek, Reference Marcopulos and Lojek2019) and despite the interest in computer-assisted testing associated with the telehealth practices spawned by the COVID-19 pandemic (Bilder et al., Reference Bilder, Postal, Barisa, Aase, Cullum, Gillaspy and Woodhouse2020a, Reference Bilder, Postal, Barisa, Aase, Cullum, Gillaspy and Woodhouse2020b; Hewitt, Rodgin, Loring, Pritchard, & Jacobson, Reference Hewitt, Rodgin, Loring, Pritchard and Jacobson2020; Postal et al., Reference Postal, Bilder, Lanca, Aase, Barisa, Holland and Salinas2020). Many computerized neuropsychological tests were designed to generate results comparable to their paper and pencil counterparts and are direct adaptations of those measures (e.g., Wisconsin Card Sorting Test). Although multiple assessment protocols have been developed specifically for computerized testing (e.g., CNS Vital Signs, Cogstate, CANTAB), most of these procedures do not fully satisfy published standards for computerized testing (Bauer et al., Reference Bauer, Iverson, Cernich, Binder, Ruff and Naugle2012). More importantly, newer measures have not been optimized by applying advanced psychometric techniques to enhance test validity, increase efficiency, guide differential diagnoses, or suggest appropriate recommendations.

Neuropsychology is criticized for its lengthy testing sessions (Teng & Manly, Reference Teng and Manly2005), which particularly disadvantage patients at risk for fatigue (e.g., Parkinson’s disease, multiple sclerosis). Lengthy assessment protocols further limit access to neuropsychology because fewer patients can be tested each day, and long appointment wait times preclude timely evaluations, which detracts from efficient patient management. Clinical test validation has also been constrained by diagnostic criteria variability and by selection biases associated with samples of convenience in which the base rates of performance patterns are unknown (Pawlowski, Segabinazi, Wagner, & Bandeira, Reference Pawlowski, Segabinazi, Wagner and Bandeira2013). Thus, many “rules of thumb” for diagnostic decision-making are insufficiently validated across the spectrum of clinical conditions that may be referred for neuropsychological evaluation (Chaytor & Schmitter-Edgecombe, Reference Chaytor and Schmitter-Edgecombe2003; Duff, Suhrie, Dalley, Anderson, & Hoffman, Reference Duff, Suhrie, Dalley, Anderson and Hoffman2019; Hoogland et al., Reference Hoogland, van Wanrooij, Boel, Goldman, Stebbins, Dalrymple-Alford and Weintraub2018; Raspall et al., Reference Raspall, Donate, Boget, Carreno, Donaire, Agudo and Salamero2005).

The National Neuropsychology Network (NNN) is a multicenter clinical research initiative funded by the National Institute of Mental Health (NIMH; R01 MH118514). NNN was designed to facilitate neuropsychology’s transition to contemporary psychometric assessment methods with resultant improvement in clinical test validation. Although previous collaborations have been successful in generating important clinical research findings (e.g., Bozeman Epilepsy Consortium; Loring, Reference Loring2010), the absence of independent funding permitted neither the infrastructure development essential for uniform data acquisition and quality control nor support for expertise in advanced psychometrics and test construction. The NIMH is supporting a 5-year award (2019–2024) to achieve these objectives using the Multiple Program Director/Principal Investigator (PI) option (overall PI: Robert M. Bilder; Clinical Site PIs: Russell M. Bauer, Daniel L. Drane, David W. Loring, Laura Glass Umfleet; Non-clinical Site PI: Dustin Wahlstrom).

The NNN collects item-level data on multiple neuropsychological measures to identify the most informative items that characterize relevant neurocognitive constructs. Item-level analysis enables more efficient assessment methods to be developed by applying modern psychometric analyses that either eliminate redundancies or specify adaptive strategies to efficiently answer diagnostic questions. Derived results are then used to establish robust estimates of positive and negative predictive power within relevant neurobehavioral domains; the resulting measures will also be rigorously characterized psychometrically and subjected to robust clinical validation to establish their incremental diagnostic validity over conventional approaches. NNN data will also be analyzed using differential item functioning and differential test functioning to characterize and reduce inequities due to racial, ethnic, linguistic, and economic factors that may influence neuropsychological test performance (Zahodne, Manly, Smith, Seeman, & Lachman, Reference Zahodne, Manly, Smith, Seeman and Lachman2017). Such analyses will determine empirically if items and tests function similarly across different groups, and thus suggest ways to revise measures and propose new methods that would address these biases.
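The positive and negative predictive power estimates described above reduce to standard 2×2 confusion-table arithmetic. A minimal sketch, using hypothetical counts rather than NNN data:

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive value from a 2x2 confusion table."""
    ppv = tp / (tp + fp)  # P(disorder present | test positive)
    npv = tn / (tn + fn)  # P(disorder absent | test negative)
    return ppv, npv

# Hypothetical counts: 80 true positives, 20 false positives,
# 90 true negatives, 10 false negatives.
ppv, npv = predictive_values(tp=80, fp=20, tn=90, fn=10)
print(round(ppv, 2), round(npv, 2))  # 0.8 0.9
```

Unlike sensitivity and specificity, these quantities depend on the base rates in the referral population, which is why the unknown base rates in samples of convenience constrain validation.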

Although the need for neuropsychological test modernization has long been recognized, clinicians are comfortable with existing measures and often reluctant to embrace new assessment methods. Consequently, NNN adopted an incremental approach to influence test usage starting with the “usual suspects” (Curtiz, Reference Curtiz1942) to characterize how legacy measures perform in tightly characterized clinical environments, and then based on initial analyses, make recommendations for test modification and novel assessment method development. The initial NNN database thus includes standard neuropsychological measures that are currently in widespread use (Rabin et al., Reference Rabin, Paolillo and Barr2016). Although most Neuropsychology (NP) tests have either some psychometric or clinical validation, few data exist on how tests perform when combined with other cognitive measures. The analytic approach used by NNN will address: a) how do neuropsychological test findings, individually and in combination, provide unique information to establish an accurate post-assessment diagnosis (i.e., new information not present with pre-examination/referral diagnosis but which influences final diagnostic characterization); and b) what specific test items within established measures are especially informative (or not) in this process. Following successful item-level analyses of existing neuropsychological measures, we will gradually implement cross-network data collection from a limited number of novel or experimental tests that have been developed using voice, video, and other unique formats. As additional sites are added to NNN, rapid collection of validation data for newly fashioned measures will be facilitated through the participation of multiple clinical sites.

NNN has three specific aims: (1) establish network infrastructure; (2) collect and deposit data; and (3) analyze data (see Supplemental Material for grant proposal specific aims). Creating appropriate network infrastructure is critical not only for project execution but also to provide the foundation for NNN expansion to new clinical sites that will permit data capture from larger and more diverse clinical settings. Included in infrastructure development is implementation of the Pearson Q-interactive (Q-i) system across existing NNN sites, and design and development of point-of-testing digital capture of individual item responses for non-Q-i measures. Item-level data including response times are contributed to the NIMH Data Archive (NDA) and will comprise the largest single source of NP data at the item level, broadly facilitating data analyses beyond the boundaries of the NNN project.

The NDA was developed by NIMH to promote open access and data sharing to accelerate scientific progress (see https://nda.nih.gov/about/about-us.html). Originally developed to support autism research, the NDA integrates several existing data repositories including the National Database for Autism Research (NDAR), the Research Domain Criteria Database (RDoCdb), the National Database for Clinical Trials related to Mental Illness (NDCT), and the NIH Pediatric MRI Repository (PedsMRI). By placing data in the NDA, NNN data are available to researchers worldwide, facilitating additional independent analyses of archived NNN material. Many papers already have emanated from archival NDA data (e.g., Human Connectome Project, ABCD datasets), and we anticipate that NNN data will contribute to a similar trajectory of archival data analysis.

NNN NEUROPSYCHOLOGICAL TEST SELECTION

Data are obtained from the most frequently administered neuropsychological tests and are representative of national assessment trends (Rabin, Paolillo, & Barr, Reference Rabin, Paolillo and Barr2016). The clinical sites (Emory University, Medical College of Wisconsin, University of Florida, and UCLA) were selected since they represent geographic diversity, are nationally recognized programs, involve multiple board-certified clinical faculty, and have established track records of collaborative clinical research. Several NNN investigators have also been involved in formal test development initiatives, learning the difficult lesson that funding, infrastructure, and dedicated/protected research time are essential elements for project success. These institutions provide neuropsychology training at the practicum, internship, doctoral, and postdoctoral levels and are well-positioned to influence practices and expectations of emerging neuropsychologists. Clinical sites are expected to enroll more than 10,000 participants spanning diverse neuropsychiatric and neurologic diseases/syndromes and to compile item-level data on 500+ clinical cases for nearly 50 common neuropsychological measures during its 5-year NIMH funding period.

The NNN established a formal collaborative relationship with Pearson Clinical Assessment, the publisher of many of the most widely used NP tests (Rabin et al., Reference Rabin, Paolillo and Barr2016). The NNN leadership plan includes Pearson in all project discussions although Pearson does not have voting rights in the governing board, which consists of leaders of the four clinical sites and an independent external advisor (Robert Heaton). The Pearson Q-i platform captures item-level responses on Pearson tests including measures of general cognition (Wechsler Adult Intelligence Scale-Fourth Edition, WAIS-IV), memory (Wechsler Memory Scale-Fourth Edition; California Verbal Learning Test-Third Edition), executive function (Delis-Kaplan Executive Function System), and brief neuropsychological assessment screening (Repeatable Battery for the Assessment of Neuropsychological Status). The Q-i platform employs iPads for both stimulus presentation and examiner recording of responses.

Because non-Pearson tests (e.g., Boston Naming Test) lacked a point-of-testing data entry platform, the UCLA Semel Institute Biostatistics Core (SI-Stat) developed web-based data capture applications for these measures (SAILOR - System for Acquisition of Item Level and Observational Responses). Agreements to program data-entry applications have been executed with test publishers permitting appropriate per-use royalty payment for clinical test usage. Pearson Clinical Assessment is providing Q-i use for the NNN initiative as part of the research collaboration with support from NIMH. Neuropsychological tests included in NNN are listed in Table 1.

Table 1. Measures being obtained as part of the National Neuropsychology Network

* Selected RBANS subtests administered after TeleNP implementation due to COVID-19 pandemic.

STRUCTURED HISTORY PROTOCOL FOR NEUROPSYCHOLOGY AND OTHER COMMON DATA ELEMENTS

The Structured History Protocol for Neuropsychology (SHiP-NP) is a standardized history protocol developed to harmonize clinical data collection across NNN sites including demographics, medication use, and medical/psychiatric history to facilitate data analysis. Demographic data elements are based on the PhenX Toolkit project (Hamilton et al., Reference Hamilton, Strader, Pratt, Maiese, Hendershot, Kwok and Haines2011), which developed consensus measures for “Phenotypes and eXposures” that include age, race, ethnicity, sex, gender, marital status, educational attainment, annual family income, and child-reported parental educational attainment. Medical history, family history, and concomitant medications are recorded based on the NINDS Common Data Elements (CDE) conventions (Grinnon et al., Reference Grinnon, Miller, Marler, Lu, Stout, Odenkirchen and Kunitz2012), and medications associated with increased risks for adverse cognitive effects (e.g., narcotics, benzodiazepines, anticholinergics, sedatives/hypnotics, and selected anti-seizure medications) are highlighted. The SHiP-NP includes a standardized assessment of potential influences on neuropsychological performance including developmental history, academic performance, legal and military history, mental health treatment, and social health determinants (e.g., financial strain). Appropriate follow-up questions are available for specific medical conditions (e.g., epilepsy, traumatic brain injury, stroke, cancer) to characterize symptom presentation, duration, and treatment history. In response to the COVID-19 pandemic, novel coronavirus exposure and associated distress queries were added that were obtained from the Montreal Behavioral Medicine Centre (https://mbmc-cmcm.ca/covid19/; Lavoie & Bacon, Reference Lavoie and Bacon2020). Additional clinical CDEs are linked to the SHiP-NP (see section below: Clinical Interview).

The SHiP-NP is designed for online completion prior to the clinical appointment via a secure website hosted by SI-Stat, although a paper and pencil SHiP-NP version can be used by patients without internet access. The SHiP-NP relies on branching logic and forced response options to gather relevant information while minimizing overall assessment burden. SHiP-NP data are stored without Protected Health Information (PHI) on a secure UCLA SI-Stat server and are available only to patients and their clinicians. The SHiP-NP generates patient data sheets and narrative history text files that can be modified for each clinical site and which are easily integrated into clinical reports. For patients unwilling or unable to complete the SHiP-NP, NNN sites collect a minimal demographic dataset including age, gender, education, and handedness, with secondary questions documenting marital status, employment, and languages spoken other than English. The first 135 patients to complete the SHiP-NP spent approximately 22 min on the form (Mdn = 22.4 min; range = 4.4–56.0 min).
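The branching logic described above can be sketched as a parent-item/follow-up mapping, in which follow-up questions are presented only when a parent item is endorsed. The item names below are hypothetical illustrations, not actual SHiP-NP fields:

```python
# Hypothetical parent items mapped to their follow-up questions.
FOLLOW_UPS = {
    "history_of_epilepsy": ["age_at_first_seizure", "current_seizure_frequency"],
    "history_of_tbi": ["loss_of_consciousness", "number_of_injuries"],
}

def items_to_present(responses):
    """Return only the follow-up items triggered by endorsed parent items,
    so respondents never see questions irrelevant to their history."""
    queue = []
    for parent, followups in FOLLOW_UPS.items():
        if responses.get(parent):  # endorsed -> branch into follow-ups
            queue.extend(followups)
    return queue

print(items_to_present({"history_of_epilepsy": True, "history_of_tbi": False}))
# ['age_at_first_seizure', 'current_seizure_frequency']
```

Forced response options (not shown) would additionally constrain each answer to a closed set, keeping the captured data analyzable without free-text cleaning.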

MILESTONES ACHIEVED DURING THE INITIAL PROJECT PERIOD

The initial project months (March 2019–August 2019) were dedicated to logistical problem-solving associated with collaboration across sites using different models of clinical service delivery and addressed training and quality control, data capture, and options for obtaining participants’ consent. Although NNN uses the SMART IRB National IRB reliance system, it was still necessary to obtain agreement from site IRBs prior to beginning subject enrollment. The Q-i platform allows individual item data capture for Pearson tests; however, it was necessary during project start-up to develop individual item data capture for non-Q-i measures and to establish appropriate mechanisms for data transfer to the NDA.

INSTITUTIONAL REVIEW BOARD

UCLA serves as the IRB of Record, with all participating sites relying on the SMART IRB agreement (NIH’s National Center for Advancing Translational Sciences (NCATS) Streamlined, Multi-site, Accelerated Resource for Trials (SMART) IRB Reliance Platform). Future NNN sites will be able to sign on to SMART IRB and join this agreement, and if ineligible for SMART IRB, an alternative appropriate IRB agreement will be negotiated. This process coordinates, collects, and verifies information including: (a) local context; (b) site variations in areas such as recruiting, informed consent, HIPAA, populations; (c) conflict of interest disclosure and management; (d) completion of ancillary reviews; (e) training and qualifications of study team; (f) continuing review or closure information; and (g) reportable events such as protocol deviations or adverse reactions. In response to the 2020 COVID-19 outbreak and the emergence of telehealth, NNN was first granted IRB approval to obtain verbal informed consent for study participation. Given the negligible risks to patients of participation in the NNN, that clinical practice remains unchanged, and that no PHI is included, we applied for and received approval for a waiver of informed consent from the UCLA IRB. With the consent waiver, participants are assigned a random number (i.e., pseudo-GUID) that identifies subjects as distinct study participants while avoiding use of PHI for data sharing with Pearson Clinical Assessment, UCLA, and the NDA.
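The pseudo-GUID mechanism can be illustrated with a short sketch. The site-prefix convention, field names, and local registry here are assumptions for illustration only, not the NNN's actual ID scheme:

```python
import uuid

def assign_pseudo_guid(site_code, registry):
    """Assign a random, non-identifying study ID (pseudo-GUID).

    The ID itself carries no PHI; each site keeps a local registry
    (linking table) that it can later map to its own clinical records.
    The site_code prefix is a hypothetical convention for this sketch.
    """
    pseudo_guid = f"{site_code}-{uuid.uuid4().hex[:12]}"
    registry[pseudo_guid] = None  # site later links to its own record ID
    return pseudo_guid

registry = {}
pid = assign_pseudo_guid("UCLA", registry)
print(pid)  # e.g. 'UCLA-3f9c2a1b7e4d' (random each run)
```

Because the identifier is random rather than derived from patient attributes, it cannot be reverse-engineered into PHI by downstream data recipients.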

RECRUITMENT

All adult English-speaking patients referred for neuropsychological evaluation are considered potential NNN participants. We initially aimed to enroll approximately 50 cases weekly across our network to reach 10,000 cases total in the database. Soon after starting enrollment on 31 July 2019, we reached this weekly enrollment target. We experienced a sudden drop in recruitment in March 2020 at the onset of the COVID-19 pandemic, but our clinics have subsequently developed new models of practice and we have again returned to target recruitment levels, expecting to further increase recruitment (see Supplemental Figures 1a and 1b). Participant characteristics are provided in Supplemental Table 1.

TRAINING AND QUALITY CONTROL

Each site has a quality control officer, and each person performing NNN assessments is certified at both the test administration and Q-i assessment interface levels. The NNN has a library of training materials, with support from Pearson Clinical Assessment for Q-i training tools, and the NNN maintains a database documenting training outcomes for all personnel involved in data collection.

DATA ACQUISITION, STORAGE, AND TRANSFER

NNN protocols transmit only data without PHI to Pearson Clinical Assessment, and data received back from Pearson or SI-Stat are keyed to the assigned study ID. Thus, each site is responsible for creating linking tables that connect the NNN study number to other identifiers in its own clinical records. Q-i data transmitted to Pearson Clinical Assessment are tagged by site-specific logins and contain only deidentified patient data; the same conventions govern how data are stored in the Pearson database and subsequently shared with the NDA. Pearson executes its usual workflow to score the obtained information, creating data files for the sending site following its usual Q-i reporting standard. In parallel, Pearson sends complete data, including all trial timing and individual response selection variables, to the NDA. The test results and normative information generated by Pearson are provided back to the clinical sites in an Excel spreadsheet that facilitates preparation of the formal clinical report, a process that typically takes less than 30 min after data transmission to Pearson. UCLA SI-Stat is responsible for data aggregation for non-Q-i measures and for transfer to the NDA. We continue to resolve issues related to the highly specific data elements that are enabled by the Q-i outputs, which include not only item-by-item scores and timings but also individual examiner annotations as image files (e.g., locations of each block in individual block designs). The complete data dictionary will be accessible on our website and from the NDA.
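The separation between a site's local linking table and the de-identified payload it transmits can be sketched as follows; the field names are hypothetical, not the NNN data dictionary:

```python
# Hypothetical PHI fields retained locally; everything else may be
# transmitted keyed only by study ID.
PHI_FIELDS = {"name", "date_of_birth", "mrn"}

def split_record(clinical_record, study_id):
    """Separate a clinical record into (a) a local linking-table entry
    holding the PHI and (b) a de-identified payload keyed by study ID."""
    link_entry = {study_id: {k: v for k, v in clinical_record.items()
                             if k in PHI_FIELDS}}
    payload = {k: v for k, v in clinical_record.items()
               if k not in PHI_FIELDS}
    payload["study_id"] = study_id
    return link_entry, payload

link, payload = split_record(
    {"name": "Jane Doe", "mrn": "12345", "wais_mr_raw": 17}, "NNN-0001")
print(sorted(payload))  # ['study_id', 'wais_mr_raw']
```

Only the site holds both halves, so data shared with Pearson or the NDA cannot be re-linked to a patient without the site's cooperation.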

CLINICAL INTERVIEW

The NNN has adopted conventions for data collection for contemporary diagnostic and clinical status information while allowing individual clinicians to conduct interviews following their current practices. While diagnostic assessments will vary, specification of both pre-assessment and post-assessment diagnoses follows ICD-10-CM and is harmonized with the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) codes for psychiatric disorders. The same online platform developed for the SHiP-NP is used to acquire other CDEs including multiple dimensional self-report rating scales. These CDEs were recommended by an NIMH workgroup (Barch et al., Reference Barch, Gotlib, Bilder, Pine, Smoller, Brown and Farber2016) and include the DSM-5 Level 1 Cross-Cutting Symptom measure, a 23-item self-report that assesses 13 domains (depression, irritability, anxiety, mania, somatic symptoms, suicidality, psychotic symptoms, insomnia, memory problems, compulsions, derealization/depersonalization, personality functioning, and substance use disorders) (available at: https://www.psychiatry.org/psychiatrists/practice/dsm/educational-resources/assessment-measures). Participants who screen positive on the Level 1 assessment also receive the relevant DSM-5 Level 2 measures, which include the Patient-Reported Outcomes Measurement Information System (PROMIS) short-forms for Depression, Anxiety, Anger, and Sleep Problems and self-report ratings of Mania, Substance Use, Repetitive Thoughts and Behaviors, and Somatic Symptoms. Finally, the Clinician-Rated Dimensions of Psychosis Symptom Severity measure is included because this domain was determined to be unreliable by self-report.
Many patients with neurologic referral diagnoses are not expected to screen positive on the Level 1 assessment, although this screening will permit detailed characterization of psychiatric comorbidities in chronic neurologic diseases such as epilepsy or multiple sclerosis or degenerative conditions such as Alzheimer’s disease or Parkinson’s disease.

DIMENSIONAL RATINGS OF EVERYDAY FUNCTIONING AND DISABILITY

Disability ratings are provided by the 36-item self- and informant-reported World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0; available at: https://www.who.int/classifications/icf/whodasdownloads/en/), which is also incorporated as part of the SHiP-NP/CDE online platform. The WHODAS 2.0 follows the theoretical framework of the International Classification of Functioning, Disability and Health (ICF), permitting it to be used worldwide across all health conditions. Additional ratings of current functioning and quality of life for non-overlapping domains are gathered using validated short forms of the Quality of Life in Neurological Disorders (Neuro-QoL) battery (Cella et al., Reference Cella, Lai, Nowinski, Victorson, Peterman, Miller and Moy2012), comprising a total of 65 additional items for Emotional and Behavioral Dyscontrol; Fatigue; Lower Extremity Function – Mobility; Positive Affect and Well-Being; Satisfaction with Social Roles and Activities; Sleep Disturbance; Stigma; and Upper Extremity Function – Fine Motor, Activities of Daily Living (ADL). These are all programmed in the SHiP-NP/CDE web application, with data collected either prior to each participant’s visit via their own web-enabled devices or on site using one of the sites’ iPads or other data entry devices.

DATA ANALYSES

Initial NNN analyses will apply item response theory (IRT) to define more precisely the constructs measured by each test and how these constructs can be assessed more efficiently. Most IRT models are expected to be unidimensional, although multidimensional IRT (mIRT) models will be explored as appropriate. Following dimensionality determination, efficiency optimization will be explored by specifying fixed short forms, computerized adaptive tests (CATs), or multidimensional adaptive tests. IRT application has led to efficiency gains of 50–95% relative to conventional methods for many measures (Choi, Reise, Pilkonis, Hays, & Cella, Reference Choi, Reise, Pilkonis, Hays and Cella2010; Gibbons et al., Reference Gibbons, Weiss, Kupfer, Frank, Fagiolini, Grochocinski and Immekus2008; Moore et al., Reference Moore, Scott, Reise, Port, Jackson, Ruparel and Gur2015). IRT can also characterize the same construct using different item sets, which is particularly useful in longitudinal assessments to ensure that inclusion of a newly derived measure is back-compatible with earlier test versions that may have been used in previous testing, and to enable repeated measurement without using previously exposed items.
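The efficiency argument rests on the item information function: under the two-parameter logistic (2PL) model, an item is informative only near ability levels where its success probability is intermediate. A minimal sketch with hypothetical item parameters (not calibrated NNN values):

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response
    at ability theta, given discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical parameters: an easy, weakly discriminating item (nearly
# everyone passes) contributes far less information at average ability
# than a well-targeted, highly discriminating item.
easy = item_information(theta=0.0, a=0.8, b=-2.5)
targeted = item_information(theta=0.0, a=1.8, b=0.0)
print(easy < targeted)  # True
```

Summing item information across a test yields the test information function, whose reciprocal square root is the standard error of measurement at each ability level; this is what justifies dropping items that add little information.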

As a proof of principle demonstration that modern psychometric methods applied to existing measures can improve test efficiency, we applied IRT to WAIS-IV Matrix Reasoning (MR) data in a group of 549 NNN participants (Reise et al., Reference Reise, Widaman, Bauer, Drane, Loring, Umfleet and Bilder2021). The mean MR raw score was 15.4 (SD = 5.6) and the mean MR scaled score was 9.9 (SD = 3.2), similar to the standardization sample. The first five MR items were completed without error by 97% of the subjects, adding little to the measurement of the MR latent trait. The most difficult MR items also contributed little information at ability levels greater than 1.5 SD above the test mean. Finally, there was considerable overlap in item information across the remaining MR items suggesting that a short form or adaptive version of MR could provide results with precision comparable to the full MR subtest. Using the standard administration start and stop rules, more than half of our sample (282/549 or 51%) were administered 23/26 MR items to obtain their total score. In contrast, a simulated CAT with only 10 MR items was almost perfectly correlated with the ability level estimated from the entire 26-item test (r = .99). This represents more than a doubling of assessment efficiency.
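A maximum-information item selection rule of the kind underlying such a simulated CAT can be sketched in a few lines. The item bank below is hypothetical, not the calibrated MR parameters:

```python
import math

def info(theta, a, b):
    """2PL item information at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def next_item(theta_hat, remaining):
    """Core CAT selection rule: administer the unadministered item with
    maximum information at the current ability estimate."""
    return max(remaining, key=lambda ab: info(theta_hat, *ab))

# Hypothetical item bank of (discrimination, difficulty) pairs.
bank = [(1.2, -2.0), (1.5, -0.5), (1.8, 0.0), (1.4, 1.0), (1.0, 2.5)]
print(next_item(0.0, bank))  # (1.8, 0.0): best targeted at theta = 0
```

After each response the ability estimate is updated (e.g., by maximum likelihood) and the rule is applied again, which is how a 10-item adaptive sequence can recover nearly all the precision of the full 26-item subtest.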

These analyses can be further extended using the nominal or graded response models. In brief, the traditional analyses of the MR responses use dichotomization, considering only whether each item was completed correctly or incorrectly. But because each response is chosen from a multiple-choice array that includes five response options, each distractor can be considered independently as an indicator of ability on either the primary latent trait or on other factors that might be identified following data analysis. In the nominal response model, each response is considered independently, while in the graded response model, the response alternatives can be ranked on an ordinal scale from those that are “closest” to those that are most “distant” from the correct choice. For example, selection of a “close” response option that matches on some but not all item dimensions might be more frequent among those with higher ability, while selection of a “distant” option that does not follow the expected dimensions might be more frequent among those with lower ability or who may be following an unusual response strategy. This approach to response analysis may have special value in performance validity testing, in which selection of particularly “distant” choices might reveal a marked deviation from the best estimates of that individual’s true abilities, suggesting possible intentional exaggeration of deficits. Indeed, this method examining the relationship of individual responses to estimates of true ability (i.e., person-fit statistics) should allow the design of empirically derived embedded performance validity tests for many measures.
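One widely used person-fit index of the kind mentioned above is the standardized log-likelihood statistic lz, computed here for dichotomous responses under 2PL assumptions with hypothetical item parameters (a simplified sketch, not the NNN's analytic code):

```python
import math

def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def lz_person_fit(responses, theta, params):
    """Standardized log-likelihood person-fit statistic (lz).

    Large negative values flag response patterns inconsistent with the
    estimated ability, such as failing easy items while passing hard ones.
    """
    loglik = expected = variance = 0.0
    for x, (a, b) in zip(responses, params):
        p = p_correct(theta, a, b)
        loglik += x * math.log(p) + (1 - x) * math.log(1 - p)
        expected += p * math.log(p) + (1 - p) * math.log(1 - p)
        variance += p * (1 - p) * math.log(p / (1 - p)) ** 2
    return (loglik - expected) / math.sqrt(variance)

# Hypothetical items ordered easy -> hard; an aberrant examinee fails
# the easy items but passes the hard ones.
params = [(1.5, -2.0), (1.5, -1.0), (1.5, 0.0), (1.5, 1.0), (1.5, 2.0)]
expected_pattern = [1, 1, 1, 0, 0]
aberrant_pattern = [0, 0, 1, 1, 1]
print(lz_person_fit(aberrant_pattern, 0.0, params)
      < lz_person_fit(expected_pattern, 0.0, params))  # True
```

An embedded performance validity indicator built this way would flag examinees whose item-by-item choices are improbable given their own overall ability estimate, rather than relying on separate stand-alone validity tests.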

As part of quality control, data analyses will examine measurement invariance in an IRT framework to determine whether putatively identical tests behave comparably across sites, diagnostic boundaries, and groups defined by demographic characteristics. These analyses will establish to what extent data can be combined for subsequent analyses and to what extent unique scoring and interpretation may be indicated for different groups defined by age, sex, education, racial/ethnic, linguistic, or cultural backgrounds. Novel methods for variable harmonization developed for the Whole Genome Sequencing in Psychiatric Disorders consortium (U01 MH105578) will be employed to combine data that were acquired using different instruments in different samples (with no overlap of items across samples and no patient receiving both instruments) (Mansolf et al., Reference Mansolf, Vreeker, Reise, Freimer, Glahn and Gur2020). This method analyzes the goodness-of-fit of correlations among variables to determine if two different variables are sufficiently similar to include in pooled analyses. Items are selected that best match items in another test based on the strengths of their correlations with all the other items in their respective test. When the loss function asymptotes, this process stops, yielding the items that are considered equivalent across datasets.
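The correlation-based matching step can be illustrated as minimizing a squared-difference loss between items' correlation profiles over a shared set of anchor variables. This is a simplified sketch with hypothetical values, not the Mansolf et al. algorithm itself:

```python
def best_match(target_profile, candidate_profiles):
    """Match an item from one instrument to the candidate item in another
    instrument whose correlation profile (correlations with shared anchor
    variables) minimizes a squared-difference loss."""
    losses = [sum((t - c) ** 2 for t, c in zip(target_profile, cand))
              for cand in candidate_profiles]
    return min(range(len(losses)), key=losses.__getitem__)

# Hypothetical correlation profiles over three shared anchor variables.
target = [0.6, 0.4, 0.1]
candidates = [[0.1, 0.1, 0.7],    # dissimilar item
              [0.58, 0.42, 0.12]]  # near-equivalent item
print(best_match(target, candidates))  # 1
```

Iterating this matching until the loss stops improving (the asymptote mentioned above) leaves only item pairs whose correlational behavior is close enough to treat as equivalent in pooled analyses.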

Data analyses will include both confirmatory and exploratory factor analytic approaches. The degree to which alternate models may be more appropriate to our clinical samples, or may be specified using different tests, will be rigorously explored, and analyses will determine to what extent the additional test measures add to or modify that structure. Alternate models using confirmatory factor analysis will establish how various models differ with respect to efficiency (i.e., assessment time) by determining the degree to which goodness of fit deteriorates with fewer variables or by substituting short-form test scores for long-form results. This permits the creation of short-form tests, new short-form batteries, and CATs that measure the same constructs or arrive at the same diagnostic conclusions more efficiently. We will also characterize short-form use across different clinical diagnostic referrals since, for example, a short-form assessment of confrontation naming may be appropriate in patients with referral diagnoses of multiple sclerosis but not for epilepsy patients referred for surgical evaluation. Thus, NNN will provide the evidence base that enables multidimensional adaptive testing to maximize diagnostic information while minimizing burden to patients and facilitating new large-scale collaborative research projects (e.g., as in genomics and population behavioral health).

CLINICAL VALIDATION OF NEUROPSYCHOLOGICAL PROCEDURES

By employing adaptive assessment techniques, assessment protocols may eventually be personalized according to referral diagnoses and patient performance patterns, with both item and test selection maximizing predictive power by sequentially selecting the most informative next item within a test and the most informative next test within a battery. When specific test findings are obtained, prior probability estimates are updated, and the next most informative measure is selected. This process continues until desired levels of precision are obtained with respect to the outcomes of interest. With adaptive testing, assessment protocols are not administered in the same pre-determined sequence, but rather adapt to the specific clinical context and adjust dynamically according to task performance.
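
A common rule for the "most informative next item" step is to pick the unadministered item with maximum Fisher information at the current ability estimate. A minimal sketch under a two-parameter logistic (2PL) model, with a hypothetical item bank:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def next_item(theta_hat, item_bank, administered):
    """Select the unadministered item with maximum information at the
    current ability estimate (item_bank is a list of (a, b) tuples)."""
    return max(((info_2pl(theta_hat, a, b), idx)
                for idx, (a, b) in enumerate(item_bank)
                if idx not in administered))[1]

# Easy, medium, and hard items; near theta = 0 the medium item
# carries the most information, so it is administered next
bank = [(1.0, -2.0), (1.0, 0.0), (1.0, 2.0)]
chosen = next_item(theta_hat=0.0, item_bank=bank, administered=set())
```

The same selection logic extends from items within a test to tests within a battery, with the information criterion defined over the constructs or diagnostic outcomes of interest.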

Test combinations, administration order, and all available sources of information will be considered to establish the best combination for accurate prediction, whether the outcome of interest is post-assessment diagnosis or risk of an adverse treatment outcome. These analyses will generally use multinomial logistic regression (MLR) models when there are more than two outcome categories, and binary logistic regression when there are only two.
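
The MLR prediction step amounts to a softmax over linear class scores. A self-contained sketch with purely illustrative weights (not fitted NNN parameters):

```python
import math

def mlr_probs(x, weights, biases):
    """Multinomial logistic regression class probabilities: softmax
    over linear class scores for feature vector x. The weights and
    biases here are illustrative placeholders."""
    scores = [sum(w_i * x_i for w_i, x_i in zip(w, x)) + b
              for w, b in zip(weights, biases)]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three hypothetical diagnostic classes, two predictors
# (e.g., a memory z-score and a naming z-score)
probs = mlr_probs([0.5, -1.0],
                  weights=[[0.0, 0.0], [1.0, -0.5], [-1.0, 0.5]],
                  biases=[0.0, 0.0, 0.0])
```

With only two classes, the same machinery reduces to ordinary binary logistic regression.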

Mild and Major Neurocognitive Disorder

A common neuropsychological referral question is whether there is cognitive decline that exceeds that expected with normal aging, and if so, whether the pattern is consistent with a specific underlying etiology. Following DSM-5 nomenclature, three classes of neurocognitive function are characterized for all patients: No Neurocognitive Disorder (No NCD), Mild NCD, or Major NCD. This classification is applied for all NNN participants, not simply those referred due to age-related cognitive or memory concerns. This approach reflects the growing recognition that consistent application of cognitive taxonomies for all clinical conditions is necessary to better characterize and enable comparison across disease entities (e.g., presence of single vs. multi-domain impairments, natural history of disease, relationship to sociodemographic and psychological variables; Norman et al., 2021).

The characterization of all diagnostic referrals will also help address the relationship of Mild NCD and psychopathology. For example, the DSM-5 diagnosis of Mild NCD has been associated with more “anergia” and “observed slowness,” while the Petersen Mild Cognitive Impairment criteria have been more strongly associated with neuro-vegetative symptoms and dysphoric mood (Lopez-Anton et al., 2015). Thus, there remains a major gap in understanding precisely how quantitative neurocognitive evidence, evidence of disruption in instrumental activities of daily living, and evidence of non-cognitive psychopathology (particularly mood and anxiety symptoms) all contribute to the ultimate diagnosis of Mild and Major NCD.

MLR analyses will include estimated premorbid ability, neuropsychological performance in multiple discrete domains, psychopathology symptom ratings from the DSM-5 and PROMIS measures, and level of everyday functioning as measured by the WHODAS 2.0 and Neuro-QOL. Although this classification system does not establish diagnostic specificity regarding the presumed etiology of cognitive impairment, it provides a classification nosology that facilitates cross-disease comparison and includes the important IADL component. The results will identify the relationships among premorbid, objective neurocognitive, and psychopathological features that contribute to impairment of everyday functioning and are associated with different dementia outcomes. The final NNN sample is anticipated to be of sufficient size to characterize prediction of Alzheimer’s disease, vascular cognitive impairment, Parkinson’s disease dementia, dementia with Lewy bodies, and mixed dementia syndromes.

Epilepsy Lateralization

Epilepsy surgery candidates will be analyzed to characterize the diagnostic sensitivity of neuropsychological findings for confirming seizure onset lateralization and localization, particularly in patients with temporal lobe epilepsy. The primary approach will examine epilepsy patients who are candidates for surgical resection/ablation of the temporal lobe and are determined to be left hemisphere language dominant, although patients with mixed or right cerebral language dominance provide a special subset for exploring hypotheses related to cerebral “crowding” and other features of potential cerebral reorganization (Strauss, Satz, & Wada, 1990). Some data suggest greater risk of decline with atypical (particularly bilateral) language lateralization since language functions tend to be complementary across hemispheres rather than “redundant” processes (Drane & Pedersen, 2019).

The surgical decision is informed by ictal and interictal EEGs, PET, and MRI, and surgery provides the gold standard for criterion classification. Neuropsychological data typically are considered confirmatory, and when neuropsychological results are inconsistent with EEG and imaging, patients are considered at increased risk for adverse post-surgical cognitive outcome. NNN analyses will examine neuropsychological predictors of seizure onset using logistic regression models and examine the positive/negative predictive power of each test, and of test combinations, for classifying patients who are seizure free following surgery into four seizure-onset groups: left temporal lobe epilepsy (TLE), right TLE, bilateral TLE, and extra-temporal seizure onset. Predictors will include verbal and nonverbal learning and memory tests, selected language measures, and site-specific measures that we can examine using our variable harmonization strategies. Secondary analysis will involve finer-grained localization of seizure onset based upon more precise regional differences in structure–function relationships (e.g., the temporal pole is more associated with proper nouns than common nouns as assessed with both category-related visual confrontation naming tasks and verbal semantic fluency measures; Abel et al., 2015; Drane et al., 2013; Drane & Pedersen, 2019).
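
For a given test and a given target group (treated one-vs-rest against the other seizure-onset groups), positive and negative predictive power reduce to simple ratios from a 2x2 classification table. A short sketch with hypothetical counts:

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive value from a 2x2 table of
    true/false positives and negatives (counts are hypothetical)."""
    ppv = tp / (tp + fp)   # proportion of positive calls that are correct
    npv = tn / (tn + fn)   # proportion of negative calls that are correct
    return ppv, npv

# e.g., a test flags left TLE in 50 patients (40 correctly), and rules
# it out in 50 patients (35 correctly), among surgically confirmed cases
ppv, npv = predictive_values(tp=40, fp=10, tn=35, fn=15)
```

Because predictive values depend on base rates, estimates from the NNN sample would need re-weighting before application to settings with different referral mixes.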

Epilepsy patients will also be examined as an initial group for evaluating possible reductions in test administration length. The BNT is sensitive to seizure onset lateralization in TLE patients and may be superior in diagnostic sensitivity to material-specific performance discrepancies in verbal learning and memory (Busch, Frazier, Iampietro, Chapin, & Kubu, 2008). In the standard administration, presentation begins with item 30, although anecdotal clinical evidence suggests that earlier items are sensitive to seizure onset laterality effects and may demonstrate post-operative decline following open resection of the anterior temporal lobe. In addition, some BNT stimuli (e.g., yoke, compass) are likely less familiar to younger patients now than when the test was initially published in 1978, making IRT analysis a particularly valuable approach to improve BNT validity and testing efficiency.

Psychiatric Contributions to Cognitive Function and Disability

Comorbid psychiatric disorders frequently complicate neurological disease and diagnosis. Depression is both a risk factor for and a prodromal feature of dementia (Brzezińska et al., 2020). Similarly, depression is common in temporal lobe epilepsy and is related to outcomes following temporal lobe resection, with evidence suggesting a bidirectional relationship between these factors (Hermann, Loring, & Wilson, 2017). Complex effects of anxiety on cognitive test performance are also present (Mella et al., 2020). There is limited information, however, on these disorders as either comorbid conditions or complications of neurologic disease. Increased accuracy in identifying treatable mental disorders may have a marked impact on the future incidence of neurocognitive dysfunction and disability. NNN will examine these relationships using appropriate diagnostic categorization and offer a unique and valuable resource for testing diverse hypotheses about the NP features that distinguish comorbid psychiatric diagnoses in neurological contexts.

In addition to categorical diagnoses, the NNN will provide one of the largest consistent collections of dimensional psychiatric symptom ratings collected contemporaneously with neuropsychological assessments and clinical interviews following structured history-taking. This enables analyses of covariation among cognitive and psychiatric symptom indicators that were not previously possible. NNN will characterize symptoms assessed by the DSM-5 Level 1 measures, with many subjects also completing DSM-5 Level 2 measures. These data will reveal the degree to which symptoms of depression, anxiety, or psychosis either influence neuropsychological function or are at least comorbid factors, informing treatment options designed to maximize cognitive outcome. These models will determine how neuropsychological measures covary with psychiatric symptoms, and the degree to which those relations are observed consistently across different syndromes and levels of ability. While we are currently recording ICD-10-CM/DSM-5 diagnostic information, our long-term strategy is agnostic about the validity of these diagnostic taxonomies, and we are eager to determine whether alternative systems for dimensional or categorical representation of neuropsychiatric syndromes (e.g., the Research Domain Criteria (RDoC; Cuthbert & Insel, 2013; Cuthbert, 2020) or the Hierarchical Taxonomy of Psychopathology (HiTOP; Kotov et al., 2017)) may possess greater validity and provide deeper insights into our understanding of neurocognitive disorders. Finally, the NNN dataset will provide a unique opportunity to examine models in which NP measures are mediators or moderators of relations between psychiatric symptoms and everyday functioning. These models will explore how neuropsychiatric syndromes exert their effects through neurocognitive features and, more importantly, whether psychiatric symptoms have direct effects on outcomes. These analyses will generate evidence-based hypotheses about the relationship of psychiatric symptoms to real-world functioning, the role of neurocognitive assessment in understanding these effects, and the degree to which effective treatments for mental illness can yield major benefits in disability reduction.

SUMMARY

The NNN will provide a major resource to advance understanding of neurocognitive function through refined neurocognitive assessment. NNN is designed to establish a strong foundation capable of engaging neuropsychologists across a variety of practices. By contributing neuropsychological findings to the NDA, we aim to stimulate development of the next generation of evidence-based assessments. The NNN will provide foundational data to establish incremental validity of current, reconfigured, and ultimately novel neuropsychological measures for characterizing our emerging understanding of functional systems in the brain and real-world outcomes. We hope the ultimate result will be marked improvements in our ability to assess brain functions, leading to both improved understanding and treatment of neuropsychological impairment and mental illness in the USA. By enhancing the efficiency of assessment and directly addressing the problems of measurement invariance that have long plagued neuropsychological assessment, we anticipate that NNN will promote the development of methods that improve access globally and reduce current inequities in neuropsychological service delivery.

Our vision is to create a platform that has a foundation forged from “classic” NP tests with widespread use in the NP community, and to build upon this foundation a new generation of tasks that may incorporate a range of current technologies to explore modern theories of brain structure–function relationships. By supplementing current test batteries with additional procedures to examine concurrent and predictive validity of newer measures, we believe the field can benefit from both back-compatibility with established standards and future-directed extensions that will be more efficient and possess greater utility than the current methods. The new methods may include the application of virtual and augmented reality, “the internet of things,” videography, wearables, and eye tracking, among other current and anticipated innovations – all of which may enhance ecological validity relative to current methods. New assessment methods combined with multimodal neuroimaging or electrophysiological techniques (e.g., cortico-cortical evoked potentials in the context of stereoelectroencephalography) and advanced computational processing (e.g., artificial intelligence) may further hold the promise of moving assessments beyond neuropsychological “domains” (e.g., motor, visuo-perceptual, and language processes such as word retrieval) to a deeper understanding of complex neural circuits (e.g., distributed neural processing, connectivity metrics) (Fox & Friston, 2012; Gonzalez-Castillo & Bandettini, 2015).
These developments will ultimately be needed if we hope to understand major unanswered questions such as those that center on the complexities of memory (e.g., how novel information is integrated into a sense of autobiographical self and integrated semantic knowledge over time and space), consciousness and its distortions (e.g., déjà vu, jamais vu, reduplicative paramnesia), socio-emotional functions, and default mode processing.

To facilitate engagement of the larger neuropsychology community, neuropsychologists are encouraged to register at the NNN website https://www.sistat.ucla.edu/NNNWeb/index.html by providing their name, email address, phone number, and organizational affiliation. The NNN will provide registered members of the neuropsychology community with progress updates, invitations to provide feedback on new initiatives, and access to data, assessment tools, and algorithms developed by the NNN. We also hope soon to extend registration of clinical patients with the NNN so that the network can grow beyond the current proof-of-concept to include multiple sites that can contribute data and benefit from sharing the novel normative and clinical validity data that are rapidly being aggregated by the network nationwide. The mature NNN will help assure that clinical neuropsychology evolves to include the most accessible, efficient, and valid methods for the assessment of brain–behavior relations.

SUPPLEMENTARY MATERIAL

To view supplementary material for this article, please visit https://doi.org/10.1017/S1355617721000199

ACKNOWLEDGMENTS

Partial support for development of the SHiP-NP was provided by a grant from the National Academy of Neuropsychology (NAN) to Lucia Cavanagh.

CONFLICT OF INTEREST

We have no known conflicts of interest to disclose.

FINANCIAL SUPPORT

This project is funded by NIMH R01 MH118514.

Footnotes


NNN Study Group Members: Russell M. Bauer, Robert M. Bilder, Lucia Cavanagh, Daniel L. Drane, Kristen Enriquez, Felicia C. Goldstein, Jude Henry, Kelsey C. Hewitt, David W. Loring, Stephen P. Reise, Kuo Chung Shih, Catherine Sugar, Sean Turner, Laura Glass Umfleet, Dustin Wahlstrom, Keith F. Widaman, Patricia Walshaw, Fiona Whelan.

REFERENCES

Abel, T.J., Rhone, A.E., Nourski, K.V., Kawasaki, H., Oya, H., Griffiths, T.D., … Tranel, D. (2015). Direct physiologic evidence of a heteromodal convergence region for proper naming in human left anterior temporal lobe. Journal of Neuroscience, 35(4), 1513–1520. doi: 10.1523/jneurosci.3387-14.2015
Barch, D.M., Gotlib, I.H., Bilder, R.M., Pine, D.S., Smoller, J.W., Brown, C.H., … Farber, G.K. (2016). Common measures for National Institute of Mental Health funded research. Biological Psychiatry, 79(12), e91–e96. doi: 10.1016/j.biopsych.2015.07.006
Bauer, R.M., Iverson, G.L., Cernich, A.N., Binder, L.M., Ruff, R.M., & Naugle, R.I. (2012). Computerized neuropsychological assessment devices: joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. The Clinical Neuropsychologist, 26(2), 177–196. doi: 10.1080/13854046.2012.663001
Bilder, R.M., Postal, K.S., Barisa, M., Aase, D.M., Cullum, C.M., Gillaspy, S.R., … Woodhouse, J. (2020a). InterOrganizational Practice Committee recommendations/guidance for teleneuropsychology in response to the COVID-19 pandemic. Archives of Clinical Neuropsychology, 35(6), 647–659. doi: 10.1093/arclin/acaa046
Bilder, R.M., Postal, K.S., Barisa, M., Aase, D.M., Cullum, C.M., Gillaspy, S.R., … Woodhouse, J. (2020b). InterOrganizational Practice Committee recommendations/guidance for teleneuropsychology (TeleNP) in response to the COVID-19 pandemic. The Clinical Neuropsychologist, 34(7–8), 1314–1334. doi: 10.1080/13854046.2020.1767214
Bilder, R.M., & Reise, S.P. (2019). Neuropsychological tests of the future: How do we get there from here? The Clinical Neuropsychologist, 33(2), 220–245. doi: 10.1080/13854046.2018.1521993
Boake, C. (2000). Edouard Claparède and the Auditory Verbal Learning Test. Journal of Clinical and Experimental Neuropsychology, 22(2), 286–292. doi: 10.1076/1380-3395(200004)22:2;1-1;FT286
Brzezińska, A., Bourke, J., Rivera-Hernández, R., Tsolaki, M., Woźniak, J., & Kaźmierski, J. (2020). Depression in dementia or dementia in depression? Systematic review of studies and hypotheses. Current Alzheimer Research, 17(1), 16–28. doi: 10.2174/1567205017666200217104114
Busch, R.M., Frazier, T.W., Iampietro, M.C., Chapin, J.S., & Kubu, C.S. (2008). Clinical utility of the Boston Naming Test in predicting ultimate side of surgery in patients with medically intractable temporal lobe epilepsy: A double cross-validation study. Epilepsia, 17, 17.
Cella, D., Lai, J.S., Nowinski, C.J., Victorson, D., Peterman, A., Miller, D., … Moy, C. (2012). Neuro-QOL: brief measures of health-related quality of life for clinical research in neurology. Neurology, 78(23), 1860–1867. doi: 10.1212/WNL.0b013e318258f744
Chaytor, N., & Schmitter-Edgecombe, M. (2003). The ecological validity of neuropsychological tests: a review of the literature on everyday cognitive skills. Neuropsychology Review, 13(4), 181–197.
Choi, S.W., Reise, S.P., Pilkonis, P.A., Hays, R.D., & Cella, D. (2010). Efficiency of static and computer adaptive short forms compared to full-length measures of depressive symptoms. Quality of Life Research, 19(1), 125–136. doi: 10.1007/s11136-009-9560-5
Curtiz, M. (Director). (1942). Casablanca [Film]. Warner Bros. Pictures.
Cuthbert, B.N. (2020). The role of RDoC in future classification of mental disorders. Dialogues in Clinical Neuroscience, 22(1), 81–85. doi: 10.31887/DCNS.2020.22.1/bcuthbert
Cuthbert, B.N., & Insel, T.R. (2013). Toward the future of psychiatric diagnosis: the seven pillars of RDoC. BMC Medicine, 11, 126.
Drane, D.L., Ojemann, J.G., Phatak, V., Loring, D.W., Gross, R.E., Hebb, A.O., … Tranel, D. (2013). Famous face identification in temporal lobe epilepsy: Support for a multimodal integration model of semantic memory. Cortex, 49(6), 1648–1667. doi: 10.1016/j.cortex.2012.08.009
Drane, D.L., & Pedersen, N.P. (2019). Knowledge of language function and underlying neural networks gained from focal seizures and epilepsy surgery. Brain & Language, 189, 20–33. doi: 10.1016/j.bandl.2018.12.007
Duff, K., Suhrie, K.R., Dalley, B.C.A., Anderson, J.S., & Hoffman, J.M. (2019). External validation of change formulae in neuropsychology with neuroimaging biomarkers: A methodological recommendation and preliminary clinical data. The Clinical Neuropsychologist, 33(3), 478–489. doi: 10.1080/13854046.2018.1484518
Fox, P.T., & Friston, K.J. (2012). Distributed processing; distributed functions? Neuroimage, 61(2), 407–426. doi: 10.1016/j.neuroimage.2011.12.051
Gibbons, R.D., Weiss, D.J., Kupfer, D.J., Frank, E., Fagiolini, A., Grochocinski, V.J., … Immekus, J.C. (2008). Using computerized adaptive testing to reduce the burden of mental health assessment. Psychiatric Services, 59(4), 361–368. doi: 10.1176/ps.2008.59.4.361
Gonzalez-Castillo, J., & Bandettini, P.A. (2015). What cascade spreading models can teach us about the brain. Neuron, 86(6), 1327–1329. doi: 10.1016/j.neuron.2015.06.006
Grinnon, S.T., Miller, K., Marler, J.R., Lu, Y., Stout, A., Odenkirchen, J., & Kunitz, S. (2012). National Institute of Neurological Disorders and Stroke Common Data Element Project: Approach and methods. Clinical Trials, 9(3), 322–329. doi: 10.1177/1740774512438980
Hamilton, C.M., Strader, L.C., Pratt, J.G., Maiese, D., Hendershot, T., Kwok, R.K., … Haines, J. (2011). The PhenX Toolkit: Get the most from your measures. American Journal of Epidemiology, 174(3), 253–260. doi: 10.1093/aje/kwr193
Hermann, B., Loring, D.W., & Wilson, S. (2017). Paradigm shifts in the neuropsychology of epilepsy. Journal of the International Neuropsychological Society, 23(9–10), 791–805. doi: 10.1017/S1355617717000650
Hewitt, K.C., Rodgin, S., Loring, D.W., Pritchard, A.E., & Jacobson, L.A. (2020). Transitioning to telehealth neuropsychology service: Considerations across adult and pediatric care settings. The Clinical Neuropsychologist, 1–17. doi: 10.1080/13854046.2020.1811891
Hoogland, J., van Wanrooij, L.L., Boel, J.A., Goldman, J.G., Stebbins, G.T., Dalrymple-Alford, J.C., … Weintraub, D. (2018). Detecting mild cognitive deficits in Parkinson’s disease: Comparison of neuropsychological tests. Movement Disorders, 33(11), 1750–1759. doi: 10.1002/mds.110
Kotov, R., Krueger, R.F., Watson, D., Achenbach, T.M., Althoff, R.R., Bagby, R.M., … Zimmerman, M. (2017). The Hierarchical Taxonomy of Psychopathology (HiTOP): A dimensional alternative to traditional nosologies. Journal of Abnormal Psychology, 126(4), 454–477. doi: 10.1037/abn0000258
Lavoie, K., & Bacon, S., for the iCARE Study Team (2020). iCARE Study Questionnaire.
Lopez-Anton, R., Santabárbara, J., De-la-Cámara, C., Gracia-García, P., Lobo, E., Marcos, G., … Lobo, A. (2015). Mild cognitive impairment diagnosed with the new DSM-5 criteria: Prevalence and associations with non-cognitive psychopathology. Acta Psychiatrica Scandinavica, 131(1), 29–39. doi: 10.1111/acps.12297
Loring, D.W. (2010). History of neuropsychology through epilepsy eyes. Archives of Clinical Neuropsychology, 25(4), 259–273. doi: 10.1093/arclin/acq024
Loring, D.W., Strauss, E., Hermann, B.P., Perrine, K., Trenerry, M.R., Barr, W.B., … Meador, K.J. (1999). Effects of anomalous language representation on neuropsychological performance in temporal lobe epilepsy. Neurology, 53(2), 260–264. doi: 10.1212/wnl.53.2.260
Mansolf, M., Vreeker, A., Reise, S.P., Freimer, N.B., Glahn, D.C., Gur, R.E., … GROUP Investigators; WGSPD Consortium (2020). Extensions of multiple-group item response theory alignment: Application to psychiatric phenotypes in an international genomics consortium. Educational and Psychological Measurement, 80(5), 870–909.
Marcopulos, B., & Lojek, E. (2019). Introduction to the special issue: Are modern neuropsychological assessment methods really “modern?” Reflections on the current neuropsychological test armamentarium. The Clinical Neuropsychologist, 33(2), 187–199. doi: 10.1080/13854046.2018.1560502
Mella, N., Vallet, F., Beaudoin, M., Fagot, D., Baeriswyl, M., Ballhausen, N., … Desrichard, O. (2020). Distinct effects of cognitive versus somatic anxiety on cognitive performance in old age: The role of working memory capacity. Aging and Mental Health, 24(4), 604–610. doi: 10.1080/13607863.2018.1548566
Moore, T.M., Scott, J.C., Reise, S.P., Port, A.M., Jackson, C.T., Ruparel, K., … Gur, R.C. (2015). Development of an abbreviated form of the Penn Line Orientation Test using large samples and computerized adaptive test simulation. Psychological Assessment, 27(3), 955–964. doi: 10.1037/pas0000102
Norman, M., Wilson, S.J., Baxendale, S., Barr, W.B., Block, C., Busch, R.M., … Hermann, B.P. (2021). Addressing neuropsychological disorders in adults with epilepsy: The International League Against Epilepsy (ILAE)/International Neuropsychological Society (INS) initiative. Epilepsia Open. doi: 10.1002/epi4.12478
Pawlowski, J., Segabinazi, J.D., Wagner, F., & Bandeira, D.R. (2013). A systematic review of validity procedures used in neuropsychological batteries. Psychology & Neuroscience, 6, 311–329.
Postal, K.S., Bilder, R.M., Lanca, M., Aase, D.M., Barisa, M., Holland, A.A., … Salinas, C. (2020). Inter Organizational Practice Committee guidance/recommendation for models of care during the novel coronavirus pandemic. Archives of Clinical Neuropsychology. doi: 10.1093/arclin/acaa073
Rabin, L.A., Paolillo, E., & Barr, W.B. (2016). Stability in test-usage practices of clinical neuropsychologists in the United States and Canada over a 10-year period: A follow-up survey of INS and NAN members. Archives of Clinical Neuropsychology, 31(3), 206–230. doi: 10.1093/arclin/acw007
Raspall, T., Donate, M., Boget, T., Carreno, M., Donaire, A., Agudo, R., … Salamero, M. (2005). Neuropsychological tests with lateralizing value in patients with temporal lobe epilepsy: Reconsidering material-specific theory. Seizure, 14(8), 569–576. doi: 10.1016/j.seizure.2005.09.007
Reise, S.P., Widaman, K., Bauer, R.M., Drane, D.L., Loring, D.W., Umfleet, L.G., … Bilder, R.M. (2021). Item response theory analyses of Matrix Reasoning: Towards a new short form or adaptive test? Paper presented at the Annual Meeting of the International Neuropsychological Society, San Diego, CA.
Strauss, E., Satz, P., & Wada, J. (1990). An examination of the crowding hypothesis in epileptic patients who have undergone the carotid amytal test. Neuropsychologia, 28(11), 1221–1227.
Teng, E.L., & Manly, J.J. (2005). Neuropsychological testing: helpful or harmful? Alzheimer Disease and Associated Disorders, 19(4), 267–271.
Zahodne, L.B., Manly, J.J., Smith, J., Seeman, T., & Lachman, M.E. (2017). Socioeconomic, health, and psychosocial mediators of racial disparities in cognition in early, middle, and late adulthood. Psychology & Aging, 32(2), 118–130. doi: 10.1037/pag0000154
Table 1. Measures being obtained as part of the National Neuropsychology Network
