
Detecting significant change in neuropsychological test performance: A comparison of four models

Published online by Cambridge University Press: 01 May 1999

NANCY R. TEMKIN
Affiliation:
Department of Neurological Surgery, University of Washington, Seattle, WA; Department of Biostatistics, University of Washington, Seattle, WA
ROBERT K. HEATON
Affiliation:
Department of Psychiatry, University of California, San Diego, La Jolla, CA
IGOR GRANT
Affiliation:
Department of Psychiatry, University of California, San Diego, La Jolla, CA; VA San Diego Healthcare System, San Diego, CA
SUREYYA S. DIKMEN
Affiliation:
Department of Neurological Surgery, University of Washington, Seattle, WA; Department of Rehabilitation Medicine, University of Washington, Seattle, WA; Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA

Abstract


A major use of neuropsychological assessment is to measure change in functioning over time, that is, to determine whether a difference in test performance reflects real change in the individual or merely chance variation. Using 7 illustrative test measures and retest data from 384 neurologically stable adults, this paper compares different methods of predicting retest scores and of determining whether observed changes in performance are unusual. The methods include the Reliable Change Index, with and without correction for practice effect, and models based on simple and multiple regression. For all test variables, the most powerful predictor of follow-up performance was initial performance. Adding demographic variables and overall neuropsychological competence at baseline significantly but slightly improved prediction of all follow-up scores. The simple Reliable Change Index without correction for practice performed least well, with high error rates and large prediction (confidence) intervals. Overall prediction accuracy was similar for the other three methods; however, the different models produced large differences in predicted scores for some individuals, especially those with extreme initial test performance, overall competency, or demographic characteristics. All 5 measures from the Halstead–Reitan Battery had residual (observed − predicted score) variability that increased with poorer initial performance. Two variables showed significant nonnormality in the distribution of residuals. For accurate prediction with the smallest prediction (confidence) intervals, we recommend multiple regression models with attention to differential variability and nonnormality of residuals. (JINS, 1999, 5, 357–369.)
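The comparison centers on the Reliable Change Index, with and without a practice-effect correction, and on regression-based prediction of retest scores. As a rough illustration only, the sketch below implements the standard published forms of these indices (the Jacobson–Truax RCI, a Chelune-style practice adjustment, and a simple-regression predicted score); the exact formulas, reference statistics, and variable names used here are assumptions and may not match the models evaluated in the article.

```python
# Minimal sketch of reliable-change and regression-based change detection.
# These are the textbook forms of the methods named in the abstract; the
# paper's own models (especially the multiple-regression version) may differ.
# All example values are hypothetical.

import math

def rci_no_practice(x1, x2, sd_baseline, r_xx):
    """Reliable Change Index without practice correction:
    (retest - baseline) divided by the standard error of the difference."""
    sem = sd_baseline * math.sqrt(1.0 - r_xx)   # standard error of measurement
    se_diff = math.sqrt(2.0) * sem              # standard error of the difference
    return (x2 - x1) / se_diff

def rci_with_practice(x1, x2, sd_baseline, r_xx, mean_practice_gain):
    """RCI corrected for the average practice effect observed in a
    neurologically stable reference sample."""
    sem = sd_baseline * math.sqrt(1.0 - r_xx)
    se_diff = math.sqrt(2.0) * sem
    return ((x2 - x1) - mean_practice_gain) / se_diff

def regression_predicted_retest(x1, slope, intercept):
    """Simple-regression model: predicted retest score from baseline score.
    A change is flagged as unusual if the observed retest score falls outside
    a prediction interval around this predicted value."""
    return intercept + slope * x1

# Hypothetical example: baseline 45, retest 52, test-retest r = .80,
# baseline SD = 10, average practice gain of 3 points in stable adults.
print(rci_no_practice(45, 52, 10, 0.80))       # ~1.11, within +/-1.96
print(rci_with_practice(45, 52, 10, 0.80, 3))  # ~0.63 after practice correction
```

In this hypothetical example, a 7-point gain is not unusual under either index; the practice-corrected version attributes part of the gain to retesting itself, which is the behavior the abstract contrasts with the uncorrected Reliable Change Index.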

Type
Research Article
Copyright
© 1999 The International Neuropsychological Society