
Neuropsychological Test Performance and Cognitive Reserve in Healthy Aging and the Alzheimer's Disease Spectrum: A Theoretically Driven Factor Analysis

Published online by Cambridge University Press:  08 October 2012

Meghan B. Mitchell*
Affiliation:
Geriatric Research Education and Clinical Center, Edith Nourse Rogers Memorial Veterans Hospital, Bedford, Massachusetts; Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
Lynn W. Shaughnessy
Affiliation:
Harvard Medical School, Boston, Massachusetts; Department of Psychiatry, Massachusetts Mental Health Center, Boston, Massachusetts; Department of Psychiatry, Beth Israel Deaconess Medical Center, Boston, Massachusetts
Steven D. Shirk
Affiliation:
Geriatric Research Education and Clinical Center, Edith Nourse Rogers Memorial Veterans Hospital, Bedford, Massachusetts; Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts
Frances M. Yang
Affiliation:
Harvard Medical School, Boston, Massachusetts; Institute for Aging Research, Hebrew Senior Life, Boston, Massachusetts; Department of Internal Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
Alireza Atri
Affiliation:
Geriatric Research Education and Clinical Center, Edith Nourse Rogers Memorial Veterans Hospital, Bedford, Massachusetts; Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
*
Correspondence and reprint requests to: Meghan Mitchell, 200 Springs Road, 182b, Edith Nourse Rogers Memorial Veterans Hospital, Bedford, MA 01730. E-mail: Meghan.Mitchell2@va.gov

Abstract

Accurate measurement of cognitive function is critical for understanding the disease course of Alzheimer's disease (AD). Detection of cognitive change over time can be confounded by level of premorbid intellectual function or cognitive reserve, leading to under- or over-diagnosis of cognitive impairment and AD. Statistical models of cognitive performance that include cognitive reserve can improve sensitivity to change and clinical efficacy. We used confirmatory factor analysis to test a four-factor model composed of memory/language, processing speed/executive function, attention, and cognitive reserve factors in a group of cognitively healthy older adults and a group of participants along the spectrum of amnestic mild cognitive impairment to AD (aMCI-AD). The model showed excellent fit for the control group (χ2 = 100; df = 78; CFI = .962; RMSEA = .049) and adequate fit for the aMCI-AD group (χ2 = 1750; df = 78; CFI = .932; RMSEA = .085). Although strict invariance criteria were not met, invariance testing to determine whether factor structures are similar across groups yielded acceptable absolute model fits and provided evidence in support of configural, metric, and scalar invariance. These results provide further support for the construct validity of cognitive reserve in healthy and memory-impaired older adults. (JINS, 2012, 18, 1–10)

Type
Research Articles
Copyright
Copyright © The International Neuropsychological Society 2012

Introduction

Alzheimer's disease (AD) is the most common form of dementia, with an estimated 5.4 million cases in the United States (Alzheimer's Association, 2011). Measurement of cognitive deficits in AD is crucial for early interventions, which may delay clinical decline (Atri, Rountree, Lopez, & Doody, 2012; DeKosky, 2003). Existing models of cognition in normal aging and AD have identified latent variables representing cognitive domains (Johnson, Storandt, Morris, Langford, & Galvin, 2008). A benefit of such latent variable approaches is that they potentially allow for purer measures of cognition (i.e., latent variables as opposed to a summation of individual indicator variables) that take into account covariance with other latent variables, and can thus provide comprehensive models of cognition with demonstrated convergent and discriminant validity (Satz, Cole, Hardy, & Rassovsky, 2011). Latent variable analysis also simplifies statistical models by reducing the number of indicator variables to their latent constructs, thereby reducing problems with multicollinearity and restrictions of statistical power. A further obstacle to measuring cognition and detecting change over time in AD is the problem of appropriately considering baseline level of premorbid intellectual function, or cognitive reserve; if unaccounted for, these may lead to under- or over-diagnosis of AD and hamper the detection of change or efficacy in clinical trials (Rentz et al., 2004). There is thus a need for statistical models of cognition that take into account cognitive reserve.

Cognitive reserve is typically defined as the brain's capacity to maintain cognitive function despite neurologic damage or disease (Stern, 2009). Thus, an ideal measure of cognitive reserve would quantify the discrepancy between cognitive functioning and the extent of brain disease or damage. While recent advances in neuroimaging enable quantification of AD-related pathology or correlates of degeneration, such as measurement of amyloid load via PET imaging (Roe et al., 2010) and atrophy via MRI cortical thickness measurement (Dickerson et al., 2009), these techniques are not widely available and do not capture all aspects of AD neuropathology (e.g., neurofibrillary tangles). In light of these difficulties with quantifying cognitive reserve, operational definitions typically use "proxy" measures that reflect lifetime experiences in cognitively stimulating activities. Cognitive reserve is typically associated with higher educational and occupational attainment (Stern et al., 1994) and more involvement in physical, social, and leisure activities (Wilson, Scherr, Schneider, Tang, & Bennett, 2007). Individuals with higher cognitive reserve show more resilience to multiple forms of neurologic insult, a slower rate of decline in normal cognitive aging, and delayed onset of decline in AD (Stern, 2009). Intervention studies aimed at increasing involvement in activities associated with cognitive reserve (e.g., socialization, physical activity, or cognitive activity) suggest that, while some aspects of cognitive reserve may be genetically predetermined (Lee, 2003), other aspects may be modifiable (Wilson et al., 2007).

Cognitive reserve is also associated with higher levels of executive functioning (Siedlecki et al., 2009), leading some to question cognitive reserve's uniqueness as a theoretical construct. Siedlecki and colleagues (2009) investigated the construct validity of cognitive reserve in a series of latent variable models comparing the factor loadings of neuropsychological tests and cognitive reserve variables on the constructs of memory, processing speed, executive function, and cognitive reserve, and found reasonably good indices of model fit across three samples of cognitively normal adults. Findings were less consistent across samples, however, with regard to the intercorrelations between the cognitive reserve and executive functioning indicator variables and constructs: one of the three samples showed a strong intercorrelation between the cognitive reserve and executive functioning constructs (r = .90), and two of the three samples showed cognitive reserve variables to have strong factor loadings not only on the cognitive reserve construct but also on the executive functioning construct (factor loadings ranging from .54 to .93). The authors concluded that their results support convergent validity of the cognitive reserve construct, as their indicator variables of cognitive reserve consistently loaded on the cognitive reserve factor. However, given the divergence in their findings, discriminant validity was not entirely supported, and the authors suggested the possibility that executive functioning and cognitive reserve are highly related constructs that may not be distinct.

In a critical review of the literature on the construct validity of cognitive reserve, Satz and colleagues (2011) suggest that construct validity be assessed with factor analysis to determine whether indicator variables commonly used as proxies for cognitive reserve (e.g., years of education, premorbid IQ, engagement in cognitively challenging activities) load onto the same or separate factors as other constructs previously suggested to be potentially overlapping (Satz et al., 2011). The authors propose a hypothetical model that includes four subcomponents: (1) executive function (e.g., measures of response inhibition, fluency, error monitoring, selective attention, cognitive switching, reasoning); (2) processing resources (e.g., measures of divided attention, processing speed, and working memory); (3) complex mental activity (e.g., engagement in cognitively stimulating activities, social networks, education, literacy, occupational history); and (4) intelligence, or g, indicated by measures of fluid and crystallized intelligence (Satz et al., 2011). It is unclear to us from their discussion of this proposed model whether their assertion is that executive functioning, processing resources, mental activity, and general intellectual function are subcomponents of cognitive reserve, or whether their model is intended to define cognitive reserve as complex mental activities and to test cognitive reserve's validity in a model with these cognitive constructs. Nonetheless, their hypothetical model underscores the importance of evaluating overlap in the constructs of cognitive reserve and the related constructs of executive function, processing resources, and intellectual functioning.

The aim of our study was to establish a latent variable model of neuropsychological test performance that incorporates cognitive reserve and may later serve as a basis to study longitudinal change in normal aging and along the spectrum of aMCI to AD (aMCI-AD; Locascio & Atri, 2011). We sought to examine the factor structure of cognitive reserve variables in combination with neuropsychological measures to evaluate cognitive reserve as a distinct construct, and to validate a model of cognitive domains and cognitive reserve, the CDCR model, in a group of cognitively normal older adults and a group with memory impairment along the aMCI-AD spectrum. While we were unable to fully evaluate all possible constructs that potentially overlap with the construct of cognitive reserve, we hypothesized that indicators of cognitive reserve would load onto a different factor than indicators of processing resources and executive function, consistent with the hypotheses of both Siedlecki et al. (2009) and Satz et al. (2011).

Method

Participants

A total of 559 participants from the Massachusetts Alzheimer's Disease Research Center (MADRC) longitudinal cohort study of memory and aging were administered the standard battery of tests implemented by the Alzheimer Disease Center (ADC) Uniform Data Set (UDS; Beekly et al., 2007; Morris et al., 2006). The MADRC implemented the UDS in September 2005, and initial visits from September 2005 to August 2009 were included. Inclusion criteria were: (1) age ≥ 50 years, and (2) UDS clinical research diagnosis of cognitively normal with a Clinical Dementia Rating (CDR) score of 0 (Morris, 1993), amnestic Mild Cognitive Impairment (aMCI) with CDR = 0.5, or Possible or Probable Alzheimer's disease (PrAD) with CDR ≥ 0.5. Participants were then assigned to two groups according to their CDR: a control group comprising clinically cognitively normal older adults (n = 294) with CDR = 0, and an aMCI-AD group comprising participants with cognitive impairment across the AD spectrum (n = 265), ranging from aMCI to PrAD, with CDR ≥ 0.5.
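The following is a minimal sketch, not the authors' code, of the inclusion and grouping rules just described; the record fields (age, diagnosis, cdr) and diagnosis labels are hypothetical names chosen for illustration.

```python
# Hedged sketch of the stated inclusion criteria and two-group assignment.
from typing import Optional

def assign_group(age: float, diagnosis: str, cdr: float) -> Optional[str]:
    """Return 'control', 'aMCI-AD', or None (excluded) per the stated criteria."""
    if age < 50:
        return None                      # inclusion criterion 1: age >= 50 years
    if diagnosis == "normal" and cdr == 0:
        return "control"                 # cognitively normal with CDR = 0
    if diagnosis == "aMCI" and cdr == 0.5:
        return "aMCI-AD"                 # amnestic MCI with CDR = 0.5
    if diagnosis in ("possible AD", "probable AD") and cdr >= 0.5:
        return "aMCI-AD"                 # possible/probable AD with CDR >= 0.5
    return None                          # all other presentations excluded

# Example: a 72-year-old with aMCI and CDR 0.5 falls in the impaired group.
assert assign_group(72, "aMCI", 0.5) == "aMCI-AD"
```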

Procedure

Participants underwent standardized UDS clinical research evaluations and were accompanied by a reliable informant. The cognitive test battery was administered by a trained psychometrician and consisted of the standard UDS neuropsychological battery and supplemental tests (see next paragraph) administered after completion of the UDS tests. Informed consent was obtained from all participants according to a protocol approved by the Partners Healthcare Human Subjects Research Institutional Review Board.

Measures

Measures from the UDS neuropsychological battery included the Wechsler Memory Scale-Revised (WMS-R) subtests Logical Memory IA and IIA (LMI and LMII) and Digit Span Forward and Backward (Digit Fwd and Digit Bkwd; Wechsler, 1987), Semantic Fluency (Animals and Vegetables: SEMFLU; Morris et al., 1989), the Boston Naming Test (BNT, 30-item, odd-numbered; Mack, Freed, Williams, & Henderson, 1992), the Wechsler Adult Intelligence Scale-Revised (WAIS-R) Digit Symbol Coding subtest (Dig Sym; Wechsler, 1987), and the Trail Making Test Parts A and B (Trails A and Trails B; Reitan & Wolfson, 1985). Additional measures were the American National Adult Reading Test (AMNART; Grober & Sliwinski, 1991) and the Free and Cued Selective Reminding Test (FCSRT; Grober, Lipton, Hall, & Crystal, 2000). The AMNART, a single-word reading test, correlates with overall intellectual functioning and is used as an estimate of premorbid functioning, as it is often relatively preserved despite cognitive decline (Grober & Sliwinski, 1991). The FCSRT examines explicit memory using both free and cued recall over three trials, following a study phase of 16 pictures, and is summarized by the total number of freely recalled items (FCSRT Free) and the total number of items recalled in both free and cued conditions (FCSRT Total; maximum performance = 48; Grober, Sanders, Hall, & Lipton, 2010).
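As a small illustration of the two FCSRT summary scores just defined, the sketch below computes them from hypothetical per-trial counts; the trial values are invented for the example only.

```python
# Illustrative FCSRT scoring from hypothetical per-trial recall counts.
free_recall = [7, 10, 12]   # items freely recalled on trials 1-3 (16 items each)
cued_recall = [9, 6, 4]     # additional items retrieved with category cues

fcsrt_free = sum(free_recall)                                        # FCSRT Free
fcsrt_total = sum(f + c for f, c in zip(free_recall, cued_recall))   # FCSRT Total
assert fcsrt_total <= 48    # 16 items x 3 trials = maximum of 48
print(fcsrt_free, fcsrt_total)  # 29 48
```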

Statistical Analysis

Confirmatory factor analysis (CFA) using Blom-transformed scores for each indicator variable was conducted under a latent variable modeling (LVM) framework. Blom transformations convert raw scores to rank-based normal scores, that is, percentile ranks mapped onto the normal distribution of responses for each measure (Blom, 1958). The analytic approach was grounded in modern measurement theory (Embretson & Reise, 2000) and item response theory (IRT)-based structural equation modeling, a specific type of LVM (Gallo, Anthony, & Muthén, 1994; Muthén, 1989). This method was originally presented as a model of unobserved quantities measured by formative causes and reflective indicators (Hauser & Goldberger, 1971).
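A minimal sketch of a Blom rank-based normal-score transformation is given below. It is not the authors' preprocessing code, but it implements the standard Blom (1958) formula, mapping ranks r to normal quantiles via (r − 3/8)/(n + 1/4).

```python
# Hedged sketch of the Blom (1958) rank-based normal-score transformation.
import numpy as np
from scipy.stats import norm, rankdata

def blom_transform(scores):
    """Map raw scores to normal quantiles via Blom's formula (r - 3/8)/(n + 1/4)."""
    scores = np.asarray(scores, dtype=float)
    ranks = rankdata(scores)            # average ranks are assigned to ties
    n = len(scores)
    return norm.ppf((ranks - 3.0 / 8.0) / (n + 0.25))

# Example: skewed raw scores become approximately normal z-scores.
print(blom_transform([3, 5, 5, 9, 27]).round(2))
```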

Our general approach was to test a series of theoretically based models using Mplus version 6.0 (Muthén & Muthén, 1998–2010) and, following standard practice in the factor analytic field, to modify our model based on model diagnostics (modification indices; Hayden et al., 2011). Specifically, our initial model separated memory and language into two constructs, but the intercorrelations between indicator variables across these two theoretical constructs were high, leading to cross-factor loadings. We thus hypothesized a four-factor CDCR model, which included memory/language, attention, processing speed/executive function, and cognitive reserve factors.
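To make the hypothesized structure concrete, the sketch below encodes the four-factor CDCR measurement model as plain data and renders it as Mplus-style "BY" statements. The factor names, indicator abbreviations, and the rendering itself are illustrative; this is not the authors' Mplus input file, and variable names would need to respect Mplus naming limits in practice.

```python
# Hedged sketch: the four-factor CDCR measurement structure as data, rendered
# as Mplus-style measurement statements for illustration only.
CDCR_FACTORS = {
    "MEMLANG":  ["LMI", "LMII", "FCSRT_Free", "FCSRT_Total", "BNT", "SEMFLU"],
    "ATTN":     ["Digit_Fwd", "Digit_Bkwd"],
    "SPEED_EF": ["Trails_A", "Trails_B", "Dig_Sym"],
    "RESERVE":  ["AMNART", "Education"],
}

def to_measurement_syntax(factors: dict) -> str:
    """Render one 'FACTOR BY indicator1 indicator2 ...;' line per factor."""
    return "\n".join(f"{name} BY {' '.join(inds)};" for name, inds in factors.items())

print(to_measurement_syntax(CDCR_FACTORS))
# MEMLANG BY LMI LMII FCSRT_Free FCSRT_Total BNT SEMFLU;
# ATTN BY Digit_Fwd Digit_Bkwd;
# ...
```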

We first conducted CFA on each group separately to test model fit and examine factor loadings in the two samples. We next conducted a series of two-group CFAs to assess the stability of our model across diagnostic groups (factorial invariance) by systematically increasing the constraints on the model to test four increasingly rigorous types of invariance: configural, metric, scalar, and strict invariance (Hayden et al., 2011). Configural invariance is met when indicator variables load on the same factors across groups. Metric invariance is met when model fit remains adequate with factor loadings held constant across groups. Scalar invariance is met when model fit remains adequate with both factor loadings and intercepts held constant across groups. Strict invariance is met when model fit remains adequate with factor loadings, intercepts, and residual variances all constrained to be equal across groups. Model fit was assessed with the root mean square error of approximation (RMSEA; Browne & Cudeck, 1993; Muthén, Khoo, & Francis, 1998) and the comparative fit index (CFI; Bentler & Chou, 1988; Muthén, 1998). The RMSEA provides a measure of discrepancy per model degree of freedom and approaches zero as fit improves. For the CFI, values greater than 0.90 generally indicate adequate fit (Bentler, 1990; Muthén, 1989). Browne and Cudeck (1993) recommended rejecting models with RMSEA values greater than 0.1; Hu and Bentler (1998) suggested that values close to 0.06 or less indicate adequate model fit. In addition to evaluating absolute model fit, we tested relative fit using the Satorra-Bentler scaled χ2 test, which provides a statistical comparison of the relative fit of nested models with increasing levels of constraints against the respective, less constrained model (Satorra & Bentler, 2001), and we examined change in CFI following Cheung and Rensvold's (2002) general guideline that a change in CFI of more than 0.01 may indicate worse model fit.
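For readers unfamiliar with these indices, the sketch below shows their conventional formulas. It is not Mplus output: the example inputs (including the baseline-model chi-square) are hypothetical, and estimator-specific computations in Mplus may differ slightly from these textbook forms.

```python
# Hedged sketch of RMSEA and CFI using their conventional formulas.
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation; approaches 0 as fit improves."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2: float, df: int, chi2_null: float, df_null: int) -> float:
    """Comparative fit index relative to the baseline (null) model; >0.90 adequate."""
    d_model = max(chi2 - df, 0.0)
    d_null = max(chi2_null - df_null, d_model)
    return 1.0 - d_model / d_null

# Hypothetical single-group example: chi2 = 150 on 78 df with n = 294, against a
# made-up baseline model with chi2 = 2000 on 91 df.
print(round(rmsea(150.0, 78, 294), 3), round(cfi(150.0, 78, 2000.0, 91), 3))  # 0.056 0.962
```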

Results

Participant demographics and characteristics were similar to those of the larger NACC UDS data set (Weintraub et al., 2009). Consistent with typical group comparisons of older adults with normal cognition versus those with memory impairment, the aMCI-AD group tended to be older (mean age for aMCI-AD = 76, controls = 70, p < .001), but level of education was not significantly different between groups (mean education for aMCI-AD = 15.8 years, controls = 16.4 years, p > .05). The control group recalled, on average, 13 units from Story A on LMII, while the aMCI-AD group recalled an average of 6 units. Descriptive statistics further summarizing demographic, neuropsychological, and cognitive reserve variables are provided in Table 1.

Table 1 Descriptive statistics

*Indicates significant differences between groups (p = 0.001); **Indicates p < 0.001.

aMCI-AD = amnestic mild cognitive impairment–Alzheimer's disease; AMNART = American National Adult Reading Test.

Confirmatory Factor Analysis

As depicted in Figure 1, we tested a hypothesized CDCR model with a latent variable structure in which measures of memory and language (i.e., measures associated with semantic processing: LMI, LMII, FCSRT Free, FCSRT Total, BNT, and SEMFLU) loaded on one factor, measures of attention and verbal working memory loaded on a second factor (Digit Fwd and Digit Bkwd), measures of processing speed and executive function loaded on a third factor (Trails A, Trails B, and Dig Sym), and proxy measures of cognitive reserve loaded on a fourth factor (AMNART and Education). We did not control for demographic covariates such as age or sex in our latent variable models.

Fig. 1 The CDCR (cognitive domains and cognitive reserve) model results: Results of confirmatory factor analysis (CFA) of cognitive reserve and neuropsychological measures in a group of healthy older adults (n = 294). χ2 = 100.12, df = 78, CFI = .962, RMSEA = .049. Variables in boxes represent observed measures, variables in ovals represent latent variables, numbers along straight arrows represent factor loadings for each indicator variable on the respective latent variable, and numbers along curved arrows represent correlations between latent variables and correlations between indicator variables. LMI & LMII = Wechsler Memory Scale-Revised (WMS-R) subtests Logical Memory IA and IIA; FCSRT Free & FCSRT Total = Free and Cued Selective Reminding Test Free Recall and Total Recall; BNT = Boston Naming Test (30-item, odd-numbered); SEMFLU = Semantic Fluency (Animals and Vegetables); Digit Fwd & Digit Bkwd = WMS-R subtests Digit Span Forward and Backward; Trails A & Trails B = Trail Making Test Parts A and B; Dig Sym = Wechsler Adult Intelligence Scale-Revised (WAIS-R) Digit Symbol Coding subtest; AMNART = American National Adult Reading Test; CFI = comparative fit index; RMSEA = root mean square error of approximation; aMCI = amnestic mild cognitive impairment; AD = Alzheimer's disease.

Allowing for residual correlations between LMI and LMII, and between FCSRT summary scores, the hypothesized four-factor CDCR model demonstrated reasonable fit across both samples. The CDCR model showed excellent fit for the control group (χ2 = 100.12; df = 78; CFI = .962; RMSEA = .049) and approached good fit for the aMCI-AD group (χ2 = 1750.02; df = 78; CFI = .932; RMSEA = .085).

In the control group, the memory and language factor indicator variables varied in the strength of their factor loadings, with some indicators showing relatively small loadings (LMI, LMII, and FCSRT Total, with loadings ranging from .21 to .38) and the remaining indicator variables showing relatively stronger loadings. In contrast to the variability in the memory and language factor loadings, the factor loadings for all indicator variables on the attention, processing speed/executive function, and cognitive reserve factors were consistently strong, ranging from .67 to .81. We additionally modeled the correlations between latent variables to examine the shared variance between hypothetically distinct constructs. Based on Cohen's interpretation of correlation magnitudes, a correlation greater than 0.5 is generally considered large, 0.3–0.5 medium, 0.1–0.3 small, and less than 0.1 trivial (Cohen, 1988). The memory and language factor showed large correlations with all other factors: r = .53 with the cognitive reserve factor, r = .61 with the attention factor, and r = .71 with the processing speed/executive function factor. The remaining inter-factor correlations were in the small-to-medium range, and the cognitive reserve factor was distinct from the processing speed/executive function factor (r = .43). Figure 1 depicts the factor structure, factor loadings, and intercorrelations between factors in the control group.
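The following trivial helper only makes the Cohen (1988) magnitude guideline used here explicit; the thresholds are those stated above, and the example values are the control-group inter-factor correlations just reported.

```python
# Hedged sketch applying Cohen's (1988) magnitude thresholds as stated in the text.
def cohen_magnitude(r: float) -> str:
    """Classify the absolute size of a correlation coefficient."""
    r = abs(r)
    if r > 0.5:
        return "large"
    if r >= 0.3:
        return "medium"
    if r >= 0.1:
        return "small"
    return "trivial"

print([cohen_magnitude(r) for r in (0.53, 0.61, 0.71, 0.43)])
# ['large', 'large', 'large', 'medium']
```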

In the aMCI-AD group, the strength of indicator variables' factor loadings was more consistent across all latent variables: the indicator variables for the memory and language factor had loadings ranging from .69 to .88, attention factor indicators ranged from .76 to .87, processing speed/executive function factor indicators ranged from .80 to .86, and cognitive reserve factor indicators ranged from .64 to .73. The inter-factor correlations in the aMCI-AD group showed a pattern similar to that of the control group, with the memory and language factor showing large correlations with the processing speed/executive function factor (r = .70) and the attention factor (r = .56). The cognitive reserve factor showed medium-sized correlations with the processing speed/executive function factor (r = .43) and the memory and language factor (r = .39). A full summary of the factor structure and loadings in the aMCI-AD group is portrayed in Figure 2.

Fig. 2 The CDCR (cognitive domains and cognitive reserve) model results in a group with aMCI or AD (n = 265). χ2 = 1750.02, df = 78, CFI = .932, RMSEA = .085. LMI & LMII = Wechsler Memory Scale-Revised (WMS-R) subtests Logical Memory IA and IIA; FCSRT Free & FCSRT Total = Free and Cued Selective Reminding Test Free Recall and Total Recall; BNT = Boston Naming Test (30-item, odd-numbered); SEMFLU = Semantic Fluency (Animals and Vegetables); Digit Fwd & Digit Bkwd = WMS-R subtests Digit Span Forward and Backward; Trails A & Trails B = Trail Making Test Parts A and B; Dig Sym = Wechsler Adult Intelligence Scale-Revised (WAIS-R) Digit Symbol Coding subtest; AMNART = American National Adult Reading Test; CFI = comparative fit index; RMSEA = root mean square error of approximation; aMCI = amnestic mild cognitive impairment; AD = Alzheimer's disease.

We next conducted a series of two-group factor analyses across four levels of invariance testing, first evaluating absolute model fit at each level and then comparing the relative fit of each nested model using the Satorra-Bentler scaled χ2 test to determine whether each successive level of invariance testing resulted in model fit that was significantly different from the previous, less restricted model (Satorra & Bentler, 2001), as well as a change in CFI (ΔCFI) of 0.01 or greater, as suggested by Cheung and Rensvold (2002). Using the criteria of an RMSEA of 0.06 or less or a CFI of 0.90 or more to evaluate absolute model fit, we found the hypothesized four-factor CDCR model to demonstrate reasonable fit across both samples assuming configural invariance (RMSEA = 0.067; CFI = 0.944; χ2 = 262.51; df = 117). The configural invariance model produced estimated factor scores in the control group for all factors that were approximately 0.000, with SDs ranging from 0.807 to 0.912, none of which were significantly different from those for the aMCI-AD group (all M = 0.000, SDs ranging from 0.838 to 0.939; all p > .05).
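The sketch below illustrates the two nested-model checks used here: the Satorra-Bentler scaled χ2 difference, following the commonly described two-step computation from scaled chi-squares, their scaling correction factors, and degrees of freedom (Satorra & Bentler, 2001), and the Cheung and Rensvold (2002) ΔCFI rule. The chi-squares, degrees of freedom, and CFI values in the example are those reported in the text for the configural and metric models, but the scaling correction factors (c0, c1) are hypothetical placeholders, so the printed statistic is illustrative only.

```python
# Hedged sketch of the Satorra-Bentler scaled difference test and the delta-CFI rule.
from scipy.stats import chi2 as chi2_dist

def sb_scaled_diff(t0, c0, d0, t1, c1, d1):
    """Return (TRd, df_diff, p) for the nested (t0, d0) vs. comparison (t1, d1) model."""
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)          # difference-test scaling factor
    trd = (t0 * c0 - t1 * c1) / cd                # scaled difference statistic
    df_diff = d0 - d1
    return trd, df_diff, chi2_dist.sf(trd, df_diff)

def cfi_worsens(cfi_constrained, cfi_free, threshold=0.01):
    """Cheung & Rensvold guideline: a CFI drop above the threshold flags worse fit."""
    return (cfi_free - cfi_constrained) > threshold

# Metric (constrained) vs. configural (less constrained) models; c0 and c1 are
# made-up scaling correction factors for the sake of the example.
trd, dfd, p = sb_scaled_diff(t0=342.343, c0=1.10, d0=130, t1=262.51, c1=1.08, d1=117)
print(round(trd, 1), dfd, p < 0.001, cfi_worsens(0.923, 0.944))
```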

At each level of invariance testing, relative fit deteriorated significantly (all S-B χ2 p < .001; ΔCFI > 0.01), but absolute model fit remained acceptable assuming metric invariance (RMSEA = 0.076; CFI = 0.923; χ2 = 342.343; df = 130), which again produced estimated factor scores that were not significantly different by group (control group memory/language M = 0.000, SD = 0.722; attention M = 0.000, SD = 0.761; processing speed/executive function M = 0.000, SD = 0.845; cognitive reserve M = 0.000, SD = 0.778; aMCI-AD group memory/language M = 0.000, SD = 1.066; attention M = 0.000, SD = 0.996; processing speed/executive function M = 0.000, SD = 1.002; cognitive reserve M = 0.000, SD = 0.868; all p > .05).

Absolute fit for the model testing scalar invariance was also acceptable (RMSEA = 0.080; CFI = 0.905; χ2 = 385.764; df = 139), and it produced estimated factor scores that were all significantly different by group (control group memory/language M = 0.000, SD = 0.714; attention M = 0.000, SD = 0.764; processing speed/executive function M = 0.000, SD = 0.845; cognitive reserve M = 0.000, SD = 0.778; aMCI-AD group memory/language M = −2.180, SD = 1.053; attention M = −0.851, SD = 0.999; processing speed/executive function M = −1.532, SD = 1.001; cognitive reserve M = −0.424, SD = 0.864; all p < .001).

Indices of both relative and absolute model fit did not support strict invariance (RMSEA = 0.087; CFI = 0.877; χ2 = 471.145; df = 151; ΔCFI = 0.028). Estimated factor scores for the strict invariance model were significantly different between groups for all but the cognitive reserve factor (control group memory/language M = 0.000, SD = 0.725; attention M = 0.000, SD = 0.824; processing speed/executive function M = 0.000, SD = 0.855; cognitive reserve M = 0.389, SD = 0.310; aMCI-AD group memory/language M = −2.179, SD = 1.038; attention M = −0.857, SD = 0.957; processing speed/executive function M = −1.525, SD = 0.988; cognitive reserve M = −0.391, SD = 0.393). Examination of model modification indices suggested that the memory and language indicators contributed most to model misfit, beginning with the model testing metric invariance. This suggests that factor loadings for the memory and language measures differed between the control and aMCI-AD groups and that model misfit arose from constraining these loadings to be equivalent across groups. We thus tested partial metric invariance by allowing memory and language factor loadings to vary by group and found that model fit was similar to that of the configural invariance model (RMSEA = 0.069; CFI = 0.936; χ2 = 291.177; df = 125; ΔCFI = 0.008). Partial scalar invariance (allowing memory and language loadings and intercepts to vary by group) similarly showed little change from the partial metric invariance model (RMSEA = 0.073; CFI = 0.924; χ2 = 331.284; df = 134; ΔCFI = 0.012), but the model testing partial strict invariance (allowing memory and language loadings, intercepts, and residual variances to vary by group) showed significant deterioration in model fit (RMSEA = 0.080; CFI = 0.899; χ2 = 410.000; df = 147; ΔCFI = 0.025). Standardized factor loadings, absolute model fit, and relative model fit statistics across levels of invariance testing are summarized in Table 2.
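As a compact recap, the sketch below tabulates the absolute fit statistics reported above for each invariance model and flags which meet a combined criterion of CFI ≥ 0.90 and RMSEA ≤ 0.10; combining the two thresholds this way is an assumption made only for this illustration, and the fit values are copied from the text.

```python
# Hedged sketch: reported absolute fit per invariance model, checked against the
# CFI >= 0.90 and RMSEA <= 0.10 thresholds mentioned in the Methods.
reported_fits = {
    "configural":     {"rmsea": 0.067, "cfi": 0.944},
    "metric":         {"rmsea": 0.076, "cfi": 0.923},
    "scalar":         {"rmsea": 0.080, "cfi": 0.905},
    "strict":         {"rmsea": 0.087, "cfi": 0.877},
    "partial metric": {"rmsea": 0.069, "cfi": 0.936},
    "partial scalar": {"rmsea": 0.073, "cfi": 0.924},
    "partial strict": {"rmsea": 0.080, "cfi": 0.899},
}

for model, fit in reported_fits.items():
    acceptable = fit["cfi"] >= 0.90 and fit["rmsea"] <= 0.10
    print(f"{model:15s} CFI={fit['cfi']:.3f} RMSEA={fit['rmsea']:.3f} "
          f"acceptable={acceptable}")
```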

Table 2 Standardized factor loadings and indices of model fit for confirmatory factor analysis (CFA) models for control participants (n = 294) and participants with memory impairment along the Alzheimer's disease spectrum (aMCI-AD; n = 265) across levels of factorial invariance

AMNART = American National Adult Reading Test; S-B χ2 = Satorra-Bentler scaled χ2 test (Satorra & Bentler, 2001); RMSEA = root mean square error of approximation; CFI = comparative fit index.

Discussion

These results provide support for a four-factor CDCR model of several related aspects of cognition and cognitive reserve in both normal aging and memory-impaired older adults along the AD clinical spectrum. There was some evidence for the stability of the model across groups in that indices of absolute model fit were acceptable when assuming configural, metric, and scalar invariance, but not strict invariance. Across all levels of invariance testing, however, there was a significant change in model fit from the previous, less constrained model, and model modification indices suggested that the misfit was due to group differences in the factor loadings of the memory and language measures. This finding is not surprising, as we were comparing performance of a cognitively normal group to that of a memory-impaired group. Tests of partial invariance suggested that, when memory and language measures were allowed to vary by group, there was evidence for partial metric and scalar invariance, but not strict invariance. This suggests that, while memory and language performance differs by group in terms of factor loadings and intercepts, the other factors in our model remain relatively stable across groups; residual variation in test performance, however, was significantly different across groups for all factors included in our model.

Regarding the relationship between the cognitive reserve and processing speed/executive function constructs, our results indicate that these theoretical constructs may have less shared variance than previously observed and provide supporting evidence for the discriminant validity of cognitive reserve as a unique construct. While the CDCR model demonstrated medium-to-large correlations across all constructs in both the normal and memory-impaired samples of older adults, the correlations between the cognitive reserve and processing speed/executive function factors were in the medium range in both samples. As shown in Figures 1 and 2, the intercorrelation between cognitive reserve and processing speed/executive function was 0.431 in the control group and 0.433 in the aMCI-AD group, both of which are considered medium-sized correlations (Cohen, 1988). In contrast, intercorrelations between latent variables previously established as distinct constructs in latent variable models, such as memory and executive functioning (Salthouse, Atkinson, & Berish, 2003), were even higher in both samples (r = 0.711 for the control group and r = 0.697 for the aMCI-AD group). These results thus suggest that cognitive reserve is not simply processing speed or executive function. Functional neuroimaging evidence also supports this assertion: in a study probing the relationship between task-related activation on a working memory task and measures of cognitive reserve, Stern and colleagues demonstrated a cognitive reserve-related network that clearly included frontal areas, but also included medial temporal areas (Stern et al., 2008).

There nonetheless remains an interesting theoretical relationship between cognitive reserve and executive function, which may both be subserved by the same, similar, or highly overlapping frontal networks and cognitive systems. For example, it is plausible that individuals with more cognitive flexibility, a commonly referenced component of executive function, would adapt better to brain dysfunction than those with limited cognitive flexibility. As argued by Satz et al. (2011), there are likely several cognitive processes that contribute to and are related to cognitive reserve. Although our model was not identical to the hypothetical model presented by Satz et al., both models suggest that cognitive reserve is distinct from the executive function and processing resources factors in their model and from the processing speed/executive function and attention factors in our CDCR model.

This study has several strengths: it assesses a theoretically driven latent variable model of the interrelations of cognitive reserve with other cognitive factors in both normal aging and across the aMCI-AD clinical spectrum; it uses a large sample of well-characterized participants representative of the UDS and ADC research populations (Weintraub et al., 2009); and it uses well-established neuropsychological measures, many of which have defined age-, education-, and gender-adjusted normative ranges available (Grober & Sliwinski, 1991; Shirk et al., 2011; Weintraub et al., 2009). The use of the FCSRT is a particular strength, as it is an increasingly recognized measure for discriminating between the retrieval problems associated with normal aging and the storage-based memory dysfunction characteristic of AD (Grober et al., 2010; Rami et al., 2012). The use of the AMNART is also important, as it correlates highly with verbal IQ and general intelligence, or g, is relatively stable through adulthood, and is relatively resistant to major decline until the later stages of dementia (Patterson, Graham, & Hodges, 1994). This study is also the first to examine the construct validity of cognitive reserve in the UDS framework, a framework and dataset that is large, longitudinal, and expanding, and that can be used in the future to further validate and refine the CDCR model.

The study also has several inherent limitations, some of which may be remedied or assessed through future research. Participants were predominantly White and highly educated; while this is consistent with the UDS sample nationally, it limits generalizability and likely limited the model's fit because of the relatively narrow range of variation in the indicator variables for cognitive reserve (i.e., AMNART and education) and, to a lesser degree, in the cognitive domain indicator variables. Having only two indicator variables each for the cognitive reserve and attention factors also places substantial burden on the data-model covariance structure and may have limited model fit; the model fits were nonetheless acceptable. This does not prove that a four-factor model, or this particular four-factor CDCR model, is either necessary or correct; it does, however, provide evidence that the four-factor CDCR model is sufficient to explain the observed data. Future datasets with appropriate additional indicator variables for these constructs, preferably consisting of tests with low intercorrelations within each domain, should improve accuracy and model fit.

Another set of limitations relates to the battery of tests included in our analysis. Ideally, we would have had a battery with sufficient indicators of processing speed, executive function, memory, and language to model each as its own distinct construct. While less than ideal, in the context of this dataset's limitations we hypothesized a joint processing speed/executive function construct and a joint memory/language construct. These combinations were driven by the particular nature of the available tests. For the processing speed/executive function construct, all included tests require speed of information processing, but two of them (Trails B and Dig Sym) have additional executive components, including maintaining cognitive set. For the memory and language construct, all included tests require verbal processing or semantic knowledge, and performance relies heavily on integrated medial temporal and frontal network hubs. It is noteworthy that in both the control and AD-spectrum groups, semantic fluency was the most strongly loading test on this combined memory/language latent construct, a measure that arguably best represents and relies on such an amalgamated construct of memory and language and their shared frontal and medial temporal networks. It is also important to note that when performance range and variability increase, as in the AD-spectrum group, the loadings of the memory tests on this combined memory/language latent variable increase substantially, to the 0.7–0.8 range.

The interplay of AD pathology and disease burden in modulating the relationship between neuropsychological domains and cognitive reserve merits further investigation within the UDS framework. Given the large scope of this national database, our model could be further validated and extended longitudinally (Locascio & Atri, 2011) in this larger sample, particularly when complemented by supplementary measures similar to those added to the basic UDS battery by several UDS sites, including proxies of cognitive reserve. The cognitive constructs in the current study can be used to produce latent trait scores for future extensions of the CDCR model to predict and relate to other outcomes of interest. While these loadings indicate the magnitude of intercorrelations, a limitation of a cross-sectional model is that it cannot indicate the directionality of cause-and-effect changes as participants remain stable or experience cognitive decline. However, with increasing evidence for cognitive domain-specific differences in relation to dementia and response to treatments (Persson, Wallin, Levander, & Minthon, 2009; Tinklenberg et al., 1990; Zec et al., 1992), latent trait scores for each domain can be used in future research as starting points in longitudinal models in the presence of multiple covariates and outcomes.

The use of latent variables for cognition in a model that also includes cognitive reserve provides a promising approach for tracking cognitive changes in AD to assess treatment outcomes (Atri, Shaughnessy, Locascio, & Growdon, 2008) or for early detection of deviation from normal aging (Stern, 2009). The memory/language and processing speed/executive function latent variables may be particularly sensitive measures for tracking AD-related cognitive decline, and the latent cognitive reserve variable could be tested in a longitudinal model to determine how it moderates the rate of cognitive decline. These measurement and analytic considerations may play an even more crucial role in successful trial design as the AD clinical trial field focuses on earlier stages of disease and extends the duration of studies, where increasingly sensitive measures are needed to quantify differences in group trajectories due to treatment-related disease modification. Such methods could also allow for earlier detection by providing a more individualized approach to determining relative decline, adjusting for estimated premorbid level of cognitive functioning, and taking into account how potential proxies for cognitive reserve may not only modify cognitive test performance but also moderate levels of observed neuroimaging biomarkers of AD, such as PIB amyloid load (Rentz et al., 2010). In summary, this model further supports the construct validity of cognitive reserve, showing that it shares similarities with, but is not the same as, processing resources or executive function, and it lays the groundwork for testing the CDCR model longitudinally.

Acknowledgments

We thank the Clinical, Administrative, and Biostatistics and Bioinformatics Cores of the Massachusetts Alzheimer's Disease Research Center (NIA 5 P50AG05134 Growdon and Hyman), and the Bedford Division of the New England Geriatric Research Education and Clinical Center (GRECC) at the ENRM Veterans Administration (VA) Hospital. Dr. Mitchell is supported by a VA Advanced Fellowship in Geriatrics through the New England Geriatric Research Education and Clinical Center (GRECC). The contents of this study do not represent the views of the Department of Veterans Affairs or the United States Government. We also thank Dr. Daniel Mungas and the Friday Harbor Advanced Psychometric Methods for Cognitive Aging Research workshop (R13AG030995, PI: Dan Mungas), Dr. Joseph Locascio, Dr. Rebecca England, Dr. Dorene Rentz, Dr. Alden Gross, Dr. Richard Jones, and Dr. Liang Yap for providing significant assistance, guidance, teaching, resources and/or helpful comments. Finally, and most importantly, we express our deep gratitude for the commitment of the MADRC UDS Longitudinal Cohort study participants without whose generous contribution and dedication this research would not be possible. This study was funded by NIA K23AG027171 (Atri). The authors have no conflicts of interest with the present study.

References

Alzheimer's Association. (2011). 2011 Alzheimer's disease facts and figures. Alzheimer's & Dementia, 4(2), 110–133.
Atri, A., Rountree, S.D., Lopez, O.L., Doody, R.S. (2012). Validity, significance, strengths, limitations, and evidentiary value of real-world clinical data for combination therapy in Alzheimer's disease: Comparison of efficacy and effectiveness studies. Neurodegenerative Diseases, 10, 170–174.
Atri, A., Shaughnessy, L.W., Locascio, J.J., Growdon, J.H. (2008). Long-term course and effectiveness of combination therapy in Alzheimer disease. Alzheimer Disease and Associated Disorders, 22(3), 209–221.
Beekly, D.L., Ramos, E.M., Lee, W.W., Deitrich, W.D., Jacka, M.E., Wu, J., Kukull, W.A. (2007). The National Alzheimer's Coordinating Center (NACC) database: The uniform data set. Alzheimer Disease and Associated Disorders, 21(3), 249–258.
Bentler, P.M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107(2), 238–246.
Bentler, P., Chou, C. (1988). Practical issues in structural equation modeling. In J. Long (Ed.), Common problems/proper solutions: Avoiding error in quantitative research. Newbury Park, CA: Sage.
Blom, G. (1958). Statistical estimates and transformed beta variables. New York: John Wiley & Sons.
Browne, M., Cudeck, R. (1993). Alternative ways of assessing model fit. In K. Bollen & J. Long (Eds.), Testing structural equation models (pp. 136–162). Thousand Oaks, CA: Sage.
Cheung, G.W., Rensvold, R.B. (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9(2), 233–255.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: L. Erlbaum Associates.
DeKosky, S. (2003). Early intervention is key to successful management of Alzheimer disease. Alzheimer Disease and Associated Disorders, 17(Suppl. 4), S99–S104.
Dickerson, B.C., Bakkour, A., Salat, D.H., Feczko, E., Pacheco, J., Greve, D.N., Buckner, R.L. (2009). The cortical signature of Alzheimer's disease: Regionally specific cortical thinning relates to symptom severity in very mild to mild AD dementia and is detectable in asymptomatic amyloid-positive individuals. Cerebral Cortex, 19(3), 497–510.
Embretson, S.E., Reise, S.P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates.
Gallo, J.J., Anthony, J.C., Muthén, B.O. (1994). Age differences in the symptoms of depression: A latent trait analysis. Journals of Gerontology, Psychological Sciences, 49(6), 251–264.
Grober, E., Lipton, R.B., Hall, C., Crystal, H. (2000). Memory impairment on free and cued selective reminding predicts dementia. Neurology, 54(4), 827–832.
Grober, E., Sanders, A.E., Hall, C., Lipton, R.B. (2010). Free and cued selective reminding identifies very mild dementia in primary care. Alzheimer Disease and Associated Disorders, 24(3), 284–290.
Grober, E., Sliwinski, M. (1991). Development and validation of a model for estimating premorbid verbal intelligence in the elderly. Journal of Clinical and Experimental Neuropsychology, 13(6), 933–949.
Hauser, R.M., Goldberger, A.S. (1971). The treatment of unobservable variables in path analysis. In H.L. Costner (Ed.), Sociological methodology (pp. 81–117). San Francisco: Jossey-Bass.
Hayden, K.M., Jones, R.N., Zimmer, C., Plassman, B.L., Browndyke, J.N., Pieper, C., Welsh-Bohmer, K.A. (2011). Factor structure of the National Alzheimer's Coordinating Centers Uniform Dataset Neuropsychological Battery: An evaluation of invariance between and within groups over time. Alzheimer Disease and Associated Disorders, 25(2), 128–137.
Hu, L., Bentler, P. (1998). Fit indices in covariance structure analysis: Sensitivity to underparameterized model misspecifications. Psychological Methods, 4, 424–453.
Johnson, D., Storandt, M., Morris, J., Langford, Z., Galvin, J. (2008). Cognitive profiles in dementia: Alzheimer disease vs healthy brain aging. Neurology, 71(22), 1783–1789.
Lee, J.H. (2003). Genetic evidence for cognitive reserve: Variations in memory and related cognitive functions. Journal of Clinical and Experimental Neuropsychology, 25(5), 594–613.
Locascio, J., Atri, A. (2011). An overview of longitudinal data analysis methods for neurological research. Dementia and Geriatric Cognitive Disorders Extra, 1, 330–357.
Mack, W.J., Freed, D.M., Williams, B.W., Henderson, V.W. (1992). Boston Naming Test: Shortened versions for use in Alzheimer's disease. Journal of Gerontology, 47(3), P154–P158.
Morris, J.C. (1993). The Clinical Dementia Rating (CDR): Current version and scoring rules. Neurology, 43(11), 2412–2414.
Morris, J.C., Heyman, A., Mohs, R.C., Hughes, J.P., van Belle, G., Fillenbaum, G., Clark, C. (1989). The Consortium to Establish a Registry for Alzheimer's Disease (CERAD). Part I. Clinical and neuropsychological assessment of Alzheimer's disease. Neurology, 39(9), 1159–1165.
Morris, J.C., Weintraub, S., Chui, H.C., Cummings, J., Decarli, C., Ferris, S., Kukull, W.A. (2006). The Uniform Data Set (UDS): Clinical and cognitive variables and descriptive data from Alzheimer Disease Centers. Alzheimer Disease and Associated Disorders, 20(4), 210–216.
Muthén, B. (1998). The development of heavy drinking and alcohol related problems from ages 18 to 37 in a U.S. national sample. Los Angeles: Graduate School of Education and Information Studies.
Muthén, L.K., Muthén, B.O. (1998–2010). Mplus user's guide (6th ed.). Los Angeles, CA: Muthén & Muthén.
Muthén, B., Khoo, S.-T., Francis, D. (1998). Multi-stage analysis of sequential developmental processes to study reading progress: New methodological developments using general growth mixture modeling (CSE Technical Report No. 489). Los Angeles: UCLA Graduate School of Education and Information Studies.
Muthén, B.O. (1989). Latent variable modeling in heterogeneous populations. Psychometrika, 54(4), 557–585.
Patterson, K.E., Graham, N., Hodges, J.R. (1994). Reading in dementia of the Alzheimer type: A preserved ability? Neuropsychology, 8(3), 395–412.
Persson, C., Wallin, A., Levander, S., Minthon, L. (2009). Changes in cognitive domains during three years in patients with Alzheimer's disease treated with donepezil. BMC Neurology, 9(1), 7.
Rami, L., Solé-Padullés, C., Fortea, J., Bosch, B., Lladó, A., Antonell, A., Molinuevo, J.L. (2012). Applying the new research diagnostic criteria: MRI findings and neuropsychological correlations of prodromal AD. International Journal of Geriatric Psychiatry, 27, 127–134.
Reitan, R.M., Wolfson, D. (1985). The Halstead-Reitan Neuropsychological Test Battery (2nd ed.). Tucson, AZ: Neuropsychology Press.
Rentz, D.M., Huh, T.J., Faust, R.R., Budson, A.E., Scinto, L.F., Sperling, R.A., Daffner, K.R. (2004). Use of IQ-adjusted norms to predict progressive cognitive decline in highly intelligent older individuals. Neuropsychology, 18, 38–49.
Rentz, D.M., Locascio, J.J., Becker, J.A., Moran, E.K., Eng, E., Buckner, R.L., Johnson, K.A. (2010). Cognition, reserve, and amyloid deposition in normal aging. Annals of Neurology, 67(3), 353–364.
Roe, C.M., Mintun, M.A., Ghoshal, N., Williams, M.M., Grant, E.A., Marcus, D.S., Morris, J.C. (2010). Alzheimer disease identification using amyloid imaging and reserve variables: Proof of concept. Neurology, 75(1), 42–48.
Salthouse, T., Atkinson, T., Berish, D. (2003). Executive functioning as a potential mediator of age-related cognitive decline in normal adults. Journal of Experimental Psychology: General, 132(4), 566–594.
Satorra, A., Bentler, P. (2001). A scaled difference chi-square test statistic for moment structure analysis. Psychometrika, 66(4), 507–514.
Satz, P., Cole, M.A., Hardy, D.J., Rassovsky, Y. (2011). Brain and cognitive reserve: Mediator(s) and construct validity, a critique. Journal of Clinical and Experimental Neuropsychology, 33(1), 121–130.
Shirk, S., Mitchell, M., Shaughnessy, L., Sherman, J., Locascio, J., Weintraub, S., Atri, A. (2011). A web-based normative calculator for the uniform data set (UDS) neuropsychological test battery. Alzheimer's Research & Therapy, 3(6), 32.
Siedlecki, K., Stern, Y., Reuben, A., Sacco, R., Elkind, M., Wright, C. (2009). Construct validity of cognitive reserve in a multiethnic cohort: The Northern Manhattan Study. Journal of the International Neuropsychological Society, 15(4), 558–569.
Stern, Y. (2009). Cognitive reserve. Neuropsychologia, 47, 2015–2028.
Stern, Y., Gurland, B., Tatemichi, T.K., Tang, M.X., Wilder, D., Mayeux, R. (1994). Influence of education and occupation on the incidence of Alzheimer's disease. Journal of the American Medical Association, 271(13), 1004–1010.
Stern, Y., Zarahn, E., Habeck, C., Holtzer, R., Rakitin, B., Kumar, A., Brown, T. (2008). A common neural network for cognitive reserve in verbal and object working memory in young but not old. Cerebral Cortex, 18(4), 959.
Tinklenberg, J., Brooks, J.O., Tanke, E.D., Khalid, K., Poulsen, S.L., Kraemer, H.C., Yesavage, J.A. (1990). Factor analysis and preliminary validation of the Mini-Mental State Examination from a longitudinal perspective. International Psychogeriatrics, 2(2), 123–134.
Wechsler, D. (1987). Wechsler adult intelligence scale-revised. San Antonio, TX: Psychological Corporation.
Wechsler, D. (1987). Wechsler memory scale-revised. San Antonio, TX: Psychological Corporation.
Weintraub, S., Salmon, D., Mercaldo, N., Ferris, S., Graff-Radford, N.R., Chui, H., Morris, J.C. (2009). The Alzheimer's Disease Centers' Uniform Data Set (UDS): The neuropsychologic test battery. Alzheimer Disease and Associated Disorders, 23(2), 91–101.
Wilson, R., Scherr, P., Schneider, J., Tang, Y., Bennett, D. (2007). Relation of cognitive activity to risk of developing Alzheimer disease. Neurology, 69(20), 1911–1920.
Zec, R., Landreth, E., Vicari, S., Belman, J., Feldman, E., Andrise, A., Kumar, V. (1992). Alzheimer Disease Assessment Scale: A subtest analysis. Alzheimer Disease and Associated Disorders, 6(3), 164–181.