There is a compelling need for innovative intervention strategies for patients with affective disorders, given their increasing global prevalence and significant associated disability and impaired functioning. This study aimed to investigate whether a comprehensive multimodule individualized intervention (AWARE), targeting known mediators of functioning, improves functioning in affective disorders.
Methods
AWARE was a randomized, controlled, rater-blind clinical trial conducted at two centers in the Capital Region of Denmark (ClinicalTrials.gov, NCT04701827). Participants were adults with bipolar disorder or major depressive disorder and impaired functioning. Participants were randomized to the six-month AWARE intervention or treatment as usual (TAU). The AWARE intervention is based on the International Classification of Functioning, Disability and Health (ICF) Brief Core Set for Bipolar and Unipolar Disorder.
The primary outcome was observation-based functioning assessed with the Assessment of Motor and Process Skills (AMPS). Secondary outcomes were functioning, quality of life (QoL), stress, and cognition.
Results
Between February 2021 and January 2023, 103 patients were enrolled; 50 were allocated to the AWARE intervention and 53 to TAU (96 were included in the full analysis set). There was no statistically significant differential change over time between groups in the primary outcome (AMPS); however, both groups showed a statistically significant improvement at endpoint. The AWARE intervention had a statistically significant effect compared with TAU on the secondary outcomes of patient-reported functioning, stress, and cognition.
Conclusion
Compared with TAU, the AWARE intervention was ineffective at improving overall functioning on the primary outcome, presumably due to the short duration of the intervention. Further development of effective treatments targeting functioning is needed.
Gloria HY Wong, The University of Hong Kong; Bosco HM Ma, Hong Kong Alzheimer's Disease Association; Maggie NY Lee, Hong Kong Alzheimer's Disease Association; David LK Dai, Hong Kong Alzheimer's Disease Association
This chapter covers a selection of tools and resources for dementia diagnosis and management in primary care based on the experience of a community-based dementia early detection service, for use by trained allied health and social care professionals and primary care physicians to promote communication across disciplines. Considering the large and growing number of validated tools available for outcome assessment and detection of dementia, our goal here is to share useful materials for quick reference rather than a comprehensive summary of available tools and resources. Readers will find in this chapter a sample form to facilitate history-taking, with an explanation of the needed information and recommended use of the General Practitioner Assessment of Cognition (GPCOG); a quick overview of the clinical features suggestive of non-Alzheimer’s disease; a checklist for physical examination and investigation; a sample cognitive and functioning report of an early intervention service with an explanation of the important information to include; common spontaneously reported symptoms in an early intervention service; and lists of useful resources, infographics, and educational material for explaining dementia diagnosis and management.
Virtual reality (VR) offers the prospect of a safe and effective adjunct therapeutic modality to promote mental health and reduce distress from symptoms in palliative care patients. Common physiological and psychological symptoms experienced at the end of life may impact the person’s participation in day-to-day activities that bring them meaning. The purpose of this study was to examine the effect of VR interventions on occupational participation and distress from symptoms.
Objectives
To describe the stimulus, results, and lessons learned from a single-site pilot study of virtual reality therapy in a specialist palliative care setting.
Methods
Participants engaged in a VR session lasting from 9 to 30 minutes, with content related to coping with pain, inner peace and mindfulness, adventure, and bucket-list experiences.
Measures
The pilot prospective quantitative observational cohort study was conducted from November 2021 through March 2022 using a pre-post VR intervention research design. Quantitative data were collected using patient-rated assessments and a wireless pulse oximeter. Occupational performance, satisfaction, and distress symptoms were measured using the Canadian Occupational Performance Measure and the Palliative Care Outcomes Collaboration Symptom Assessment Scale (PCOC SAS). The intervention and study design adhered to international guidelines.
Results
Ten participants engaged in the VR interventions. Data showed significantly improved occupational performance and satisfaction scores (p < .001), decreased PCOC SAS distress from pain (p = .01) and fatigue (p < .001), and decreased heart rate (p = .018). No adverse side effects were observed.
Significance of results
Outcomes included an analysis of virtual reality's effectiveness in alleviating symptom burden and increasing occupational participation for palliative care patients. Of specific interest to the research team was the application of virtual reality in community-based and inpatient palliative care contexts to supplement allied health services, and the feasibility of integrating it into standard palliative care.
Conclusion
VR therapy was associated with improvements in participants' occupational performance and satisfaction, and with reduced distress from pain and fatigue.
Extensive research shows that tests of executive functioning (EF) predict instrumental activities of daily living (IADLs) but are nevertheless often criticized for having poor ecological validity. The Modified Six Elements Test (MSET) is a pencil-and-paper test that was developed to mimic the demands of daily life, with the assumption that this would result in a more ecologically valid test. Although the MSET has been extensively validated in its ability to capture cognitive deficits in various populations, support for its ability to predict functioning in daily life is mixed. This study aimed to examine the MSET’s ability to predict IADLs assessed via three different modalities relative to traditional EF measures.
Method:
Participants (93 adults aged 60–85) completed the MSET, traditional measures of EF (Delis-Kaplan Executive Function System; D-KEFS), and self-reported and performance-based IADLs in the lab. Participants then completed three weeks of IADL tasks at home using the Daily Assessment of Independent Living and Executive Skills (DAILIES) protocol.
Results:
The MSET predicted only IADLs completed at home, while the D-KEFS predicted IADLs across all three modalities. Further, the D-KEFS predicted home-based IADLs beyond the MSET when pitted against each other, whereas the MSET did not contribute beyond the D-KEFS.
Conclusions:
Traditional EF tests (D-KEFS) appear to be superior to the MSET in predicting IADLs in community-dwelling older adults. The present results argue against replacing traditional measures with the MSET when assessing the functional independence of generally high-functioning and cognitively healthy older adult patients.
Cognition in MCI has responded poorly to pharmacological interventions, leading to the use of computerized training. Combining computerized cognitive training (CCT) and functional skills training software (FUNSAT) produced improvements in 6 functional skills in MCI, with effect sizes >0.75. However, 4% of healthy control (HC) and 35% of MCI participants failed to master all 6 tasks. We address early identification of the characteristics of participants who do not graduate, to improve later interventions.
Methods:
Normal-cognition (NC) participants (n = 72) received FUNSAT, and participants with MCI (n = 92) received either FUNSAT alone or combined FUNSAT and CCT, on a fully remote basis. Participants trained twice a week for up to 12 weeks. Participants “graduated” from each task when they made one or fewer errors on all 3–6 subtasks per task. Tasks were no longer trained after graduation.
Results:
Between-group comparisons of graduation status on baseline completion time and errors found that failure to graduate was associated with more baseline errors on all tasks, but not with longer completion times. A discriminant analysis found that errors on the first task (Ticket purchase) uniquely separated the groups, F = 41.40, p < .001, correctly classifying 94% of graduates. An ROC analysis found an AUC of .83. MoCA scores did not increase classification accuracy.
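As a hedged illustration of this analytic approach (not the study's actual code or data), the sketch below fits a single-predictor linear discriminant analysis to simulated baseline error counts and computes the ROC-AUC; the variable names and simulated distributions are assumptions.

```python
# Hypothetical sketch: single-predictor discriminant analysis and ROC,
# mirroring the analysis described above. All data are simulated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Simulated stand-ins for baseline errors on the first task (Ticket purchase)
errors_grad = rng.poisson(2, size=120)     # graduated all 6 tasks
errors_nongrad = rng.poisson(6, size=40)   # failed to graduate
X = np.concatenate([errors_grad, errors_nongrad]).reshape(-1, 1)
y = np.concatenate([np.ones_like(errors_grad), np.zeros_like(errors_nongrad)])

lda = LinearDiscriminantAnalysis().fit(X, y)
pred = lda.predict(X)
# Classification accuracy among graduates (cf. the reported 94%)
grad_accuracy = (pred[y == 1] == 1).mean()
# ROC-AUC for the discriminant scores (cf. the reported AUC of .83)
auc = roc_auc_score(y, lda.decision_function(X))
print(f"graduate classification accuracy: {grad_accuracy:.2f}, AUC: {auc:.2f}")
```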
Conclusions:
More baseline errors, but not completion times, predicted failure to master all FUNSAT tasks. Accuracy in identifying eventual mastery was exceptional. Detection of the risk of failing to master training tasks is possible in the first 15 minutes of the baseline assessment. This information can guide future enhancements of computerized training.
Self- and informant-ratings of functional abilities are used to diagnose mild cognitive impairment (MCI) and are commonly measured in clinical trials. Ratings are assumed to be accurate, yet they are subject to biases. Biases in self-ratings have been found in individuals with dementia who are older and more depressed and in caregivers with higher distress, burden, and education. This study aimed to extend prior findings using an objective approach to identify determinants of bias in ratings.
Method:
Participants were 118 individuals with MCI and their informants. Three discrepancy variables were generated: the discrepancies between (1) self- and informant-rated functional status, (2) informant-rated functional status and objective cognition (in those with MCI), and (3) self-rated functional status and objective cognition. These variables served as dependent variables in forward linear regression models, with demographics, stress, burden, depression, and self-efficacy as predictors.
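To make the discrepancy-score and forward-selection procedure concrete, here is a minimal sketch under stated assumptions: the data frame, column names, and entry criterion (p < .05) are hypothetical stand-ins, and forward selection is implemented by hand.

```python
# Illustrative sketch only: z-score two ratings, subtract to form a
# discrepancy variable, then run forward-selection OLS. All names and
# the simulated data are hypothetical, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_select(df, dv, candidates, alpha=0.05):
    """Repeatedly add the remaining predictor with the smallest entry
    p-value, stopping when no candidate enters below alpha."""
    selected = []
    while True:
        pvals = {}
        for c in set(candidates) - set(selected):
            X = sm.add_constant(df[selected + [c]])
            pvals[c] = sm.OLS(df[dv], X).fit().pvalues[c]
        if not pvals:
            break
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
n = 118
df = pd.DataFrame({
    "self_fx": rng.normal(size=n),       # self-rated functional status
    "informant_fx": rng.normal(size=n),  # informant-rated functional status
    "stress": rng.normal(size=n),
    "burden": rng.normal(size=n),
    "self_efficacy": rng.normal(size=n),
})
z = lambda s: (s - s.mean()) / s.std(ddof=0)
df["discrepancy"] = z(df["self_fx"]) - z(df["informant_fx"])
print(forward_select(df, "discrepancy", ["stress", "burden", "self_efficacy"]))
```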
Results:
Informants with higher stress rated individuals with MCI as having worse functional abilities relative to objective cognition. Individuals with MCI with worse self-efficacy rated their functional abilities as worse compared with objective cognition. Informant ratings were worse than self-ratings for informants with higher stress and for individuals with MCI with higher self-efficacy.
Conclusion:
This study highlights biases in subjective ratings of functional abilities in MCI. The risk for relative underreporting of functional abilities by individuals with higher stress levels aligns with previous research. Bias in individuals with MCI with higher self-efficacy may be due to anosognosia. Findings have implications for the use of subjective ratings for diagnostic purposes and as outcome measures.
Malnutrition is a major problem among older adults with type 2 diabetes mellitus (T2DM). Some studies suggest that good glycaemic control increases the risk of frailty due to reduced intake. It could therefore be hypothesised that patients with adequate glycaemic control may be at risk of malnutrition. This study aimed to examine, in older adults with T2DM, the association between adequate glycaemic control and malnutrition, as well as to identify the risk factors for malnutrition. Data including general characteristics, health status, depression, functional abilities, cognition and nutrition status were analysed. Poor nutritional status was defined as being assessed with the Mini Nutritional Assessment as at risk of malnutrition or malnourished. Adequate glycaemic control refers to an HbA1c level that meets the individualised targets based on the American Diabetes Association 2022 guidelines. There were 287 participants with a median (interquartile range) age of 64 (61–70) years; the prevalence of poor nutrition was 15 % and of adequate glycaemic control 83·6 %. This study found no association between adequate glycaemic control and poor nutrition (P = 0·67). The factors associated with poor nutritional status were low monthly income (adjusted OR (AOR) 4·66, 95 % CI 1·28, 16·98 for income < £118 and AOR 7·80, 95 % CI 1·74, 34·89 for income £118–355), unemployment (AOR 4·23, 95 % CI 1·51, 11·85) and cognitive impairment (AOR 5·28, 95 % CI 1·56, 17·93). These findings support the notion that older adults with T2DM should be encouraged to maintain adequate glycaemic control without concern for malnutrition, especially those who have low income, are unemployed or have decreased cognitive function.
Our objective was to evaluate the association of antioxidant intake and the inflammatory potential of the diet with functional decline in older men. A diet history questionnaire was used to collect dietary intake data from men aged ≥ 75 years (n 794) participating in the Concord Health and Ageing in Men Project cohort study. Intakes of vitamins A, C, E and Zn were compared with the Australian Nutrient Reference Values to determine adequacy. The Energy-adjusted Dietary Inflammatory Index (E-DII™) was used to assess the inflammatory potential of the diet. Physical performance data were collected via handgrip strength and walking speed tests, and activities of daily living (ADL) and instrumental activities of daily living (IADL) questionnaires, at baseline and 3-year follow-up (n 616). Logistic regression analysis was used to identify associations between diet and incident poor physical function and disability. Both poor antioxidant intake and high E-DII scores at baseline were significantly associated with poor grip strength and ADL disability at 3-year follow-up. No significant associations with walking speed or IADL disability were observed. Individual micronutrient analysis revealed a significant association between the lowest two quartiles of vitamin C intake and poor grip strength. The lowest quartiles of intake of vitamins A, C, E and Zn were significantly associated with incident ADL disability. The study observed that poor antioxidant and anti-inflammatory food intakes were associated with higher odds of developing disability and declining muscle strength in older men. Further interventional research is necessary to clarify the causality of these associations.
Modern perspectives of rehabilitation after traumatic brain injury (TBI) emphasize the importance of individualized holistic approaches (i.e., physical and psychological adjustment) and collaboration toward goals (e.g., among the survivor, rehabilitation professionals, family/friends, etc.). Recent research has sought to employ a holistic, value-based approach (via the Valued Living Questionnaire) to measure goals, whether those with TBI are acting in accordance with them, and quality-of-life outcomes. However, no research has examined whether rehabilitation practices are consistent with survivor values using this framework. The aim of the current study was to investigate the impact of value-consistent rehabilitation practices on quality of life and psychological adjustment outcomes in those with TBI.
Participants and Methods:
The current study included a sample of 73 adults with a history of TBI (M years since injury = 7.6, SD = 9.7) between the ages of 18 and 72 (Mage = 44.0 years, SD = 13.1; 73% female, 90.4% white) who had participated in outpatient rehabilitation. Individuals were recruited from brain injury support groups on Facebook and completed a series of surveys measuring TBI severity [Ohio State University Traumatic Brain Injury Identification Method-Short Form (OSU-TBI-ID)], value-consistent rehabilitation practices [modified Valued Living Questionnaire (VLQ)], life satisfaction [Life Satisfaction Questionnaire-9 (LiSat-9)], and psychological flexibility [Acceptance & Action Questionnaire - Acquired Brain Injury (AAQ-ABI)]. Discrepancy scores were calculated to compare perceived importance of and how helpful rehabilitation was for each VLQ domain. Bivariate Pearson correlations were conducted to investigate the relationships between value-consistent rehabilitation, life satisfaction, and psychological flexibility.
Results:
The VLQ domains with the greatest discrepancies were spirituality (-2.26), marriage/intimate relations (-2.06), and family relations (-2.02), such that rehabilitation helped less in these domains despite their importance. Greater levels of value-consistent rehabilitation were related to higher levels of life satisfaction overall (r = 0.40, p < .001) and lower levels of reactive avoidance of emotions related to one’s brain injury (r = -0.26, p = .03). In terms of specific domains of life satisfaction, greater value-consistent rehabilitation was related to higher levels of vocational (r = 0.44, p < .001), physical self-care (r = 0.28, p = .018), and friendship satisfaction (r = 0.41, p < .001).
Conclusions:
Our findings suggest rehabilitation practices may not be proportionate to TBI survivors' values. Moreover, our results suggest value-consistent rehabilitation is important for long-term quality of life and psychological adjustment outcomes. Future work should seek to identify factors that optimize opportunities for individualized treatment.
Assessment of medication management, an instrumental activity of daily living (IADL), is particularly important among Veterans, who are prescribed an average of 2540 prescriptions per year (Nguyen et al., 2017). The Pillbox Test (PT) is a brief, performance-based measure that was designed as an ecologically valid measure of executive functioning (EF; Zartman, Hilsabeck, Guarnaccia, & Houtz, 2013), the cognitive domain most predictive of successful medication schedule management (Suchy, Ziemnik, Niermeyer, & Brothers, 2020). However, a validation study by Logue, Marceaux, Balldin, and Hilsabeck (2015) found that EF predicted performance on the PT more so than processing speed (PS), but not more than the language, attention, visuospatial, and memory domains combined. Thus, this project sought to increase the generalizability of the latter study by replicating and extending its investigation with a larger set of neuropsychological tests.
Participants and Methods:
Participants included 176 patients in a mixed clinical sample (5.1% female, 43.2% Black/African American, 55.7% white, Mage = 70.7 years, SDage = 9.3, Medu = 12.6 years, SDedu = 2.6) who completed a comprehensive neuropsychological evaluation in a VA medical center. All participants completed the PT, in which they had five minutes to organize five pill bottles into a seven-day pillbox according to standardized instructions on the labels. Participants also completed some combination of 26 neuropsychological tests (i.e., participants did not complete every test, as evaluations were tailored to disparate referral questions). Correlations between completed tests and number of pillbox errors were evaluated. These tests were then combined into the following six domains: language, visuospatial, working memory (WM), psychomotor/PS, memory, and EF. Hierarchical multiple regression was completed using these domains to predict pillbox errors.
Results:
Spearman’s correlation coefficients indicated that 25 tests had a weak to moderate relationship with PT total errors (rs = 0.23–0.51); forward digit span was not significantly related (rs = 0.13). A forced-entry multiple regression was run to predict PT total errors from the six domains. The model accounted for 29% of the variance in PT performance, F(6, 169) = 11.56, p < .001. Of the domains, psychomotor/PS made the greatest contribution, t(169) = 2.73, p = .007, followed by language, t(169) = 2.41, p = .017, and WM, t(169) = 2.15, p = .033. Visuospatial performance and EF did not make significant contributions (ps > .05). Next, two hierarchical multiple regressions were run. Results indicated that EF predicted performance on the PT beyond measures of PS, ΔR² = .02, p = .044, but not beyond the combination of all cognitive domains, ΔR² = .00, p = .863.
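For readers who want to reproduce this kind of two-step hierarchy, the sketch below (simulated data; hypothetical column names) shows how a ΔR² and its F-test can be computed with statsmodels' compare_f_test.

```python
# Minimal sketch of a hierarchical-regression step: does EF add variance
# (ΔR²) beyond processing speed? Data and effect sizes are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 176
d = pd.DataFrame({"ps": rng.normal(size=n), "ef": rng.normal(size=n)})
d["pt_errors"] = 1.5 * d["ps"] + 0.5 * d["ef"] + rng.normal(size=n)

base = smf.ols("pt_errors ~ ps", data=d).fit()        # step 1: PS only
full = smf.ols("pt_errors ~ ps + ef", data=d).fit()   # step 2: add EF
delta_r2 = full.rsquared - base.rsquared
# compare_f_test returns the F statistic and p-value for the R² change
f_stat, p_value, df_diff = full.compare_f_test(base)
print(f"ΔR² = {delta_r2:.3f}, F = {f_stat:.2f}, p = {p_value:.3f}")
```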
Conclusions:
Results of this study partially replicated the findings of Logue et al. (2015). Namely, EF predicted PT performance beyond PS, but not other cognitive domains. However, when all predictors were entered into the same model, visuospatial performance did not significantly contribute to the prediction of pillbox errors. These results suggest that providers may benefit from investigating medication management abilities when deficits in PS, WM, and/or language are identified. Further research is needed to better understand which domains best predict PT failure.
The accurate assessment of instrumental activities of daily living (iADL) is essential for those with known or suspected Alzheimer's disease or related disorders (ADRD). This information guides diagnosis, staging, and treatment planning, and serves as a critical patient-centered outcome. Despite its importance, many iADL measures used in ADRD research and practice have not been sufficiently updated in the last 40-50 years to reflect how technology has changed daily life. For example, digital technologies are routinely used by many older adults and those with ADRD to perform iADLs (e.g., online financial management, using smartphone reminders for medications). The purpose of the current study was to a) assess the applicability of technology-related iADL items in a clinical sample; b) evaluate whether technology-based iADLs are more difficult for those living with ADRD than their traditional counterparts; and c) test whether adding technology-based iADL items changes the sensitivity and specificity of iADL measures for ADRD.
Participants and Methods:
135 clinically referred older adults (mean age 75.5 years) undergoing neuropsychological evaluation at a comprehensive multidisciplinary memory clinic were included in this study [37% with mild cognitive impairment (MCI) and 51.5% with dementia]. Collateral informants completed the Functional Activities Questionnaire (FAQ; Pfeffer, 1982) as well as 11 items created to parallel the FAQ wording that assessed technology-related iADLs such as digital financial management (i.e. online bill pay), everyday technology skills (i.e. using a smartphone; remembering a password), and other technology mediated activities (i.e. visiting internet sites; online shopping).
Results:
Care partners rated the majority of tech iADL items as applicable. For example, technology skill items were applicable to 90.4% of the sample and online financial management questions were applicable for 76.4% of participants. Applicability ratings were similar across patients in their 60s and 70s, and lower in those over age 80. Care partners indicated less overall impairment on technology-related iADLs (M = 1.22, SD = .88) than traditional FAQ iADLs (M = 1.36, SD = .86), t(129) = 3.529, p = .001. A composite of original FAQ paperwork and bill-pay items (M = 1.62, SD = 1.1) was rated as more impaired than digital financial management tasks (M = 1.30, SD = 1.09), t(122) = 4.77, p < .001. In terms of diagnostic accuracy, tech iADL items (AUC = .815, 95% CI [.731, .890]) appeared to perform comparably to, or slightly better than, the traditional FAQ (AUC = .788, 95% CI [.705, .874]) at separating MCI and dementia, though the difference between the two was not statistically significant in this small pilot sample.
Conclusions:
Technology is rapidly changing how older adults and those with ADRD perform a host of iADLs. This pilot study suggests broad applicability of tech iADL items to the lives of those with ADRD and highlights how measurement of these skills may help identify trends in iADL habits that could mitigate the impact of ADRD on daily functioning. Further, these data suggest the need to refine and improve upon existing iADL measures to validly capture the evolving technological landscape of those living with ADRD.
Mild decline in independent functioning is a core diagnostic criterion for Mild Cognitive Impairment. Performance-based assessments have been considered the gold standard to identify subtle deficits in functioning. Existing assessments were largely designed using demographically homogenous samples (white, highly educated, middle class) and often assume tasks are performed similarly across populations. The current study aimed to validate the utility of the Performance Assessment of Self-Care Skills (PASS) in determining cognitive status in a sample of predominantly African American, low-income older adults.
Participants and Methods:
Cognition and functional capacity were measured in n=245 older participants (aged 50+ years) who were recruited from a larger community study located in Pittsburgh, PA. Cognitive status was defined by a mean split on the Modified Mini Mental Status Examination (3MS) score (84/100). Participants above the cutoff were classified as unlikely cognitive impairment (UCI) and those below classified as potential cognitive impairment (PCI). Functional capacity was assessed using the number of cues provided on three PASS subtasks: shopping, medication management, and critical information retrieval (higher score = worse functioning). Self-reported cognitive and functional decline was assessed via the Everyday Cognition (ECog) questionnaire (higher score = greater decline). Generalized linear models compared performance scores between groups adjusting for literacy (WRAT3), age, and education. Receiver operating characteristic curve (ROC) analyses were run for select functional performance scores to assess their predictive ability in discriminating between PCI and UCI.
Results:
Compared to the UCI group (N = 179), the PCI group (N = 66) was older (68 vs. 65 years, p = 0.05), less educated (11 years vs. 12 years, p < 0.01), had lower WRAT3 z-scores (0.19 vs. -0.55, p < .01), and required more cues on the shopping (4.33 vs. 8.54, p < 0.01) and medication management PASS subtasks (2.74 vs. 6.56, p < .01). Both groups reported elevated levels of subjective cognitive complaints on the ECog (1.46 vs. 1.56, p = .09) and performed similarly on the critical information retrieval PASS subtask (0.25 vs 0.54, p = .06). When discerning between UCI and PCI groups, the PASS Shopping subtask had an optimal cut-off score of 4, sensitivity of 0.86, specificity of 0.47, positive predictive value (PPV) of 0.37, and area under the curve (AUC) of 0.71. PASS Medication Management had an optimal cut-off score of 3, sensitivity of 0.77, specificity of 0.56, PPV of 0.39, and AUC of 0.74.
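The cutoff statistics above can be derived directly from ROC coordinates; the sketch below illustrates one common approach (maximizing Youden's J) on simulated cue counts. The distributions are invented, so the numbers will not match the study's.

```python
# Sketch: derive an optimal cutoff, sensitivity, specificity, PPV, and AUC
# from ROC coordinates. Simulated stand-ins for PASS cue counts.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
cues_uci = rng.poisson(4, 179)   # unlikely cognitive impairment
cues_pci = rng.poisson(8, 66)    # potential cognitive impairment
scores = np.concatenate([cues_uci, cues_pci])
labels = np.concatenate([np.zeros(179), np.ones(66)])  # 1 = PCI

fpr, tpr, thresholds = roc_curve(labels, scores)
j = tpr - fpr                    # Youden's J at each candidate threshold
best = j.argmax()
cutoff = thresholds[best]
sens, spec = tpr[best], 1 - fpr[best]
pred_pos = scores >= cutoff
ppv = labels[pred_pos].mean()    # fraction of screen-positives that are PCI
auc = roc_auc_score(labels, scores)
print(f"cutoff = {cutoff}, sens = {sens:.2f}, spec = {spec:.2f}, "
      f"PPV = {ppv:.2f}, AUC = {auc:.2f}")
```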
Conclusions:
Subjective functional decline and performance on the critical information retrieval subtask were not associated with cognitive groups. PASS shopping and medication management had moderately high AUCs, suggesting they can reliably distinguish between groups. However, both tasks also exhibited low PPVs, low specificity, and high sensitivity, making them strong “rule-out” tests but poor “rule-in” tests in this sample. Because accurate assessment of functioning is useful for MCI diagnosis and critical to dementia diagnosis, it is imperative that we understand how these tasks function across different populations. Future work should 1) validate measures of functional ability across different populations and 2) develop population-appropriate assessments for use in clinical and research settings.
Older age is associated with an increase in altruistic behaviors such as charitable giving. However, few studies have investigated the cognitive correlates of financial altruism in older adults. This study investigated the cognitive correlates of financial altruism measured using an altruistic choice paradigm in a community-based sample of older adults.
Participants and Methods:
In the present study, a sample of older adults (N = 67; M age = 69.21, SD = 11.23; M education years = 15.97, SD = 2.51; 58.2% female; 71.6% Non-Hispanic White) completed a comprehensive neuropsychological assessment and an altruistic choice paradigm in which they made decisions about allocating money between themselves and an anonymous person.
Results:
In multiple linear regression analyses that controlled for age, education, and sex, financial altruism was negatively associated with performance on cognitive measures typically sensitive to early Alzheimer’s Disease. These included CVLT-II Short Delay Free Recall (β = -0.26, p = .03), CVLT-II Long Delay Cued Recall (β = -0.32, p = .04), Craft Story 21 Delayed Recall (β = -0.32, p = .01), and Animal Fluency (β = -0.27, p = .02). Findings held when responses were grouped according to how much was given (Gave Equally, Gave More, Gave Less) for the word list memory and story memory measures.
Conclusions:
Findings of this study point to a negative relationship between financial altruism and cognitive functioning in older adults on measures known to be sensitive to Alzheimer’s Disease (AD). Findings also point to a potential link between financial exploitation risk and AD in older age.
Approximately 6.5 million Americans ages 65 and older have Alzheimer’s disease and related dementias, a prevalence projected to triple by 2060. While subtle impairment in cognition and instrumental activities of daily living (IADLs) arises in the mild cognitive impairment (MCI) phase, these insidious changes are difficult to capture early given the limitations of traditional assessment. Traditional IADL assessments administered infrequently are less sensitive to early MCI and not conducive to tracking the subtle changes that precede significant declines. Continuous passive monitoring of IADLs using sensors and software in home environments is a promising alternative. The purpose of this study was to determine which remotely monitored IADLs best distinguish between MCI and normal cognition.
Participants and Methods:
Participants were 65 years or older, independently community-dwelling, and had at least one daily medication and home internet access. Clinical assessments were performed at baseline. Electronic pillboxes (MedTracker) and computer software (Worktime) measured daily medication and computer habits using the Oregon Center for Aging and Technology (ORCATECH) platform. The Survey for Memory, Attention, and Reaction Time (SMART; Trail A, Trail B, and Stroop Tests), a self-administered digital cognitive assessment, was deployed monthly. IADL data were aggregated for each participant at baseline (first 90 days) in each domain, and various features were developed for each. The receiver operating characteristic area under the curve (ROC-AUC) was calculated for each feature.
Results:
Traditional IADL Questionnaires.
At baseline, 103 participants (normal n = 59, Mage = 73.6±5.5; MCI n = 44, Mage = 76.0±6.1) completed three functional questionnaires (the Functional Activities Questionnaire and the Measurement of Everyday Cognition (ECog), both self-report and informant). The Informant ECog demonstrated the highest AUC (72% AUC, p < .001).
Remotely monitored in-home IADLs and self-administered brief online cognitive test performance.
Eighty-four participants had medication data (normal n = 48, Mage = 73.2±5.4; MCI n = 36, Mage = 75.6±6.9). Four features related to pillbox-use frequency (73% AUC) and four features related to pillbox-use time (62% AUC) were developed. The discrepancy between self-reported frequency of use and actual use was the most discriminating single feature (67% AUC, p = .03).
Sixty-six participants had computer data (normal n = 38, Mage = 73.6±6.1; MCI n = 28, Mage = 76.6±6.8). Average usage time showed 64% AUC (p = .048) and usage variability showed 60% AUC (p = .18).
One hundred and two participants completed the SMART (normal n = 59, Mage = 73.6±5.5; MCI n = 43, Mage = 75.9±6.2). Eleven features related to survey completion time demonstrated 80% AUC in discriminating cognitive status. Eleven features related to the number of clicks during the survey demonstrated 70% AUC. Lastly, seven mouse-movement features demonstrated 71% AUC.
Conclusions:
Pillbox-use frequency combined features and self-administered brief online cognitive test combined features (e.g., completion times, mouse cursor movements) have acceptable to excellent ability to discriminate between normal cognition and MCI, and are relatively comparable to informant-rated IADL questionnaires. General computer-usage habits demonstrated lower discriminatory ability. Our approach has applied implications for detecting and tracking older adults’ declining cognition and function in real-world contexts.
Persistent brain fog is common in adults with Post-Acute Sequelae of SARS-CoV-2 infection (PASC), in whom it causes distress and in many cases interferes with performance of instrumental activities of daily living (IADL) and return-to-work. There are no interventions with rigorous evidence of efficacy for this new, often disabling condition. The purpose of this pilot study is to evaluate, on a preliminary basis, the efficacy of a new intervention for this condition, termed Constraint-Induced Cognitive Therapy (CICT). CICT combines features of two established therapeutic approaches: cognitive speed of processing training (SOPT), developed by the laboratory of K. Ball, and the Transfer Package and task-oriented training components of Constraint-Induced Movement Therapy, developed by the laboratory of E. Taub and G. Uswatte.
Participants and Methods:
Participants were > 3 months after recovery from acute COVID symptoms and had substantial brain fog and impairment in IADL. Participants were randomized to CICT immediately or after a 3-month delay. CICT involved 36 hours of outpatient therapy distributed over 4-6 weeks. Sessions had three components: (a) videogame-like training designed to improve how quickly participants process sensory input (SOPT), (b) training on IADLs following shaping principles, and (c) a set of behavioral techniques designed to transfer gains from the treatment setting to daily life, i.e., the Transfer Package. The Transfer Package included (a) negotiating a behavioral contract with participants and one or more family members about the responsibilities of the participants, family members, and treatment team; (b) assigning homework during and after the treatment period; (c) monitoring participants’ out-of-session behavior; (d) supporting problem-solving by participants and family members about barriers to performance of IADL; and (e) making follow-up phone calls. IADL performance, brain fog severity, and cognitive impairment were assessed using validated, trans-diagnostic measures before and after treatment and three months afterwards in the immediate-CICT group, and on parallel occasions in the delayed-CICT group (i.e., wait-list controls).
Results:
To date, five participants had been enrolled in the immediate-CICT group and four in the wait-list group. All had mild cognitive impairment, except for one with moderate impairment in the immediate-CICT group. Immediate-CICT participants, on average, had large reductions in brain fog severity on the Mental Clutter Scale (MCS, range = 0 to 10 points, mean change = -3.7, SD = 2.0); wait-list participants had small increases (mean change = 1.0, SD = 1.4). Notably, all five in the immediate-CICT group had clinically meaningful improvements (i.e., changes > 2 points) in performance of IADL outside the treatment setting, as measured by the Canadian Occupational Performance Measure (COPM) Performance scale; only one did in the wait-list group. The advantage for the immediate-CICT group was very large on both the MCS and COPM (d’s = 1.7, p’s < .05). In follow-up, immediate-CICT group gains were retained or built upon.
Conclusions:
These preliminary findings warrant confirmation by a large-scale randomized controlled trial. To date, CICT shows high promise as an efficacious therapy for brain fog due to PASC. CICT participants had large, meaningful improvements in IADL performance outside the treatment setting, in addition to large reductions in brain fog severity.
An understanding of factors that contribute to informant ratings of patients’ functional abilities is crucial, not only because these ratings are used to diagnose individuals with mild cognitive impairment (MCI) versus dementia, but also because these ratings are commonly used as outcome measures in clinical trials. While these ratings are assumed to be largely accurate, research shows they are subject to biases. Caregiver distress, higher caregiver educational attainment, and higher patient age are associated with a higher discrepancy between informant and patient reports of functional abilities. Studies on informant ratings of functional abilities that simultaneously control for patient objective cognitive abilities remain sparse. The current study aims to evaluate caregiver characteristics as predictors of informant-rated functional status while controlling for patient objective cognitive abilities in MCI.
Participants and Methods:
Individuals with a clinical diagnosis of MCI (Albert et al., 2011 criteria) were referred to the Cognitive Empowerment Program (CEP), a comprehensive lifestyle program addressing modifiable risk factors associated with progression. This study included cross-sectional data from 118 newly enrolled individuals and their caregivers, who served as informants. Patient cognitive functioning was assessed with the Montreal Cognitive Assessment (MoCA). Predictors of interest included caregiver-rated functional abilities (Functional Activities Questionnaire; FAQ), caregiver burden (Zarit Burden Interview; ZBI), caregiver depressive symptoms (Center for Epidemiological Studies Depression scale; CES-D), caregiver stress (Perceived Stress Scale; PSS), and caregivers’ self-rated communicative effectiveness (Communicative Effectiveness Index; CETI). Hierarchical linear regression models were run to predict FAQ scores while controlling for patient MoCA scores. Separate models were run for the caregiver variables of interest, including caregiver age, ZBI, CES-D, PSS, and CETI.
Results:
Caregivers were 75.6% spouses, 17.1% adult children, 3.3% unmarried/cohabitating partners, and 4.1% friends. The mean age of individuals with MCI was 74.7 years (SD: 6.96; mean education = 16.2±2.60 years; 47% female) and the mean age of caregivers was 66.4 years (SD: 12.88; mean education = 16.3±2.34 years; 66% female). Worse ratings of functional abilities on the informant-rated FAQ were found for patients with lower MoCA scores (β = .242, p = .008). Importantly, while controlling for MoCA scores, worse ratings of functional abilities on the FAQ were found for informants with lower age (β = -.269, p = .003), higher perceived stress (β = .267, p = .003), higher caregiver burden (β = .289, p < .001), and lower self-rated communication effectiveness (β = -.324, p < .001). Caregiver depression (β = .089, p = .084) and education (β = -.137, p = .147) were not significant predictors of functional ability ratings while controlling for MoCA scores.
Conclusions:
Results of the current study highlight the potential for biases in informant ratings regarding functional abilities in MCI. Informant ratings were found to be significantly influenced by caregiver age, stress, burden, and communicative effectiveness. A key finding is that younger caregivers, such as adult children, may report greater functional impairment in individuals with MCI. The current findings have implications for the use of perceived functional ratings, both for diagnostic purposes and as outcome measures in clinical trials.
Patients and their families often ask clinicians to estimate when full-time care (FTC) will be needed after Alzheimer's disease (AD) is diagnosed. Although a few predictive algorithms for duration to FTC have been created, these have not been widely adopted for clinical use due to questions regarding precision from limited sample sizes and the lack of an easy, user-friendly prediction model. Our objective was to develop a clinically relevant, data-driven predictive model using machine learning to estimate time to FTC in AD based on information gathered from a) clinical interview alone, and b) clinical interview plus neuropsychological data.
Participants and Methods:
The National Alzheimer's Coordinating Center dataset was used to examine 3,809 participants (M age at AD diagnosis = 76.05, SD = 9.76; 47.10% male; 87.20% Caucasian) with AD dementia who were aged >50 years, had no history of stroke, and were not dependent on others for basic activities of daily living at the time of diagnosis, based on qualitative self- or informant report. To develop a predictive model for time until FTC, supervised machine learning algorithms (e.g., gradient descent, gradient boosting) were implemented. In Model 1, 29 variables captured at the time of AD diagnosis and often gathered in a clinical interview, including sociodemographic factors, psychiatric conditions, medical history, and MMSE, were included. In Model 2, additional neuropsychological variables assessing episodic memory, language, attention, executive function, and processing speed were added. To train and test the algorithms, data were split in a 70:30 ratio. Prediction optimization was examined via cross-validation using 1000 bootstrapped samples. Model evaluation included assessment of confusion matrices and calculation of accuracy and precision.
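As a rough, non-authoritative sketch of such a pipeline (synthetic features rather than NACC variables, with scikit-learn's GradientBoostingRegressor standing in for the study's algorithms), one could proceed as follows:

```python
# Hedged sketch: gradient boosting on a 70:30 split with bootstrapped
# evaluation. All features and targets below are simulated placeholders.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# 3,809 cases x 29 clinical-interview-style predictors, all synthetic
X, y = make_regression(n_samples=3809, n_features=29, noise=10.0,
                       random_state=0)
y = np.interp(y, (y.min(), y.max()), (0.5, 14.6))  # rescale to years-to-FTC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30,
                                          random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Bootstrap the test-set error over 1000 resamples (cf. the study's
# 1000 bootstrapped samples for prediction optimization)
rng = np.random.default_rng(0)
maes = []
for _ in range(1000):
    idx = rng.integers(0, len(y_te), len(y_te))
    maes.append(mean_absolute_error(y_te[idx], model.predict(X_te[idx])))
print(f"MAE (years): {np.mean(maes):.2f} ± {np.std(maes):.2f}")
```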
Results:
The average time to requiring FTC after AD diagnosis was 3.32 years (Range = 0.53-14.57 years). For the clinical interview only model (Model 1), younger age of onset, use of cholinesterase inhibitor medication, incontinence, and apathy were among the clinical variables that significantly predicted duration to FTC, with the largest effects shown for living alone, a positive family history of dementia, and lower MMSE score. In Model 2, the clinical predictors remained significant, and lower Boston Naming Test and Digit-Symbol Coding scores showed the largest effects in predicting duration to FTC among the neuropsychological measures. Final prediction models were further tested using five randomly selected cases. The average estimated time to FTC using the clinical interview model was within an average of 5.2 months of the recorded event and within an average of 5.8 months for the model with neuropsychological data.
Conclusions:
Predicting when individuals diagnosed with AD will need FTC is important, as the transition often carries significant financial costs related to caregiving. Duration to FTC was predicted by clinical and neuropsychological variables that are easily obtained during standard dementia evaluations. Implementation of the model for prediction of FTC in test cases showed encouraging prognostic accuracy. The two models show promise as a first step toward the creation of a user-friendly prediction calculator that could help clinicians better counsel patients on when FTC after AD diagnosis may occur, though the development of separate models for use in more diverse populations will be essential.
Social determinants of health (SDoH) are structural elements of our living and working environments that fundamentally shape health risks and outcomes. The Healthy People 2030 campaign delineated SDoH into five distinct categories: economic stability, education access/quality, healthcare access, neighborhood and built environment, and social and community contexts. Recent research has demonstrated that minoritized individuals have greater disadvantage across SDoH domains, which has been linked to poorer cognitive performance in older adulthood. However, the independent effects of SDoH on everyday functioning across and within racial groups remain less clear. The current project explored the association between SDoH factors and 10-year change in everyday functioning in a large sample of community-dwelling Black and White older adults.
Participants and Methods:
Data were drawn from 2,505 participants without dementia enrolled in the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) study (age M = 73.5; 76% women; 28% Black/African American). Sociodemographic, census, and industry classification data were reduced into five SDoH factors: economic stability, education access and quality, healthcare access and quality, neighborhood and built environment, and social and community contexts. The Observed Tasks of Daily Living (OTDL), a performance-based measure of everyday functioning with tasks involving medication management, finances, and telephone use, was administered at baseline and at 1-, 2-, 3-, 5-, and 10-year follow-up visits. Mixed-effects models with age as the timescale tested (1) racial group differences in OTDL trajectories, (2) race x SDoH interactions on OTDL trajectories, and (3) associations between SDoH and OTDL trajectories stratified within Black and White older adults. Covariates included sex/gender, vocabulary score, Mini-Mental Status Examination, depressive symptoms, visual acuity, general health, training group status, booster status, testing site, and recruitment wave.
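A simplified sketch of this kind of mixed-effects growth model is given below, using statsmodels' MixedLM on simulated data; the variable names, the reduced covariate set, and the omission of the quadratic age term are all simplifying assumptions.

```python
# Simplified sketch of a mixed-effects growth model with an age/time
# timescale and a race x SDoH interaction; all data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_subj, n_visits = 200, 6
subj = np.repeat(np.arange(n_subj), n_visits)
age_c = np.tile(np.array([0, 1, 2, 3, 5, 10], float), n_subj)  # years from baseline
race = np.repeat(rng.integers(0, 2, n_subj), n_visits)         # 1 = Black
sdoh = np.repeat(rng.normal(size=n_subj), n_visits)            # SDoH factor score
u = np.repeat(rng.normal(scale=2.0, size=n_subj), n_visits)    # random intercepts
otdl = (50 + u - 0.3 * age_c - 0.2 * race * age_c
        + 0.05 * sdoh * age_c + rng.normal(size=n_subj * n_visits))

df = pd.DataFrame(dict(subj=subj, age_c=age_c, race=race, sdoh=sdoh, otdl=otdl))
# Random intercepts and slopes over time within person
m = smf.mixedlm("otdl ~ age_c * race * sdoh", df,
                groups=df["subj"], re_formula="~age_c").fit()
print(m.summary())
```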
Results:
Black older adults had a steeper decline in OTDL performance compared to White older adults (linear: b = -.25; quadratic: b = -.009; ps < .001). There was a significant race x social and community context interaction on linear OTDL trajectories (b = .06, p = .01), but no other significant race x SDoH interactions were observed (bs = -.007 to .05, ps = .11 to .73). Stratified analyses revealed that lower levels of social and community context were associated with steeper age-related linear declines in OTDL performance in Black (b = .08, p = .001), but not White older adults (b = .004, p = .64). Additionally, lower levels of economic stability were associated with steeper age-related linear declines in OTDL performance in Black (b = .07, p = .04), but not White older adults (b = .01, p = .35). Finally, no significant associations between other SDoH and OTDL trajectories were observed in Black (bs = -.04 to .01, ps = .09 to .80) or White (bs = -.02 to .003, ps = .07 to .96) older adults.
Conclusions:
SDoH, which capture aspects of structural racism, play an important role in accelerating age-related declines in everyday functioning. Lower levels of economic and community-level social resources are two distinct SDoH domains associated with declines in daily functioning that negatively impact Black, but not White, older adults. It is imperative that future efforts focus on both identifying and acting upon the upstream drivers of SDoH-related inequities. Within the United States, this will require addressing more than a century of anti-Black sentiment, White supremacy, and unjust systems of power and policies designed to intentionally disadvantage minoritized groups.
Cognitive reserve has been linked to functional ability, and depression has been shown to be associated with more functional impairment in older adults. While cognitive reserve and depression are associated and have each been shown to impact functional impairment, the independent impact of cognitive reserve on functional ability after accounting for depressive symptoms has not been explored. For the purpose of this study, years of education served as a proxy for cognitive reserve, which is consistent with the literature. It was predicted that higher levels of education would be associated with better functional ability regardless of age and severity of depressive symptoms.
Participants and Methods:
Participants (ages 55 to 90) were drawn from the Alzheimer’s Disease Neuroimaging Initiative (N = 3407); participants with major depression were not included. Subsyndromal depressive symptoms were measured using the Geriatric Depression Scale (GDS < 6) and functional impairment was assessed using the Functional Activities Questionnaire. A three-stage hierarchical regression was conducted with functional ability as the dependent variable.
Results:
Age, entered at stage one of the regression model, was a significant predictor (F(1,1427) = 49.75, p < .001) and accounted for 3.4% of the variance in functional ability. Adding depressive symptoms to the regression model led to a significant increase in variance explained (F(1,1426) = 64.57, p < .001), accounting for an additional 4.2% of the variance in functional ability. Adding years of education to the regression model explained an additional 1.4% of the variance in functional ability, and this increase was significant (F(1,1425) = 22.53, p < .001).
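These stage-wise tests follow the standard F-test for a change in R². Using the rounded values reported above (ΔR² ≈ .014, cumulative R² ≈ .090, one added predictor, residual df = 1425), the formula reproduces the stage-three statistic up to rounding:

$$
F_{\text{change}} = \frac{\Delta R^2 / \Delta k}{(1 - R^2_{\text{full}})/(n - k_{\text{full}} - 1)}
= \frac{.014 / 1}{(1 - .090)/1425} \approx 22
$$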
Conclusions:
Cognitive reserve (operationalized as higher levels of education) was associated with higher functional ability even after accounting for age and depressive symptoms.
Socioeconomic factors spanning from childhood to mid-adulthood were examined in an older adult Black cohort to better understand their influence on the ability to complete instrumental activities of daily living. Previous research on socioeconomic factors has primarily focused on cognitive changes rather than everyday functioning. Additionally, the research that has examined functioning has relied on predominantly White samples.
Participants and Methods:
Data on Black participants were obtained from Rush University’s Memory and Aging Project (MAP), Minority Aging Research Study (MARS), and the Latino CORE study (CORE). Participants (n = 1,273) were predominately female (79.9%) and ranged in age from 54 to 97 years (M = 73 years). Participants were stratified into two groups based on their consensus diagnosis: no cognitive impairment (NCI; 76.1%) and mild cognitive impairment (MCI). Linear regression analyses were run for each group to examine predictors of decreased functioning in instrumental activities of daily living. Predictors included income levels during childhood, at age 40, and currently. Additionally, sex, education level, and parental education levels were included in the models.
Results:
Impairment in instrumental activities of daily living was predicted by participants’ age at the time of their visit in both the NCI and MCI groups (p < 0.001). Current income level significantly predicted IADL functioning for NCI participants (p < 0.001). This relationship was not present for the MCI group; rather, total family income at age 40 better predicted functioning (p = 0.043).
Conclusions:
Previous research has found that early- and mid-life socioeconomic circumstances have cascading and complex effects on late-life cognition. These same associations may apply to functioning in instrumental activities of daily living as they do to cognition. In the present study, current income level was influential on the functioning of participants without cognitive impairment, whereas among those with mild cognitive impairment, mid-life economic circumstances were more impactful on everyday functioning. While the economic status of both groups predicted functioning, these findings highlight the importance of better understanding socioeconomic factors across the lifespan and at all levels of cognition.