People aged over 50 years currently represent 37 percent of the population in Europe, and population projections foresee that the number of people aged over 60 will increase by approximately 2 million per annum in the coming decades, so that by 2060 this group is expected to represent around 30 percent of the total population (1). Dementia and cognitive impairment are age-related conditions that constitute a major public health challenge due to their prevalence and consequences in the older population. Mild cognitive impairment (MCI), which affects more than 20 percent of those over 70 years, has been reported to be a risk factor for dementia (Reference Bruscoli and Lovestone2). Recent studies suggest that slowing the progression of dementia by 1 year would lead to a better quality of life for people living with dementia and a significant cut in the related socioeconomic costs (Reference Geldmacher, Kirson and Birnbaum3).
In this context, the early detection of dementia is the first step to initiate timely treatments, manage the disease, and reduce morbidity (Reference Huntley, Gould, Liu, Smith and Howard4). There is no evidence to support screening of asymptomatic individuals, but the monitoring and evaluation of persons with suspected cognitive impairment is justified, as they have an increased risk of developing dementia (Reference Yu, Lee and Jang5). A computational model-based prediction found that the reduction in cognitive decline and dementia depends on initial screening age, screening frequency, and specificity (Reference Furiak, Kahle-Wrobleski and Callahan6).
Information and communication technologies (ICT) is an umbrella term that refers to any communication device or application comprising computer and network hardware and software, radio, television, mobile phones, wireless signals, and the various services and applications associated with them (videoconferencing, tele-healthcare, distance learning, etc.). In the field of neuropsychological assessment, new screening instruments should capitalize on new technological advances (Reference Snyder, Jackson and Petersen7); ICT devices have been increasingly used for neuropsychological assessment, showing good correlations with well-established paper-and-pencil neurocognitive testing batteries.
ICT instruments for the early detection and assessment of cognitive impairment can be grouped into four categories: electronic devices (personal computers, laptops, mobile phones, tablets, etc.), Internet-based devices, monitoring devices (which measure users’ behavior in different areas), and virtual reality (which immerses the user in a more complex and integral sensorial experience). Computerized test batteries have been reported to have advantages over paper-and-pencil neurocognitive testing batteries in areas such as the standardization of administration and stimulus presentation, the automatic collection of data, the reduction of human error in administration, accurate measures of response latencies, automated comparison with an individual's prior performance and with age-related norms, efficiencies of staffing and costs (Reference Zygouris and Tsolaki8), tailoring tests to the examinee's level of performance, minimizing floor and ceiling effects (Reference Wild, Howieson, Webbe, Seelye and Kaye9), and their potential to capture time-related information such as spatial planning strategies (Reference Kim, Hsiao and Do10). On the other hand, older adults’ limited familiarity with computers (Reference Zygouris and Tsolaki8) and a general lack of psychometric standards (Reference Schlegel and Gilliland11) have been raised as obstacles for these instruments.
A review of computerized cognitive testing for older adults (Reference Zygouris and Tsolaki8) identified seventeen test batteries with adequate discriminant validity and test–retest reliability; the authors concluded that the large number of available batteries could be beneficial to the clinician or researcher. However, they warned clinicians of the need to choose the correct battery for each application, considering variables such as cost, the need for a specialist for administration or scoring, and the length of administration.
In a previous review (Reference Wild, Howieson, Webbe, Seelye and Kaye9), the authors identified eighteen computerized test batteries, of which eleven were appropriate for older adults; they recommended that test batteries be evaluated on a case-by-case basis due to the variability they displayed. In a comparative study of tools for the assessment of cognition, the authors reviewed sixteen assessment instruments, of which fourteen were computer based (Reference Snyder, Jackson and Petersen7). Their goal was to identify measures capable of assessing cognitive changes before noticeable decline suggestive of MCI or early Alzheimer's disease. They concluded that there was no single recommended “gold standard” battery but, rather, a subset of instruments to choose from based on individual study needs. They recommended that researchers compare performance on a given cognitive test/battery with changes in known disease-related biomarkers (structural MRI, cerebrospinal fluid, etc.). A review of computerized tests for older adults in primary care settings (Reference Tierney12) identified eleven test batteries, of which three were judged potentially appropriate for assessment in primary care based on good test–retest reliability, large normative samples, a comprehensive description of patient cognitive performance, and the provision of an overall score or probability of MCI.
Usability is a key aspect of the development of ICT programs. The International Organization for Standardization (ISO) defines usability as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use” (13). It comprises concepts such as understandability, learnability, acceptability, user experience, operability, and attractiveness (Reference Zapata, Fernandez-Aleman, Idri and Toval14). User experience is a subjective feeling related to having a satisfactory experience when using technology (Reference Rauschenberger, Schrepp, Cota, Olschner and Thomaschewski15). There is a need to better understand the usability of ICT for persons with dementia, their preferences for specific interfaces, and their acceptance of different technologies (Reference Meiland, Innes and Mountain16). Consultation with people with dementia (PWD) and their carers is crucial to address usability in the design of ICT-based instruments (Reference Span, Hettinga, Vernooij-Dassen, Eefsting and Smits17).
Despite the previous reviews of this subject, two fundamental aspects remain conspicuous by their absence: usability and the possibility of home-based self-administration. It is, therefore, necessary to analyze the state of the art of the available instruments with respect to these aspects. The objective of this systematic review is to analyze the currently available ICT-based instruments for the early screening and detection of cognitive decline in terms of validity, reliability, and usability.
METHODS
A protocol was developed for this systematic review (Supplementary File 1) following the PRISMA reporting guidelines; the supporting PRISMA checklist is available as Supplementary File 2.
Types of Interventions
This systematic review centered on ICT-based instruments assessing or monitoring older adults with potential cognitive decline. This included electronic devices (ED) (personal computers, laptops, tablets, phones, or mobile phones, etc.), Internet (I), monitoring devices (MD), and virtual reality (VR). Due to the large number of instruments in this area, we decided to focus this study on electronic devices.
Inclusion and Exclusion Criteria
All studies describing ICT-based instruments for the screening, evaluation, and assessment of cognitive and functional decline in older adults published between 2010 and 2015 were included. Screening and assessment instruments not validated for older adults, not discriminating results for older adults, or not providing minimum normative data (e.g., mean age of participants, diagnosis, etc.) were excluded.
Selection of Studies
A search was performed in July 2015 of the databases Medline and PsycINFO with the search terms (Dementia OR Alzheimer) AND (computer OR ICT) AND (screening OR diagnosis OR assessment OR evaluation) and yielded 13,893 papers (3,891 after the exclusion of duplicates). Of them, 1,668 were published between 2010 and 2015. On the basis of the inclusion criteria, the titles, keywords, and abstracts were assessed by the first author, yielding a total of eighty-nine relevant papers in this first stage of the selection process. Those eighty-nine papers were then assessed by two authors on the basis of abstracts and, when needed, full copies of the articles. Any disagreement about the inclusion of papers was discussed in a consensus meeting. Seventeen further studies were found through hand search, tracking cited references in other studies, and relevant previous literature reviews in this area.
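For reproducibility, the boolean structure of the query can be written out explicitly. The line breaks below are for readability only; any database-specific field tags (e.g., title/abstract restrictions in Medline or PsycINFO) are omitted here, and the protocol in Supplementary File 1 remains the authoritative source for the exact syntax.

```
(Dementia OR Alzheimer)
  AND (computer OR ICT)
  AND (screening OR diagnosis OR assessment OR evaluation)
```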
Data Synthesis
The selected studies were analyzed by two reviewers with a standardized data extraction form, as suggested by the Cochrane Handbook for Systematic Reviews of Interventions. Tests, early detection tools, and screening instruments were grouped according to their main purpose into cognitive test batteries, measures of isolated tasks, behavioral measures (measures of motor and sensory processes), and diagnostic tools (used by clinicians to help them in the diagnostic process).
Self-administration was defined as “test-taking that is unsupervised after the test platform has been set up, and can occur in the clinic or home setting” (Reference Jacova, McGrenere and Lee18). Cognitive domains were depicted as described by the authors in the article. Concurrent validity was reported as correlations with other previously validated instruments. Discriminant validity was reported as sensitivity and specificity rates and/or the capacity to distinguish people with and without cognitive impairment. When discriminant validity was reported as a lack of correlation with unrelated measures, this information was also included.
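For reference, the sensitivity and specificity rates reported in this review follow the standard contingency-table definitions, where a positive result denotes classification as cognitively impaired:

$$\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP},$$

with $TP$ and $FN$ counting impaired participants classified correctly and incorrectly, and $TN$ and $FP$ counting unimpaired participants classified incorrectly and correctly.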
Quality Assessment
Schlegel and Gilliland (Reference Schlegel and Gilliland11) have proposed twenty critical elements that constitute a competent quality assessment for computer-based test batteries, grouped into four clusters (module information, test functionality, data recording, and interface usability/anomalous behavior). These elements can be summarized in a systematic list of problems sorted by instrument and graded by severity from 1 (severely affects test integrity) to 8 (affects look and feel). A checklist with these items was used for the quality assessment of the instruments, as sketched below.
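As an illustration of how such a checklist can be operationalized for scoring, the following minimal sketch assumes one placeholder item per cluster; the item names are hypothetical and do not reproduce Schlegel and Gilliland's exact wording.

```python
# Minimal sketch of checklist-based quality scoring; item names are
# hypothetical placeholders, not Schlegel and Gilliland's exact items.

# Each cluster maps to (item, severity) pairs; severity runs from
# 1 (severely affects test integrity) to 8 (affects look and feel).
CHECKLIST = {
    "module_information": [("version_control_documented", 2)],
    "test_functionality": [("response_timing_verified", 1)],
    "data_recording": [("responses_stored_correctly", 1)],
    "interface_usability": [("anomalous_behavior_reported", 8)],
}

def quality_score(ratings: dict[str, bool]) -> tuple[int, float]:
    """Return (items passed, percentage of applicable items passed)."""
    items = [item for cluster in CHECKLIST.values() for item, _ in cluster]
    applicable = [i for i in items if i in ratings]
    passed = sum(ratings[i] for i in applicable)
    return passed, 100 * passed / len(applicable)

# Example: an instrument passing three of the four illustrative items.
print(quality_score({
    "version_control_documented": True,
    "response_timing_verified": True,
    "responses_stored_correctly": True,
    "anomalous_behavior_reported": False,
}))  # -> (3, 75.0)
```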
RESULTS
The reviewers agreed that thirty-four articles covering thirty-one instruments met the inclusion criteria. Figure 1 presents a flowchart illustrating the selection process. The instruments and their characteristics are summarized in Tables 1 (descriptive data) and 2 (psychometric properties). All the selected articles were cross-sectional descriptive studies, which is coherent with the fact that all of them validated a test or test battery. See Supplementary File 3 for the references of the reviewed articles. A list of instruments reviewed in the previous literature is provided in Supplementary File 4; twenty-three of the thirty-one instruments included in this review had not been included in the previous literature reviews.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171229094802833-0085:S0266462317000800:S0266462317000800_fig1g.gif?pub-status=live)
Figure 1. Flowchart of study selection.
Table 1. ICT Instruments Descriptive Data: Electronic Devices (PC, Laptop, Tablet, iPad, Mobile Phone)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171229094802833-0085:S0266462317000800:S0266462317000800_tab1.gif?pub-status=live)
Notes. AD, Alzheimer disease; Adm., administered by; B, behavioral measure; cADAS, computerized Alzheimer's Disease Assessment Scale: Cognitive Subscale; CADi, Cognitive Assessment for Dementia, iPad version; CAD-PAD, Clinical Approach to Diagnosis of Pre-dementia Alzheimer's Disease; CAMCI, Computerized Assessment of MCI; CANS-MCI, Computer-Administered Neuropsychological Screen for Mild Cognitive Impairment; CANTAB-PAL, CANTAB Paired Associate Learning; CDR, Cognitive Drug Research computerized assessment; CRRST, Cued-Recall Retrieval Speed Test; C-TOC, Cognitive Testing on Computer; LW, dementia with Lewy bodies; Domains, the cognitive domains were depicted as described by the authors; DT, diagnostic tool; Ed, level of education reported (yes/no); FTD, frontotemporal dementia; H, healthy; Ha, hippocampal; HGT, Hidden Goal Task; IT, isolated task; Lang., language; MCI, mild cognitive impairment; MCS, Mobile Cognitive Screening; n, sample size; NCGG-FAT, National Center for Geriatrics and Gerontology functional assessment tool; NIHTB-CB, NIH Toolbox Cognition Battery; NR, not reported; P, potentially able to be delivered at home; PC, personal computer; PSMT, Picture Sequence Memory Test; PWD, people with dementia; RAVLT, Rey Auditory Verbal Learning Test; SA, self-administered; SCIT, Subtle Cognitive Impairment Test; SD, standard deviation; SDRST, Spatial Delayed Recognition Span Task; SR, scores reported; T, administration time in minutes; TB, test battery; TDAS, Touch Panel-type Dementia Assessment Scale; TPST, Touch-Panel Computer Assisted Screening Tool; TPT, The Placing Test; Ty, type of intervention; VD, vascular dementia; VECP, Visual Exogenous Cuing Paradigm; VPC, Visual Paired Comparison; VSM, Visuo-spatial memory test.
Table 2. ICT Instruments Psychometric Data: Electronic Devices (PC, Laptop, Tablet, iPad, Mobile Phone)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171229094802833-0085:S0266462317000800:S0266462317000800_tab2.gif?pub-status=live)
Notes. Abbreviations of instrument names can be seen in Table 1. ADAS-cog, Alzheimer's Disease Assessment Scale: Cognitive Subscale; BVMT-R, Brief Visuospatial Memory Test-Revised; cADAS, computerized Alzheimer's Disease Assessment Scale; CVLT, California Verbal Learning Test; DRS, Dementia Rating Scale; HC, healthy controls; HDS-R, Hasegawa Dementia Scale-revised; ICC, intraclass correlation coefficient; MCI, mild cognitive impairment; MMSE, Mini Mental State Examination; NPT, neuropsychological tests; P&P, paper and pencil; PC, personal computer; PPVT, Peabody Picture Vocabulary Test; PWD, people with dementia; RAVLT, Rey Auditory Verbal Learning Test; WCST, Wisconsin Card Sorting Test; WMS-R, Wechsler Memory Scale Revised.
Study Quality Assessment
The total score of the studies on the Schlegel and Gilliland (2007) checklist ranged from 2/20 (10 percent) to 20/20 (100 percent). The average score was 15.40, equivalent to 77 percent of the possible marks. Table 3 shows the checklist with the scores of each instrument. Module information and version control was the strongest quality area, with 92 percent of the possible marks achieved. Data recording obtained 88 percent of the possible marks, and test functionality 71 percent. The weakest areas of the instruments were usability (18 percent) and anomalous behavior reporting (29 percent).
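The equivalence between the average score and the percentage of possible marks follows directly from the twenty-point maximum:

$$\frac{15.40}{20} = 0.77 = 77\%.$$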
Table 3. Methodological Quality of Included Studies (Schlegel and Gilliland, 2007)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171229094802833-0085:S0266462317000800:S0266462317000800_tab3.gif?pub-status=live)
Notes. NA, not applicable; NR, not reported; cADAS, computerized Alzheimer's Disease Assessment Scale.
Descriptive Data
Of the thirty-one instruments, 52 percent (n = 16) used a PC, 26 percent (n = 8) a tablet, and 13 percent (n = 4) a laptop; one was set in a mobile phone, one used the telephone, and another one used a specifically designed technology. Three of the tablet-based instruments could also be displayed on a personal computer. The most common input device was the touchscreen, in 48 percent (n = 15) of the instruments, followed by buttons or keys in 29 percent (n = 9), of which five had simplified two-button input pads. Other input modalities were mouse (n = 3), microphone or voice recognition (n = 2), eye tracker (n = 1), and multiple devices (n = 1). Fifty-five percent (n = 17) of the instruments were test batteries, 36 percent (n = 11) individual tasks, two diagnostic tools, and one a behavioral measure.
The instruments were validated with a total of 4,307 participants, 1,104 of whom were PWD (M = 74.65; SD = 3.98), 1,057 people with MCI (M = 74.84; SD = 4.46), and 2,146 healthy older adults (M = 73.59; SD = 5.12). Eighty-four percent (n = 26) were administered to healthy older adults, 58 percent (n = 18) to people with MCI, and 65 percent (n = 20) to PWD. Seventy-nine percent of the articles (n = 27) provided information about the years of education of the participants, and 94 percent reported exact results and quantitative normative data. The instruments’ administration time ranged from 5 to 44.2 minutes (M = 21.99; SD = 12.05). Sixty-eight percent of the instruments were self-administered; of them, 13 percent (n = 4) were completely self-administered, 19 percent (n = 6) had to be initiated by a technician, 29 percent (n = 9) needed assistance or supervision, and one had to be corrected by a professional. Twenty-six percent (n = 8) were administered by a technician, and three did not report the mode of administration.
Six percent (n = 2) were delivered at home, 39 percent (n = 12) were delivered at a clinic or laboratory but had the potential to be delivered at home, and 55 percent (n = 17) could only be delivered at a clinic. Ninety-four percent (n = 29) had cognitive outcomes, while the remaining two were diagnostic tools assessing the risk of conversion to AD.
Usability
Results about usability and understandability are summarized in Table 1. Nineteen percent (n = 6) of the instruments reported outcomes about usability, defined as acceptability, efficiency, and stability. In a single paper, the development of the instrument was carried out in several stages, each incorporating the suggestions from the usability assessment performed in the previous stage through an iterative process (Reference Jacova, McGrenere and Lee18). In another case, the researchers used a computerized system including a Perception Response Evaluation (PRE) module that established whether a participant met minimum perceptual and response requirements for taking various tests (Reference O'Halloran, Kemp, Salmon, Tariot and Schneider19).
Additionally, 22 percent (n = 7) of the instruments provided information about understandability. In three cases, understandability was used as a synonym for the participants’ ability to complete the assessment, but it was not assessed with tests or questionnaires, with one exception (COGVAL) that used a nonstandardized questionnaire (Reference Solís-Rodríguez20). In one study (Reference Memoria, Yassuda, Nakano and Forlenza21), the test instructions were automatically reiterated by the computer program when the pattern of errors suggested that instructions were misunderstood. User experience was assessed in only one instrument (Reference Jacova, McGrenere and Lee18), and two other articles addressed it generically (Reference Onoda, Hamano and Nabika22; Reference Fredrickson, Maruff and Woodward23).
Psychometric Properties
Twenty-three (74 percent) instruments provided information about concurrent validity. Of them, five were validated against well-established neuropsychological test batteries (e.g., Alzheimer's Disease Assessment Scale: Cognitive Subscale [ADAS-Cog]), seven against brief tests (e.g., Mini Mental State Examination [MMSE], Montreal Cognitive Assessment [MoCA], Hasegawa Dementia Scale-revised [HDS-R]), and eleven against individual tasks or parts of batteries.
Twenty-four (77 percent) instruments reported information about discriminant validity, obtaining in general good levels of sensitivity and specificity in detecting population with cognitive impairment.
Regarding reliability, six instruments provided information about intraclass correlation, and eleven about test–retest reliability. Two instruments had undergone factor analysis, and seven provided cutoff points for cognitive impairment.
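Test–retest reliability of the kind reported here is typically quantified as the correlation between two administrations of the same instrument to the same participants. Below is a minimal sketch with hypothetical scores, assuming numpy and scipy are available; intraclass correlation variants require a dedicated mixed-model formulation not shown here.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical total scores from two administrations of the same
# instrument to the same eight participants.
session_1 = np.array([22, 18, 25, 30, 15, 27, 20, 24])
session_2 = np.array([23, 17, 26, 29, 16, 28, 19, 25])

# Test-retest reliability as the Pearson correlation between sessions.
r, p = pearsonr(session_1, session_2)
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
```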
DISCUSSION
Even though computer-based testing has been used in research for more than 65 years, until recently assessment was always carried out by a trained professional in a clinical context (clinic, laboratory, hospital, etc.). General access to personal computers, tablets, and smartphones has opened a wide new horizon of opportunities for community-based assessments that can be self-administered or administered by a carer, improving accessibility and the potential for early detection without compromising validity and reliability. However, the results of this review indicate that, despite the range of different and accessible technologies developed in the past years, most of the instruments are still delivered through a personal computer, with only eight using a tablet and one a mobile phone. It is necessary to design screening instruments that can be delivered through the most accessible technologies, such as tablets and smartphones.
One of the strengths and potentials of ICT-based devices is the possibility of being delivered at home, eliminating the need to travel to a healthcare facility. This would make early screening and detection more feasible in comparison with traditional paper-and-pencil instruments, yet most of the instruments could only be delivered at a clinic (55 percent). In fact, even though 39 percent of the instruments had the potential to be home delivered (based on the technology needed and automated completion), most of them still needed the assistance of a technician to be administered. In some cases, the role of the technician included aspects that current technology can overcome with remote control or automatic systems, such as collecting demographic data (Reference Tierney, Naglie and Upshur24); side-by-side supervision (Reference O'Halloran, Kemp, Salmon, Tariot and Schneider19); or repeating the instructions (Reference Memoria, Yassuda, Nakano and Forlenza21).
This might be caused by a gap between the health system's capacity to work with automatically generated data and current ICT development. An effort should be made to develop completely self-administered instruments and to design software that can be initiated by end users or their carers at home. In addition, clinicians and health care systems should develop their capacity to gather and use remote, automatically generated clinical data for diagnostic and screening purposes. Ethical concerns about home-based assessments should also be addressed: obtaining informed consent from persons with dementia may be complicated by difficulties understanding complex technology and by a loss of awareness, over time, that data are being collected.
Usability
Of the areas analyzed in this review, usability is the most under-reported, with only six studies including it in their design process. The fact that 81 percent of the instruments did not address the subject of usability and 78 percent did not assess understandability raises concern over their design processes. There also seems to be a lack of consensus on the scope of the term; in one of these studies, for example, usability was taken as a synonym for acceptability (Reference Fredrickson, Maruff and Woodward23).
The integration of electronic devices in the assessment and treatment of older adults with cognitive impairment has raised criticism and skepticism, such devices being regarded as solutions that do not acknowledge users' interests, needs, and values. In this context, it is essential to incorporate person-centered design (Reference Brooker25) into the development of ICT-based instruments for the early screening and detection of cognitive decline. The usability of the system and the application of user-centered design are more important than the level of education or familiarity with ICT (Reference Schikhof, Mulder and Choenni26). ICT instruments can be embedded in a person-centered model; a good example of this is the provision of feedback sessions after the completion of the assessment to ensure patient and family understanding of diagnosis and prognosis, to answer questions, and to collaboratively discuss recommendations and their implementation (Reference Harrell, Wilkins, Connor and Chodosh27). The interface of the devices should be designed according to the individual's age, gender, and preferences, personalizing its appearance (Reference Sun, Burke and Mao28).
Although previous findings in the literature recommend touchscreens as the best interface for older people (Reference Canini, Battista and Della Rosa29), almost half of the instruments still do not include this technology. The match between person and technology has to be considered, as it is a key factor in the decision to use technology or not. The inclusion of older adults with cognitive decline in the design and evaluation of these instruments is fundamental, as is the assessment of user experience (Reference Hassenzahl and Tractinsky30). Unfortunately, this was not the case for most of the instruments reviewed. User experience information is necessary for the design and adaptation of the technology to the participant's desires, thoughts, learning style, and aesthetics.
Lack of computer experience has been repeatedly reported as a characteristic that decreases the odds of independent completion of tests and correct understanding (Reference Tierney, Naglie and Upshur24). The evidence found in this systematic review suggests that this situation could be overcome by the introduction of pre-assessment practices. Pretest training sessions are often used to let participants become familiar with the novel technology (Reference Allain, Foloppe and Besnard31–Reference Friedman, Yelland and Robinson35). Practice and training before using electronic devices is advisable, as older adults can learn to use them and improve their performance. Another field to be explored in future studies is the comparison of individuals’ test scores in different contexts: does the performance of the assessed person change because of the presence or absence of the clinician? Does it get worse or better in independent and automatic evaluation compared with face-to-face assessments? Another direction to move forward is to increase the accessibility of the instruments by carrying out trials that assess their suitability for independent administration. Usability assessment is vitally important if tests are to be administered independently.
The assessment of usability can be performed through different methods. The ISO/IEC 9126-4 standard recommends that usability assessments comprise: effectiveness (the accuracy and completeness with which users achieve specified goals), efficiency (the resources expended in relation to the effectiveness), and user satisfaction (comfort and acceptability of use). There are specific usability assessment tools such as the “Usefulness, satisfaction and ease of use questionnaire” (Reference Lund36), the Everyday Technology Use Questionnaire (Reference Rosenberg, Kottorp, Winblad and Nygård37), the After Scenario Questionnaire (Reference Lewis38), and the System Usability Scale (Reference Lewis, Sauro and Kurosu39). There is also a questionnaire that captures perceived usability and acceptance according to the technology acceptance model (Reference Venkatesh, Morris, Davis and Davis40). In addition, there are empirical ways in which usability can be measured through observation (e.g., difficulty releasing the touchscreen after pressing it, the number of times the users pressed the screen, the number of times they requested help from the technician and why, etc.). Automated evaluation mechanisms should also be adopted to improve the empirical methods used to assess usability (Reference Baez, Couto and Herrera41).
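As an illustration of how one of these tools is scored, the System Usability Scale combines ten 1–5 Likert responses into a single 0–100 score. The sketch below implements the standard scoring rule; the responses shown are hypothetical.

```python
def sus_score(responses: list[int]) -> float:
    """Standard System Usability Scale scoring for ten 1-5 Likert responses.

    Odd-numbered items are positively worded and contribute (response - 1);
    even-numbered items are negatively worded and contribute (5 - response).
    The summed contributions (0-40) are scaled to a 0-100 range.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

# Hypothetical responses from one participant; scores near 68 are commonly
# cited as average usability.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # -> 77.5
```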
Validity and Reliability
A quality assessment evaluation should represent a required initial step before the evaluation of psychometric properties and validity, and it should be performed by someone independent of the developer of the instrument (Reference Schlegel and Gilliland11). The methodological quality of the instruments was good according to the Schlegel and Gilliland checklist, but only four scored 100 percent of the items (Reference Kim, Hsiao and Do10; Reference Jacova, McGrenere and Lee18; Reference Solís-Rodríguez20; Reference Fredrickson, Maruff and Woodward23), showing a potential for quality improvement, especially in the fields of usability and test functionality.
The validation of the instruments reviewed was carried out with healthy older adults as well as with PWD and people with MCI as distinct groups. This is an asset to be highlighted, as it has been reported that persons with cognitive impairment are likely to have a decreased ability to manage everyday technology (Reference Nygård and Starkhammar42), and people with dementia have greater impairment than people with MCI (Reference Malinowsky, Almkvist, Kottorp and Nygard43). The fact that researchers have validated their instruments for the three groups provides clinicians with the tools needed to make clinical decisions regarding the assessment of the different populations. Most of the instruments obtained acceptable values of specificity and sensitivity. Still, only seven studies provided cutoff points for cognitive impairment. It would be advisable for researchers to make an effort to provide cutoff points for their instruments, as they are essential for screening purposes.
In terms of concurrent validity, most of the instruments were validated against brief tests (MMSE) or individual tasks. This is an aspect to be improved in the validation of screening instruments, as brief tests such as the MMSE have significant limitations for the early detection of cognitive decline (Reference Ismail, Rajji and Shulman44). The ecological validity of the assessments was not evaluated in any of the instruments. Bardram et al. (2006) raised awareness of the need to use technological assessments in a real-world setting, outside the laboratory, and to carry out longitudinal studies that assess the evolution of the relationship between the end user and technology (Reference Bardram, Hansen, Mogensen and Soegaard45). The mean duration of administration varied across instruments but in general remains an added value of ICT-based instruments, as they achieve good levels of specificity and sensitivity with reasonably brief assessments. There is a need for longitudinal studies to analyze the reliability of early detection of cognitive impairment and the inherent risk of developing dementia.
Test Batteries versus Individual Tasks
The existence of tests of specific domains, such as visuospatial function, with good specificity and sensitivity for the detection of cognitive impairment opens the debate about the costs and benefits of performing full assessment batteries for screening purposes. On the other hand, many screening tools are weighted toward the assessment of memory impairment, whereas deficits in other areas are crucial for differential diagnosis (Reference Ahmed, de Jager and Wilcock46). In this regard, the next step should be the design of brief screening instruments that assess key markers for early detection. Indeed, some computer-based batteries have been analyzed to determine whether specific subtests have enough sensitivity to discriminate healthy older people from people with cognitive impairment. Automated speech recognition technology is a promising field (Reference Tierney12), and research on brain-computer interfaces could offer in the near future an opportunity for the assessment, diagnosis, and treatment of people with communication impairments (Reference Liberati, Dalboni da Rocha and van der Heiden47).
LIMITATIONS
As pointed out elsewhere (Reference Snyder, Jackson and Petersen7), some of these instruments are subject to proprietary issues, such as license fees, which leave them out of reach for the general public, or copyright restrictions, which prevent researchers and clinicians from modifying them. Researchers, grant funders, and industry should strive to deliver open access instruments. Even though wide-scale cognitive screening can reliably identify individuals with cognitive impairment, additional neuropsychological, clinical, and biomarker data are necessary to identify prodromal dementia (Reference Harrison48). The instruments reviewed in this study are not meant to replace neuropsychological assessment and cannot support a dementia diagnosis on their own; they are instruments that allow the identification of those subjects who could be referred to specialized units.
CONCLUSIONS
As ICT develops, clinicians and health services fall behind in using technological advances to improve health care for older people. Electronic devices for the early detection and assessment of dementia and cognitive impairment are still in their infancy in terms of accessibility and usability. Innovative and comprehensive instruments with the capacity to be delivered in the community are still to be developed, and the existing gap between research and applied technological solutions integrated in health care services and policies should be narrowed. All in all, we have everything necessary to tackle the problem of early detection of cognitive impairment in older adults; the challenge now is to find the way to integrate the existing solutions into user-friendly and accessible instruments.
SUPPLEMENTARY MATERIAL
Supplementary File 1: https://doi.org/10.1017/S0266462317000800
Supplementary File 2: https://doi.org/10.1017/S0266462317000800
Supplementary File 3: https://doi.org/10.1017/S0266462317000800
Supplementary File 4: https://doi.org/10.1017/S0266462317000800
CONFLICTS OF INTEREST
The authors have nothing to disclose.