Introduction
Cognitive behaviour therapy (CBT) is the most commonly practised psychotherapy in the UK National Health Service (NHS) (Department of Health, 2001) and in other healthcare settings. However, demand outstrips the ability of the NHS to deliver this intervention to all who might benefit and the structure of most CBT services, characterized by ‘9–5’ working and hourly appointments, makes this therapy inaccessible to many (Lovell & Richards, 2000).
Computerized CBT (cCBT) is one of several self-help therapies that aim to offer CBT to patients while reducing the number and cost of therapists needed. Four older meta-analyses on self-help have found CBT by bibliotherapy to be as effective as therapist-led CBT but did not look at cCBT specifically (Scogin et al. 1990; Gould, 1993; Marrs, 1995; Cuijpers, 1997; reviewed by Bower et al. 2001). The National Institute for Clinical Excellence (NICE) has recently published updated guidance (Department of Health, 2006), based on a report from the Health Technology Assessment Programme (Kaltenthaler et al. 2006), which recommends some cCBT packages for use in mild and moderate depression and anxiety.
Traditional systematic reviews of cCBT, such as those commissioned by NICE, have evaluated the clinical and cost effectiveness of cCBT (Bornas et al. 2002; Department of Health, 2002; Kaltenthaler et al. 2002; Lewis et al. 2003) and have focused on randomized evaluations. However, concerns remain around the acceptability and adverse consequences of cCBT in comparison with therapist-led CBT. These are questions that might be addressed by methods other than the randomized controlled trials summarized in traditional systematic effectiveness reviews (Dixon-Woods & Fitzpatrick, 2001).
The purpose of this review was to systematically examine the barriers to the uptake of cCBT from a wider range of source types than previous reviews, including the NICE guidelines. We focused specifically on the acceptability, accessibility and adverse consequences associated with cCBT, and utilized both quantitative and qualitative data to enhance the interpretation of numerical findings.
Method
We conducted a systematic review according to best practice guidelines (NHS Centre for Reviews and Dissemination, 2001) and synthesized qualitative and quantitative data together according to an ‘integrative method’ originally proposed by Thomas et al. (2004).
Detailed methods and references are available in the web version of this paper. In brief, we searched electronic databases up to July 2005 for studies of a variety of research designs, from both primary and secondary care settings, on cCBT, defined as interventions where the computer took a lead in decision making and was more than a medium. We extracted both qualitative and quantitative data on acceptability, accessibility and adverse consequences. Study quality was assessed in a manner relevant to the style of research. Data were initially extracted by study design and then organized into qualitative and quantitative sets around each research question. In particular, we identified missing data or areas of concern from the harder facts of the quantitative data and then examined whether the richer but potentially less reliable qualitative data could give insight into these (Thomas et al. 2004). Meta-analysis was performed on appropriate quantitative data.
Results
Study flow
From 2410 abstracts identified by the computerized search, 46 manuscripts were included in the review, relating to 36 individual research studies. The Quality of Reporting of Meta-analyses (QUOROM) table (Moher et al. 1999) is available in the online Appendix (Table A1).
Study funding
Funding was difficult to ascertain because not all studies gave information on conflicts of interest. Some studies were overtly funded, some were conducted by staff employed by the same company that developed the software being evaluated, and some were conducted by staff who held shares in the software (see online Appendix, Table A2).
Study characteristics and data synthesis
Initial data abstraction tables were produced and data were further abstracted into quantitative and qualitative sets. Table A2 (online) shows all the quantitative data: dropping out included being lost to follow-up and not returning post-evaluation data. Table A3 (online) shows the largest area of qualitative data: that relating to acceptability. This table in particular gives many quotes from papers illustrating the type of qualitative data that were used.
Acceptability
Quantitative data
Examining the flow of participants through a trial can be informative in assessing acceptability (see Table A2, online). The presence of a Consolidated Standards of Reporting Trials (CONSORT) diagram (Moher et al. 2001) made this task easier. While many were invited to take part in the studies (40 372 people), far fewer started the study (3895 people, median 38%, range 4–84%) and an even smaller number finished (2416 people). Different recruitment strategies may explain some of the variation in progression. Alternatives were not typically offered and studies were not performed within a ‘stepped-care’ framework. A median of 83% (range 26–100%) of those entering a study (all groups) finished the study, but if studies that did not follow an intention-to-treat methodology are removed, this falls to a median of 79%.
People in the cCBT arm were twice as likely to drop out, though this was not seen in all studies. Statistically, this was not significant [pooled odds ratio (OR)=2.03, 95% confidence interval 0.81–5.09] but it does show a strong trend. As in other meta-analyses of adverse effects data (Ioannidis et al. 2002), there was substantial heterogeneity between the studies, hence a random-effects method was used [I² (variation in OR attributable to heterogeneity)=77.5%]. This is shown graphically in Fig. 1. The control arm differed between studies but was typically an active intervention. Dropping out of a research trial is different from dropping out of treatment, but this still raises a concern about acceptability.
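To make these summary statistics easier to interpret, the following is a minimal sketch of how a random-effects pooled OR and the I² statistic are conventionally obtained; the DerSimonian–Laird estimator shown here is an assumption, as the paper does not state which random-effects method was used. For each study $i$ of $k$ studies, with drop-out/completion cell counts $a_i$, $b_i$, $c_i$, $d_i$ in the cCBT and control arms:
\[
\hat{\theta}_i = \ln\!\left(\frac{a_i d_i}{b_i c_i}\right), \qquad
v_i = \frac{1}{a_i}+\frac{1}{b_i}+\frac{1}{c_i}+\frac{1}{d_i}, \qquad
w_i = \frac{1}{v_i},
\]
\[
Q = \sum_i w_i\bigl(\hat{\theta}_i - \hat{\theta}_{\mathrm{FE}}\bigr)^2
\quad\text{with}\quad
\hat{\theta}_{\mathrm{FE}} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i},
\qquad
I^2 = \max\!\left(0,\ \frac{Q-(k-1)}{Q}\right)\times 100\%,
\]
\[
\hat{\tau}^2 = \max\!\left(0,\ \frac{Q-(k-1)}{\sum_i w_i - \sum_i w_i^2\big/\sum_i w_i}\right),
\qquad
\hat{\theta}_{\mathrm{RE}} = \frac{\sum_i \hat{\theta}_i/(v_i+\hat{\tau}^2)}{\sum_i 1/(v_i+\hat{\tau}^2)},
\]
so that the pooled OR is $\exp(\hat{\theta}_{\mathrm{RE}})$, with 95% confidence interval $\exp\bigl(\hat{\theta}_{\mathrm{RE}} \pm 1.96\sqrt{1\big/\sum_i 1/(v_i+\hat{\tau}^2)}\bigr)$. An I² of 77.5% indicates that most of the observed variation in ORs reflects between-study differences rather than chance, which is why a fixed-effect model was not appropriate here.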
Data on the number of modules completed showed that completing the scientific study did not necessarily mean a person had completed a course of cCBT. The percentage of completers ranged from 12% to 100%, with a median of only 56%. There were many reasons for dropping out, but often no reason was given. Personal circumstances played a major role, including travel (for those studies based around a clinic computer). Internet-based studies did not have this limitation, but lack of time was still an issue. ‘Therapy’ was given as a reason for dropping out by 100 people, but the term was used in an all-inclusive manner and it was not possible to tell from the data whether this referred to cCBT, the control intervention or both. Information technology (IT) was not given as a common reason for drop-out after randomization.
Qualitative data
For those who participated in trials, satisfaction levels with therapy and content were high (>60% rating them ‘good’ or ‘very good’). Concepts were easy to understand, and cCBT compared favourably with bibliotherapy. People used cCBT who would not have ‘bothered’ their general practitioner (GP). One study showed a non-significant trend for the therapist to be rated more helpful than the computer, but another showed that 44% would prefer cCBT and only 12% would prefer a therapist; only 9% would not consider using a computer. Clients valued a flexible system and would be prepared to pay up to £10 per session (Graham et al. 2000). Some felt cCBT was helpful, more durable and better than non-cCBT therapies they had experienced, and would recommend it to a friend. Others found it too demanding, patronizing or fast-paced; although they felt ‘understood’ by the computer, there was a slight preference for therapist-led therapy and for an English accent (in the UK trials). Some were glad it was available as they would not otherwise have sought help from their GP.
Therapists were less positive. In two surveys (Williams & Garland, 2002; Whitfield & Williams, 2004) they voiced concerns over harm to the client, effectiveness and compliance, and were worried about finding space and receiving institutional backing. They saw an advantage over written self-help materials, but did not see the computer replacing therapists, viewing it rather as a supplement. GPs were more positive: ‘I was disappointed when (the research) ended because we'd had (the computer) in place for quite a while and it had become a fairly integral part of the service’ (E. Keaverny and K. Blackburn, unpublished observations).
People had generally positive views about the technology involved. Older people were felt to have more challenges and some saw the interface as ‘cold’. Satisfactory training was given to clients, though therapists felt they needed more training and had concerns over data protection and security (not shared by clients). It should be noted that the clients who had progressed far enough in studies to give this type of feedback would have been a self-selecting group who are able to work with computers. (The majority of the references for this section are shown in Table A3, online Appendix.)
Accessibility
Quantitative data
Education and social class
Four studies (Selmi et al. 1990; Gilroy et al. 2000, 2003; Heading et al. 2001; Marks et al. 2003; Gega & Marks, 2004) gave details of employment: two-thirds of participants were currently employed, with roughly equal numbers of unemployed people and students. Some studies took place entirely on university campuses (Newman, 1997, 1999; Newman et al. 1999) and three (Selmi et al. 1990; Marks et al. 2003; Proudfoot et al. 2003b; Gega & Marks, 2004) reported high levels of users having completed basic education (81%, 88%, 100%) and university degrees (28%, 50%). This contrasts with more typical levels of 21% employment in disabled primary care populations and 14% higher education students nationally (Office for National Statistics, 2006).
Computer literacy
Study participants had high levels of computer experience: 62% used a computer at work (White et al. 1998) and 35% used one daily (Gega & Marks, 2004). This compares with a general population use rate of 38% in one study (Clarke et al. 2002).
Staff support
The actual amount of staff support needed varied considerably, from zero up to 150 min per client, but one paper noted it to be 3.7 times less than for therapist-led CBT (Marks et al. 2004).
Qualitative data
Education and social class
Four descriptive studies noted that their clients came from socially deprived (White et al. 2000; Marks et al. 2003; Gega & Marks, 2004) or mixed urban/rural (Whitfield et al. 2006; G. Whitfield et al. unpublished observations) areas with a range of social classes (Carr et al. 1988). There was no mention of the reading age of the programs used, but one study used the National Adult Reading Test as part of its assessment (Gilroy et al. 2000, 2003) and many studies excluded people who could not read or write (Proudfoot et al. 2003a, 2004; McCrone et al. 2004; Whitfield et al. 2006; G. Whitfield et al. unpublished observations).
Computer literacy
Several trials and surveys (Carr et al. 1988; Osgood-Hynes et al. 1998; Fox et al. 2004; Grime, 2004; Whitfield et al. 2006; G. Whitfield et al. unpublished observations) mentioned that training in the cCBT package included computer training and took up to 90 min; one (Osgood-Hynes et al. 1998) offered supervision for the first use of the program. Two observational studies used a ‘familiarization’ module lasting 2 days (Newman, 1997, 1999) or 1 week (Newman et al. 1999).
Staff support
This varied considerably between studies. The majority offered some degree of staff contact per visit to the clinic, though some studies said this was more to be seen to be providing contact than to meet an objectively identified need. This contact was often said to be little used, though one study (Fox et al. 2004) noted that ‘substantial administrative support was needed’ on a range of clinical and non-clinical issues. A range of staff were used (usually either supervised junior staff or qualified therapists or psychiatrists). GPs saw staff support as important: ‘someone to answer any queries … on a sort-of day-to-day basis of getting people through the system’ (E. Keaverny and K. Blackburn, unpublished observations).
Adjuncts to cCBT
A number of studies used materials additional to the program, such as online discussion rooms, email reminders, bibliotherapy and lengthy handbooks. One trial (Grime, 2001, 2004) noted that a supportive environment was key to uptake of the intervention. One service delivery report (E. Keaverny and K. Blackburn, unpublished observations) noted that a lack of paper support could prove difficult: ‘I found it hard to come away and do the homework and try to remember how things should be laid out.’
Adverse effects
Quantitative data
There were very few data on clear adverse effects of cCBT. Most studies only began reporting numbers once the trial had started, meaning that higher-risk clients had often already been excluded. Table A2 (online) lists 255 subjects who were excluded from starting seven of the trials because of risk, often indicated by a high score on a measure of suicidality.
Qualitative data
Participants were often recruited by non-clinical means (newspaper advertisements, etc.), but just over half the studies included a risk assessment, or a proxy assessment such as the administration of a scale like the Beck Depression Inventory (Beck et al. 1961), which includes an item on suicidal ideation. A number of the cCBT programs offer such scales each time the client logs on, but only a few studies specifically mentioned that they repeated risk assessments periodically throughout the trial. The majority of studies made some attempt to contact drop-outs, by phone, post or email, but this was usually to determine the reason for dropping out. Only one study (White et al. 2000) specifically mentioned that alternatives were offered to those who could not attend screening. Only two studies (Proudfoot et al. 2003a, 2004; Gega & Marks, 2004; McCrone et al. 2004) made specific mention of the place of cCBT within a stepped-care structure, and then only in the discussion.
Conclusions
This review looks at a body of research previously mainly assessed for efficacy, and demonstrates the value of combining both quantitative and qualitative data in order to address questions regarding the role and value of new and innovative technologies. The studies included in this review vary in their methodological quality, with the sources for qualitative data being generally poor. It was not possible to rank studies according to quality as different measures were used for different study types. However, when interpreting qualitative data some attempt has been made to assign more weight to higher-quality studies (see Table A3, online).
Whilst cCBT is a good intervention for some people, there are barriers to its uptake that may substantially limit its impact. A patient journey for a hypothetical person who might be suitable for cCBT is given in Table 1. Computerized therapy might be considered a form of self-help (Lovell & Richards, 2000), and as such might fit into an organizational system known as ‘stepped care’. For a stepped-care intervention to work, the following assumptions must be met: ‘the equivalence of cCBT in terms of clinical outcomes, efficiency in terms of resource use and costs and the acceptability … to patients and therapists’ (Bower & Gilbody, 2005).
The efficacy of cCBT is not the topic of this review, but the demonstration of ‘equivalence’ also requires a consideration of adverse consequences. If only 56% of people complete a course of cCBT, then it is harder to predict the long-term effect of therapy, especially as closing modules may contain important work on relapse prevention. Clients may be twice as likely to drop out from cCBT as from other therapies or ‘treatment as usual’ (Fig. 1). Other reviews of psychotherapy trials (Churchill et al. 2001) show similar drop-out rates in the two arms and, although the main difference here is that a computer is providing the intervention, this cannot be proven as the cause of the discrepancy. Also, some trials had ‘usual GP care’ as the control arm, meaning that people may have dropped out of the control arm because of attitudes to participating in research. The qualitative data gave some information on the causes of drop-out, but these did not appear to be specific to cCBT and they conflict with the generally positive comments given, which mostly came from those completing therapy.
Is cCBT efficient?
There is a reduction in therapist time, but it does not fall to zero: screening and support still seem to be necessary to some degree. Also, any reduction is irrelevant if costs are transferred to the patient or to other service providers. The only cost-effectiveness paper published so far (McCrone et al. 2004) suggests that this is not the case, but more economic data are needed to confirm whether this applies to all cCBT programs.
Is cCBT acceptable?
Acceptability is high among those who get as far as participating in studies, but initial uptake rates suggest that this may not be the whole picture, and there are no data on attitudes among the general public. Low uptake rates for research trials do not necessarily reflect low uptake rates for future therapies, but if clients are unwilling to choose the treatments offered, then it is unlikely that stepped-care protocols will be accepted. Therapists are less enthusiastic than trial participants about cCBT; however, they often knew little about the programs or their benefits. Their feeling that cCBT will enhance rather than replace therapist-led therapy is consistent with their attitudes to self-help in general (Audin et al. 2003). It may be that greater exposure to, and training in, cCBT will change these views. The provision of cCBT in a stepped-care model inherently emphasizes the ongoing role of individual therapy for more complex cases.
In addition, cCBT may be less acceptable and accessible to people with visual impairment, poor IT provision or a lower educational level. It is known that typical CBT language has a reading age of 17 years (Williams & Garland, 2002), which will be a barrier to many. cCBT run from a clinic still excludes those with no suitable clinic nearby, though Internet-based programs offer a possible way round this for those with better IT provision.
Summary
With demonstrated efficacy, cCBT is a promising technology, but there are still major barriers to widespread uptake, and little is known about some areas of concern. It is unlikely that cCBT will be acceptable to all, meaning it can fill only part of a stepped-care model. Further research is needed to examine the role of cCBT embedded within a system of stepped care, rather than as a single stand-alone component, particularly for those who drop out of treatment.
Acknowledgements
The funding for the research time of R. W. was provided by the Yorkshire Deanery. The Max Hamilton Fund (University of Leeds) covered project costs. We are indebted to the School of Health and Related Research, University of Sheffield, for access to the search results from the recent NICE review (Department of Health, 2006; Kaltenthaler et al. 2006).
Declaration of Interest
None.
Note
Supplementary material accompanies this paper on the Journal's website (http://journals.cambridge.org).