
Preliminary Evaluation of an Online Training Package in Cognitive Behaviour Therapy: Satisfaction Ratings and Impact on Knowledge and Confidence

Published online by Cambridge University Press:  28 November 2011

David Westbrook*
Affiliation:
Oxford Cognitive Therapy Centre, Warneford Hospital, UK
Freda McManus
Affiliation:
Oxford Cognitive Therapy Centre, and University of Oxford, Warneford Hospital, UK
Gavin Clark
Affiliation:
University of Reading, UK
James Bennett-Levy
Affiliation:
University of Sydney, Lismore, Australia
*
Reprint requests to David Westbrook, Oxford Cognitive Therapy Centre, Warneford Hospital, Oxford OX3 7JX, UK. E-mail: david.westbrook@oxfordhealth.nhs.uk

Abstract

Background: Online CBT training is in its infancy. The initial studies have varied program characteristics and trainee groups, but results appear promising. At this stage, there is a need to evaluate programs with different characteristics to determine which are useful, and which are not. Method: This paper reports a preliminary evaluation of an online CBT training package, OCTC Online, which is distinguished from other online programs by its particularly strong focus on video presentations by trainers, accompanying PowerPoint slides, and video demonstrations of key clinical techniques. Participants (N = 94) completed online rating scales and questionnaires assessing (a) their satisfaction with the training; (b) their self-rated knowledge and confidence about the topics discussed (pre- and post-training); and (c) a multiple choice questionnaire (MCQ) objective test of knowledge (also pre- and post-training). Results: Results showed that on average students were highly satisfied with the online training modules, their self-rated confidence increased significantly, and so did their scores on the MCQ. Conclusions: The study has significant limitations but nevertheless contributes to the growing body of evidence that online training may have a useful part to play in enhancing therapists’ knowledge of CBT theory and techniques, and their confidence in using the techniques.

Type
Research Article
Copyright
Copyright © British Association for Behavioural and Cognitive Psychotherapies 2011

Introduction

With the increasing acceptance of cognitive behaviour therapy (CBT) as an effective treatment for a range of conditions (e.g. NICE, 2011), there has been increased interest in CBT training, and in the “dissemination gap” between the research evidence on effective CBT treatments and the frequent lack of such evidence-based treatments in routine clinical practice (Shafran et al., 2009). One of the major challenges for the dissemination of CBT into routine clinical practice is how to train therapists both effectively and economically (Fairburn and Cooper, 2011). Rakovshik and McManus (2010) recently reviewed the existing literature on CBT training methods and concluded that CBT training can have positive effects on therapist competence, and that such increased competence can lead to better patient outcomes; but that obtaining such outcomes may be dependent on several variables, such as the amount of training and the follow-up of training with supervised clinical practice. Of relevance to the current study, Rakovshik and McManus suggested that “. . . some mode of theoretical instruction seems integral in initial training; however, this may be provided through workshops, reading or web-based instruction. . .” (p. 512).

Web-based online training has several potential advantages. First, once the training has been set up, the running costs are low; a training video can be viewed many times without having to pay a trainer on each occasion, and is thus easily “scalable” to training large numbers of therapists (Fairburn and Cooper, 2011). Second, online training is accessible to students whenever it suits them, without any time or expense spent on travel; this may be particularly important for scattered rural communities (see Bennett-Levy and Perry, 2009, for a discussion of some of these issues) or for teams where services cannot be halted to release many staff for training at the same time. Third, online training has obvious advantages over written materials in that therapeutic strategies can be illustrated, not just described; and it has advantages over face-to-face training in that students can repeat the training as many times as they need.

On the other hand there are also some potential drawbacks to online training. It can be expensive to produce the training programme in the first place, requiring video and web specialists to set it up. It needs good internet infrastructure to ensure smooth delivery to students (a possible downside for the same rural audiences, highlighted in Bennett-Levy and Perry, 2009). In most cases it is inherently less interactive than traditional face-to-face teaching – students are in a more passive “watching” role, rather than being active participants in interactions with the trainer. It may also be more difficult for trainers to assess students’ clinical skills than it would be face-to-face.

There is a small but growing literature evaluating online CBT training. A notable feature of this literature is that much of it arises from one specific area of CBT practice – training people to use CBT with substance abusers. Sholomskas et al. (2005) compared three methods of training substance abuse workers: a written training manual alone; the same manual plus an online training programme (described by the authors as “. . . highly text based and did not include multimedia features such as videotaped vignettes of applications of CBT. . .” [p. 113]); or the manual plus a 3-day face-to-face training seminar, followed by clinical supervision over the phone. Outcomes were assessed using the Yale Adherence Competence Scale (Carroll et al., 2000) to rate clinicians’ ability to demonstrate three key CBT techniques in videotaped role plays. Results showed that the seminar plus supervision condition was superior to the manual-only condition for two of the three assessments immediately after the training, with the online group intermediate and not significantly different from manual only. However, these conclusions are limited by the design in two major ways: (a) the three conditions differed not only in the type of training but also in the total amount of training (a mean total of around 10 hours for manual only, 26 hours for the online group, and 33 hours for the group receiving a seminar plus supervision); (b) the seminar condition was the only one to include post-training supervision. It is thus impossible to know whether the results are due simply to the amount of training, or to the inclusion of supervision, rather than the mode of training per se.

Also in the area of training substance use workers, Weingardt and others have carried out trials of web-based CBT courses. Weingardt, Villafranca and Levin (2006) compared one hour of face-to-face teaching of a “Coping with craving” module from a training manual on CBT for cocaine addiction, with up to one hour of the same material delivered via a computer. Multiple choice questions to assess the students’ knowledge of the taught topics showed that both groups improved their score from pre-training to post-training, with no significant difference between the two conditions, suggesting that online training can be as effective as face-to-face teaching.

In another study, Weingardt, Cucciare, Bellotti and Lai (2009) compared two versions of a training programme for substance use workers, in both of which the training and clinical supervision were delivered via the internet (supervision was done using web conferencing software). Workers were randomized to two groups: “high fidelity” (HF), in which participants’ training and supervision were highly structured and focused on adherence to the required protocol; and “low fidelity” (LF), which adopted a more flexible, student-led approach. Both groups improved on measures of CBT knowledge, ratings of self-efficacy and a measure of “burnout”, but there were no differences between the groups except on the burnout measure (where LF did better on one burnout factor).

Dimeff et al. (2009) provided training in the group skills component of dialectical behaviour therapy (DBT) to community mental health staff working with patients who either had substance use problems or were suicidal. Students were randomized to one of three training conditions: a written treatment manual; a 20-hour “interactive, multimedia online training” (OLT); or a 2-day instructor-led training workshop (ILT). Evaluation included measures of satisfaction with training, knowledge, self-efficacy, self-reported use of skills after training, and ratings of a role-played clinical test. On most measures OLT and ILT out-performed the manual, and OLT did best on knowledge; however, on the most clinically relevant measures – reported use of skills, and role-play performance – there were no significant differences between the training methods, at up to 90 days follow-up.

Finally, Bennett-Levy, Hawkins, Perry and Cromarty (unpublished observations) evaluated the impact of an introductory CBT program on Australian mental health practitioners randomly assigned to an independent group (access to the program for 12 weeks) or a supported training group (15 minutes of phone or Skype support every fortnight). Both groups demonstrated significant gains on an objective test of CBT knowledge, and self-reported gains in knowledge, skills, confidence and utilization. However, the supported training group were more likely to complete the program than the independent group.

As the above literature review shows, the study of online learning is at a relatively early stage. The available data suggest that it can be an effective way of delivering CBT training, but the programs evaluated so far are limited in number and vary in (i) target audience (mostly substance abuse counsellors); (ii) range and level of content (e.g. CBT for drug and alcohol problems, basic or more advanced CBT skills); (iii) length of program (1–25 hours); (iv) structure of program (sequential modules versus free-to-choose); and (v) modes of delivery (text-based, role-play demonstrations, lectures, assessments). At this stage in the evolution of online training, it is important to identify these different characteristics and to gather data on the full range of programs to determine which forms of training are most effective for whom.

This is a first report on the online program in the present study, OCTC Online. OCTC Online is distinguished by being focused on using “mainstream” CBT skills with general adult populations, rather than the specialist focus of most existing research; having a large range of modules from introductory level (e.g. assessment and formulation, goal setting) through to advanced (schema therapy, compassionate mind); having a large number of presenters, who are experts in their respective fields; and by having more video-based materials (video lectures and video demonstrations) than most others.

From a previous study, it appeared that the combination of didactic video presentations, reading, and the extensive modelling of skills seen in OCTC Online should be particularly helpful for acquiring CBT knowledge, and in translating this knowledge to procedural skills (Bennett-Levy, McManus, Westling and Fennell, 2009). Therefore the aim of the present study was to determine whether an online training program with the above characteristics was meeting students’ needs and helping them to learn about CBT.

Method

Background

Oxford Cognitive Therapy Centre (OCTC – part of Oxford Health NHS Foundation Trust) produced some online training modules that went live in April 2008. The modules cover a range of topics from those suitable for beginners (e.g. “Basic CBT theory”) through intermediate (e.g. “CBT for OCD”) to more advanced topics (e.g. “Enhanced reliving in PTSD”, “Mental imagery”). The modules are streamed to students’ computers via their web browsers, and the training library window contains three main areas:

  1. A video window, which shows either the trainer delivering a presentation, or role-play demonstrations of the CBT strategies being discussed.

  2. A PowerPoint window, which displays slides synchronized to the trainer's video presentation, much as might be done during a face-to-face presentation.

  3. A “downloads” window, through which supplementary documents can be downloaded, such as a copy of the PowerPoint slides, or other relevant documents (e.g. therapy worksheets, formulations).

Modules are available on subscription or for “one off” purchase, and buyers can watch a purchased module up to 10 times per month during their period of access.

Training modules and participants

Five of the online training modules were chosen to be included in this evaluation, simply on the basis that at the time of the study they were the most commonly accessed and would therefore be likely to get a larger sample size in the recruitment period. The five [with the number of participants providing responses for that module] were:

  • Basic CBT theory [n = 36]

  • Doing a CBT assessment [n = 27]

  • Understanding panic [n = 12]

  • Treatment of panic [n = 8]

  • Model of OCD and outline of Tx [n = 11]

Total N of participants was therefore 94, but missing data reduce numbers for some analyses (see below).

Measures

Kirkpatrick and Kirkpatrick's (2006) “four levels” model is a useful framework for evaluating training. They distinguish the following four levels of assessment of training:

  1. The reaction of students – what they thought and felt about the training;

  2. Their learning – any resulting increase in knowledge or skills;

  3. Their subsequent behaviour – the extent to which the training is translated into actual changed behaviour in the workplace;

  4. Finally, results – the extent to which all the above changes produce improved results for the student's organization (in our context, for instance, improved outcome for patients).

Few training evaluations get much above Level 2, and this study is no exception. We assessed students’ “reaction” and “learning” but, given that students could access the training from anywhere, we had no way to assess their clinical behaviour or their patient outcomes.

Student satisfaction ratings

After viewing a module, students rated it on several dimensions using Likert-type rating scales from 0 to 10:

  • Useful: How useful were the topic and areas covered in the module (“Not useful” – “Very useful”)

  • Relevance: Were the topics covered relevant to your needs (“Not relevant” – “Extremely relevant”)

  • Overall satisfaction: How would you describe your satisfaction overall (“Not at all satisfied” – “Extremely satisfied”)

  • Length: How did you find the length of the module (“Too short” – “Too long”: for this and the next rating “5” is the desirable rating)

  • Pace: Were the topics covered at an appropriate pace (“Too slow” – “Too fast”)

Subjective ratings of knowledge and confidence in skills

Before and after viewing the training module, students did another set of Likert-type ratings, also from 0–10, on three aspects of their subjective perception of knowledge and skills (where “X” is the topic of the module):

  • Theory: How much do you think you currently understand X (“No understanding” – “Very good understanding”)

  • Understanding clinical problems: How much do you currently understand how to use X to understand clinical problems (“No understanding” – “Very good understanding”)

  • Practical application: How confident would you currently feel in being able to use X with a client (“Not at all” – “Extremely confident”)

Objective changes in knowledge: multiple choice questions

To assess any changes in knowledge arising from the training, four multiple choice questions (MCQs) were devised for each of the included modules, and students were asked to answer these before and after the training: see Figure 1 for an example. For each MCQ the student was required to choose one answer from the four given choices. The percentage of correct answers was taken as the index of students’ knowledge.

Figure 1. Example of multiple choice question (from “Basic CBT theory”)

Procedure

Data were collected between March and October 2009. The online training website was set up so that anyone accessing one of the relevant five modules would be met with a “pop up” window that asked them if they would be willing to assist by filling in (anonymous) questionnaires for the study, as described above. If they declined, they went straight through to the training page, with no further involvement in the study. If they accepted, they were presented with the measures before and after viewing the training module. We had no way of recording how many people declined, so we cannot know what proportion our sample is of all those accessing the training during this time.

Results

Subjective ratings: satisfaction and confidence

Table 1 shows that mean satisfaction ratings were high: all mean ratings are either over 8 (for dimensions where the maximum 10 is desirable), or within 0.5 points of the “ideal” score of 5 (for Length and Pace).

Table 1. Students' ratings of satisfaction and confidence across all modules

Students’ ratings of confidence in using the knowledge and skills learned were analyzed in a two-way repeated measures MANOVA (Module x Time; N = 89). The MANOVA showed significant main effects for Time: F(3,82) = 41.4, p < .001 (partial η2 = .60); and for Module: F(12,252) = 2.7, p < .005 (partial η2 = .11); but no interaction effects. In other words, there was an overall increase in the students’ confidence ratings from pre- to post-training, and some modules received higher ratings overall than others, but there were no differential training effects between different modules. Post hoc tests using Bonferroni adjustment showed that the significant Module main effect involved the “Basic CBT theory” module scoring higher than the “Understanding Panic” and “OCD” modules. See Table 1 for the overall mean scores on these dimensions before and after viewing the module.
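For readers less familiar with the effect size metric, partial η² can be recovered directly from a reported F statistic and its degrees of freedom as (F × df_effect) / (F × df_effect + df_error). A short illustrative check in Python (using only the F values and degrees of freedom reported above; the function name is ours) reproduces the reported effect sizes:

```python
# Illustrative check: recover partial eta-squared from a reported F statistic,
# using partial eta^2 = (F * df_effect) / (F * df_effect + df_error).
def partial_eta_squared(f: float, df_effect: int, df_error: int) -> float:
    return (f * df_effect) / (f * df_effect + df_error)

# Values as reported for the confidence-ratings MANOVA:
time_eta = partial_eta_squared(41.4, 3, 82)     # Time main effect
module_eta = partial_eta_squared(2.7, 12, 252)  # Module main effect

print(round(time_eta, 2), round(module_eta, 2))  # 0.6 0.11
```

Both values match the partial η² figures reported in the text (.60 and .11), which is a useful consistency check on the reported statistics.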

Objective outcome: MCQs before and after training

The mean percentages correct in the MCQs for each online module are shown in Figure 2, with error bars showing the 95% confidence intervals. A two-way repeated measures ANOVA of MCQ scores (Module x Time; N = 90) revealed a significant main Time effect, F(1,85) = 86.1, p < .001 (partial η2 = .50); no significant Module effect, F(1,4) = 0.73, p > .50 (partial η2 = .033); and the Module x Time interaction approached significance, F(1,4) = 2.4, p < .06 (partial η2 = .10). This indicates that students’ knowledge significantly increased from pre- to post-training for all modules, but there is no clear evidence of any differential effect between the modules. Combining the MCQ results for all modules gave overall mean percentages correct of 63.6 (SD 24.9) pre-training, improving to 90.3 (SD 15.8) post-training. This shows a significant increase in the percentage correct from pre- to post-training, t (89) = 10.6, p < .001, with an effect size (Cohen's d) of 1.28.
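As a sanity check, the reported Cohen's d can be recovered from the pre/post summary statistics above. The paper does not state which variant of d was used; the sketch below assumes the pooled-SD form (root mean square of the two reported SDs), which reproduces the reported value of 1.28:

```python
import math

# Illustrative reconstruction of Cohen's d from the reported summary statistics,
# assuming d = (mean_post - mean_pre) / pooled SD, where the pooled SD is the
# root mean square of the two reported SDs. The variant of d actually used in
# the paper is not stated, so this is a plausible reconstruction only.
pre_mean, pre_sd = 63.6, 24.9    # MCQ percent correct, pre-training
post_mean, post_sd = 90.3, 15.8  # MCQ percent correct, post-training

pooled_sd = math.sqrt((pre_sd**2 + post_sd**2) / 2)
d = (post_mean - pre_mean) / pooled_sd
print(round(d, 2))  # 1.28
```

Under this assumption the arithmetic agrees with the effect size reported in the text.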

Figure 2. Mean percent correct in MCQs, pre- and post-training, by training module (total N = 90), with 95% confidence intervals

Discussion

These results add to the growing body of work suggesting that online, web-based programmes can be an effective way of providing CBT training. Students were generally satisfied with the training, felt more confident that they understood the theory and techniques and could apply them in clinical situations, and showed significantly improved performance on MCQ tests of CBT knowledge.

The small scale of the study and the limitations arising from its context as an unfunded routine service evaluation mean that the results must be interpreted cautiously. Limitations include the lack of a “no-training” control group; that there was no assessment of the students’ actual clinical skills (i.e. observation of them carrying out therapy tasks); and that all assessments were carried out immediately after the training, meaning that we cannot know how enduring the effects of the training were. Furthermore, the naturalistic design of the study is both a strength and a weakness. The strength is that it is likely to have been a clinically representative sample in that the participants were people who were spontaneously seeking CBT training, rather than people who were recruited specifically for the purpose of the study. However, the disadvantage is that we have no information on who the participants were (e.g. professional background, or pre-existing skills), nor on how representative of the total pool of people accessing the online training they were.

The limitations noted above mean that the positive results reported in this study require replication on larger, better controlled samples. However, the results add to the growing evidence base that suggests that it is possible to improve therapists’ CBT knowledge and skills via the use of online training. Whereas much of the previous research has investigated the effects of online training for rather specific CBT skills (e.g. Dimeff et al., 2009; Weingardt et al., 2006), the results from the current study extend this to mainstream CBT skills in a naturalistic population. OCTC Online also appears to have a heavier emphasis on video content (both lectures and role-play demonstrations) than any of the other reported online programs, so it is important to note that the positive results from other studies extend to this mode of online presentation.

Most of these early online training results are positive, but it is impossible to arrive at a general conclusion about the effectiveness of online training, any more than one could make general conclusions about all face-to-face training. As Rakovshik and McManus's (2010) review suggests, there are many factors that may affect training outcomes. Further research is needed to determine in what settings, for whom, and for what skills, online training is effective, as well as what advantages it may have over other forms of training. Nevertheless, the prospect of new cost-effective, “scalable” ways of providing CBT training is an exciting one.

Acknowledgements and Declaration of interest

Our thanks to Oxford Health NHS Foundation Trust (Oxfordshire & Buckinghamshire Mental Health NHS Foundation Trust as it was at the time of this project), and specifically to Jon Allen (OBMH's Director of Nursing at that time) for investing in the production of the online modules. DW and FM are on the staff of OCTC, which is part of Oxford Health NHS Foundation Trust, and the Trust has a financial interest in the online training evaluated in this study.

References

Bennett-Levy, J., McManus, F., Westling, B. and Fennell, M. (2009). Acquiring and refining CBT skills and competencies: which training methods are perceived to be most effective? Behavioural and Cognitive Psychotherapy, 37, 571–583.
Bennett-Levy, J. and Perry, H. (2009). The promise of online cognitive behavioural therapy training for rural and remote mental health professionals. Australasian Psychiatry, 17 Supplement, 121–124.
Carroll, K. M., Nich, C., Sifry, R., Frankforter, T., Nuro, K. F., Ball, S. A., Fenton, L. and Rounsaville, B. J. (2000). A general system for evaluating therapist adherence and competence in psychotherapy research in the addictions. Drug and Alcohol Dependence, 57, 225–238.
Dimeff, L., Koerner, K., Woodcock, E., Beadnell, B., Brown, M., Skutch, J. et al. (2009). Which training method works best? A randomized controlled trial comparing three methods of training clinicians in dialectical behavior therapy skills. Behaviour Research and Therapy, 47, 921–930.
Fairburn, C. G. and Cooper, Z. (2011). Therapist competence, therapy quality, and therapist training. Behaviour Research and Therapy, 49, 373–378.
Kirkpatrick, D. and Kirkpatrick, J. (2006). Evaluating Training Programs: the four levels (3rd ed.). San Francisco: Berrett-Koehler Publishers.
NICE (National Institute for Health and Clinical Excellence) (2011). Mental Health and Behavioural Conditions. Available from http://guidance.nice.org.uk/Topic/MentalHealthBehavioural (accessed 16 April 2011).
Rakovshik, S. and McManus, F. (2010). Establishing evidence-based training in cognitive behavioral therapy: a review of current empirical findings and theoretical guidance. Clinical Psychology Review, 30, 496–516.
Shafran, R., Clark, D. M., Fairburn, C. G., Arntz, A., Barlow, D. H., Ehlers, A. et al. (2009). Mind the gap: improving the dissemination of CBT. Behaviour Research and Therapy, 47, 902–909.
Sholomskas, D. E., Syracuse-Siewert, G., Rounsaville, B. J., Ball, S. A., Nuro, K. F. and Carroll, K. M. (2005). We don't train in vain: a dissemination trial of three strategies of training clinicians in cognitive-behavioral therapy. Journal of Consulting and Clinical Psychology, 73, 106–115.
Weingardt, K. R., Cucciare, M. A., Bellotti, C. and Lai, W. P. (2009). A randomized trial comparing two models of web-based training in cognitive-behavioral therapy for substance abuse counselors. Journal of Substance Abuse Treatment, 37, 219–227.
Weingardt, K. R., Villafranca, S. W. and Levin, C. (2006). Technology-based training in cognitive behavioral therapy for substance abuse counselors. Substance Abuse, 27, 19–25.
