
A Pilot Evaluation of a Brief CBT Training Course: Impact on Trainees' Satisfaction, Clinical Skills and Patient Outcomes

Published online by Cambridge University Press:  23 September 2008

David Westbrook*
Affiliation:
Oxford Cognitive Therapy Centre, Warneford Hospital, UK
Alison Sedgwick-Taylor
Affiliation:
Gloucestershire NHS Partnership Trust, Gloucester, UK
James Bennett-Levy
Affiliation:
Oxford Cognitive Therapy Centre, Warneford Hospital, UK
Gillian Butler
Affiliation:
Oxford Cognitive Therapy Centre, Warneford Hospital, UK
Freda McManus
Affiliation:
Oxford Cognitive Therapy Centre, Warneford Hospital, UK
*
Reprint requests to David Westbrook, Oxford Cognitive Therapy Centre, Warneford Hospital, Oxford OX3 7JX, UK. E-mail: david.westbrook@obmh.nhs.uk

Abstract

This study reports an evaluation of a 10-day training course in cognitive behaviour therapy (CBT). The course comprised both formal CBT workshops and clinical case supervision, and was evaluated on measures of trainee satisfaction, trainee- and assessor-rated measures of CBT skill, and clinical outcomes for a subgroup of trainees' patients. The course was well received by trainees and their post-training therapy tapes were rated significantly higher on both trainee- and assessor-rated measures of CBT skills. In addition, trainees achieved significantly better outcomes with their patients after the training course than before, suggesting that the training impacted not only on their skills but also on their effectiveness in routine clinical practice. The limitations of the study and implications for future research in training are discussed.

Type
Research Article
Copyright
Copyright © British Association for Behavioural and Cognitive Psychotherapies 2008

Introduction

In recent years several developments have led to an increase in the demand for CBT training and interventions in the UK. These include the publication of guidelines from the UK National Institute for Health and Clinical Excellence (NICE) concerning what treatments the National Health Service (NHS) ought to be providing for mental health problems, almost all of which cite CBT as being either the treatment of choice, or at least an important treatment component, for common mental health problems (see, e.g., NICE, 2004a, b). The government has also established a new group of staff in the NHS, so-called graduate mental health workers (GMHWs: Department of Health, 2000). These are graduates in psychology or other relevant disciplines, and one of their roles is explicitly set out as "providing brief evidence-based therapeutic interventions and self-help" in primary care settings, which in practice has mostly meant CBT. Associated with this has been a drive towards "improving access to psychological therapies", a commitment enshrined in the government's manifesto for the 2005 election. This commitment has been further reinforced by an initiative by Lord Layard, of the London School of Economics, arguing for a massive increase in the provision of NHS psychological therapies on the grounds that the cost of such a programme could be covered by the consequent savings in health, social care and unemployment costs (Centre for Economic Performance, 2006). In May 2006 the Department of Health launched its "Improving Access to Psychological Therapies" (IAPT) programme, including two pilot sites for increasing CBT resources along the lines proposed by Layard.

All these developments imply a need for more people to be trained in CBT, and particularly a need for more economical training. The new GMHWs – and even more so the huge Layard workforce, if it were to be established – cannot possibly all be trained by the existing CBT courses in the UK, most of which are based on a one-year post-graduate diploma level of training. Lord Layard's proposal calls for up to 10,000 new therapists to be trained over a period of 7 years, about half of whom might be clinical psychologists trained along present lines but the other half of whom would be from a variety of backgrounds and would be trained specifically in CBT and other evidence-based therapies. The one-year Diploma model of training is relatively expensive, and may not be appropriate for everyone. Nor is it clear that all the new workers need such a high level of specialist training. Targets for the expansion of CBT training are therefore much more likely to be achievable if we can provide effective training at a variety of levels and in more cost-effective ways.

Another obvious implication of this increased demand for CBT training is that we need to know more about the effects of training, and in particular whether relatively brief and economical training can be effective. In general, the literature suggests that providing workshops alone has limited value, particularly in the longer term (King et al., 2002; Walters, Matson, Baer and Ziedonis, 2005). However, providing more extensive CBT training has been associated with improvements in clinical skill (Milne, Baker, Blackburn, James and Reichelt, 1999; James, Blackburn, Milne and Reichfelt, 2001; Sholomskas et al., 2005; Bennett-Levy and Beedie, 2007), and in therapist confidence (Bennett-Levy and Beedie, 2007). There is also some evidence that competence can affect patient outcome (Shaw et al., 1999; Kingdon, Tyrer, Seivewright, Ferguson and Murphy, 1996), but so far there is relatively little evidence for a direct link between training and patient outcome in any area of psychological therapy (Davidson et al., 2004; Rønnestad and Ladany, 2006). An exception to this trend, not yet published, is the study of disseminating CBT for panic to primary care counsellors reported by Grey, Salkovskis, Quigley, Ehlers and Clark (2006).

Hence, in the process of designing a newly-commissioned brief training course in CBT, it was decided to attempt some evaluation of its effects, not only on trainee satisfaction but also on trainees' clinical skills and clinical outcomes. This evaluation is limited by being an in-service project, carried out as part of running the course rather than as a funded research project (see below), but we hope it nevertheless provides some useful preliminary information on the effects of a shorter model of training.

Background

At the time of this project the Gloucestershire Primary Mental Health Service (PMHS) was offering mental health assessment and low intensity CBT across Gloucestershire. AST was the lead for improving access to psychological therapies in primary care and had trained 12 Graduate Mental Health Workers (GMHWs) to offer low intensity CBT to patients with mild to moderate anxiety and/or depression. However, there was a perceived need for the GMHWs, primary care and secondary mental health care staff to develop their CBT competence in order to improve access to formulation-driven CBT. To this end, AST commissioned Oxford Cognitive Therapy Centre (OCTC) to assist in the development of a brief training programme tailored to the needs of this staff group. (OCTC is a self-funding agency within an NHS mental health Trust, with a brief to deliver training and supervision commissioned by NHS and other agencies around the UK and abroad).

Method

The training programme

The Gloucestershire CBT Foundation Course consisted of 10 weekly sessions of one day each, with each day containing both formal training in CBT and clinical supervision. Each day's structure was as follows:

  • 9.30 – 11.00 Clinical supervision in groups of 4 trainees to 1 supervisor

  • 11.30 – 16.30 Workshop (with breaks for lunch and tea)

Trainees were asked to identify one or two training cases and to bring recorded excerpts of these patients' clinical sessions for discussion in supervision meetings. The supervisors were GB, DW and four experienced clinical psychologists from Gloucestershire. Training workshops were all led by OCTC staff (DW, JBL and GB) and focused on helping trainees acquire clinical skills as well as an understanding of CBT theory. The approach was formulation-based (see Westbrook, Kennerley and Kirk, 2007, Chapter 4), and trainees practised skills through numerous role-plays and other experiential exercises. The workshops covered the following topics: introduction to the course, goal-setting and basic CBT theory (1 day); assessment and formulation (2 days); basic CBT therapeutic skills (3 days); applying CBT skills to depression (2 days); applying CBT skills to anxiety (1.5 days); and review of goals and planning for the future (0.5 day).

Participants

Twenty-four trainees began the course; 16 of these came from the PMHS (12 GMHWs and 4 mental health triage nurses) and 2 more came from other parts of primary care (a health visitor and a practice nurse). The remainder came from other parts of the local mental health service (4 psychiatric nurses, 1 occupational therapist, 1 day care officer). None of the cohort had previous formal CBT training, although the GMHWs had some previous training and supervision from AST in low intensity CBT (from 3 months to 2 years experience). The other primary care staff had no previous experience of delivering CBT, and the other mental health staff had some introductory knowledge. Because of this difference in prior CBT experience, we divide the trainees into “GMHWs” and “Other professions” for some of the analyses below.

Measures

The Foundation Course was evaluated on measures of trainee satisfaction, assessor-rated measures of CBT skill, trainee-rated measures of CBT skill, and trainees' clinical outcomes.

Trainee satisfaction. At the end of each workshop trainees rated whether they would recommend the workshop to a colleague from 0 (Advise them to avoid) to 8 (Highly recommend). In addition, trainees rated the overall course on the following scales: how much the course met their needs (0 Not at all – 5 Completely); whether the course was pitched at the right level (0 Not at all – 5 Completely); whether the course had the right mix of theory and practice (0 Not at all – 5 Completely); and whether the supervision element of the course was important (0 Not at all – 5 Extremely important).

Assessor-rated measure of CBT skills. Trainees were asked to submit an audio tape of a clinical session with a training patient at two points during the course: (a) within the first 2 weeks of the course; and (b) within the 4 weeks after the end of the course. These tapes were rated by OCTC staff on the Cognitive Therapy Scale (Young and Beck, 1980), an established measure of CBT skills. Although questions have been raised about the reliability and validity of the CTS (see, for example, Milne, Claydon, Blackburn, James and Sheikh, 2001; Reichelt, James and Blackburn, 2003), it remains the most widely used measure of CBT skill in both research trials and training settings. CTS items are rated from 0–6, with competence often being defined as a minimum score of 2 on each item, together with an overall mean score above 3. CTS items were divided into 3 clusters or subscales: General interview skills (agenda setting, client feedback, collaboration and pacing); Interpersonal skills (empathy, interpersonal effectiveness and professionalism); and Specific CBT techniques (guided discovery, case conceptualization, key cognitions, cognitive techniques, behavioural techniques and homework).
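The competence criterion described above (a minimum score of 2 on each 0–6 item, together with an overall mean above 3) can be expressed as a simple conjunctive rule. The sketch below is illustrative only: the function name and the example item vectors are ours, not part of the CTS materials.

```python
def meets_cts_competence(item_scores, item_floor=2, mean_threshold=3.0):
    """Apply the commonly cited CTS competence rule: every item rated at
    least `item_floor` (on the 0-6 scale) AND the overall mean strictly
    above `mean_threshold`. Both conditions must hold."""
    mean_score = sum(item_scores) / len(item_scores)
    return min(item_scores) >= item_floor and mean_score > mean_threshold

# Hypothetical ratings across 13 items: all items >= 2 and mean > 3
print(meets_cts_competence([4, 3, 4, 3, 3, 4, 4, 3, 3, 3, 4, 3, 3]))  # True
# A single item below the floor fails the rule, however high the mean
print(meets_cts_competence([1, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]))  # False
```

Note that the rule is conjunctive: a uniform score of 2 on every item clears the per-item floor but fails the mean threshold.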

Trainee-rated measure of CBT skills. On the first and last day of the course trainees rated themselves on the Cognitive Therapy Self-Rating Scale (CTSS: Bennett-Levy and Beedie, 2007). The CTSS was adapted by Bennett-Levy from the conventional Cognitive Therapy Scale (CTS: Young and Beck, 1980, described above) with the aim of providing a measure of therapists' "self-perception" of CBT competence, as opposed to the CTS's assessor-rated competence. Participants rate themselves from 0–10 on the same dimensions as the CTS, and the items are clustered in the same way as described above for the CTS.

Trainees' clinical outcomes. The Gloucestershire Primary Mental Health Service (PMHS) had begun using the CORE PC data management system (Barkham, Mellor-Clark, Connell and Cahill, 2006) in late 2005. We were therefore able to analyse data on the patient outcomes achieved by some of the trainees both before and after the Foundation course. Unfortunately, the CORE PC system was only used by the trainees who worked in the PMHS (N = 15), so no outcome analysis was possible for other trainees.

The CORE measure comprises 34 items addressing the domains of subjective well-being, symptoms, functioning and risk; clinical comparisons are usually made using total scores (Mullin, Barkham, Mothersole, Bewick and Kinder, 2006). We extracted from the CORE PC system the pre-treatment and post-treatment CORE total scores for all patients who (a) had both these CORE scores and (b) were seen by any of the PMHS staff who took part in the Foundation course. Note that these data come from patients who may have been treated in various ways, including both telephone-based assisted self-help and more traditional face-to-face CBT. Two sets of data were extracted: (a) the "before training" set, containing all patients fitting the above criteria who completed treatment before the first day of the course (N = 23); and (b) the "after training" set, which contained all patients fitting the above criteria who started treatment after the last day of the course, up to May 2007 when the data were extracted (N = 78).

The patients seen in the PMHS did not have any formal diagnostic interview, but were allocated by clinicians to problem categories based on clinical information. Patients could belong to more than one problem category and the system used did not allocate a primary problem, so we can only describe the broad nature of the problems. According to these clinical judgements, 84% of patients seen in this time period had anxiety or stress problems; 68% had depressed mood; 34% had low self-esteem; 27% had interpersonal or relationship problems; and 23% had work or academic problems. Other problems included living or welfare problems, bereavement and loss, and physical problems (percentages sum to more than 100% because patients could have multiple problems: the average number of problems identified per patient was 2.9).

Results

All analyses were conducted using SPSS version 15.

Trainee dropout

Two trainees dropped out (both “Other professions”) – one never attended and the other stopped attending after 2 sessions. Four other trainees did not submit CTSS ratings or tapes for CTS ratings, so n = 18 (10 GMHWs and 8 Others) for those analyses.

Trainee satisfaction

All 10 workshops were rated highly, with mean scores on the "Recommend to others" 0–8 scale ranging from a minimum of 7.05 to a maximum of 7.50. Overall ratings of satisfaction with the course as a whole were also high, with mean ratings on the 0–5 scales as follows: the course met my needs (M = 4.63, SD = 0.58); the course was pitched at the right level (M = 4.54, SD = 0.66); the course had the right mix of theory and practice (M = 4.38, SD = 0.65); and the supervision element of the course was important to me (M = 4.64, SD = 0.58).

CTS ratings of taped sessions

A doubly repeated 2 × 3 × 2 MANOVA was conducted to examine the effects of training on CTS scores. Time (Tape 1 v. Tape 2) and CTS Cluster (Specific CBT, General interview and Interpersonal) were entered as within-subject factors, with professional role (GMHW v. Others) as a between-subjects factor. This analysis showed a significant main effect of Time [F(1,16) = 5.54, p < .05; effect size (partial eta squared) 0.26] and of Cluster [F(2,15) = 138.9, p < .001; partial eta squared 0.78], but no main effect for Role [F(1,16) = 1.10, p > .31] and no significant interaction effects. This implies that the whole group improved their CTS scores, irrespective of professional Role, and that this improvement was not significantly different between different CTS clusters, although trainees did tend to score differently on different clusters. Table 1 shows these CTS scores.
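The partial eta squared values reported for effects with 1 numerator degree of freedom can be recovered from the F statistic via the standard identity η²p = (F × df1) / (F × df1 + df2). The sketch below (function name ours) reproduces the Time main effect reported above; note that the Cluster values reported in this paper derive from multivariate test statistics, so this univariate formula need not reproduce them.

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared recovered from an F ratio:
    eta_p^2 = (F * df1) / (F * df1 + df2)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Time main effect reported in the text: F(1,16) = 5.54
print(round(partial_eta_squared(5.54, 1, 16), 2))  # 0.26, matching the reported value
```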

Table 1. Mean Cognitive Therapy Scale (CTS) scores from first and last tapes, compared to Oxford diploma trainees

Bennett-Levy and Beedie's (2007) study of the Oxford one-year CBT Diploma course was used as a benchmark against which to compare these results (the Start and End of training span a duration of 3 terms for the Diploma group). A similar doubly-repeated MANOVA was conducted, with Time and Cluster as within-subject factors and Course (Gloucester Foundation v. Diploma) as a between-subjects factor. This analysis showed significant main effects for Time [F(1,34) = 26.06, p < .001; partial eta squared 0.43] and for Cluster [F(2,33) = 147.08, p < .001; partial eta squared 0.90] but no main effect for Course [F(1,34) = 2.64, p > .11]. There was also a significant Time x Cluster interaction [F(2,33) = 5.65, p < .01; partial eta squared 0.25] but no other significant interactions. This indicates that trainees in both courses improved on the CTS clusters, and that their scores varied between clusters, but there were no differences between the two courses. Contrast tests suggested that the Time x Cluster interaction was due to significant differences between the Specific CBT scores and both the other clusters: Specific CBT scores changed more from Tape 1 to Tape 2 than the other clusters did.

In order to benchmark the final scores of the Foundation course trainees, Tape 2 Total scores and Cluster scores were also compared in one-way ANOVA (for Total) and MANOVA (for Clusters) against the End scores for Diploma students. These analyses showed a significant end-of-training difference in Total CTS [F(1,34) = 4.29, p < .05; partial eta squared 0.11] but no overall differences for Clusters [F(3,32) = 1.78, p > .17].

CTSS self-rating scores

The scores on each CTSS subscale are shown in Table 2, again with data from Bennett-Levy and Beedie (2007) as a benchmark for comparison. Doubly repeated MANOVAs were performed for Foundation course CTSS scores using the same analyses as were performed for the CTS clusters. For the Foundation course, this showed the same pattern of main effects for Time [F(1,16) = 39.28, p < .001; partial eta squared 0.71] and for Cluster [F(2,15) = 41.24, p < .001; partial eta squared 0.85] but no main effect for professional Role and no significant interactions.

Table 2. Mean Cognitive Therapy Self-rating Scale (CTSS) scores at start and end of training, compared to Oxford diploma trainees

For the comparison between courses there were significant main effects for Time [F(1,39) = 87.4, p < .001; partial eta squared 0.69] and for Cluster [F(2,28) = 12.69, p < .001; partial eta squared 0.83], as well as a significant Time x Cluster interaction [F(2,38) = 12.69, p < .001; partial eta squared 0.40], but no significant interactions or main effects for Course. Contrast tests in this analysis suggested that the Time x Cluster effect was due to the Specific CBT cluster changes being different to the Interpersonal cluster changes, but not to the General interview cluster changes.

Also of interest is that analysis of the relationship between trainees' total CTSS and CTS scores found no significant correlations between CTSS and CTS either at the start or end of training: it appears that these measures were not assessing the same dimensions.

Outcomes of clinical work using CORE

The CORE results are shown in Table 3 and Figure 1; also shown as a comparison are the data from a recent paper that gives benchmarks for primary care services using CORE data gathered from a very large number of clients from many services (Mullin et al., 2006; N = 11,953). Analysis of the Gloucester data using SPSS Repeated Measures GLM showed a statistically significant improvement in the Foundation CORE outcomes. The Time main effect was significant [F(1,99) = 119.8, p < .001]; the Time x Training interaction was also significant [F(1,99) = 4.2, p < .05]; and the Training main effect was not significant [F(1,99) = 0.034, p > .8]. Comparing Foundation course trainees' clients with the benchmark results in t tests showed no significant differences between Foundation and benchmark CORE scores at any point.

Table 3. Mean (SD) pre- and post-treatment CORE scores, for clients seen by PMHS trainees before and after CBT Foundation course, compared to benchmark in Mullin et al. (2006)

Figure 1. Pre- and post-treatment CORE scores, for clients seen before and after CBT Foundation course

Discussion

As noted above, this was an ad hoc evaluation of the Foundation Course, conducted without specific funding. It therefore has many of the important limitations common in “real world research”. Specific limitations include:

  • Delays in the trainees getting started meant that the first tape submission for CTS ratings was often up to 4 weeks into the course, rather than at the start of the course as planned.

  • Because the tape ratings were primarily for training purposes, they had to be rated and returned with feedback to trainees as quickly as possible, so raters were not blind to which were first tapes and which were second tapes.

  • Patient outcomes on the CORE took advantage of a routine clinical system that had not been introduced long before the course began. There were therefore relatively few clients available for the pre-training data set, and neither set was a random sample of patients (we extracted all those who met the criteria, but it is possible that the requirement to have both pre- and post-treatment scores produced a biased sample). Also, although it would have been interesting to test whether therapists' CTS scores were correlated with their patients' improvements on the CORE, the data we had did not allow us to perform such a test.

  • The study was uncontrolled and thus we cannot rule out the possibility that other factors, including simply the passage of time, may have impacted on the results.

Despite these limitations, our results provide some evidence that a 10-day training course in CBT was well received by the trainees and resulted in improvements in CBT competencies (both as rated by observers and as perceived by trainees themselves), although unsurprisingly their final CTS scores tended to be lower than those achieved by trainees in a one-year CBT Diploma course. In addition, there is evidence that, for the trainees for whom outcome data were available, patient outcomes as measured by the CORE were significantly better after the training than before, in a population that appears typical of primary care, with anxiety and depressed mood the commonest problems.

The limitations of the study mean that caution is required in interpreting the results. In particular, the lack of "blind" CTS ratings means it is possible that the increased ratings were a result of rater expectations rather than true improvements in skills. Against this "expectation" hypothesis, the observed improvement in patient outcomes suggests that some aspect of the clinical intervention changed over the duration of the course in order to produce this effect. However, because of the lack of controls we obviously cannot rule out the possibility that some historical or maturational effect may have contributed to the higher CTS ratings or to the improved clinical outcomes. The extra resources needed for such controls were not available within the constraints of this service-focused project, but it must be a priority for future dissemination and implementation studies to include control groups. A further limitation of the current study is the lack of longer-term follow-up; it should be a further priority for future studies to determine whether training benefits are maintained over time (see the discussion of supervision below).

Given that several studies have found limited (or no) benefits from providing brief training programmes (King et al., 2002; Walters et al., 2005), a further question arising from the positive impact of this Foundation course is which aspects of training have an impact on clinical skills and/or patient outcomes. For the time being we can only speculate, but two factors seem to us likely to be important. First, the highly interactive and practice-based nature of the course might have helped ensure that trainees could apply their learning in their clinical work. Second, the incorporation of clinical supervision within the training course may also have been important in assisting this transfer of learning into practice. In support of this latter conjecture, it is interesting that the trainees themselves rated the inclusion of supervision as highly important to the course's success (4.64 on a 0–5 scale). Mannix et al.'s recent (2006) study may also offer some evidence on the central role of supervision. They successfully trained palliative care practitioners in CBT strategies but, crucially, they also showed that this increase in CBT skills and confidence was only maintained if the practitioners continued to receive clinical supervision. It may be that clinical lore about the importance of supervision for effective practice is beginning to gain empirical support.

Finally, although the results were not significant, there are some interesting suggestions that it was the Other professions rather than the GMHWs who showed the greatest gains from the course: inspection of the means suggests that on all three subscales of the CTS, the Other professions group tended to start lower and increase more, so that they were closer to the GMHWs at the end of training than they were at the beginning. The lack of statistical significance might be due to low numbers or might mean that this is not a real effect, but it is an intriguing finding that echoes some previous results and suggests other avenues for future research. It will be interesting to see whether this finding is replicated and, if so, whether it is an effect of professional background or merely regression to the mean, in which those with lower scores tend to gain more in training (see James et al., 2001 for a similar result concerning gender: male clinicians started from a lower baseline and showed more increase in competence than females during a CBT training course; and Brosan, Reynolds and Moore, 2006, who found clinical psychologists were more competent on the Interpersonal cluster than other professions).

Overall, these results are hopeful in suggesting that a relatively brief – and so relatively inexpensive – training course in CBT can improve the competence of clinicians, and therefore that there may be scope for the kind of large increases of CBT training envisaged in some UK policy developments. It should be a high priority to develop further research on the dissemination of, and training in, effective psychological therapies, and in particular to try to begin answering this variant of the “what works for whom” question: what kind of training is most cost-effective for which groups of clinicians, and what factors determine the answers to these questions?

Acknowledgements

Our thanks are due to all the trainees who took part in the first Gloucestershire CBT Foundation course and made it such a stimulating experience; to Louise Horner-Baggs, Hannah Steer, Gill Turnbull and Gill Yardley, the Gloucestershire psychologists who did most of the clinical supervision of trainees' work; to Simon Eames, from CORE IMS, for invaluable support and advice on CORE data management; and to Helen Doll for statistical advice.

The project was initially funded by Skills for Health and Wyeth Pharmaceuticals: thanks to Kate Schneider, Primary Care Practice Development Lead (CSIP SW/Skills for Health) and Mark Sheppey (South West Representative for Wyeth Pharmaceuticals); to Becky Dewdney York (CSIP SW); to Jo Fear and Sarah Greer (GMHWs) for their interest and support; and particular thanks to Caroline Andrews (Gloucestershire Primary Mental Health Development Team Lead) for her enthusiasm, support and vision.

References

Barkham, M., Mellor-Clark, J., Connell, J. and Cahill, J. (2006). A core approach to practice-based evidence: a brief history of the origins and applications of the CORE-OM and CORE System. Counselling and Psychotherapy Research, 6, 3–15.
Bennett-Levy, J. and Beedie, A. (2007). The ups and downs of cognitive therapy training: what happens to trainees' perceptions of their competence during a cognitive therapy training course? Behavioural and Cognitive Psychotherapy, 35, 61–75.
Brosan, L., Reynolds, S. and Moore, R. G. (2006). Factors associated with competence in cognitive therapists. Behavioural and Cognitive Psychotherapy, 35, 179–190.
Centre for Economic Performance, Mental Health Policy Group (2006). The Depression Report: a new deal for depression and anxiety disorders. London: London School of Economics.
Davidson, K., Scott, J., Schmidt, U., Tata, P., Thornton, S. and Tyrer, P. (2004). Therapist competence and clinical outcome in the Prevention of Parasuicide by Manual Assisted Cognitive Behaviour Therapy Trial: the POPMACT study. Psychological Medicine, 34, 855–863.
Department of Health (2000). The NHS Plan: a plan for investment; a plan for reform. London: Department of Health.
Grey, N., Salkovskis, P., Quigley, A., Ehlers, A. and Clark, D. M. (2006). Dissemination of Cognitive Therapy for Panic Disorder in Primary Care. Presented at BABCP conference, Warwick, 19–21 July.
James, I., Blackburn, I-M., Milne, D. and Reichfelt, F. (2001). Moderators of trainee therapists' competence in cognitive therapy. British Journal of Clinical Psychology, 40, 131–141.
King, M., Davidson, O., Taylor, F., Haines, A., Sharp, D. and Turner, R. (2002). Effectiveness of teaching general practitioners skills in brief cognitive behaviour therapy to treat patients with depression: randomised controlled trial. British Medical Journal, 324, 947–950.
Kingdon, D., Tyrer, P., Seivewright, N., Ferguson, B. and Murphy, S. (1996). The Nottingham Study of neurotic disorder: influence of cognitive therapists on outcome. British Journal of Psychiatry, 169, 93–97.
Mannix, K., Blackburn, I-M., Garland, A., Gracie, J., Moorey, S., Reid, B., Standart, S. and Scott, J. (2006). Effectiveness of brief training in cognitive behaviour therapy techniques for palliative care practitioners. Palliative Medicine, 20, 579–584.
Milne, D., Baker, C., Blackburn, I-M., James, I. and Reichelt, K. (1999). Effectiveness of cognitive therapy training. Journal of Behavior Therapy and Experimental Psychiatry, 30, 81–92.
Milne, D., Claydon, T., Blackburn, I-M., James, I. and Sheikh, A. (2001). Rationale for a new measure of competence in therapy. Behavioural and Cognitive Psychotherapy, 29, 21–33.
Mullin, T., Barkham, M., Mothersole, G., Bewick, B. and Kinder, A. (2006). Recovery and improvement benchmarks for counselling and the psychological therapies in routine primary care. Counselling and Psychotherapy Research, 6, 68–80.
NICE (2004a). Anxiety: management of anxiety (panic disorder, with or without agoraphobia, and generalised anxiety disorder) in adults in primary, secondary and community care. Clinical Guideline 26. London: National Institute for Health and Clinical Excellence.
NICE (2004b). Depression: management of depression in primary and secondary care. Clinical Guideline 23. London: National Institute for Health and Clinical Excellence.
Reichelt, K., James, I. and Blackburn, I-M. (2003). Impact of training on rating competence in cognitive therapy. Journal of Behavior Therapy and Experimental Psychiatry, 34, 87–99.
Rønnestad, M. and Ladany, N. (2006). The impact of psychotherapy training: introduction to the special section. Psychotherapy Research, 16, 261–267.
Shaw, B. F., Elkin, I., Yamaguchi, J., Olmsted, M., Vallis, T. M., Dobson, K. S., Lowery, A., Sotsky, S. M., Watkins, J. T. and Imber, S. D. (1999). Therapist competence ratings in relation to clinical outcome in cognitive therapy of depression. Journal of Consulting and Clinical Psychology, 67, 837–846.
Sholomskas, D., Syracuse-Siewert, G., Rounsaville, B., Ball, S., Nuro, K. and Carroll, K. (2005). We don't train in vain: a dissemination trial of three strategies of training clinicians in cognitive-behavioral therapy. Journal of Consulting and Clinical Psychology, 73, 106–115.
Walters, S., Matson, S., Baer, J. and Ziedonis, D. (2005). Effectiveness of workshop training for psychosocial addiction treatments: a systematic review. Journal of Substance Abuse Treatment, 29, 283–293.
Westbrook, D., Kennerley, H. and Kirk, J. (2007). An Introduction to Cognitive Behaviour Therapy: skills and applications. London: Sage.
Young, J. E. and Beck, A. T. (1980). Cognitive Therapy Scale Rating Manual. Unpublished manuscript, Centre for Cognitive Therapy, University of Pennsylvania, Philadelphia, PA.
