
A qualitative comparison of cognitive-behavioural and evidence-based clinical supervision

Published online by Cambridge University Press:  03 January 2012

Derek L. Milne* (Newcastle University, UK), Robert P. Reiser (Palo Alto University, CA, USA), Tom Cliffe (Newcastle University, UK), Lauren Breese (Newcastle University, UK), Annabel Boon (Newcastle University, UK), Rosamund Raine (Newcastle University, UK) and Phillippa Scarratt (Newcastle University, UK)
*Author for correspondence: D. L. Milne, Ph.D., 4th Floor, Ridley Building 1, Newcastle University, Newcastle upon Tyne NE1 7RU, UK. (email: d.l.milne@ncl.ac.uk).

Abstract

Despite the acknowledged importance of clinical supervision, controlled research is minimal and has rarely addressed the measurement or manipulation of clinical supervision, hampering our understanding and application of the different supervision methods. We therefore compared two related approaches to supervision, cognitive-behavioural (CBT) and evidence-based clinical supervision (EBCS), evaluating their relative effectiveness in facilitating the experiential learning of one supervisee. Drawing on a multiple-baseline N = 1 design, we gathered mostly qualitative data by means of an episode analysis, a content analysis, a satisfaction questionnaire, and interviews with the supervisor and supervisee. We found that the EBCS approach was associated with higher supervision fidelity and increased engagement in experiential learning by the supervisee. This case study in the evaluation of supervision illustrates the successful application of some rarely applied qualitative methods and some potential supervision enhancements, which could contribute to the development of CBT supervision.

Type
Education and supervision
Copyright
Copyright © British Association for Behavioural and Cognitive Psychotherapies 2011

Introduction

Clinical supervision is a necessary part of modern healthcare (DoH, 2004), intrinsic to professional development and regulation [e.g. British Association for Behavioural and Cognitive Psychotherapies (BABCP); Latham, 2006]. The reasons for its importance are numerous, including enhancing clinical outcomes and guiding therapists (supervisees), all leading ultimately to the promotion of safe and effective practice (Falender & Shafranske, 2004). For the UK's National Health Service (NHS), supervision serves to promote a ‘high-quality of practice’, as it ‘will encourage reflective practice’ (DoH, 2004, p. 35).

However, the term ‘supervision’ covers a wide diversity of practices, from personal growth (e.g. Hawkins & Shohet, 2000) to systematic professional development (e.g. Schoenwald et al. 2009). This reflects equally diverse ways of defining clinical supervision, from the inclusive (Bernard & Goodyear, 2004) to the exclusive (Milne, 2007). The general lack of precision in defining supervision, which unfortunately includes the NHS's own brief and limited definition (DoH, 1993), has the advantage of feasibility, but it hampers best practice, supervisor training, and research. For example, practitioners are unclear about what exactly supervision means, and what it is supposed to achieve (Lister & Crisp, 2005). Similarly, in a scrutiny of supervision within a sample of 32 clinical trials, Roth et al. (2010) noted that information about the supervision that took place within these trials was presented inconsistently and sketchily. Because of this general lack of precision, the present study adopts the empirically derived definition in Milne (2007), which is essentially: ‘The formal provision, by approved supervisors, of a relationship-based education and training that is work-focused and which manages, supports, develops and evaluates the work of colleagues’ (p. 3).

Rationale for the present study

These problems with definition are compounded by the weak status of research on supervision, which is scant and embryonic, while also featuring vague or absent conceptualization, limited rigour and poor measurement (Ellis et al. 1996), a situation that has not improved (Wheeler & Richards, 2007; Ellis et al. 2008). To rectify matters, Falender et al. (2004) urged that ‘A range of research procedures should be employed, including, for example, self-report, experimental, single-subject repeated measures, qualitative’ (p. 775). A related suggestion, made by successive researchers, is to improve the way that we attempt to measure supervision (e.g. Ellis & Ladany, 1997). For example, Watkins (1998) stated that ‘In recent years, several supervision publications have emphasized one point vigorously: more valid, reliable, supervision-specific measures are needed to advance research efforts’ (p. 94). Watkins (1997) and Falender et al. (2004) also viewed qualitative methods as a promising option. Watkins (1998) has clarified the way that qualitative research could contribute to our understanding. He suggests that we require: ‘A greater focus on the behaviour of the supervisor, examining what actually happens in supervision, what supervisors and supervisees actually do . . . with multiple indices to measure supervision process and outcome . . . longitudinal studies . . . observable behavioural data’ (p. 96).

Exemplifying this kind of approach to qualitative research, Milne et al. (2008) used a single subject (N = 1) multiple baseline design to compare two micro-analytical approaches to evaluating supervision: a quantitative momentary time-sampling methodology and an episode-based qualitative analysis (based on Ladany et al. 2005). As predicted, significant differences were observed in specific categories of supervisory effectiveness in the intervention phase, as indicated by both types of analysis.

Another way to advance supervision research is to compare different approaches, but such comparative evaluations are rare, by contrast with evaluations of the effectiveness of single approaches, such as CBT supervision (Milne & James, 2000; Townend et al. 2002; Wheeler & Richards, 2007; Milne et al. 2010). In a rare example of a comparative evaluation, Bambling et al. (2006) conducted a randomized controlled trial comparing CBT and psychodynamic supervision, reporting no clinical difference (measured in terms of patients’ scores on a depression measure at the end of an eight-session treatment period). However, analysis of the clinical outcomes of patients who received a problem-solving therapy without supervision suggested that supervision of either kind improved patient outcome. In a second comparative evaluation, Uys et al. (2005) found that the two supervision approaches that they compared (a developmental model, and one based on Holloway's 1995 matrix model) both produced significantly improved supervisee ratings of supervision, but that neither approach was superior.

In summary, the rationale for the present study is to try to enhance supervision research by addressing some of the prevailing concerns, such as the need for supervision-specific measures, qualitative approaches, and comparisons between different approaches. This study has the potential to make an important contribution to the definition and practice of CBT supervision, benefiting researchers, supervisors and clinicians.

In response to this parlous situation within supervision research, the present study adopts an N = 1 multiple baseline design to analyse the qualitative outcomes of CBT supervision, in comparison to evidence-based clinical supervision (EBCS), a science-informed approach to conducting and evaluating supervision (Milne, 2009). The EBCS approach supplements CBT supervision by incorporating a developmentally and educationally informed framework (e.g. explicit educational needs assessment) and by attending systematically to the affective aspects of therapy and supervision (e.g. feeling reactions to the experiential learning exercises within supervision). EBCS is also based on an integrated programme of research, systematic supervisor training, and the operationalization of EBCS through a supervision manual. Specific procedural overlaps and distinctions between the two approaches are provided in Table 1.

Table 1. A comparison of key elements of CBT† and evidence-based clinical supervision (EBCS)

–, Absent; * common elements; ** enhanced in EBCS. † Based on Padesky (1996), Liese & Beck (1997).

Predictions

This paper builds on the reviewed studies above by reporting a comparative qualitative evaluation. We predicted that:

  (I) We would be able to reliably introduce EBCS and CBT supervision conditions (i.e. fidelity would be demonstrated).

  (II) EBCS supervision would demonstrate stronger effects, in terms of improvements in the supervisee's engagement in experiential learning.

  (III) Supervisees would regard the CBT and EBCS approaches as equally acceptable, based on their satisfaction data.

Method

Design

In order to compare the two approaches to supervision, we utilized a multiple baseline across participants, alternating treatment design. This N = 1 design is considered appropriate for this kind of comparison (Oliver & Fleming, 1997; Borckardt et al. 2008), especially given that both approaches were still in need of systematic operationalization, and that only a few rigorous N = 1 evaluations have been conducted (Milne, 2008).

In relation to predictions I (demonstrate fidelity) and II (demonstrate EBCS improves engagement in experiential learning), there were multiple (i.e. A-B-A and A-B-A-B) experimental phases across the three clients. For example, the A-B-A design alternated the two approaches to supervision, starting with CBT supervision in the baseline (A) phase, switching to EBCS for the intervention phase (B), then reverting to the baseline condition once more. This provided a test of whether the two approaches could be introduced and withdrawn successfully. These phases included 37 consecutive, audio-taped sessions of supervision, taking place over an 11-month period. Three of the authors (L.B., A.B., P.S.) scrutinized all 37 sessions within the present study.

Instruments

The first measure evaluated predictions I and II, and was based on direct observation. This was the task analysis approach called ‘episode analysis’ (Ladany et al. 2005), which these authors argue enables the supervision process to be segmented into ‘meaningful chunks’ (p. 6), facilitating our understanding of this process and the supervisees’ development. An episode consists of ‘not only what is specifically discussed in supervision (the content), but also the types of sequential, interpersonal behaviours that can effect change’ (Ladany et al. 2005, p. 6). Three elements compose an episode: a marker event (e.g. a question raised by the supervisee about the relevant knowledge base); an interaction sequence (various supervisor interventions, such as questioning and informing); and a resolution (e.g. a plan of action). Figure 1 illustrates this approach with an episode from a CBT supervision session (the figures in parentheses represent the time, in minutes and seconds; the underlined words illustrate what was included in the content analysis).

Fig. 1. Illustrative episode from one supervision session.
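As an illustrative aside, the three-part episode structure lends itself to a simple record format for coders. The following minimal Python sketch shows one hypothetical way of logging an episode; the field names and example values are assumptions for illustration, not items taken from the study's coding manual.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Episode:
    """One supervision episode: marker, interaction sequence, resolution."""
    marker: str                      # e.g. a supervisee question about the knowledge base
    marker_time: str                 # time into the session, e.g. "12:35" (min:sec)
    interaction_sequence: List[str] = field(default_factory=list)  # supervisor interventions
    resolution: str = ""             # e.g. an agreed plan of action

# Hypothetical example of a recorded episode.
example = Episode(
    marker="Supervisee asks how to set up a behavioural experiment",
    marker_time="12:35",
    interaction_sequence=["questioning", "informing", "role-play"],
    resolution="Plan of action agreed for the next therapy session",
)
print(example.marker_time, example.resolution)
```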

The second measure was a content analysis, which was undertaken using transcripts of each observed episode. This allowed us to compare the episodes occurring within the two supervision approaches, CBT and EBCS, in a systematic and replicable manner (Bryman, 2004), providing a finer-grained qualitative method for testing the first two predictions. Coding categories were developed empirically by the first author, based on the manifest content of these episodes, following the procedure reported by Papworth et al. (2009) and drawing on the defining features of CBT and EBCS, as described above (and see Table 1). This required an iterative approach, in which the initial coding categories were revised as necessary with each successive episode, so that mutually exclusive categories of speech (i.e. utterances) were developed, as listed in Table 2. Once these categories were clarified they were counted, to yield the frequency of each type of utterance. There was therefore a minor quantitative aspect to the content analysis (this has been termed the ‘hybrid’ approach; Pistrang & Barker, 2010). The words spoken (i.e. utterances) by the supervisor and supervisee were treated together, and no distinction was drawn between the detailed nature, quality, or other aspects of what was said: we simply listed and counted all utterances belonging to the identified categories.

Table 2. Content analysis of all observed supervision episodes (N = 31)

EBCS, Evidence-based clinical supervision.
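To make the quantitative (‘hybrid’) counting step concrete, the sketch below tallies coded utterances by category. The category labels and the list of coded utterances are hypothetical, loosely modelled on the kinds of categories listed in Table 2, and the code is an illustrative assumption rather than the procedure actually used.

```python
from collections import Counter

# Hypothetical coded utterances; the category labels are illustrative only.
coded_utterances = [
    "feeling reactions", "role-play", "cognitive focus",
    "feeling reactions", "counselling focus", "behavioural experiment",
    "feeling reactions", "role-play", "structuring",
]

frequencies = Counter(coded_utterances)          # tally utterances per category
for category, count in frequencies.most_common():
    print(f"{category}: {count}")
```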

The third and final prediction was assessed with a supervisee feedback and evaluation instrument, REACTS [Rating of Experiential learning And Components of Teaching & Supervision; Wilson, 2007 (note that this instrument has not been published in a peer-reviewed journal)]. REACTS is an 11-item, one-page, paper-and-pencil rating of supervision by the supervisee, designed in part to assess Proctor's (1986) ‘normative’ and ‘restorative’ aspects of supervision (through items on the structure/resources, namely the frequency and duration of supervision sessions, and content that includes both management issues and the provision of emotional support). However, REACTS mainly focuses on the ‘formative’ (i.e. educative) aspect of supervision, by listing Kolb's (1984) learning modes (i.e. experiencing, reflecting, conceptualizing, experimenting, planning). An example item (number 5) is: ‘I was able to recognize relevant feelings, becoming more self-aware (e.g. role-play helped me to express emotion)’. The five-point rating scale ranged from ‘strongly agree’ to ‘strongly disagree’ (with a ‘not applicable’ option), giving a score range of 8–45 (there are nine rated items), where higher scores represent greater supervisee satisfaction and learning. REACTS also includes a ‘Helpful aspects of supervision’ item (following Llewellyn, 1988) to collect qualitative data, and a final item inviting any further comments. It can be completed within 5 min. REACTS has good test–retest reliability (r = 0.96, p = 0.0001) and high internal consistency (Cronbach's α = 0.94, p = 0.001).
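For illustration only, the sketch below shows one way a percentage-of-maximum satisfaction score (of the kind reported in the Results) could be computed from a completed REACTS form. The nine item ratings shown are invented, and the handling of ‘not applicable’ responses (excluding them from both the total and the maximum) is an assumption rather than the published scoring rule.

```python
# Hypothetical ratings for the nine rated REACTS items (1-5; None = 'not applicable').
ratings = [5, 5, 4, 5, None, 5, 5, 3, 5]

rated = [r for r in ratings if r is not None]    # drop 'not applicable' responses
total = sum(rated)
max_possible = 5 * len(rated)                    # assumption: N/A items excluded from the maximum
percent_of_max = 100 * total / max_possible

print(f"Total: {total}/{max_possible} ({percent_of_max:.0f}% of maximum)")
```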

The fourth and final measure was an interview, conducted independently with the supervisor and the second supervisee by a third-party interviewer, in order to assess the fidelity of the CBT and EBCS interventions. This was an ad-hoc, semi-structured interview, lasting up to 1 hour, intended to assess fidelity as defined by Bellg et al. (2004). It was developed following feedback from the supervisor, and consisted of a social desirability rating (made at the start and end of the interview by the interviewer) and six open-ended questions, each with prompts. For example, question 2 asked: ‘What happened during evidence-based clinical supervision – what kind of supervision was it?’ Prompts included asking what proportion of supervision sessions was made up of didactic and experiential work, and whether the EBCS guidelines and other supporting materials were ever used. There were no formal psychometric data for the interview. (Copies of all four instruments are available from the first author.)

Participants

The participants were one male consultant (i.e. the supervisor of the supervisor), one male supervisor, two female therapists (i.e. supervisees) and three clients (two males, one female). The consultant was a doctoral-level, fully accredited clinical psychologist, with over 30 years of experience in providing supervision and training to psychologists and other mental health professionals. The supervisor was also an accredited clinical psychologist, with 20 years of clinical experience, and was a training clinic director with over 6 years of experience in providing doctoral-level training. The first supervisee participated for prediction I. She was a second-year Ph.D. student. For predictions II and III there was a second supervisee, who was a post-doctoral student on a 1-year internship. Both supervisees were supervised by the same supervisor, successively. The adult, outpatient clients were two males and a female, who presented with anxiety or depression.

This was a convenience sample, having been selected on the pragmatic grounds that the supervisor had invited the consultant to engage in a collaborative research study during a representative period of clinic activity. Similarly, the supervisees were those who were currently working with the supervisor at the time of the study; and the three clients were those being seen by the second supervisee (i.e. the therapist) at the time of the study, and who completed all study phases (i.e. a further three clients discontinued therapy or did not consent to participate). In essence, therefore, this was a routine or naturalistic service evaluation, triggered by the consultant's involvement in the present research project. There were, therefore, no exclusion criteria and in general the sample was considered to be representative of the supervisees and clients within this clinic. The study was approved by the relevant University Institutional Review Board in the USA; and by the Research and Development Department of the consultant's employing NHS Trust in the UK. Informed consent was obtained from the supervisees and the clients.

Procedure

The study site was a community-based psychology training clinic in the USA, a clinic for adults presenting with complex mental health problems. Supervisor training took place over five consecutive weeks immediately prior to the initial study baseline, following the same consultancy procedure described below. During the training, baseline and experimental phases, consultancy entailed fortnightly, phone-based, hour-long reviews of the preceding week's tape-recorded supervision (i.e. ‘supervision-of-supervision’). Consultancy involved didactic work, including reference to a supervision manual, to supervision guidelines and to other supportive materials (such as a client experiencing scale, a supervisee's learning outcomes list, and illustrative video material; Milne, 2009). This didactic aspect was supplemented by experiential work, primarily revolving around corrective feedback to the supervisor, based on the consultant's completion of a supervisory competence rating scale (SAGE; Milne et al. 2011) and the supervisee's completion of her satisfaction form (REACTS), together with experiential methods such as educational role-plays and behavioural rehearsal. Consultancy within both approaches was intended to be equivalent, but with the appropriate differences of emphasis (i.e. to encourage adherence to the respective CBT and EBCS supervision approaches). All supervision sessions were recorded on audiotape by the supervisor, to permit the qualitative analyses. A total of 37 supervision sessions were recorded, each lasting approximately 1 hour and taking place in the supervisor's office. This sample comprised all recorded supervision sessions, across the entire 11-month study period, in which one or more of the three clients was discussed.

Regarding the observation and recording of the episodes, one tape was initially analysed simultaneously by the three coders, for training purposes. Each episode that we observed was transcribed and the key utterances of the supervisor and supervisee were entered into the Ladany et al. (2005) episode framework, to show the elements of an episode. Episodes were coded following a manual, developed specially for the present study and intended to operationalize the Ladany et al. (2005) approach. This particular supervision session was chosen as it had previously been coded by the first author (a supervision researcher, experienced in using direct observation), and it represented a clear example of a critical event, as defined by Ladany et al. (2005). It had been identified by the first author in routine consultancy, mid-way through the study. During training the tape was paused after each of the identified elements of the episode, and discussions between the three coders occurred until agreement was reached. This process was continued until the end of the tape (64 min), following the coding manual instructions. A second tape was then coded independently, to assess inter-rater agreement. Specifically, any markers and resolutions were identified, with both the time and the corresponding verbal utterances being recorded. These data enabled agreement to be assessed using the exact % agreement method:

\begin{equation}
\text{Total \% agreement} = \frac{\text{number of agreements} - \text{number of disagreements}}{\text{total number of ratings}} \times 100.
\end{equation}

Both tapes were representative of our dataset, in that they contained episodes. After training, the three coders worked independently. When listening to an audiotape, a coder recorded any identified markers, alongside the corresponding time of the utterance. If the marker developed into a full episode it was documented; when a marker did not resolve it was not included in the study.
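The sketch below implements the exact % agreement formula as stated above. Treating the total number of ratings as the sum of agreements and disagreements is an assumption made for illustration, and the example counts are invented.

```python
def total_percent_agreement(n_agreements: int, n_disagreements: int) -> float:
    """Exact % agreement, as stated in the text:
    (agreements - disagreements) / total number of ratings x 100.
    'Total number of ratings' is assumed here to be agreements + disagreements."""
    total_ratings = n_agreements + n_disagreements
    return 100 * (n_agreements - n_disagreements) / total_ratings

# Hypothetical example: coders agree on 18 markers/resolutions and disagree on 2.
print(f"{total_percent_agreement(18, 2):.1f}%")   # -> 80.0%
```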

In terms of the remaining two measures, for prediction III only the second supervisee was requested to complete the satisfaction questionnaire after each supervision session (i.e. pragmatically, this was all that was necessary to test this prediction); and the interviews were conducted over the telephone, 1 month after the end of the N = 1 study.

Results

Prediction (I): We would be able to reliably introduce EBCS and CBT supervision conditions.

To check the fidelity of the two supervision approaches, we compared the relevant supervisor utterances within all 31 supervision episodes (see below for a summary of these episodes). In this content analysis we noted all specific instances of key terms within these episodes, as illustrated in Figure 1 (i.e. the underlined words, e.g. ‘challenge’, ‘directive’, ‘modelled’). We then combined them into suitable categories, as set out in Table 3. Table 2 summarizes the most frequently occurring categories, and the ones that best differentiated the supervisor's utterances in the CBT and EBCS phases.

Table 3. Comparison of the frequency and nature of episodes that were observed in CBT and evidence-based clinical supervision (EBCS)

%, Proportion of episodes per taped supervision session.

Table 2 also clarifies the extent to which the utterances were either inconsistent with the given supervision phase (i.e. ‘low fidelity content’, e.g. discussing ‘insight’ within CBT supervision); or were consistent with that approach (i.e. ‘high fidelity content’, e.g. discussing ‘feeling reactions’ within EBCS). This content analysis suggested that CBT was more cognitive, structured and collaborative. By contrast, EBCS episodes contained more experiential (behavioural and affective) material, were more challenging, and contained fewer counselling-focused utterances.

We next supplemented this content analysis with interview data. The interviews, conducted independently with the supervisor and the second supervisee, were both judged by the interviewer (from the interview) and the first author (from the recorded replies) to be low in social desirability. Both interviewees stated that EBCS was the more structured approach, being more agenda-driven. Moreover, they both identified EBCS as more affectively charged (‘challenging’, ‘arousing’, ‘emotional’) and more experiential (‘role-plays’). This experiential emphasis was associated with the use of several materials from the EBCS manual (i.e. the client experiencing scale, learning outcomes list, supervision guidelines, emotions library, video material). When asked about the negative effects of EBCS, the supervisee noted that supervision sessions could be: ‘rushed, as there was so much to fit in’. EBCS was also regarded as more taxing than CBT supervision, including ‘raising anxiety levels’. The supervisor agreed that EBCS was ‘challenging’ and that he too experienced some anxiety. However, both construed this arousal as desirable and productive. The positive effects of EBCS were thought to revolve around the unprecedented attention that was accorded to the facilitation of experiential learning (‘never really done experimenting before’, ‘directly challenged beliefs’, ‘deepening beliefs’, ‘enhanced learning’). Neither of the participants thought that the results obtained were due to factors other than EBCS, although each noted some other influences (e.g. the supportive relationship, the broad range of clients). In summary, the content analysis and interviews indicated that the two supervision approaches were implemented with fidelity, although the episode analysis suggested that the CBT supervision had high frequencies of counselling-style utterances, which might lead to questioning its fidelity.

Prediction (II): EBCS supervision would demonstrate stronger effects, in terms of improvements in the supervisee's engagement in experiential learning.

As regards the relative effectiveness of EBCS over CBT supervision, assessed in terms of the supervisee's engagement in experiential learning, we found that 31 of the 37 tapes contained episodes, but (contrary to our prediction) that these were divided roughly equally between the CBT and EBCS phases of supervision (16 and 15, respectively). Table 3 shows that these episodes were also distributed across all three clients roughly equally (although discussion of client 2 was associated with the most episodes per session, in both phases). That is, episodes were observed on 64% (CBT supervision) and 60% (EBCS) of occasions, overall, but client 2 had the highest frequency of episodes in both phases (i.e. 78% and 100%).

It can also be seen from Table 3 that the nature of the episodes is comparable. Using the categories provided by Ladany et al. (2005), we found that the markers in both approaches were mainly concerned with the supervisee's need for guidance, leading in the main to the supervisor focusing on key skills, and resolving the episodes with the identification of skill developments. In summary, these episode data do not suggest any difference in the effectiveness of the CBT and EBCS approaches.

Prediction (III): Supervisees would regard the CBT and EBCS approaches as equally acceptable.

The final prediction was that both forms of supervision would achieve similarly favourable qualitative feedback and evaluation ratings from the supervisee. A total of 20 REACTS forms were returned by the second supervisee, a 54% response rate. This predicted reaction was obtained, as the supervisee rated the supervisor very highly throughout the study: on all but seven items out of a possible 180 (i.e. 4%), the supervisor was credited with the maximum rating of 5 (‘strongly agree’). Four of these lower ratings (i.e. a rating of 3 or 4) were made during CBT supervision, but there was no pattern in terms of the items receiving these lower ratings. In summary, expressed as a percentage of the maximum possible supervisee satisfaction, both approaches received very high endorsement (98% for CBT, 99% for EBCS). This is equivalent to an overall rating of ‘strongly agree’ in both cases.

The qualitative item concerned with ‘helpful events’ elicited comments on all 20 completed REACTS forms. During CBT supervision these comments highlighted the value of planning what to do next in therapy (e.g. ‘looking at the case from different angles’). In EBCS the helpful events that were noted were far more varied and numerous, embracing a wide range of experiential methods (e.g. role-plays, viewing video recordings of the therapy, and exploring feeling reactions). These methods were frequently associated with comments about effective treatment planning, enhanced self-awareness, and improved self-confidence. Last, the final REACTS item, ‘Any other comments?’ received only two responses: in the CBT instance the supervisee noted the need to allocate the time better, so that all clients could be discussed; in the EBCS instance, she noted that it had been ‘difficult being admonished for my lack of professionalism’.

Discussion

Our first prediction was that the two supervision approaches, CBT and EBCS, could be reliably introduced. Such fidelity is a basic logical requirement for process-outcome research, as well as for increasing power (Lipsey, 1990). However, it is rare for studies in the supervision field to include demonstrations of fidelity (Watkins, 1997). Using a combination of content analysis and interviewing, we were able to provide evidence that the CBT and EBCS approaches could be introduced with reasonably good fidelity between the phases (i.e. A-B-A-B). To illustrate, this qualitative analysis suggested that CBT was more cognitive, structured and collaborative. By contrast, EBCS episodes contained more experiential (behavioural and affective) material, were more challenging, and contained fewer counselling-focused utterances. One significant discrepancy was the high frequency of ‘counselling focus’ in CBT supervision. It may be that this actually subsumes a high proportion of utterances that would be deemed perfectly appropriate within CBT supervision, because they reflected aspects of the supervisory alliance and a collaborative style (e.g. providing core conditions, such as empathic statements). Therefore, we suggest that this requires further investigation, ideally guided by a CBT supervision manual or similar operational statement. On the other hand, if we assume that the ‘counselling focus’ was indeed due to low fidelity to the CBT supervision approach, then the differential fidelity confounds our comparison: what we may have is a comparison between a high vs. a low fidelity intervention. It would not be surprising if this produced findings favouring the high fidelity approach. Again, we believe that the way forward is to operationalize CBT supervision, so that fidelity can be quantified and manipulated more precisely. As to why we found this differential fidelity, although the supervisor was far more experienced in CBT supervision, he was provided with individual training (‘supervision-of-supervision’) in EBCS using a systematically prepared training package including a clear operationalization (Milne, 2010), whereas there was no such package for CBT. There may also have been ‘demand characteristic’ and ‘allegiance’ effects in operation, as the supervisor and the first author were active research collaborators, developing the EBCS approach.

Our second main finding was that the CBT and EBCS approaches appeared to yield predictably different effects, as measured in terms of the supervisee's engagement in experiential learning. The qualitative evaluation was based initially on the episode approach (Ladany et al. 2005), which indicated that both approaches were equally effective. Thirty-one of the 37 tapes contained episodes, divided roughly equally between the CBT and EBCS phases of supervision (16 and 15, respectively). However, when these episodes were probed for their content, EBCS was shown to promote much more frequent engagement in experiential learning, in terms of the discussion of topics like behavioural experiments (11 instances), feeling reactions (73) and role-play (20) (see Table 2, high fidelity column).

The final prediction was that both forms of supervision would gain similarly favourable satisfaction ratings from the supervisee, for which we found clear support, although this was possibly confounded because the supervisee rated the supervisor very highly throughout the study (98% for CBT, 99% for EBCS). We strongly suspect that a positive bias inflated these ratings, a recurring problem with satisfaction ratings that are provided to a supervisor. The qualitative item concerned with ‘helpful events’ elicited comments on all the REACTS forms. During CBT supervision these comments highlighted the value of planning what to do next in therapy (e.g. ‘looking at the case from different angles’). As just noted in relation to the content analysis (Table 2), in EBCS the helpful events that were noted were far more varied and numerous, embracing a wide range of experiential methods (e.g. role-plays, viewing video recordings of the therapy, and exploring feeling reactions). These methods were frequently associated with comments about effective treatment planning, enhanced self-awareness, and improved self-confidence. Thus, these qualitative data seem to provide more useful discriminations between the two approaches than do the ratings, due to a ceiling effect (and to the effects of a ‘grateful testimonial’).

Critical review

We recognize a number of additional methodological weaknesses within the present analysis. These include the need to add complementary instruments, such as objective (quantitative) assessments of learning and of the transfer of the supervisees’ learning to therapy, and the need to refine training in EBCS. The concept of ‘episode’ may also merit refinement, given striking parallels with concepts such as ‘sudden gains’ (Tang & DeRubeis, 1999), ‘milestones’ (Shiffman et al. 2006), ‘good moments’ (Mahrer & Nadler, 1986) and ‘achievements’ (Barkham et al. 2010). Another weakness was our use of an experienced supervisor, in the sense that he would perhaps have an established allegiance to his usual approach (CBT supervision), or be less markedly influenced by training and consultancy, a common finding within therapist-training studies (Beidas & Kendall, 2010). A more substantive methodological issue concerns the differentiation of CBT supervision and EBCS, in that it could be argued that EBCS is simply CBT supervision done correctly (i.e. the argument that EBCS is not conceptually different from CBT supervision, even if there are differences in implementation). After all, the main accounts of CBT supervision advocate much that is in EBCS (e.g. Padesky, 1996; Liese & Beck, 1997), such as educational role-play. However, we believe that there are practical and theoretical differences between the two approaches: in addition to a different emphasis on some shared variables (e.g. EBCS stresses the behavioural and affective aspects of supervision), there do appear to be conceptually distinct aspects of EBCS, due to drawing on ideas about human development and learning from beyond the CBT supervision literature (Milne, 2009). Further, CBT supervision as implemented does not appear to match these theoretical accounts, either in the present study or in the other process evaluations that exist (Milne, 2008), complicating differentiation. It appears that CBT supervision requires the kind of operationalization that has occurred with EBCS before a true comparison can be undertaken, and before appropriate conclusions can be drawn about distinctiveness, relative effectiveness, etc. Following a related example, where motivational interviewing was differentiated from acceptance and commitment therapy in terms of theoretical and practical comparisons (Bricker & Tollison, 2011), it would seem reasonable to refer to CBT and EBCS as ‘complementary’ approaches. Future research might draw on these points, to enable a more systematic comparison between CBT, EBCS and other supervision models. This may help to identify the key conceptual features, relevant instruments, and the most effective elements within such approaches (Kazdin, 1998).

Conclusions

This illustrative case study indicates the potential for qualitative methods to advance our understanding of CBT supervision. The present analysis was novel in clarifying qualitatively the fidelity and comparative effectiveness of two supervision approaches. However, improved N = 1 studies are required, before proceeding to large-group evaluations.

Declaration of Interest

None.

Learning objectives

By studying this paper carefully, the reader will be able to:

  (1) Summarize the argument for comparative evaluations.

  (2) Discuss the criteria for distinguishing between two or more interventions.

  (3) Evaluate the extent to which the present study represents a comparative evaluation of two forms of supervision.

References

Recommended follow-up reading

Miller, WR, Rollnick, S (2009). Ten things that motivational interviewing is not. Behavioural and Cognitive Psychotherapy 37, 129–140.

References

Bambling, M, King, R, Raue, P, Schweitzer, R, Lambert, W (2006). Clinical supervision: its influence on client-rated working alliance and client symptom reduction in the brief treatment of major depression. Psychotherapy Research 16, 317–331.
Barkham, M, Stiles, WB, Lambert, MJ, Mellor-Clark, J (2010). Building a rigorous and relevant knowledge-base for the psychological therapies. In: Developing and Delivering Practice-Based Evidence (ed. Barkham, M., Hardy, G. E. and Mellor-Clark, J.), pp. 21–61. Chichester: Wiley.
Beidas, RS, Kendall, PC (2010). Training therapists in evidence-based practice: a critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice 17, 1–30.
Bellg, AJ, Borrelli, B, Resnick, B, Hecht, J, Minicucci, DS, Ory, M, Ogedegbe, G, Orwig, D, Ernst, D, Czajkowski, S (2004). Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology 23, 443–451.
Bernard, JM, Goodyear, RK (2004). Fundamentals of Clinical Supervision, 3rd edn. London: Pearson.
Borckardt, JJ, Nash, MR, Murphy, MD, Moore, M, Shaw, D, O'Neil, P (2008). Clinical practice as natural laboratory for psychotherapy research. American Psychologist 63, 77–95.
Bricker, J, Tollison, S (2011). Comparison of motivational interviewing with acceptance and commitment therapy: a conceptual and clinical review. Behavioural and Cognitive Psychotherapy 39, 541–559.
Bryman, A (2004). Social Research Methods. Oxford: Oxford University Press.
Cohen, J (1988). Statistical Power Analysis for the Behavioural Sciences. New York: Lawrence Erlbaum.
DoH (1993). A Vision for the Future. London: Department of Health.
DoH (2004). Organising and Delivering Psychological Therapies. London: Department of Health.
Ellis, MV, D'Iuso, N, Ladany, N (2008). State of the art in the assessment, measurement, and evaluation of clinical supervision. In: Psychotherapy Supervision: Theory, Research and Practice (ed. Hess, A. K., Hess, K. D. and Hess, T. H.), pp. 473–499. Chichester: Wiley.
Ellis, MV, Ladany, N (1997). Inferences concerning supervisees and clients in clinical supervision: an integrative review. In: Handbook of Psychotherapy Supervision (ed. Watkins, C. E.), pp. 447–507. New York: Wiley.
Ellis, MV, Ladany, N, Krengel, M, Schult, D (1996). Clinical supervision research from 1981–1993: a methodological critique. Journal of Counseling Psychology 43, 35–40.
Falender, C, Cornish, JAE, Goodyear, R, Hatcher, R, Kaslow, NJ, Leventhal, G, Shafranske, E, Sigmon, ST, Stoltenberg, C, Grus, C (2004). Defining competencies in psychology supervision: a consensus statement. Journal of Clinical Psychology 60, 771–785.
Falender, CA, Shafranske, EP (2004). Clinical Supervision: A Competency-Based Approach. Washington, DC: American Psychological Association.
Hawkins, P, Shohet, R (2000). Supervision in the Helping Professions: An Individual, Group and Organizational Approach. Milton Keynes: Open University Press.
Holloway, EL (1995). Clinical Supervision: A Systems Approach. Thousand Oaks, CA: Sage.
Kazdin, AE (1998). Research Design in Clinical Psychology. Boston: Allyn & Bacon.
Kolb, DA (1984). Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall.
Ladany, N, Friedlander, ML, Nelson, ML (2005). Critical Events in Psychotherapy Supervision: An Interpersonal Approach. Washington, DC: American Psychological Association.
Latham, M (2006). Supervisor and training accreditation (Training Newsletter, February, p. 3). Accrington: British Association for Behavioural and Cognitive Psychotherapies (BABCP).
Liese, BS, Beck, JS (1997). Cognitive therapy supervision. In: Handbook of Psychotherapy Supervision (ed. Watkins, C. E.), pp. 114–133. Chichester: Wiley.
Lipsey, MW (1990). Design Sensitivity. London: Sage.
Lister, PG, Crisp, BR (2005). Clinical supervision in child protection for community nurses. Child Abuse Review 14, 57–72.
Llewellyn, SP (1988). Psychological therapy as viewed by clients and therapists. British Journal of Clinical Psychology 27, 105–114.
Mahrer, AR, Nadler, WP (1986). Good moments in psychotherapy: a preliminary review, a list, and some promising research avenues. Journal of Consulting and Clinical Psychology 54, 10–15.
Milne, DL (2007). An empirical definition of clinical supervision. British Journal of Clinical Psychology 46, 437–447.
Milne, DL (2008). CBT supervision: from reflexivity to specialization. Behavioural and Cognitive Psychotherapy 36, 779–786.
Milne, DL (2009). Evidence-Based Clinical Supervision: Principles and Practice. Chichester: BPS/Blackwell.
Milne, DL (2010). Can we enhance the training of clinical supervisors? A national pilot study of an evidence-based approach. Clinical Psychology and Psychotherapy 17, 321–328.
Milne, DL, James, IA (2000). A systematic review of effective cognitive-behavioural supervision. British Journal of Clinical Psychology 39, 111–127.
Milne, DL, Lombardo, C, Kennedy, E, Freeston, M, Day, A (2008). Zooming in on clinical supervision. Behavioural and Cognitive Psychotherapy 36, 619–624.
Milne, DL, Reiser, R, Aylott, H, Dunkerley, C, Fitzpatrick, H, Wharton, S (2010). The systematic review as an empirical approach to improving CBT supervision. International Journal of Cognitive Therapy 3, 278–294.
Milne, DL, Reiser, R, Cliffe, T, Raine, R (2011). SAGE: preliminary evaluation of an instrument for observing competence in CBT supervision. The Cognitive Behaviour Therapist. Published online: 24 November 2011. doi:10.1017/S1754470X11000079.
Oliver, JR, Fleming, RK (1997). Applying within-subject methodology to the transfer of training research. International Journal of Training and Development 1, 173–180.
Padesky, CA (1996). Developing cognitive therapist competency: teaching and supervision models. In: Frontiers of Cognitive Therapy (ed. Salkovskis, P. M.), pp. 266–292. London: Guilford Press.
Papworth, MA, Milne, DL, Boak, G (2009). An exploratory content analysis of situational leadership. Journal of Management Development 28, 593–606.
Pistrang, N, Barker, C (2010). Scientific, practical and personal decisions in selecting qualitative methods. In: Developing and Delivering Practice-Based Evidence (ed. Barkham, M., Hardy, G. E. and Mellor-Clark, J.), pp. 65–90. Chichester: Wiley.
Proctor, B (1986). Supervision: a co-operative exercise in accountability. In: Enabling and Ensuring (ed. Marken, M. and Payne, M.), pp. 21–34. Leicester: Leicester National Youth Bureau and Council for Education and Training in Youth and Community Work.
Roth, AD, Pilling, S, Turner, J (2010). Therapist training and supervision in clinical trials: implications for clinical practice. Behavioural and Cognitive Psychotherapy 38, 291–302.
Schoenwald, SK, Sheidow, AJ, Chapman, JE (2009). Clinical supervision in treatment transport: effects on adherence and outcomes. Journal of Consulting and Clinical Psychology 77, 410–421.
Shiffman, S, Scharf, DM, Shadel, WG, Gwaltney, CJ, Dang, Q, Paton, C (2006). Analyzing milestones in smoking cessation: illustration in a nicotine patch trial in adult smokers. Journal of Consulting and Clinical Psychology 74, 276–285.
Tang, T, DeRubeis, RJ (1999). Sudden gains and critical sessions in cognitive-behavioural therapy for depression. Journal of Consulting and Clinical Psychology 67, 894–904.
Townend, M, Iannetta, L, Freeston, MH (2002). Clinical supervision in practice: a survey of UK cognitive-behavioural psychotherapists accredited by the BABCP. Behavioural and Cognitive Psychotherapy 30, 485–500.
Uys, LR, Minnaar, A, Simpson, B, Reid, S (2005). The effect of two models of supervision on selected outcomes. Journal of Nursing Scholarship 37, 282–288.
Watkins, CE (1997). Handbook of Psychotherapy Supervision. Chichester: Wiley.
Watkins, CE (1998). Psychotherapy supervision in the 21st century. Journal of Psychotherapy Practice & Research 7, 93–101.
Wheeler, S, Richards, K (2007). The impact of clinical supervision on counsellors and therapists, their practice and their clients: a systematic review of the literature. Counselling and Psychotherapy Research 7, 54–65.
Wilson, M (2007). Can experiences of supervision be quantified? REACTS: a new tool for measuring supervisees' perceived satisfaction with clinical supervision [unpublished thesis] (available from the Psychology Department, Newcastle University).