Introduction
Ensuring that therapists are competently trained in the delivery of evidence-based therapies is an important first step in delivering effective and efficient psychotherapy to clients. This has been achieved for many individual therapy models such as cognitive behavioural therapy (CBT; McManus et al., Reference McManus, Westbrook, Vazquez-Montes, Fennell and Kennerley2010; Rakovshik and McManus, Reference Rakovshik and McManus2010; Sharpless and Barber, Reference Sharpless and Barber2009), but it has not yet been fully achieved for couple and family interventions.
Within the context of depression and relationship distress, there is evidence that behavioural couple therapy (BCT) is effective at reducing depressive symptoms by targeting general relationship functioning (Baucom et al., Reference Baucom, Shoham, Mueser, Daiuto and Stickle1998; Whisman and Baucom, Reference Whisman and Baucom2012). Recent National Institute for Health and Clinical Excellence (NICE) guidelines for depression have therefore cited BCT as an evidence-based treatment (NICE, 2009).
In England, the Improving Access to Psychological Therapies (IAPT) programme has recommended additional training for qualified CBT therapists in BCT, in order to increase access to this evidence-based form of therapy (Clark et al., Reference Clark, Layard, Smithies, Richards, Suckling and Wright2009). This has included not only BCT where one or both partners are experiencing depression, but also BCT for clinical presentations where there is an interaction between depression, couple functioning and long-term health conditions. Consistent with the move towards competency-based training more broadly (Holloway, Reference Holloway, Fuertes, Spokane and Holloway2012), this expansion in BCT training has also required the creation of a competence framework for couple therapy for depression (Clulow, Reference Clulow2010a) and the development of valid and reliable measures of competence in the delivery of BCT.
Reliable and valid measures of therapist competence are essential for assessing the level of competence attained during training and for monitoring the quality of treatment provision within routine clinical practice (Fairburn and Cooper, Reference Fairburn and Cooper2011; Muse and McManus, Reference Muse and McManus2013). Such measures also allow targeted feedback on a therapist’s strengths and weaknesses, which can be effective in improving competence (McManus et al., Reference McManus, Westbrook, Vazquez-Montes, Fennell and Kennerley2010; Muse and McManus, Reference Muse and McManus2013). In addition, competence assessment is essential for interpreting the outcomes of effectiveness studies, and there may be a relationship between therapist competence and treatment outcome. Measuring therapist competence reliably and validly is therefore central both to assessing trainee therapists and to ensuring that routine treatment provision is optimally effective (Muse and McManus, Reference Muse and McManus2013).
What is competence?
Fairburn and Cooper (Reference Fairburn and Cooper2011) defined therapist competence as ‘…the extent to which a therapist has the knowledge and skill required to deliver a treatment to the standard needed for it to achieve its expected effects’ (p. 374). Focusing specifically on professional psychologists, Kaslow (Reference Kaslow2004) discusses eight domains of competence: (a) ethical and legal issues; (b) individual and cultural diversity; (c) scientific foundations and research; (d) assessment; (e) intervention; (f) consultation and interprofessional collaboration; (g) supervision; and (h) professional development. Competence within the intervention domain includes both global therapeutic knowledge and skills (i.e. a therapist’s ability to independently assess a client’s well-being) and limited-domain competence, which refers to the specific knowledge and skills required within a given therapeutic domain (Barber et al., Reference Barber, Sharpless, Klostermann and McCarthy2007). This makes it necessary to define model-specific competences.
Importance of measuring therapist competence
Within the field of couple-based intervention, there has historically been a dearth of evaluations of therapist competence (Jacobson and Addis, Reference Jacobson and Addis1993). Most of the research that does exist has been conducted within CBT, but attention is increasingly being paid to competence across the psychotherapies, including couple-based interventions (Jacobson and Addis, Reference Jacobson and Addis1993).
The American Association for Marriage and Family Therapy (2004), for example, has identified a set of core competences needed when practising marriage and family therapy. In the UK, research groups have defined a set of specific competences required to deliver effective couple therapy for partners with depression (Clulow, Reference Clulow2010a) as well as a competence framework for systemic therapies (Pilling et al., Reference Pilling, Roth and Stratton2010). Both competence models were based on the CBT competence framework developed by Roth and Pilling (Reference Roth and Pilling2007) and bring together the competences and skills identified within a range of manuals that are evidence-based and likely to be effective in treating depression (Clulow, Reference Clulow2010b) and other mental health difficulties (Pilling et al., Reference Pilling, Roth and Stratton2010). The frameworks have the same five domains as Roth and Pilling’s framework (Reference Roth and Pilling2007): generic therapeutic competences (e.g. knowledge of depression), basic competences (e.g. knowledge of sexual functioning in couples, how depression manifests in couples), specific competences (e.g. techniques that engage a couple), specific applications (e.g. BCT) and metacompetences (e.g. using clinical judgement when implementing the therapy).
The need for valid measures
As the first post-qualification CBT course accredited by the BABCP, the BCT training course needed to ensure that therapists completing the training and achieving BCT accreditation had reached a defined standard of competence. The most commonly used method of assessing competence within training courses and routine practice is observer rating of therapists’ in-session performance using a competence rating scale (Muse and McManus, Reference Muse and McManus2013); such scales therefore need to be reliable and valid (Muse and McManus, Reference Muse and McManus2013). Within couple therapy, competency rating scales are scarce. An extensive literature search identified only two competency rating scales for a couple therapy context, and neither was appropriate for assessing BCT competence specifically. The first, the ‘Couple Therapy for Depression Competency Adherence Scale’, was developed by the Tavistock Relationship workgroup who deliver couple therapy training within IAPT (Hewison, Reference Hewison2011). Although the scale is long (41 items), it covers only one of the five suggested competence domains (i.e. specific couple therapy techniques; Clulow, Reference Clulow2010a) and includes a comprehensive list of techniques used across all couple therapy for depression approaches (Clulow, Reference Clulow2010b). Not all competencies are expected to be observed during one session: certain competencies relate to different stages of therapy, and others are mutually contradictory because they belong to different therapeutic interventions (Hewison, Reference Hewison2011). This suggests that an assessor would need knowledge of all the different approaches to couple therapy to rate a trainee reliably, so using this scale would not be feasible within a BCT-specific training course. Additionally, no psychometric data could be found on how the scale was developed or on its reliability and validity.
The second scale identified was the ‘Behavioral Couple Therapy Competence Rating Scale’ (Jacobson et al., Reference Jacobson, Christensen, Prince, Cordova and Eldridge2000). This scale was used as part of two larger randomised controlled trials (Christensen et al., Reference Christensen, Atkins, Berns, Wheeler, Baucom and Simpson2004; Jacobson et al., Reference Jacobson, Christensen, Prince, Cordova and Eldridge2000) comparing traditional behavioural couple therapy (TBCT; Jacobson and Margolin, Reference Jacobson and Margolin1979) and integrative behavioural couple therapy (IBCT; Christensen and Jacobson, Reference Christensen and Jacobson1998). The scale was developed for those trials to ensure that therapists did not display a bias towards the newer treatment, and it therefore focused exclusively on TBCT. However, the scale provides only brief anchor points on the scoresheet and no accompanying manual, so only a specialist consultant observing several of a therapist’s sessions would be able to judge whether competence had been achieved (Jacobson et al., Reference Jacobson, Christensen, Prince, Cordova and Eldridge2000); in addition, the scale did not focus on addressing depression within an interpersonal context. Within routine practice a more practical approach is needed, allowing trainers or supervisors to measure the competence demonstrated in a single therapy session and allowing those other than specialists within the field to apply the scale. Furthermore, the Behavioral Couple Therapy Competence Rating Scale does not cover all aspects of BCT (Epstein and Baucom, Reference Epstein and Baucom2002), and no psychometric data on the validity or reliability of the scale were found.
BCT is a particular approach to couple therapy for depression requiring specific techniques to be used in a competent manner (Baucom et al., Reference Baucom, Epstein, Kirby, LaTaillade, Gurman, Lebow and Snyder2015). Therefore, neither of the measures mentioned above could effectively measure therapist competence in the BCT model. BCT requires a dedicated measure capturing the general skills of couple therapy and the specific skills of BCT, as well as adherence to the BCT model. As such, developing reliable, valid and usable methods for assessing the competence with which BCT is delivered is crucial to the continued progression of the field. However, future research needs to strike a balance between the need for reliable and valid assessments of therapist competence and the limits on resource availability within routine practice. Cost-effective methods of assessing competence, which can be utilised across a range of practice settings, need to be developed further (Muse and McManus, Reference Muse and McManus2013).
Scale development
The BCTS-D (Behavioural Couple Therapy Scale for Depression) is based on the established and widely used ‘Cognitive Therapy Scale-Revised’ (CTS-R; Blackburn et al., Reference Blackburn, James, Milne, Baker, Standart, Garland and Reichelt2001). The rationale was twofold: first, the CTS-R fitted well with the defined couple competences, which were based on a CBT competence model (Clulow, Reference Clulow2010a); and second, adapting an established measure was a cost- and time-effective way to define generic competences into which model-specific BCT competences could be integrated.
An expert group consisting of one of the founders of BCT (D.H. Baucom; see Epstein and Baucom, Reference Epstein and Baucom2002) and several key members of the BCT group in the UK adapted the CTS-R (Blackburn et al., Reference Blackburn, James, Milne, Baker, Standart, Garland and Reichelt2001), an observation-based rating scale, to a couple context, creating the BCTS-D (Corrie, Fischer, Worrell, & Baucom, n.d.; see Appendix A in Supplementary material). The CTS-R was modified to be appropriate for rating a therapist’s degree of competence in BCT when working with a couple where at least one partner is experiencing depression. A comprehensive manual providing detailed descriptions of each competence level for every item was also developed. Each item of the CTS-R was rephrased to fit the couple context, two new items (‘Focus on Depression in Context’ and ‘Facilitating Couple Communication’) were added, and the homework item was split into two items (reviewing homework and setting new homework; see Table 1). The scale thus consists of 15 items. Consistent with the CTS-R, each item is rated on a 7-point Likert scale (scored 0–6) and is underpinned by Dreyfus and Dreyfus’ (Reference Dreyfus and Dreyfus1986) model of competence. The maximum total BCTS-D score is 90, and the overall passing threshold was set at a score of 45 (i.e. 50%, the same threshold as for the CTS-R). Finally, the BCTS-D was designed for rating both audio and video recordings of active treatment sessions (i.e. excluding assessment and ending sessions).
Table 1 notes: (1) homework item split into two items; (2) item added to the BCTS-D.
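To make the scoring concrete, the sketch below totals a set of item ratings and applies the 50% passing threshold. It is purely illustrative and not part of the published scale materials; the 0–6 range per item follows from the 7-point scale and the stated maximum total of 90.

```python
# Illustrative scoring sketch (not part of the published BCTS-D materials).
# Each of the 15 items is assumed to be rated 0-6, consistent with the
# 7-point scale and the stated maximum total of 90.

from typing import Sequence

N_ITEMS = 15
MAX_PER_ITEM = 6                      # 7-point Likert scale scored 0-6
MAX_TOTAL = N_ITEMS * MAX_PER_ITEM    # 90
PASS_THRESHOLD = MAX_TOTAL // 2       # 45, i.e. 50%


def score_bcts_d(item_ratings: Sequence[int]) -> dict:
    """Return the total BCTS-D score and whether it meets the passing threshold."""
    if len(item_ratings) != N_ITEMS:
        raise ValueError(f"expected {N_ITEMS} item ratings, got {len(item_ratings)}")
    if any(not 0 <= r <= MAX_PER_ITEM for r in item_ratings):
        raise ValueError("each item rating must lie between 0 and 6")
    total = sum(item_ratings)
    return {"total": total, "max_total": MAX_TOTAL, "passes": total >= PASS_THRESHOLD}


# A therapist rated 3 on every item scores 45 and just reaches the threshold.
print(score_bcts_d([3] * 15))  # {'total': 45, 'max_total': 90, 'passes': True}
```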
Study design
In this study, BCT supervisors and BCT trainees were initially asked to use the BCTS-D to rate the competence demonstrated in an audio recording of a BCT session. The study then consisted of two phases. First, formal feedback about the BCTS-D was collected from both BCT supervisors (i.e. experts in the field) and trainees (i.e. relative novices who are likely to receive feedback on the BCTS-D and to use the tool to rate their own competence). This feedback was used to examine content validity (i.e. adequate representation of BCT competence), face validity (i.e. credibility and plausibility of items measuring BCT competence) and perceived usability. Second, a focus group was conducted and qualitative feedback was collected to gain more in-depth feedback on the experience of using the BCTS-D and its usefulness.
Method
Procedure
A workshop for BCT supervisors and trainees was organised at which the BCTS-D was introduced and explained during a short presentation. Thereafter, supervisors and trainees were asked to individually rate the competence of a BCT therapist on the BCTS-D by listening to an audio recording of a single therapy session. The session selected for rating was a mid-treatment session with a couple in which one partner had been diagnosed with depression. Participants were sent the BCTS-D and the accompanying manual in advance, were asked to familiarise themselves with their content, and were advised to use the manual for further guidance when undertaking their ratings. After using the scale, participants completed a detailed feedback questionnaire exploring the content and face validity of the BCTS-D as well as its usability.
In the last part of the workshop, a focus group was conducted and qualitative data were collected to gain more in-depth feedback on the participants’ experiences of using the BCTS-D and its usefulness. An information sheet about the study was included and written consent was obtained.
Participants
The optimal number of participants for analysing the content validity of a rating scale depends on a range of factors such as length and style of the scale and practical considerations including the availability of experts. However, it is generally agreed that using more than five participants facilitates detection and exclusion of rater outliers and increases the robustness of ratings (Haynes et al., Reference Haynes, Richard and Kubany1995).
The study recruited BCT supervisors and trainees (see Table 2 for demographic information) to support the scale development, to gain feedback from the target population and to ensure that the scale was usable for a variety of stakeholder groups (Brewer and Hunter, Reference Brewer and Hunter2006; Campanelli et al., Reference Campanelli, Martin and Rothgeb1991). Gaining the perspectives of both groups was a way of ensuring that the measure was fit for purpose for future groups of trainees and supervisors, whether used for formal evaluation, self-assessment or reflection. All participants had either previously completed a BCT training course or were in the process of completing one at postgraduate level. Participants were invited to the training day as part of their continuing professional development. An email invitation was sent to 19 supervisors and 44 trainees, of whom six supervisors (32%) and 14 trainees (32%) attended.
Part 1: Evaluation of content and face validity and perceived usability of the BCTS-D
Material
Content and face validity
Participants were asked to rate the relevance and clarity of each item (where ‘1’ was not relevant/clear, ‘2’ was somewhat relevant/clear, ‘3’ was quite relevant/clear and ‘4’ was very relevant/clear). A content validity index (CVI; Lynn, Reference Lynn1986) was then calculated for each item as the percentage of participants who rated the item as both relevant and clear (i.e. a rating of 3 or 4 on the 4-point scale). This was done separately for BCT supervisors and trainees, as the calculation of the CVI is usually based on expert feedback.
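As a minimal illustration of this calculation (using hypothetical ratings rather than the study data), the CVI for a single item could be computed as follows:

```python
# Illustrative CVI calculation for a single item: the percentage of raters
# who scored the item 3 or 4 for BOTH relevance and clarity.
# The ratings below are hypothetical and are not the study's data.

def item_cvi(relevance, clarity):
    """Return the percentage of raters scoring >= 3 on both relevance and clarity."""
    agreed = sum(1 for r, c in zip(relevance, clarity) if r >= 3 and c >= 3)
    return 100 * agreed / len(relevance)


# Hypothetical ratings from six raters (1: not relevant/clear ... 4: very relevant/clear).
relevance_ratings = [4, 4, 3, 4, 3, 4]
clarity_ratings = [4, 3, 3, 4, 2, 4]

print(f"Item CVI: {item_cvi(relevance_ratings, clarity_ratings):.0f}%")  # 83%
```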
Usability questions
Participants were asked if the scale was useful to judge competence (1: not useful, to 4: very useful) and to rate the overall style, appearance and layout of the scale (1: poor, to 4: very good). They were also asked how easy it was to use the scale (1: not easy, to 4: very easy), if the scale gave the opportunity for useful feedback (1: not useful, to 4: very useful), and how appropriate they found the scoring system (1: not appropriate, to 4: very appropriate). If participants provided a score of 3 or less, they were asked to provide feedback on how they believed that the scale could be improved.
Part 2: Qualitative evaluation of scale utility
Material
The aim of the focus group was to collect in-depth feedback on the strengths and weaknesses of the scale. A semi-structured interview schedule was used to facilitate the discussion. Within the schedule, emphasis was placed on facilitating discussion of both positive and negative responses to the BCTS-D using open-ended questions. Where negative feedback was provided, participants were asked to share their views on whether the issue could be resolved and, if so, how.
The qualitative feedback provided by participants in the questionnaire and focus group was analysed using thematic analysis (Braun and Clarke, Reference Braun and Clarke2006). Thematic analysis was chosen because it is flexible and can be used as a descriptive approach reflecting the reality of participants (Braun and Clarke, Reference Braun and Clarke2006). The process consisted of six phases: familiarisation with the data, generating codes, searching for patterns based on the initial coding, reviewing themes, defining and naming themes, and producing the report. The analysis did not provide in-depth description and interpretation of the data (i.e. no attempt was made to identify the broader meanings and implications of the themes or to relate them to previous literature), as the intention was simply to identify the ‘surface level’ meaning of participants’ comments, such as problems with the scale and any suggested improvements (Braun and Clarke, Reference Braun and Clarke2006).
Results
Part 1: Evaluation of content and face validity and perceived usability of the BCTS-D
Content and face validity
Content validity scores for both BCT supervisors and trainees are presented in Table 3. Both groups found all items in the scale relevant and clear (a rating of 3 or 4 on the 4-point scale), and the CVI (Lynn, Reference Lynn1986) was above the suggested threshold of 70% for all items. Only one supervisor out of six, and none of the trainees, indicated that important aspects of BCT competence were missing from the scale. Mann–Whitney U-tests revealed no significant differences between the relevance or clarity scores assigned by trainees and supervisors, indicating that the two groups did not differ in their views. Thus, all items in the scale were viewed as having acceptable content validity by both supervisors and trainees.
Table 3 notes: (1) CVI: content validity index, the percentage of participants who rated the item as 3 or 4 on the 4-point scale (1: not, to 4: very) for both relevance and clarity. (2) As the data were not normally distributed, non-parametric Mann–Whitney U-tests were performed to examine differences between the scores assigned by trainees and supervisors.
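As an illustration of this kind of group comparison (again with hypothetical ratings rather than the study data), a Mann–Whitney U-test could be run as follows:

```python
# Illustrative Mann-Whitney U-test comparing supervisor and trainee ratings
# of a single item. The ratings are hypothetical, not the study's data.

from scipy.stats import mannwhitneyu

# Hypothetical relevance ratings (1: not relevant ... 4: very relevant).
supervisor_ratings = [4, 4, 3, 4, 3, 4]                        # six supervisors
trainee_ratings = [3, 4, 4, 3, 4, 3, 4, 4, 3, 3, 4, 4, 3, 4]   # fourteen trainees

u_stat, p_value = mannwhitneyu(supervisor_ratings, trainee_ratings, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")  # p > .05 suggests no significant group difference
```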
Usability
All participants rated the scale as at least ‘quite useful’ for judging BCT competence, and the opportunity the scale provides for giving feedback was also rated as at least ‘quite’ useful. In addition, the scale was rated as at least ‘quite’ easy to use, as having at least ‘good’ style, appearance and layout, and as having an at least ‘quite’ appropriate scoring system. Mann–Whitney U-tests revealed no significant differences between the usability ratings assigned by trainees and supervisors, indicating that the two groups did not differ in their views (see Table 4).
Part 2: Qualitative evaluation of scale utility
Thematic analysis identified major themes in the qualitative feedback on how to improve the scale. Additionally, the majority of participants commented positively on the development of the scale. Specifically, participants believed that the scale would aid self-reflection and personal development and could be used as a self-directed learning tool to facilitate a more objective review of one’s ability, while being reminded of the core competences that enable fidelity to the BCT model. Furthermore, it was stated that the scale is very thorough and gives the opportunity to rate a therapist’s competence while accommodating the couple’s needs (i.e. being highly directive if needed). However, these comments were not included in Table 5 as they did not add anything to how the scale could be improved further.
From the coding and the thematic analysis of the written feedback and the focus group data, four main themes emerged: (a) need to capture competence better; (b) complexity of competence assessment ratings; (c) improve clarity on how to use the scale (i.e. revisions needed to make rating easier); and (d) overlap of items (i.e. aspects of competence assessed in one item overlap with aspects assessed in another item). Examples of comments are provided in Table 5, with the full thematic analysis and codes provided in Appendix B in Supplementary material.
The revisions to the BCTS-D made in response to each coding, or an explanation as to why the issue in the theme was not resolved, are also outlined in Table 5.
Discussion
The results of these two phases of the study suggest that the BCTS-D may have good validity and usability. The BCTS-D received encouraging feedback from the research participants. The majority of BCT supervisors and trainees found all items on the scale relevant and clear, and only a very small percentage of participants indicated that items in the scale inappropriately overlapped with other items. Only one supervisor of the 20 participants indicated that important aspects of BCT competence were missing from the scale. Overall, supervisors and trainees indicated that they found the scale useful and easy to use, with an appropriate scoring system. The qualitative feedback reflected a similar picture, with many positive comments, for example that the scale would be useful for self-reflection and personal development and could aid an objective review of one’s competences.
Some feedback indicated confusion regarding how to use the BCTS-D and its manual. At this point, it is uncertain whether this is due to the manual being insufficiently clear or to participants not having had sufficient opportunity to familiarise themselves with it prior to participating in the study. These findings may, however, be representative, in that those using the BCTS-D in research, training and routine practice settings might also find it unduly arduous to read the 98-page manual. Consequently, participants identified the opportunity to practise and become familiar with the scale before using it as important. Ideally, a training day would have taken place before the feedback workshop to ensure that all participants had had time to read the manual; however, due to financial and time constraints, this proved unrealistic. Another explanation for certain areas of confusion could be that BCT is a principle-driven approach using more formulation-based techniques, and many of the participants had only recently started their BCT training after working in IAPT services, which are typically organised around delivering treatment protocols.
A further recommendation was to add a contextual framework to the BCTS-D. The failure to take therapeutic context into account when measuring competence has been criticised by Waltz and colleagues (Reference Waltz, Addis, Koerner and Jacobson1993), who stated that it undermines the validity of any scale. It could lead to therapists being penalised and rated as less competent for treating more complex cases (Rakovshik and McManus, Reference Rakovshik and McManus2010). To counteract this, use of the BCTS-D requires that therapists submit a summary sheet including demographic information such as each partner’s age, a description of the couple’s presenting problems, and the aims of the session (see Appendix C in Supplementary material). This information enables the rater to evaluate the session in its broader context.
Despite some issues that will need considering in future research, the scale was well-liked by the participants in the group, who thought it a beneficial development for learning and self-reflection, especially within the BCT training course.
Future development of the BCTS-D
The BCTS-D will require further evaluation, as the sample size for this pilot analysis of validity and utility was small. Independent use and verification of the scale will be important because the participant sample was drawn from the BCT postgraduate diploma programme that developed the BCTS-D. It is also worth noting that the qualitative feedback collected in this study might have been influenced by demand characteristics. Any such effects would be more likely to have influenced trainees, who were less experienced and less senior than the supervisors; nonetheless, no differences in the questionnaire ratings assigned by trainees and supervisors were found. Most ratings were very favourable, and some aspects of the scale were rated 4 (i.e. very relevant and very clear) by all six supervisors. This raises the question of whether some of the feedback was positively skewed due to the participant sample: all participants invited to the workshop were from the BCTS-D training group, and those who chose to attend may have been particularly interested in the development of the BCTS-D and may have viewed it in a favourable light. There was, however, one non-significant trend (p = .051) whereby trainee participants rated ‘the opportunity for useful feedback’ less favourably than the supervisors. The trainees’ average rating was still ‘quite useful’ (3.29), and no negative qualitative comments regarding this point were identified in the analysis to help explain the finding. Furthermore, steps were taken to guard against participants providing positively biased feedback (i.e. questionnaires were anonymised, and negative as well as positive comments were invited to help develop the scale further).
Scale development is an iterative process, with data collected in later stages being used to improve earlier steps (DeVellis, Reference DeVellis2012). The scale should therefore be modified in future, with further qualitative feedback collected and quantitative research conducted. Conducting a factor analysis would clarify how many constructs underlie the BCTS-D. In addition, future research should use several recordings, each rated by at least two raters, to explore the inter-rater reliability of the scale. Unfortunately, it was not possible within the time frame of this study to address the stringent data protection requirements of the National Health Service; hence therapy recordings could not be retained for competence ratings to be carried out at a later date by an independent rater. Moreover, it was beyond the scope of this study to determine the discriminant validity of the BCTS-D. Therapeutic competence is expected to increase over a year-long course as trainee therapists develop their skills (McManus et al., Reference McManus, Westbrook, Vazquez-Montes, Fennell and Kennerley2010; Williams et al., Reference Williams, Moorey and Cobb1991). Measuring discriminant validity would indicate whether the BCTS-D can provide a useful tool for measuring therapists’ progress within a BCT training programme, and this analysis should be included in future research.
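As a rough sketch of what such an inter-rater reliability analysis might look like once several recordings have been rated by two raters, an intraclass correlation coefficient could be computed, for example with the pingouin library; the scores below are hypothetical and are not data from this study.

```python
# Illustrative inter-rater reliability sketch using an intraclass correlation
# coefficient (ICC). All scores are hypothetical; this is not an analysis
# from the present study.

import pandas as pd
import pingouin as pg

# Hypothetical total BCTS-D scores: five recordings, each rated by two raters.
ratings = pd.DataFrame({
    "recording": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "rater":     ["A", "B"] * 5,
    "score":     [52, 49, 38, 41, 61, 58, 45, 47, 55, 53],
})

icc = pg.intraclass_corr(data=ratings, targets="recording", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])  # ICC2 (two-way random, absolute agreement) is often reported
```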
Research suggests that familiarisation through reading a manual is not enough to ensure good consistency between raters (Gordon, Reference Gordon2007; Reichelt et al., Reference Reichelt, James and Blackburn2003); thorough training is proposed as key to ensuring inter-rater reliability. One possibility would be to offer online training to aid familiarisation. Developing standardised training in how to use the BCTS-D, and evaluating its effect on the reliability and the perceived validity and usability of the scale, is also essential. Ideally, such training would include regular discussion and feedback sessions in which ratings of selected treatment sessions are calibrated, in order to increase inter-rater reliability.
Following the qualitative feedback, a summary sheet, which includes demographic information about the couple, their presenting problems, diagnosis, formulation, the number of sessions, what was done previously and the aims of the session, will be added to the official BCTS-D pack, as therapeutic context is an important factor when measuring competence (Stiles et al., Reference Stiles, Honos-Webb and Surko1998; Waltz et al., Reference Waltz, Addis, Koerner and Jacobson1993). Moreover, the manner in which this contextual information should influence ratings will need to be detailed in the manual.
Finally, to be able to use the scale to measure competence within the training for which it was designed (i.e. a postgraduate diploma in BCT), it will be important to establish an empirically informed cut-off point to determine when competence has been achieved. This will need to take the purpose of the assessment into account as, for example, the threshold to pass an introductory BCT training programme may be lower than the requirement at the end of a one-year BCT training course (Muse and McManus, Reference Muse and McManus2013). Future research could usefully examine whether the BCTS-D can discriminate between BCT trainees and experienced BCT therapists as a first step towards validating that cut-off point.
Conclusion
To ensure good quality treatment within routine practice, valid and reliable competence assessment tools are essential (Muse and McManus, Reference Muse and McManus2013). The newly developed BCTS-D fills this gap for BCT. This initial evaluation of the BCTS-D indicates good face validity, content validity and usability, and suggests that it is a useful tool for providing formative feedback and promoting self-reflection. The BCTS-D therefore appears to have good potential as a competence assessment scale within clinical practice, training settings and research studies. Several issues will need to be addressed in future research to optimise the use of the scale within a BCT training context. Furthermore, it will be essential to analyse the psychometric properties of the BCTS-D within a real-world clinical setting in order to confirm its validity and the reliable use of the scale. However, despite its limitations, this tool provides an important initial contribution to helping supervisors and trainees better understand, conceptualise and evaluate competence in this emerging specialism.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/S1754470X20000276
Acknowledgements
The authors sincerely acknowledge the team of authors (I.-M. Blackburn, I. A. James, D. L. Milne, C. Baker, S. Standart, A. Garland and F. K. Reichelt) who produced the Cognitive Therapy Scale-Revised (2001) upon which the BCTS-D was based. They kindly gave their permission to adapt the CTS-R.
Conflicts of interest
I. Rudolf von Rohr, S. Corrie, M. S. Fischer, D. H. Baucom, M. Worrell and H. Pote have no conflicts of interest with respect to this publication.
Financial support
This research received no specific grant from any funding agency, commercial or not-for-profit sectors.
Ethical statements
The authors have abided by the Ethical Principles of Psychologists and Code of Conduct as set out by the APA. Ethical permission to collect data within the Central London Training Centre was obtained from the Health Research Authority (HRA, IRAS project ID: 199914) and the Royal Holloway University of London Ethics Committee.
Key practice points
(1) To ensure good quality treatment within routine practice, valid and reliable competence assessment tools are essential (Muse and McManus, Reference Muse and McManus2013). The newly developed BCTS-D fills this gap for BCT.
(2) This initial evaluation of the BCTS-D indicates good face validity, content validity and usability, and suggests that it is a useful tool for providing formative feedback and promoting self-reflection. The BCTS-D therefore appears to have good potential as a competence assessment scale within clinical practice, training settings and research studies.
(3) Furthermore, it will be essential to analyse the psychometric properties of the BCTS-D within a real-world clinical setting in order to confirm its validity and the reliable use of the scale.
(4) However, despite its limitations, this research makes an important initial contribution to helping supervisors and trainees better understand, conceptualise and evaluate competence in this emerging specialism.