
Novel Unsupported and Empirically Supported Therapies: Patterns of Usage Among Licensed Clinical Social Workers

Published online by Cambridge University Press: 09 September 2011

Monica Pignotti*
Affiliation: Florida State University, Tallahassee, USA

Bruce A. Thyer
Affiliation: Florida State University, Tallahassee, USA

*Reprint requests to Monica Pignotti, Florida State University, Social Work, c/o Bruce Thyer, 296 Champions Way, Tallahassee 32306-2024, USA. E-mail: pignotti@att.net; bthyer@fsu.edu

Abstract

Background: While considerable attention has been focused in recent years on evidence-based practice, less attention has been placed on clinical social workers’ choice to use ineffective or harmful interventions, referred to in the present paper as Novel Unsupported Therapies (NUSTs). Method: The present study surveyed 400 Licensed Clinical Social Workers (LCSWs) across the United States in order to determine the extent of their usage of NUSTs, as well as their usage of conventional therapies that lacked support and empirically supported therapies (ESTs). Reasons for selecting interventions were also assessed. Results: While the vast majority (97.5%) reported using some form of EST, 75% of our sample also reported using at least one NUST. Logistic regression analysis revealed that NUST usage was related to female gender and trauma specialization. A split plot ANOVA revealed that respondents rated positive clinical experience higher than published research as a reason for selecting an intervention. LCSWs with a CBT theoretical orientation rated research evidence more highly than those of other theoretical orientations. However, even within the group of LCSWs with a CBT orientation, clinical experience was rated more highly than research evidence. Conclusions: Implications for practice are discussed.

Type: Research Article

Copyright © British Association for Behavioural and Cognitive Psychotherapies 2011

Introduction

The majority of psychotherapy providers in the United States are clinical social workers, who outnumber both psychiatrists and psychologists (Hartston, 2008). Over the past decade, considerable attention has been focused on the implementation of evidence-based practice (EBP) in social work and other mental health professions. As mental health professionals, clinical social workers are held accountable to their clients to select interventions most likely to help them and to accurately represent the extent of such evidence or lack thereof (National Association of Social Workers, 1999; Gambrill, 2006; Myers and Thyer, 1997). Consequently, increasing attention is being placed on the importance of properly training social workers in the use of EBP (Parrish and Rubin, 2011; Rubin and Parrish, 2007).

EBP has been defined by its developers as the “integration of the best research evidence with clinical expertise and [client] values” (Sackett, Straus, Richardson, Rosenberg and Haynes, 2000, p. 1). Clinical expertise refers to relationship skills, past clinical experience, and the identification of unique client circumstances and individual risks and benefits; client values refer to the “unique preferences, concerns, and expectations each client brings to the encounter” (p. 1). EBP is “not a static state of knowledge but rather represents a constantly evolving state of information” (Thyer, 2004, p. 168). The present study focuses on the first part of the definition, the selection of interventions that have the best research evidence for the client's problems, as well as its opposite, the selection of interventions that lack such evidentiary support, and possible reasons for such choices.

While a great deal of effort has been spent in identifying interventions that have empirical support, until recently little attention has been focused on interventions that lack empirical support, make unsupported claims, and in worst case scenarios may even do more harm than good. For a few exceptions illustrating social workers exposing ineffective or harmful interventions see Stocks (1998), Sarnoff (2001), Pignotti and Thyer (2009a) and Thyer and Pignotti (2010). Lilienfeld (2007) discussed a number of reasons why it is important to identify ineffective or harmful interventions, even if those being harmed are in the minority. A primary principle included in all mental health and health professional codes of ethics is primum non nocere, which means “First, do no harm” (American Psychological Association, 2002; National Association of Social Workers, 1999). Lilienfeld points out that even though clients, on average, benefit from various forms of psychotherapy, some treatments “can produce harm in a nontrivial number of individuals” (p. 54).

Lilienfeld's concerns have been elaborated upon recently by others who share them (Barlow, 2010; Castonguay, Boswell, Constantino, Goldfried and Hill, 2010; Dimidjian and Hollon, 2010). Dimidjian and Hollon indicated that interventions can do harm in a number of ways, directly or indirectly. Even interventions that, in and of themselves, appear to do no harm may result in opportunity cost (Lilienfeld, 1998, 2002). That is, such interventions might waste an individual's or an organization's time and money. Moreover, if such interventions are chosen in lieu of interventions that have empirical support, the client could be indirectly harmed by being deprived of an intervention from which he or she could benefit, or at least one that could prevent the further deterioration that may occur with an intervention that lacks support.

Interventions that lack adequate empirical support and yet make unsupported claims for their efficacy will hereafter be referred to as Novel Unsupported Therapies (NUSTs). New interventions that are in the process of being properly researched and that do not make claims beyond the data are excluded from the category of NUSTs, by the definition used in the present study. An example of a NUST would be Thought Field Therapy, a novel therapy that employs finger tapping on purported acupressure points on the body in specified sequences, claims to cure a variety of psychological and medical problems in minutes, and makes unsupported claims of superiority to existing empirically supported interventions (Callahan and Trubo, 2001). Conversely, not all interventions that lack empirical support are necessarily new. Conventional interventions and assessment methods that are widely accepted can nevertheless lack empirical support. Such interventions will, in the present study, be referred to as Conventional Unsupported Therapies (CUSTs). An example of a CUST would be dream interpretation, a common practice used in conventional therapy that nevertheless lacks credible empirical support.

Before social work professionals can attempt to change the practices of clinicians by providing them with the knowledge, the skills and the motivation to select the best practices for their clients, we must first determine the extent to which the use of interventions that lack empirical support is a problem. In an exploratory survey of 191 licensed clinical social workers throughout the US, Pignotti and Thyer (2009b) found that 75% of respondents reported having used at least one NUST within the past year. The study did not classify or specifically identify Empirically Supported Therapies (ESTs). The most commonly used NUSTs included Attachment Therapy (26.5%), Critical Incident Stress Debriefing (22.9%), Critical Incident Stress Management (21.2%), Body-Centered Psychotherapy (21.8%), and EMDR for conditions other than PTSD (15.9%). Although the first three are interventions classified as potentially directly harmful (Lilienfeld, 2007), a caveat to keep in mind is that specifics on what clinicians actually did with their clients were not reported; hence not all of what was reported was necessarily harmful.

In addition to identifying what interventions clinicians are choosing, it is important to understand their reasons for selecting interventions, in order to determine the extent to which clinicians value research findings in making such choices. Several surveys of mental health professionals have found that clinicians usually value clinical experience over empirical evidence in selecting interventions (Lucock, Hall and Noble, 2006; Orlinsky, Botermans and Ronnestad, 2001; Riley et al., 2007; Stewart and Chambless, 2007; von Ranson and Robinson, 2006). Orlinsky et al. included a minority of clinical social workers among a variety of mental health professionals, but the other samples consisted of clinical psychologists. Cook, Schnurr, Biyanova and Coyne (2009) conducted a web-based survey of 2607 Canadian and US psychotherapists to determine influences on their practice choices. They found that the factors most influential on current practice were significant mentors, books, training in graduate school, and discussions with colleagues, and that “empirical evidence had little influence on the practice of mental health providers” (p. 671). A qualitative study of school social workers suggested that some participants appeared to see evidence-based practice and the practical necessities of clinical experience as being in conflict (Bates, 2006).

The present study expands upon our pilot study (Pignotti and Thyer, 2009b) by increasing the sample size, employing an expert review for the classification of therapies as NUSTs and CUSTs, and adding a new category, Empirically Supported Therapies (ESTs), which was not a variable included in the previous study. Participants in the present survey were also asked to rate how important various reasons (i.e. positive clinical experiences, research) were in the selection of interventions in their practices, data not included in the previous report. Based on prior studies, we hypothesized that LCSWs would rate clinical experience as more important than evidence from research in selecting interventions. We also examined the relationship of sociodemographic and practice characteristics with the use of novel unsupported and empirically supported therapies, and conducted a logistic regression analysis on variables found to be significant in an exploratory bivariate analysis.

Method

Data collection and participants

Permission was obtained for this study from Florida State University's Institutional Review Board (IRB). The current sample comprised data from the 191 participants in an earlier pilot study (Pignotti and Thyer, 2009b), hereinafter referred to as round one. These data were combined with data collected from 209 additional participants (round two), resulting in a total of 400 participants from both rounds, with the combined data reported here. Dillman's (2007) methods were used for invitations to participate, which included an e-mailed pre-letter informing participants about the survey, an initial invitation, and three follow-up requests sent to nonrespondents. Participants were assured that their responses would be anonymous and that data would only be reported as summaries in which no individual's response could be identified.

Both rounds of data collection consisted of study participants who were LCSWs located across the US who advertised their services on the Internet and published their electronic mail addresses on the website www.helppro.com. This database of LCSWs is linked to the National Association of Social Workers’ (NASW) website. This website, described as “the most comprehensive, user-friendly National Social Worker finder” (Helppro, n.d., para 1), was selected as appropriate for this study because LCSWs from a wide variety of specialties listed e-mail addresses that were publicly accessible. Approximately 25% of LCSWs in that database did not have published e-mail addresses and thus had to be excluded. Because the addresses were accessible only through a search by zip code, the database was first searched by randomly selected zip codes. Our goal was to obtain as complete a list as possible, so we concluded the search by zip code when we were no longer getting any new names (after 2000 randomly selected zip codes). The resulting list consisted of 2200 potential participants.

For round one, 400 participants were randomly selected from the aforementioned list of 2200 LCSWs and invited, via e-mail, to participate. However, 26 of the 400 e-mail addresses were returned as undeliverable, resulting in 374 participants who presumably received an invitation to participate, of whom 191 responded. For round two, an additional 600 randomly selected participants from the original compiled list were invited to participate. Of those invitations, 80 were undeliverable, leaving a total of 520 participants who presumably received the invitation to participate, with 209 responding; thus the response rate for the combined samples was 45%. A total of 33 of the 400 respondents completed only the first page of the survey. The first page contained the following variables: licensure, state, theoretical orientation, years in practice, area of specialty, practice setting, and age group of clients. Thus, analyses that contained variables that were not on the first page of the survey have missing data, as indicated by the lower numbers reported in the tables for those analyses. The variables beyond the first page of the survey included demographics, interventions used in practice, and reasons for using the interventions. Analysis of the completers versus the noncompleters revealed no significant differences on the first-page variables, with the exception of practice setting. Completers were more likely to be in private practice than noncompleters (see Pignotti and Thyer, 2009b, for a full report of this analysis).
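For readers who wish to verify the combined response rate, the arithmetic is simply the number of respondents divided by the number of delivered invitations; a minimal sketch using the figures reported above:

```python
# Combined response rate across both rounds (figures as reported above).
delivered_round1 = 400 - 26   # invitations sent minus undeliverable = 374
delivered_round2 = 600 - 80   # = 520
respondents = 191 + 209       # = 400

response_rate = respondents / (delivered_round1 + delivered_round2)
print(f"{response_rate:.1%}")  # 44.7%, reported as 45%
```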

Sample description

Demographics for the present study (round one and round two participants combined) are presented in Table 1. As expected with surveys of social workers, 77% of the respondents were female. Participants ranged in age from 27 to 82 years, with an average age of 53 years (SD = 10). It is noteworthy that 15% were over 62 years, a considerably higher percentage than the 5% reported by an NASW practitioner survey (National Association of Social Workers, 2003). Consistent with earlier surveys of clinical social workers in private practice (Strom, 1994), 94% were Caucasian. Respondents practiced in 39 different states; one additional respondent was in the military practicing overseas. The largest proportion of participants (44%) was from the Northeast.

Table 1. Sample description

Measures

Survey instrument. The survey instrument included questions on demographics (age, sex, race, religion, geographic location, education); practice characteristics (years in practice, area of specialization, region, type of practice setting, age group of clients, and theoretical orientation); therapies used in practice; and reasons for choosing interventions. Participants were asked in two different ways about therapies used in practice. First, they were asked to name the three interventions they used most in their practices. Next, participants were presented with a list of commonly used interventions and asked to indicate whether they had used them within the past year. The interventions presented were selected on the basis of citation in the social work and other professional literatures, mention among commonly used unorthodox therapies (derived from Lilienfeld, 2007; Lilienfeld, Lynn and Lohr, 2003; Norcross, Garofalo and Koocher, 2006), and advertisement by LCSWs on the website www.helppro.com. Three assessment procedures used by LCSWs (Myers-Briggs Type Indicator, Enneagram and Genogram) were also included.

In order to determine reasons for using interventions, participants were presented with a list of possible reasons and asked to rate their importance on a scale of 1 to 7, where 1 is “not at all important” and 7 is “very important.” A copy of our survey instrument is available from the senior author.

Creation of categories for types of interventions and expert review. Composite variables were created for Novel Unsupported Therapies (NUSTs), Conventional Unsupported Therapies (CUSTs), combined NUSTs and CUSTs (CNUSTs), and Empirically Supported Therapies (ESTs) by summing the affirmative responses for the therapies falling into each category. Both continuous (number used) and dichotomous (used or did not use) versions of each of these variables were created. The following definitions were used (an illustrative sketch of the composite-variable construction follows the definitions):

ESTs: Therapies that meet the American Psychological Association Division 12 criteria (Chambless et al., 1998) for empirically supported therapies, as either a probably efficacious or a well-established treatment, and that do not have key proponents who make claims that go beyond the evidence.

NUSTs: Therapies that are relatively new, are not widely accepted and/or widely taught in graduate programs, have not met the American Psychological Association's Division 12 criteria as an EST, and have major proponents who make claims that go beyond the evidence, based only on anecdotes, uncontrolled case reports, and/or testimonials. Therapies that do meet the Division 12 criteria as an EST for certain conditions, but whose proponents make claims that go beyond the evidence, were also considered NUSTs (e.g. a therapy that meets the EST criteria for PTSD but makes unsupported claims of success with other conditions).

CUSTs: Therapies that are conventionally accepted and taught, yet do not meet the criteria for ESTs. These are therapies that are accepted on the basis of favorable clinical experience or authority but lack published clinical trials to support their efficacy.
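The composite-variable construction described above can be illustrated with a short sketch; the data frame layout and column lists below are hypothetical assumptions, not the authors' actual data or code.

```python
import pandas as pd

# df: one row per respondent, one 0/1 column per therapy on the survey checklist.
# est_items, nust_items and cust_items are hypothetical lists of column names,
# one list per category as assigned by the expert review (Table 2).
def add_composites(df: pd.DataFrame, est_items, nust_items, cust_items) -> pd.DataFrame:
    for label, items in [("EST", est_items), ("NUST", nust_items), ("CUST", cust_items)]:
        df[f"{label}_count"] = df[items].sum(axis=1)                   # continuous: number used
        df[f"{label}_used"] = (df[f"{label}_count"] > 0).astype(int)   # dichotomous: used or not
    # Combined novel and conventional unsupported therapies (CNUSTs)
    df["CNUST_count"] = df["NUST_count"] + df["CUST_count"]
    df["CNUST_used"] = (df["CNUST_count"] > 0).astype(int)
    return df
```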

Categorization of therapies for the present study was determined by an expert review, based on recommendations by Springer, Abell and Hudson (2002). A panel of two expert reviewers, in addition to one of the present authors (Thyer), was presented with the list of therapies used in the survey, accompanied by the above definitions of NUSTs, CUSTs, and ESTs. The two expert reviewers were selected because they are considered experts in the area of science and pseudoscience in mental health practice, having published extensively in that area. The expert reviewers indicated, by placing an “X” in the appropriate column, whether they thought the therapy fit the construct. The option of “not familiar with this therapy” was also offered. Inter-rater agreement was calculated by summing the number of agreements, dividing that by the number of agreements plus disagreements, and multiplying by 100 to obtain a percentage (Bloom, Fischer and Orme, 2006). This calculation was made between the two expert reviewers and between each of them and Thyer. Inter-rater agreement ranged from 95–98% for ESTs, 84–94% for NUSTs, and 84–92% for CUSTs. Therapies that received at least two votes for a particular category were classified in that category. In some cases, because one of the reviewers was not familiar with a particular therapy, there were ties. The ties were resolved through the present authors forming a rationale and reaching consensus. The resulting classifications of NUSTs, CUSTs, and ESTs are presented in Table 2.
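The percentage-agreement calculation described above (Bloom, Fischer and Orme, 2006) amounts to agreements divided by agreements plus disagreements, multiplied by 100. The sketch below is ours; excluding items a rater marked as unfamiliar is an assumption, as the original handling of those items is not specified.

```python
def percent_agreement(rater_a, rater_b, unfamiliar="not familiar"):
    # Keep only items that both raters classified (assumption: unfamiliar items are excluded).
    pairs = [(a, b) for a, b in zip(rater_a, rater_b)
             if a != unfamiliar and b != unfamiliar]
    agreements = sum(a == b for a, b in pairs)
    return 100.0 * agreements / len(pairs)

# Hypothetical ratings of five therapies by two raters:
print(percent_agreement(["EST", "NUST", "CUST", "EST", "NUST"],
                        ["EST", "NUST", "EST", "EST", "NUST"]))  # 80.0
```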

Table 2. Expert review classification of interventions and assessment procedures

Results

Practice characteristics

Practice characteristics for the sample are presented in Table 3 and areas of specialization are presented in Table 4.

Table 3. Practice characteristics

Table 4. Areas of specialization named by participants*

* Total percentages add up to >100% because respondents could name more than one area.

Most commonly reported interventions

Interventions used within the past year and the most frequently used interventions (respondents were asked to name three), as reported by respondents, are presented in Table 5. For interventions used within the past year, the top five ESTs and NUSTs are listed, along with the four CUSTs. The top 10 interventions written in by participants as most frequently used are also reported. These write-in interventions were not reported by category because the expert review panel classified only the prepared list of interventions, not the ones participants wrote in as most frequently used. A complete list of all interventions reported and their frequencies is available from the first author upon request.

Table 5. Most commonly reported interventions used within past year

Participant prevalence of usage of ESTs, NUSTs, CUSTs, and combined novel and conventional unsupported therapies (CNUSTs) is presented in Table 6. An overwhelming majority (97.5%) of participants reported using at least one intervention classified as an EST. However, three-quarters of respondents also reported using at least one NUST, and 86% reported having used at least one NUST or CUST within the past year.

Table 6. Participant usage of ESTs, NUSTs, and CUSTs

Reasons for selecting interventions and relationship to CBT theoretical orientation

The participant ratings of reasons for selecting interventions are presented in Table 7. The split plot ANOVA (similar to that conducted by Stewart and Chambless, 2007) of the relationship between reasons (clinical experience and favorable research in peer reviewed journals) and CBT theoretical orientation is presented in Table 8. Given that CBT-oriented interventions are, in general, more widely researched than those of other orientations (although there are exceptions), the purpose of this analysis was to determine whether CBT-oriented practitioners would consider research findings more important than practitioners of other theoretical orientations. The within-subject variables were the reasons for selecting interventions and the between-subject variable was CBT theoretical orientation. The hypothesis that clinical experience would be more highly rated than peer reviewed research was supported. Both clinical experience with results that held up over time and clinical experience with fast, positive results were rated significantly more highly than favorable research in peer reviewed journals as reasons for selecting interventions. Participants with a CBT theoretical orientation rated favorable research more highly than those of other orientations. However, even participants with a CBT orientation rated clinical experience more highly than favorable research.

Table 7. Reasons for selecting interventions

Table 8. Relationship between reasons for choosing interventions to theoretical orientation

*p < .01; **p < .001
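For readers who wish to reproduce this kind of analysis, a split plot (mixed) ANOVA with one within-subject factor (reason) and one between-subject factor (CBT orientation) can be run as sketched below. The use of the Python pingouin package and the column names are our assumptions; the authors do not report their software.

```python
import pandas as pd
import pingouin as pg

# ratings: long-format data, one row per respondent per reason, e.g. columns
#   id, cbt_orientation ("CBT" or "other"), reason, rating (importance, 1-7)
# (column names are hypothetical)
def split_plot_anova(ratings: pd.DataFrame) -> pd.DataFrame:
    return pg.mixed_anova(
        data=ratings,
        dv="rating",                # importance rating
        within="reason",            # within-subject factor: reason for selecting an intervention
        subject="id",               # respondent identifier
        between="cbt_orientation",  # between-subject factor
    )
```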

Relationship between demographic/practice characteristics and usage of NUSTs

Bivariate analyses were conducted on dichotomized usage of NUSTs with each demographic and practice characteristic. Chi-square analysis indicated that females were more likely than males to use NUSTs (χ² = 8.19, φ = .15, p < .01) and that participants who indicated subscribing to a new age, spiritual or nondenominational religion were more likely to use NUSTs than those of other religions (χ² = 4.81, φ = .12, p < .05). Additionally, participants who indicated a specialization in trauma were more likely to use NUSTs than those who did not (χ² = 15.85, φ = .21, p < .01). None of the other demographic or practice characteristics was statistically significant in relation to dichotomized usage of NUSTs.
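As an illustration of these bivariate tests, the sketch below computes a chi-square statistic and phi coefficient for a 2 x 2 table of gender by dichotomized NUST usage; the counts are hypothetical placeholders, not the study data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 counts: rows = female/male, columns = used a NUST / did not
table = np.array([[230, 78],
                  [55, 37]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())  # effect size for a 2 x 2 table
print(f"chi2 = {chi2:.2f}, phi = {phi:.2f}, p = {p:.4f}")
```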

Logistic regression analysis was then conducted on the demographic and practice characteristics that were statistically significant in the bivariate analysis, to determine predictors of NUST usage (used or not). The results of this analysis are presented in Table 9. Females were more than twice as likely to use NUSTs as males, and participants who indicated a specialization in trauma were more than three times as likely to use NUSTs as those who did not. Religion was no longer significant after controlling for gender and trauma specialization.

Table 9. Logistic regression analysis for variables predicting NUST usage

*p < .05; ***p < .001
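A sketch of the kind of logistic regression summarized in Table 9 is shown below, using the statsmodels formula interface; the variable names and 0/1 coding are our assumptions rather than the authors' code. Exponentiating the coefficients yields the odds ratios reported in the table.

```python
import numpy as np
import statsmodels.formula.api as smf

# df: one row per respondent, with hypothetical 0/1 columns
#   nust_used, female, trauma_specialty, new_age_religion
def predict_nust_usage(df):
    model = smf.logit("nust_used ~ female + trauma_specialty + new_age_religion", data=df)
    result = model.fit()
    odds_ratios = np.exp(result.params)  # exponentiated coefficients = odds ratios
    return result, odds_ratios
```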

Discussion

Key findings

In a national internet-based survey of the reported practices of almost 400 licensed clinical social workers, a group largely demographically reflective of such professionals, it was found that the most common areas of psychotherapy specialization included trauma, mood disorders, marital and family issues, anxiety disorders, and addictions. The most commonly reported types of interventions used by our respondents included cognitive-behavioral therapy (42%), psychodynamic therapy (21%), solution focused therapy (17%) and family systems therapy (12%). An overwhelming majority of practitioners (97.5%) reported using an empirically supported treatment during the past year, which can be seen as a positive feature of contemporary psychotherapeutic practice engaged in by clinical social workers. However, 75% also provided some form of NUST, defined as a relatively new treatment that is not widely accepted or taught in graduate training and that does not meet accepted criteria to be labeled an EST. Our respondents provided an average of six ESTs and two NUSTs during the past year. Practitioners also widely reported using CUSTs, with 60% of practitioners providing an average of .92 such interventions during the past year.

One's positive clinical experiences, theoretical preferences, personality style, and emotional compatibilities were more influential in these psychotherapists’ choice of interventions than favorable research reports in peer reviewed journals. Although this is the first study that tested this hypothesis on a sample consisting exclusively of LCSWs, our findings are consistent with findings of past surveys on psychologists and other mental health professionals, which also found that clinical experience was valued over research evidence (Lucock et al., Reference Lucock, Hall and Noble2006; Orlinsky et al., Reference Orlinsky, Botermans and Ronnestad2001; Riley et al., Reference Riley, Schumann, Forman-Hoffman, Mihm, Applegate and Asif2007; Stewart and Chambless, Reference Stewart and Chambless2007; von Ranson and Robinson, Reference Von Ranson and Robinson2006). Understanding why clinical experience is valued over research is important. Prior research has suggested that social workers’ clinical experience and research is in conflict and thus they felt ambivalent about EBP (Bates, Reference Bates2006).

One's choice of theoretical orientation (cognitive behavioral versus others) significantly influenced the selection of therapies, with CB-oriented therapists being more likely to attend to research reported in peer reviewed journals. However, our within-group analysis of CBT therapists revealed that even that group rated clinical experience (for both results that held up over time and fast, positive results) as more important than research in selecting an intervention, although not to the same extent as non-CBT practitioners. There are several possible ways in which these findings may be interpreted. Clearly, clinical experience and observing fast, positive results first hand and/or observing a client change over time is very compelling. Clinicians may sincerely believe that their clinical experience in and of itself is a sufficient method of evaluating outcomes and thus have a positive intent. However, one possible interpretation of these findings is that they may not be aware of how various cognitive biases, such as confirmation bias, can taint first hand observations, making them not as accurate as one might presume (see Gambrill and Gibbs, Reference Gambrill and Gibbs2009 for a listing and description of different forms of cognitive biases). Research, although not perfect, is designed to control for such biases in a way that our direct experience cannot.

Another, more optimistic, interpretation is that some of our respondents may actually be using single case designs to evaluate their clinical outcomes and considering this to be clinical experience with results that held up over time. However, some of the most frequently used NUSTs were interventions that have been discredited through empirical studies as being either ineffective or possibly harmful and thus ought not to be used at all in practice. For example, nearly a quarter of our respondents reported that they had used Critical Incident Stress Debriefing within the past year, which has been classified as a potentially harmful treatment and shown in multiple meta-analyses either to be unhelpful or to actually increase a person's chance of later developing posttraumatic stress disorder (see Lilienfeld, 2007, for a review). Also, a number of prior studies have examined the post-graduation application of single-case research designs by MSW students explicitly trained in their use, with consistently low rates being reported (e.g. Welch, 1983; Gingerich, 1984; Cheatham, 1987). We suspect, but cannot prove, that our respondents were more likely relying on traditional sources of practice knowledge such as clinical judgment and intuition rather than on single-case research designs.

Another key finding is that the use of Novel Unsupported Therapies was significantly more common among female respondents and among practitioners specializing in the treatment of trauma. Further research is needed to explore why those specializing in trauma are more likely to use NUSTs. It is possible that this is because many such interventions have been developed for trauma, although this is also the case for treatments for developmental disabilities, particularly autism (Thyer and Pignotti, 2010). In the initial bivariate analysis, those endorsing new age, spiritual or nondenominational religious beliefs were also shown to be more likely to use NUSTs, although in the logistic regression, when gender and trauma specialization were controlled for, this difference was no longer significant. Nevertheless, future research might explore this further by using a standardized assessment procedure for new age beliefs to determine more precisely whether such a relationship exists.

Study limitations

Our data must be interpreted cautiously in that our results are based on LCSWs' self-reported practices, and are subject to the usual sources of inaccuracy (e.g. inability to correctly recall) or bias (e.g. a tendency to over-report the use of empirically supported treatments, given the profession-wide emphasis and ethical standards promoting such practices). Additionally, the 45% response rate presents potential limitations on the generalizability of our findings, although this rate is above the average of 39% for similar studies systematically reviewed, whose response rates ranged from 12% to 77% (Pignotti, 2010).

While demographically similar to larger national samples of LCSWs, our web-based survey of approximately 400 respondents included only individuals who had internet access and who freely complied with our request for information about their practice. Our study was also limited to the psychotherapeutic practices of LCSWs, who, although the largest providers of these services in the US, may not be reflective of the types of therapies delivered by licensed psychologists, psychiatrists, nurses or marital and family therapists.

Implications for practice and policy

A common misconception is that EBP is the same as an EST. This confusion has been demonstrated in both the psychology (Luebbe, Radcliffe, Callands, Green and Thorn, 2007) and the social work literature (Bates, 2006). The EBP model as defined by Sackett and his colleagues (Sackett et al., 2000) involves, as mentioned previously, the integration of the best empirical research evidence with clinical expertise and client values, each of which is represented in a diagram by a circle of equal size. Thus, it is appropriate for practitioners to value clinical experience. However, our results suggest that, for many of our respondents, the “circle” representing clinical expertise is significantly larger than the circle representing research evidence.

Most importantly, in order to make EBP relevant to clinical practice, clinicians need to learn that the EBP model is more than just selecting an intervention from a list of ESTs for a particular diagnosis. This has limited value because often a particular client will not fit the population of clients for which evidence exists. For example, a particular client may have a co-existing disorder that clients studied did not have, or come from a different culture from the population studied. In contrast, EBP involves first formulating a question about one's particular client and his or her particular problem and then doing a literature search to locate the best evidence for an intervention that will most effectively address that problem for that individual client. This evidence is then integrated with clinical expertise and the client's values.

In terms of the data in the present analysis, the finding most consistent with the EBP model would have been no significant difference between the valuing of research and of clinical experience. However, we identified significant differences, which indicates that our participants' implementation of the model is uneven, although we must also keep in mind that future research needs to investigate exactly what participants have in mind when they indicate the importance of clinical experience.

The widespread adoption of a cognitive behavioral theoretical orientation among clinical social workers, indeed its greater acceptance than any other model, may be seen as encouraging, given the generally high level of empirical support enjoyed by this approach's derivative interventions. This suggests that the professional theories and practices of clinical social workers parallel those reported by licensed psychologists, among whom a sharp rise in the acceptance of cognitive behavioral theory and practice has been documented over the past few decades (Andersson and Asmundson, 2008; Norcross, Karpiak and Santoro, 2005).

The relatively large proportion of respondents who continue to provide novel unsupported therapies is a source of concern. Moreover, the relatively small role of favorable research findings published in peer-reviewed journals in influencing practice is troubling. Contemporary ethical standards promulgated by organizations such as the American Psychological Association and the National Association of Social Workers clearly indicate that the practice of these disciplines is to be grounded, at least in part, on scientific research findings; this perspective is of course a hallmark of evidence-based practice. We believe that professional organizations should become more proactive in discouraging their members' use of psychotherapies established as ineffective or harmful (see Thyer and Pignotti, 2010; Dimidjian and Hollon, 2010; Barlow, 2010; Castonguay et al., 2010; Lilienfeld et al., 2003; Lilienfeld, 2007).

Merely identifying ineffective or harmful treatments provides insufficient protection to the public. First and foremost, more effective means of disseminating empirically supported treatments to mental health professionals are needed, so that practitioners can receive guidance on what interventions they can use instead of the ineffective or harmful ones they may be using. Ideally, such dissemination would begin in Bachelor's and Master's of Social Work programs. A recent survey (Weissman et al., 2006) indicated that 61.7% of the social work programs surveyed did not require both didactic training and clinical supervision in any empirically supported therapy. This finding suggests that educational requirements for teaching interventions with empirical support, including training in such interventions, may be the best way to begin disseminating such practices to people who are starting their careers as clinical social workers. In addition to learning about empirically supported practices, it is important that people entering the profession understand how the use and promotion of practices that lack such support can harm clients, and that they understand the important role empirical evaluation plays in providing clients with the safest, most effective interventions. Moreover, practicing clinicians need to be educated on where to find resources for quickly identifying the latest research findings. For example, the Cochrane (http://www.cochrane.org/) and Campbell (http://www.campbellcollaboration.org/) Collaborations have excellent websites, with systematic reviews and meta-analyses that are available to the public. Additionally, practitioners need to learn that sometimes an intervention can be worse than doing nothing and that watchful waiting and no immediate intervention may be the best choice in some situations (Lilienfeld, 2007).

Even if dissemination is improved, however, practitioners may still be reluctant to change as long as they can earn income through the provision of such treatments, especially if they are recognized as top “experts” in a given method. Even though it may lack a scientific basis, their “expertise” may be valued by other mental health professionals (see Dawes, 1994, for further discussion of putative expertise without scientific basis). This elevation in status, accompanied by financial gain, may provide strong incentives for questionable practices to be maintained. Just as North American medicine attempted to purge itself of clearly ineffective and harmful treatments following Flexner's (1925) report on the sorry state of medical education in the early part of the last century, contemporary psychotherapy disciplines should consider active efforts to disassociate themselves from similarly bogus therapies. Some small steps have been made in discouraging the use of the more obviously harmful and ineffective treatments. Various professional associations have issued position statements against their members providing so-called reparative therapies aimed at changing the sexual orientation of gay men or lesbians, or have condemned the use of fraudulent treatments such as facilitated communication provided to persons with autistic disorders. More efforts such as these are needed.

Another way to reform the disciplines would be successful lawsuits alleging malpractice on behalf of clients who received an ineffective treatment from a psychotherapist when one or more empirically supported treatments were clearly appropriate for that client's condition. If a legal precedent were established that clients should be offered empirically supported treatments as the first choice for their psychosocial problems, where such interventions are known to exist and to be appropriate for the client's circumstances, the face of psychotherapy practice would change dramatically in a brief period of time. However, on a practical level, lawsuits are difficult and more expensive than most clients can afford, and even if pro bono legal representation were obtained, they may be too emotionally taxing for clients to endure. Educating clients to be informed consumers who question and interview their prospective therapists regarding the evidence for the interventions they propose (as suggested by Singer and Lalich, 1996) may be a way to prevent such malpractice from occurring in the first place and to motivate therapists to seek out interventions that have empirical support in order to attract clients who have become educated consumers.

References

American Psychological Association (2002). Ethical principles of psychologists and code of conduct. American Psychologist, 57, 1060–1073.
Andersson, G. and Asmundson, G. J. G. (2008). Should CBT rest on its success? Cognitive Behaviour Therapy, 37, 1–4.
Barlow, D. H. (2010). Negative effects of psychological treatments. American Psychologist, 65, 13–20.
Bates, M. (2006). A critically reflective approach to evidence-based practice: a sample of school social workers. Canadian Social Work Review, 23, 95–109.
Bloom, M., Fischer, J. and Orme, J. G. (2006). Evaluation Practice: guidelines for the accountable professional (5th ed.). Boston: Allyn and Bacon.
Callahan, R. J. and Trubo, R. (2001). Tapping the Healer Within. Chicago: Contemporary Books.
Castonguay, L. G., Boswell, J. F., Constantino, M. J., Goldfried, M. R. and Hill, C. E. (2010). Training implications of effects of harmful therapies. American Psychologist, 65, 34–49.
Chambless, D. L., Baker, M. J., Baucom, D. H., Beutler, L. E., Calhoun, K. S., Crits-Christoph, P., et al. (1998). Update on empirically validated therapies, II. The Clinical Psychologist, 51, 3–16.
Cheatham, J. M. (1987). The empirical evaluation of clinical practice: a survey of four groups of practitioners. Journal of Social Service Research, 10, 163–177.
Cook, J. M., Schnurr, P. P., Biyanova, T. and Coyne, J. C. (2009). Apples don't fall far from the tree: influences on psychotherapists' adoption and sustained use of new therapies. Psychiatric Services, 60, 671–676.
Dawes, R. (1994). House of Cards: psychology and psychotherapy built on myth. New York: Free Press.
Dillman, D. A. (2007). Mail and Internet Surveys: the tailored design method. Hoboken, NJ: John Wiley & Sons.
Dimidjian, S. and Hollon, S. D. (2010). How would we know if a psychotherapy were harmful? American Psychologist, 65, 21–33.
Flexner, A. (1925). Medical Education: a comparative study. New York: Macmillan.
Gambrill, E. (2006). Evidence-based practice and policy: choices ahead. Research on Social Work Practice, 16, 338–357.
Gambrill, E. and Gibbs, L. (2009). Critical Thinking for Helping Professionals: a skills-based workbook (3rd ed.). New York: Oxford University Press.
Gingerich, W. J. (1984). Generalizing single-case evaluation from classroom to community setting. Journal of Education for Social Work, 20, 74–82.
Hartston, H. (2008). The state of psychotherapy in the United States. Journal of Psychotherapy Integration, 18, 87–102.
Helppro (n.d.). National Social Worker Finder. Retrieved on 28 July 2008 from http://www.helppro.com
Lilienfeld, S. O. (1998). Pseudoscience in contemporary clinical psychology: what it is and what we can do about it. The Clinical Psychologist, 51, 3–9.
Lilienfeld, S. O. (2002). The scientific review of mental health practice: our raison d'etre. The Scientific Review of Mental Health Practice, 1, 5–10.
Lilienfeld, S. O. (2007). Psychological treatments that cause harm. Perspectives on Psychological Science, 2, 53–70.
Lilienfeld, S. O., Lynn, S. J. and Lohr, J. M. (2003). Science and pseudoscience in clinical psychology: initial thoughts, reflections, and considerations. In Lilienfeld, S. O., Lynn, S. J. and Lohr, J. M. (Eds.), Science and Pseudoscience in Clinical Psychology (pp. 1–14). New York: Guilford Press.
Lucock, M. P., Hall, P. and Noble, R. (2006). A survey of influences on the practice of psychotherapists and clinical psychologists in training in the UK. Clinical Psychology and Psychotherapy, 13, 123–130.
Luebbe, A. M., Radcliffe, A. M., Callands, T. A., Green, D. and Thorn, B. E. (2007). Evidence-based practice in psychology: perceptions of graduate students in scientist-practitioner programs. Journal of Clinical Psychology, 63, 643–655.
Myers, L. L. and Thyer, B. A. (1997). Should social work clients have the right to effective treatment? Social Work, 42, 288–298.
National Association of Social Workers (1999). Code of Ethics. Washington, DC: NASW Press.
National Association of Social Workers (2003). Demographics. Practice Research Network. Retrieved on December 30, 2007, from https://www.socialworkers.org/naswprn/surveyTwo/Datagram2.pdf
Norcross, J. C., Garofalo, A. and Koocher, G. P. (2006). Discredited psychological treatments and tests: a Delphi poll. Professional Psychology: Research and Practice, 37, 515–522.
Norcross, J. C., Karpiak, C. P. and Santoro, S. O. (2005). Clinical psychologists across the years: the division of clinical psychology from 1960–2003. Journal of Clinical Psychology, 61, 1467–1483.
Orlinsky, D. E., Botermans, J. and Ronnestad, M. H. (2001). Towards an empirically grounded model of psychotherapy training: four thousand therapists rate influences on their development. Australian Psychologist. Special Issue: Training in Clinical and Counseling Psychology, 36, 139–148.
Parrish, D. and Rubin, A. (2011). An effective model for continuing education training in evidence-based practice. Research on Social Work Practice, 21, 77–87.
Pignotti, M. (2010). The use of novel unsupported and empirically supported therapies by licensed clinical social workers (doctoral dissertation, The Florida State University). Dissertation Abstracts International, Section A: Humanities and Social Sciences, 71, 1094. Retrieved from http://search.proquest.com/docview/787006718?accountid=4840
Pignotti, M. and Thyer, B. A. (2009a). Some comments on “Energy Psychology: a review of the evidence”: premature conclusions based on incomplete evidence? Psychotherapy: Theory, Research, Practice, Training, 46, 257–261.
Pignotti, M. and Thyer, B. A. (2009b). The use of novel unsupported and empirically supported therapies by licensed clinical social workers. Social Work Research, 33, 5–17.
Riley, W. T., Schumann, M. F., Forman-Hoffman, V. L., Mihm, P., Applegate, B. W. and Asif, O. (2007). Responses of practicing psychologists to a web site developed to promote empirically supported treatments. Professional Psychology: Research and Practice, 38, 44–53.
Rubin, A. and Parrish, D. (2007). Views of evidence-based practice among faculty in Master of Social Work programs: a national survey. Research on Social Work Practice, 17, 110–122.
Sackett, D. L., Straus, S. E., Richardson, W. C., Rosenberg, W. and Haynes, R. M. (2000). Evidence-Based Medicine: how to practice and teach EBM (2nd ed.). New York: Churchill Livingstone.
Sarnoff, S. K. (2001). Sanctified Snake Oil: the effect of junk science on public policy. Westport, CT: Praeger.
Singer, M. T. and Lalich, J. (1996). Crazy Therapies: what are they? How do they work? San Francisco: Jossey-Bass Publishers.
Springer, D. W., Abell, N. and Hudson, W. W. (2002). Creating and validating rapid assessment instruments for practice and research: Part 1. Research on Social Work Practice, 12, 408–439.
Stewart, R. E. and Chambless, D. L. (2007). Does psychotherapy research determine treatment decisions in private practice? Journal of Clinical Psychology, 63, 267–281.
Stocks, J. T. (1998). Recovered memory therapy: a dubious practice. Social Work, 43, 423–436.
Strom, K. (1994). Social workers in private practice: an update. Clinical Social Work Journal, 22, 73–89.
Thyer, B. A. (2004). What is evidence-based practice? Brief Treatment and Crisis Intervention, 4, 167–176.
Thyer, B. A. and Pignotti, M. (2010). Science and pseudoscience in developmental disabilities: guidelines for social workers. Journal of Social Work in Disability and Rehabilitation, 9, 110–129.
Von Ranson, K. M. and Robinson, K. E. (2006). Who is providing what type of psychotherapy to eating disorder clients? A survey. International Journal of Eating Disorders, 39, 27–34.
Weissman, M., Verdeli, H., Gameroff, M., Bledsoe, S. E., Betts, K., Mufson, L., et al. (2006). A national survey of psychotherapy training programs in psychiatry, psychology and social work. Archives of General Psychiatry, 63, 925–934.
Welch, G. J. (1983). Will graduates use single-subject designs to evaluate their casework practice? Journal of Education for Social Work, 19, 42–47.