
Evidence-Based Practice Center network and health technology assessment in the United States: Bridging the cultural gap

Published online by Cambridge University Press:  27 February 2006

Antonio Sarría-Santamera, Spanish Agency for Health Technology Assessment
David B. Matchar, Duke University
Emma V. Westermann-Clark, Harvard University
Meenal B. Patwardhan, Duke University

Abstract

Objectives: The purpose of this study was to identify the Evidence-Based Practice Center (EPC) network participants' perceptions of the characteristics of the EPC process and the relationship of the process to the success of EPC reports.

Methods: Semistructured interviews were conducted with the three groups involved in the EPC: EPC staff, Agency for Healthcare Research and Quality (AHRQ) staff, and representatives of partner organizations.

Results: The analysis of the coded transcripts revealed three related major themes, which form the conceptual basis for the interpretation presented here: the definition of a successful report, the determinants of a successful report, and the role of AHRQ in the process.

Conclusions: A successful report is a report that is used. The ultimate success of the core health technology assessment objective, moving from research to policy, depends on balancing two values: excellence and relevance. Our findings are consistent with the “two communities thesis,” which postulates the existence of two camps that confer different values to excellence and relevance, with resulting tension. A promising model for approaching this tension is integration or collaboration, which requires linking researchers and policy makers, promoting productive dialogues about the formulation and timing of analysis, and early consideration of how the resulting analysis will be used. This effort suggests that actively blurring the frontiers between these two groups will enhance their interaction. Furthermore, enhancing the role of the AHRQ as scientific broker will maximize the potential of the EPC network.

Type: General Essays
Copyright: © 2006 Cambridge University Press

Health technology assessment (HTA) aims to provide a bridge between science and policy by becoming involved in the demanding everyday business of helping decision makers solve difficult problems. This bridging, however, is typically less than straightforward. Practitioners are criticized for failing to base actions on research evidence, and academic researchers are sometimes condemned as irrelevant to practice. Various barriers exist that often limit the creation, transfer, or utilization of research findings (1). For example, research may portray the scientific and technical information in a way that leads to an unwise decision, or it may present it in a manner that is not useful for the decision maker (9). The necessity of reducing the gap between research and policy has been clearly identified and targeted (6). At this interface between analysts and decision makers, it is crucial to better mesh the cultures and processes of the two worlds involved in this relationship (16).

The thirteen-member Evidence-Based Practice Center (EPC) network was established by the Agency for Healthcare Research and Quality (AHRQ) to obtain and synthesize evidence-based information into reports disseminated to decision makers who then turn the policy levers in a way that improves health care and patient outcomes. The EPCs review all relevant scientific literature on clinical, behavioral, and organization and financing topics to produce evidence reports and technology assessments. They also conduct research on methodologies and the effectiveness of implementation strategies, and provide technical assistance in translating EPC reports and assessments into quality improvement tools that help inform policies. AHRQ determines the topics to be assessed based on nominations from partners, who include both governmental entities (e.g., National Institutes of Health, Centers for Disease Control and Prevention, Centers for Medicare and Medicaid Services, Social Security Administration) and scientific organizations (e.g., American College of Physicians, American College of Obstetricians and Gynecologists). The EPC network tries to reduce the gap between science and decision making by bringing together the main stakeholders in health decision making, thus narrowing the divide between "doers" (analysts) and "users" (decision makers) (8). In short, the objective of the EPC network is practical: to produce the best-quality evidence synthesis for clinical and public health policy making.

In keeping with this agenda and in the spirit of process improvement, we sought to identify the perceptions of all participants in the EPC network—researchers, mediators, and users—on the characteristics of the process of generating evidence-based reports. We focused on what constitutes a successful translation of research into practice and what conditions facilitate this process.

METHODS

We conducted semistructured interviews with the three groups involved in EPC activities: EPC research center directors and project managers, AHRQ staff, and representatives from partner organizations, both public and private. Interviews were conducted by telephone over a 2-month period by three researchers using a structured interview guide; they were taped and lasted approximately 45 minutes. Questions focused on general areas related to establishing an effective mechanism for developing an appropriate process for evidence-based reporting and health technology assessment. All interviewers covered all topics in the interview guide.

Variables considered in establishing the sample of EPCs to be interviewed included the number of reports each had produced and the type of organization; centers that had produced more reports were oversampled. We conducted eleven interviews with EPC staff, three with AHRQ staff, and three with representatives of partner organizations. Once transcribed, all interviews were read several times to identify key issues. Based on this reading of the transcripts and on the objectives of the project, a structured coding scheme was devised and sections of text were coded to its themes. Coded segments were then examined and compared to identify similarities and differences and were used to produce a conceptual framework for analysis and interpretation.

RESULTS

Our analysis of the coded transcripts revealed three related major themes, which form the conceptual basis for the interpretation presented in this study. The quotes presented here were selected as typical of the perceptions and experiences recorded; they are not statistically representative of a larger population.

Concept of a Successful Report

For EPC researchers, success was measured at different levels. At the most fundamental level, success was perceived to occur when the partners used the report, either to produce guidelines or develop policy, or otherwise to meet a concrete need. At an intermediate level, a successful report for researchers must reflect a rigorous and credible synthesis of data (in accordance with evidence-based principles), must be intellectually satisfying, must provide new insights, and should provide junior faculty in pursuit of an academic career with publication opportunities. More superficially, a successful report was one that led to positive feedback from users; whether or not a report represented a technically good analysis, it would be perceived as successful if it got "a lot of press." In addition, EPC participants thought that a successful report was one produced within the allocated budget.

AHRQ staff also believed that success was reflected in more than one way; specifically, they focused on the dual qualities of usefulness and excellence. A good report "meets the needs" of the partners; that is, it answers the questions that the partners ask and contains suggestions that they find useful. A successful report is an excellent, well-written synthesis of qualitative issues related to the topic addressed and is produced with the highest methodological standards. A good report shows how each piece of evidence fits together to answer a question. It posits a clear question, identifies all the relevant literature, relies upon a technical expert panel knowledgeable about the subject, and is informed by clinical opinion that reflects an integrated understanding of the evidence while remaining fundamentally non-normative. A successful report requires "clinical content expertise, knowledge of the technology, and identifies the policy implications."

The three partner representatives primarily focused on the concept of usefulness. For them, a successful report “includes what we told you to do.” For them to consider a report good, it must include the “universe of evidence,” not just a subset; it should contain tables that organize evidence in an easy-to-understand manner so that a committee, for example, could itself weigh the evidence. Unlike the EPC researcher respondents, partners described the notion of completeness as an important component in promoting credibility, not simply completeness for its own sake.

Determinants of a Successful Report

For researchers, the most important determinant of a successful report is that it is generated from a well-defined scope of work. Narrow and clearly worded questions facilitate research within the time and resources available. A second major determinant is knowing who the intended audience is and how they intend to use the report. Researchers need explicit information about “the context of what users are looking for.” They also want a clear understanding of the desired structure and tone of the report (i.e., scientific versus lay language).

There is an overall concern among researchers, however, that these vital determinants are oftentimes not explicit or clearly established in the original statement of work. If reports are to meet a partner's needs, those needs must be adequately identified. "Clear expectations [must be] identified in the partner." "What they want" must be well articulated for the researchers to produce a credible product, and all the relevant questions need to be identified from the very beginning of the project. "A crucial component is to define the scope of the project up front, what they want." Several problems have often been identified in defining the scope of the work, which occasionally means that what the analysts are asked to do is completely "out of feasible bounds." Unfortunately, project scopes and questions that are too broad appear to be the norm, not the exception. In many instances, the researchers sensed that their partners lacked certainty about what they really wanted the researchers to do. "It took us a while to figure out what they actually wanted." In addition to putting the scientific success of a report at risk, retuning research questions after the start of a study may also have financial implications, because "budget is the concrete manifestation of a statement of work." This issue is particularly relevant because of the strict time frame researchers work within to complete their reports. Because working with partners inevitably produces changes over time, and some changes are unavoidable, researchers indicate that "we need flexibility." When a clear conceptual framework is missing at the beginning of a project, questions tend to be too general and, thus, fail to adequately define the type of analysis that will be required.

The AHRQ staff also recognized that the process of generating reports needs to start with a well-defined conceptual framework that specifies the problems and the complexities surrounding the project. They also acknowledged that developing this conceptual framework requires time and commitment. The technical analysis is then adjusted to this conceptual framework. Partners also identify asking focused questions as a major determinant of a successful report and consider asking appropriate questions to be their responsibility in this process. During our interviews, partners acknowledged that they perceive their participation in an EPC project as a learning process. In particular, they have come to understand that, to generate a successful report "that informs the process it is intended to put in practice," the partners must "have an outcome in mind."

Two other factors described by more than one respondent group were identified as determinants of a successful report. The first is an abundance of communication. All parties involved mentioned that establishing a "face-to-face relationship" is essential. Partners have realized that it is crucial to maintain close interaction with AHRQ and researchers from the very beginning stages of the process. They want to ensure that "our input does not come too late in the process." Establishing this relationship helps outline what should be done through an interactive process of refinement. For researchers, there is an inherent tension: "The more we want the reports to have impact, the more we have to negotiate with partners."

The second factor determining a successful report involves the formal processes for producing a standard report: establishment of a good working team with diverse and substantial expertise, development of collaborations with clinicians and content experts, identification of pertinent literature, synthesis of the evidence, and composition of a technically correct and readable report. Respondents considered the use of external experts very helpful because such experts know who the leaders in the field are (e.g., professional societies, advocacy groups, industry) and, thus, can illuminate often-necessary background information, such as what motivated the solicitation of a particular report or what agenda may be driving various participants.

AHRQ's Role in the Process

Overall, both partners and researchers indicate a high level of satisfaction with AHRQ. In some areas, however, it was suggested that AHRQ's role could be enhanced to improve the success of EPC reports. There is an overall perception, both among researchers and partners, that AHRQ should mediate or at least facilitate the interaction between analysts and partners. “AHRQ has to make sure that the project they select creates the greatest value.” “I would be happy to see AHRQ take a stronger role in making partners prioritize questions, to avoid scope creep when the partner suddenly discovers he wants to ask another question.” EPCs often describe this mediation in terms of limited resources and the need to prioritize among all possible questions, focusing on the question whose answer provides the greatest value. Researchers urge partners to understand that “they are going to receive what they need, but may not get what they want.” Researchers would like AHRQ to “push the partner to define what they want.” Partners should understand that they are receiving “an enormous gift.” It is, however, a gift they must prepare to receive by collaborating in the development of the report as well as by being ready to use it once they have it in hand. Partners need to understand evidence-based medicine and systematic reviews—“We need to educate them.” Researchers insist that AHRQ should do this educational work ahead of time: “They [AHRQ] could set the stage better with partners.” Researchers view this education process as promoting more-realistic partner expectations and helping them to focus on the “right questions.”

Communication is essential in the EPC process, as is establishing good relationships with partners. Manipulation must be avoided in this relationship; partners need to recognize the expertise of the EPC and not dictate what type of analysis the EPC must carry out. In this intermediary process, AHRQ also recognizes that it has much to do, such as refining the question to be addressed and identifying "the essence" of the problem. Partners likewise acknowledge the importance of establishing a relationship with EPC researchers to "help them to understand us."

From the researchers' perspective, an important function for the AHRQ is to help reconcile discordance between what the partners say they want, what they need to satisfy their ultimate objective (the purpose of the report), and what is expressed in the scope of work. Current strategies for mediation were not seen as sufficient. AHRQ has to identify “other ways to do the intermediary work, other mechanisms.” Partners also recommended establishing a formal process with clearly established parameters that would ensure that all issues are addressed. “We need a consistent, routinely followed process.”

All groups agreed that the process has improved over time. Earlier reports were more "painful." Both EPC and AHRQ staff said that more-recent reports have been done better. EPCs are now more experienced at producing reports within budget, are better at estimating how much literature they will review, and are forming better teams by identifying key people to participate in report writing. This improvement, however, has occurred without establishing any formal process. Each EPC has learned primarily from its own experience, with little contribution from the experiences of other EPCs. There is an overall perception that EPC functioning has improved, although no formal quality improvement process has been identified.

Three additional specific issues were raised regarding the contribution of AHRQ. First, it was noted that there is some variability in the way the different Task Order Officers who assume responsibility for EPC reports conduct their work. Having a "fabulous project officer" is viewed as a key factor in producing a successful report. A second issue within AHRQ surrounds the Coordinating Center, the establishment of which initially raised high expectations; the current overall perception, however, is that its impact has been very limited. A third issue is that existing mechanisms for establishing budgets and payments are problematic. Several EPC respondents noted that they are not getting the funding they need to conduct their work. "This work is more than pulling together a few RCTs." Limited resources constrain the possibilities of doing a quality job. "We need more money, more time." One interviewee mentioned that EPC work is still being done primarily by means of a boutique industry model and, consequently, is quite expensive because each product is produced largely from scratch. Although some standardization and other strategies can improve efficiency, tailoring is viewed as one of the most attractive and useful features of EPC reports. Among EPCs, a distinction appeared between units that are primarily part of academic institutions and those that are primarily contract research organizations. The former were perceived to have greater flexibility in addressing resource constraints; for example (whether true or not), it was noted that academic institutions may be better able than contract organizations to get "free" workers in the form of postdoctoral fellows and junior faculty.

DISCUSSION

Based on the results of this study, a successful technical report is one that is used to make policy decisions at either the clinical or the public health level. To be used, this report must be excellent (meet academic standards) and useful (meet decision-maker needs). The key determinant of success is a productive interaction between analysts and decision makers, between science and policy. The development of the "right question," usually manifested in the "statement of work" with the shared development of a conceptual framework, is seen as a process in which all stakeholders actively take part. In the specific case analyzed here—technical assessments performed by the EPC network—AHRQ has an opportunity to enhance the usefulness of this resource by being the most effective possible bridge between analysts and decision makers.

In interpreting the findings of this study, it is useful to consider the work of the EPCs within the context of the broader activity of HTA, which aims to influence decision making by activating policy mechanisms. Those mechanisms can operate at the micro (clinical practice), meso (institutional), or macro (health policy) levels. The ultimate success of the complex voyage from research to policy depends on how these activities are linked. In the case of HTA, this link depends on balancing two fundamental values: excellence and relevance. Excellence means adherence to principles that give validity to the results. Relevance means responding in a meaningful way to practical problems. Excellence tends to be emphasized by scientists, whereas relevance tends to be emphasized by decision makers. The "two communities thesis" postulates the existence of two camps (researchers/analysts and policy makers) that do not naturally tend to account for the values and perspectives of the other. However, the different values that the two communities confer on excellence and relevance have little to do with the personalities of the individuals involved. The roots of the conflict lie in the different logic and demands that characterize the respective spheres of research and decision making. Our results have identified the different perspectives that exist between those who collect and analyze health care evidence and those who use that evidence to make clinical decisions and to formulate clinical and public health policy (19). However, to achieve a scenario in which health-care policy decisions are consistent with a coherent body of research evidence, the barriers between the two camps must be removed. The idea of an integrative approach to building evidence-based decision making is not new, nor is it exclusive to health care.

If HTA, as Banta and Andreasen indicated, is to be useful, it cannot be merely a technical or scientific study (2). It must work with policy makers to develop criteria for determining the questions to be asked, for generating helpful answers, and for presenting information on possible responses. Frenk proposed a model for reconciling the tension between research and decision making based on integration (10). His model requires an organizational design that brings together the advantages of proximity to decision making and the structures, procedures, and incentives developed by research centers to ensure academic quality. Lilford et al. suggested an iterative method involving a productive dialogue among commissioners, researchers, and potential users when the research question and the form and scope of research are not clear-cut at the outset (13). Bensing et al. highlighted the importance of a productive dialogue in which researchers and policy makers cooperate to define the right questions, at the right time, and communicate in such a way that their ideas can be implemented (4). Denis and Lomas discussed collaborative research, which they define as a deliberate set of interactions and processes designed to bring together those who study problems and issues and those who act on or within those problems and issues, blurring the frontiers between those groups to push toward more interaction among them (7).

The literature on environmental policy also provides some related examples. The National Research Council has proposed that risk characterization be based on an analytic–deliberative process (17). Here, analysis and deliberation are two complementary approaches to gaining knowledge, formulating understanding, and reaching agreement. The analytic–deliberative process gives early attention to problem formulation and involves the full spectrum of interested and affected parties, including public officials, scientists, and affected individuals or groups. Busenberg proposed an alternative model, collaborative analysis, in which all the groups involved in a policy debate work together to assemble and direct a joint research team, which then studies the technical aspects of the policy issue in question (5). With the ultimate goal of generating a single body of knowledge accepted by all, this model aims to overcome suspicions of distorted communication by giving each group the means to ensure that the other groups are not manipulating the debate.

In the field of education, researchers—in conjunction with policy makers, administrators, and teachers—have also sought to develop strategies for strengthening the links between research and policy and practice. Three models have been described (decision-oriented research, collaborative-action research, and research as collective praxis). In the latter model, the line between researcher and policy maker or practitioner becomes blurred as all involved work together to understand and improve schools.

Research on the determinants of successful knowledge transfer, such as the uptake of research by governments in the fields of natural sciences, engineering, and social sciences, has found that the crucial factor is the existence of mechanisms that link researchers and users of research (11). Such mechanisms must overcome the barriers between the "two communities," barriers that affect users' efforts to acquire and adapt the research products created by scholars. Mechanisms also need to address the intensity of the linkages between researchers and decision makers as well as the organizational context of users (12).

At bottom, the literature reinforces the sentiment that it makes little sense to celebrate scientific advances if they have no prospect of being translated into improvements in health. Furthermore, if left entirely to chance or the good intentions of the stakeholders, this translation is not likely to occur well, if at all. Successfully bridging science and policy is not easy and requires a balance between the ideals of scientific rigor and the realities of policy making (3).

Translating HTA into policy is a highly complex business. Despite the growth of HTA over the past two decades, its influence on policy making, as well as its perceived relevance, remains marginal (15). Compared with some HTA efforts in other countries or in the U.S. private sector, HTA in the context of the U.S. EPC program is unique: it carries an expectation of relevance but has no dominant mechanism for achieving it. As such, the EPCs' challenge of producing reports that are both excellent and useful is especially difficult. Transforming clinical and public health policy in the United States into a primarily HTA-driven process is a prospect that is both unlikely and, indeed, problematic. This work suggests that aggressive efforts to support core teams and to promote a productive relationship between analysts and decision makers (18) will enhance the objective of the EPC network to bring the best science to health care in the United States (14).

POLICY IMPLICATIONS

Health policy decision makers have to make difficult choices in a rapidly changing and highly complex environment, often amid vast quantities of contradictory information. This task involves bridging two distinct cultures—that of the analyst and that of the policy maker. This bridging demands a strategy as well as a significant investment of time and commitment. Modern health policy development requires more than technical excellence; it requires excellent relationships between the analytic and policy-making communities.

CONTACT INFORMATION

Antonio Sarría-Santamera, MD, PhD, Senior Health Services Researcher, Spanish Agency for Health Technology Assessment, Sinesio Delgado 4, 28029 Madrid, Spain

David B. Matchar, MD, Professor, Department of General Internal Medicine, Duke University Medical Center, Duke University, Durham, NC 27708; Director, Duke Center for Clinical Health Policy Research, Duke University Medical Center, 2200 West Main Street, Suite 220, Durham, NC 27705

Emma V. Westermann-Clark, BS, Doctoral Student, Department of Health Policy, Harvard University, 718 Huntington Avenue, Boston, MA 02115

Meenal B. Patwardhan, MD, MHSA, Assistant Research Professor, Department of Medicine, Duke University Medical Center, Duke University, Durham, NC 27708; Director of Operations, Evidence-Based Practice Center, Duke Center for Clinical Health Policy Research, 2200 West Main Street, Suite 220, Durham, NC 27705

The study received financial support under contract 290-02-0025 from the Agency for Healthcare Research and Quality (AHRQ). The authors of this article are responsible for its contents. No statement in this article should be construed as an official position of the Agency for Healthcare Research and Quality or of the U.S. Department of Health and Human Services. The authors wish to thank Yvonne M. Connelly, MA, MPH for her comments and suggestions.

References

1. Alberta Heritage Foundation for Medical Research. A study of the impact of 2000-2001 HTA products. Information Paper #11. January 2002. Available at: http://www.ahfmr.ab.ca/publications.html. Updated June 13, 2005. Accessed July 12, 2005.
2. Banta HD, Andreasen PB. 1990. The political dimension in health care technology assessment programs. Int J Technol Assess Health Care. 6:115-123.
3. Battista RN, Lance JM, Lehoux P, Regnier G. 1999. Health technology assessment and the regulation of medical devices and procedures in Quebec: Synergy, collusion, or collision? Int J Technol Assess Health Care. 15:593-601.
4. Bensing JM, Caris-Verhallen WM, Dekker J, Delnoij DM, Groenewegen PP. 2003. Doing the right thing and doing it right: Toward a framework for assessing the policy relevance of health services research. Int J Technol Assess Health Care. 19:604-612.
5. Busenberg GJ. 1999. Collaborative and adversarial analysis in environmental policy. Policy Sci. 32:1-11.
6. Canadian Health Services Research Foundation. If research is the answer, what is the question? Key steps to turn decision-maker issues into research questions. Proceedings of the Canadian Health Services Research Foundation Annual Workshop. 2001. Available at: www.chsrf.ca/knowledge_transfer/pdf/research_e.pdf. Updated 2001. Accessed July 12, 2005.
7. Denis JL, Lomas J. 2003. Convergent evolution: The academic and policy roots of collaborative research. J Health Serv Res Policy. 8(Suppl 2):1-6.
8. Eisenberg JM, Zarin D. 2002. Health technology assessment in the United States: Past, present, and future. Int J Technol Assess Health Care. 18:192-198.
9. Fox NJ. 2003. Practice-based evidence: Towards collaborative and transgressive research. Sociology. 37:81-102.
10. Frenk J. 1992. Balancing relevance and excellence: Organizational responses to link research with decision making. Soc Sci Med. 35:1397-1404.
11. Landry R, Amara N, Ouimet M. 2002. Research transfer in natural sciences and engineering: Evidence from Canadian universities. Quebec, Canada: Université Laval. Available at: http://kuuc.chair.ulaval.ca/francais/pdf/csrng.pdf. Updated October 2002. Accessed July 12, 2005.
12. Lavis JN, Ross SE, Hurley JE, et al. 2002. Examining the role of health services research in public policymaking. Milbank Q. 80:125-154.
13. Lilford R, Jecock R, Shaw H, Chard J, Morrison B. 1999. Commissioning health services research: An iterative method. J Health Serv Res Policy. 4:164-167.
14. Lomas J. 2000. Connecting research and policy. ISUMA-Can J Policy Res. 1:140-144.
15. Oliver A, Mossialos E, Robinson R. 2004. Health technology assessment and its influence on health-care priority setting. Int J Technol Assess Health Care. 20:1-10.
16. Plouffe LA. 2000. Explaining the gaps between research and policy. ISUMA-Can J Policy Res. 1:135-139.
17. Stern PC, Fineberg HV, eds. 1996. Understanding risk: Informing decisions in a democratic society. Washington, DC: National Academy Press.
18. von Below GC, Boer A, Conde-Olasagasti JL, et al. 2002. Health technology assessment in policy and practice. Working group 6 report. Int J Technol Assess Health Care. 18:447-455.
19. Walshe K, Rundall TG. 2001. Evidence-based management: From theory to practice in health care. Milbank Q. 79:429-458.