
Redefining health technology assessment in Canada: Diversification of products and contextualization of findings

Published online by Cambridge University Press: 01 August 2004

Pascale Lehoux, University of Montreal
Stéphanie Tailliez, University of Montreal
Jean-Louis Denis, University of Montreal
Myriam Hivon, University of Montreal

Abstract

Objectives: While strategies for enhancing the dissemination and impact of Health Technology Assessment (HTA) are now being increasingly examined, the characteristics of HTA production have received less attention.

Methods: This study presents the results of a content analysis of the HTA documents (n=187) produced by six Canadian agencies from 1995 to 2001, supplemented by interviews with chief executive officers and researchers (n=40). The goal of this analysis was to characterize the agencies' portfolios and to analyze the challenges these agencies face in responding to the increased demand for HTA.

Results: On average, thirty HTA products were issued annually by the agencies. While the bulk of documents produced were full HTA reports (76 percent), two agencies showed significant diversification in their products. Three agencies in particular actively supported the publication of results in scientific journals. Three agencies showed evidence of adapting to different institutional environments by specializing in certain areas (drugs, health services). Overall, a significant portion of the agencies' HTAs contained data on costs (37 percent) and effectiveness (48 percent), whereas ethical and social issues were rarely addressed (17 percent). Most agencies addressed issues and outcomes that did not strictly fall under the typical definition of HTA but that increased the “contextualization” of their findings.

Conclusions: Our discussion highlights four paradoxes and reflects further on challenges raised by the coordination of HTA within large countries and among European states. This study concludes that HTA is being redefined in Canada as HTA agencies offer a more contextualized informational basis, an approach that may prove more compatible with the increased demand for HTA.

Type: General Essays
Copyright: © 2004 Cambridge University Press

THE EVOLUTION OF HTA PRODUCTION IN CANADA

In the past two decades, Health Technology Assessment (HTA) has received fairly strong support from governments and academics in North America and Western Europe. The rationale behind HTA development is that better information and knowledge on the efficacy, safety, effectiveness, and costs of health technology, as well as the related social, ethical, and legal issues, improves decision and policy making (1). Such a scientific approach to health technology has been especially embraced in Canada, where six major organizations producing HTA and Health Services Research (HSR) were established between 1988 and 1996 (34). A few years after their inception, the functioning and impact of several of these agencies were assessed (3;20;21). The main conclusions were that HTA is indeed a useful and worthwhile policy-oriented research activity and that HTA needs to be strengthened and its results more actively disseminated. Buoyed by this generally positive support, these HTA agencies have, since the mid-1990s, been increasing and standardizing their production and disseminating their results more widely (27). Now seems a good time to re-examine the output of these agencies and to see how they are coping with the challenges posed by a steadily increasing demand for HTA.

With the aim of gaining valuable insights from the Canadian experience, this study defines more precisely the type, scope, and conclusions of the HTA products published by each of the six agencies between 1995 and 2001. Adopting an institutional theory framework (12;13) that defines HTA as a means to foster knowledge-based change in health-care systems, our assumption is that each agency has evolved in a specific institutional context that has shaped not only its production but also its way of defining and achieving its mandate. An analysis of the type and range of products delivered by agencies in a country like Canada should provide a better understanding of the challenges involved in implementing and coordinating HTA activities within large countries and among European states (30). In this study, we will first briefly review the literature to clarify the current state of knowledge about HTA content and organization and to determine what issues require further examination. We will then present our study methodology, followed by our results and a discussion.

Defining HTA Production

During the past ten years, initiatives to strengthen HTA production have arisen in several countries (18). Graduate-level courses in HTA that address the principles and methods of this policy-oriented field of research are now offered in major universities (6). To tackle practical and operational issues, agencies created in the early 1990s have often been consulted and used as organizational benchmarks for launching HTA activities in other countries. Nonetheless, an organizational structure that is adopted and found to be effective in one political and cultural setting may prove inadequate in another (2;35). The International Network of Agencies for Health Technology Assessment (INAHTA) has played a significant role since the mid-1990s by bringing together HTA producers from different parts of the world to share expertise and formalize tools. One of INAHTA's initiatives was the "Checklist for Health Technology Assessment Reports," published in July 2001. It states that "[…] readers of an HTA report need to be able to easily obtain information on the purpose of the assessment, the methods used, assumptions made and conclusions reached" (INAHTA, 2001:1). Even though this recommendation may seem straightforward, previous work by Menon and Topfer (31) showed that 16 percent of reports published between 1989 and 1998 by four Canadian agencies did not expressly state the question under investigation, 27 percent did not include a structured abstract or summary, and only a "few assessments offer[ed] clearly stated recommendations for decision-makers" (31, page 900).

This last observation is puzzling for those concerned with assessing the impact of HTA. Indeed, Jacob and McGregor (20) argued that, to influence a decision, an assessment must carry a discernible "message," that is, an indication of the appropriate conduct regarding the diffusion and/or use of a technology. However, in practice, agencies often try to strike a balance between conclusive statements and normative recommendations, that is, "telling others what they ought to do" (26). For instance, the Board members of the Canadian Coordinating Office for Health Technology Assessment (CCOHTA) disagreed on the desirability of emphasizing recommendations in HTA reports (3). Because health care in Canada falls under the responsibility of provincial governments, Board members stressed that HTAs published at the national level should inform decision-makers but not point out what should be done. A radically different approach was adopted by the National Institute for Clinical Excellence (NICE) in the United Kingdom, whose mandate stipulates that its guidelines should be taken into consideration by National Health Service decision-makers, including providers (9). Thus, the mandate and institutional context in which an agency operates are likely to influence the type of assessments it delivers (8).

Another important variable that affects HTA production is the evaluation process itself. Scholars who have examined the impact of HTA on clinical practice and policy making have recommended that reports be prepared more quickly and be more readable for non-scientifically trained audiences (5;11;20). This approach requires not only having sufficient access to skilled and versatile researchers but also adapting the timeline and depth of assessments by producing, for example, “rapid technology assessments” (16;17;32). In addition, translating the implications of evidence for practitioners and policy-makers is not an easy task for most scientifically trained researchers and involves learning how to communicate effectively with an audience that is very different from the one to which they are accustomed (5;25). Indeed, Buxton and Hanney (7), in developing and testing their payback assessment model, showed how interfaces between HTA producers and users were key to making HTA more relevant to policy making. At the commissioning stage, the interaction between producers and decision-makers may help to more clearly define policy questions, whereas at the dissemination stage, discussions about the implications of findings may help sharpen key messages and identify how they can be implemented.

In the long run, the participation of decision-makers in the priority setting and assessment processes of HTA agencies may affect the agencies' portfolios, that is, the topics covered and type of assessments performed (19). For instance, Davies and Littlejohns (9, page 321) observed that 96 percent of Directors of Public Health in England and Wales who were surveyed “agreed that NICE had not made decisions about the relative importance of competing technologies,” and 81 percent said that it had not “helped the service prioritization debate.” These authors stress that local priority setting may be better informed by a different type of research—one that provides additional information such as health needs assessments and epidemiological analyses. Another way of looking at this issue is through the concept of contextualization. According to Gibbons et al. (14), the traditional distinction between pure and applied science is blurring, with knowledge now being produced much closer to its context of application and through processes that more directly integrate the concerns and interests of a broad set of actors. Knowledge is “contextualized” when it is the result of a process in which both supply and demand interact. Hence, HTA agencies trying to respond to a diversified demand may adapt their assessments to deal more specifically with their users' information needs and concerns.

Whether or not HTA practitioners will indeed interact more frequently with decision-makers and adopt different assessment processes remains an open question. We nonetheless believe that strong signals have been sent to HTA practitioners since the mid-1990s, with significant implications for HTA format and content (15;29). The work of Menon and Topfer (31) yielded two significant findings: The quality of Canadian HTA reports improved significantly from 1989 to 1998 (the period under study), and there was very little duplication of the technologies evaluated. However, their study looked at only four agencies, and very few publications have addressed organizational issues affecting HTA production, even less so from a comparative perspective (1;30). Consequently, we believed the time was ripe for an extensive examination of HTA production in Canada so that lessons could be drawn for the wider international HTA community. We thus devised a study aimed at (i) characterizing the portfolio of six Canadian agencies in operation between 1995 and 2001; (ii) analyzing the production challenges faced by individual agencies when trying to respond to the increase in demand; and (iii) examining how collaboration and coordination among a group of agencies could be fostered to facilitate access to HTA. Our purpose is not to establish the relative performance or productivity of the agencies. Rather, we wish to better understand what types of HTA products have been available to decision-makers thus far and why coordination issues are worth addressing.

METHODS

This study stems from a larger multiple case study combining qualitative and quantitative data. The study was entirely financed through a national peer-review funding agency, while the participation of the six Canadian agencies was secured on a voluntary basis. Due to Canadian legislative arrangements governing health care, five of the six HTA agencies were created by provincial governments (British Columbia, Alberta, Saskatchewan, Ontario, and Quebec), while one agency is a national entity. We use the term “jurisdiction” to refer to the level at which they operate. Although the groups differ with respect to organizational structure (arm's length government body, university-based research group, nonprofit agency) and each evolved within a specific institutional context (health reforms, presence of regulatory bodies), their general approach to assessment is fairly similar. Details about their production processes, dissemination strategies, and ways of interacting with users are described elsewhere (26).

The analyses presented in this study mainly exploit two sources of information: (i) official HTA documents published by the agencies between 1995 and 2001; and (ii) semistructured interviews with the staff and CEOs of the six agencies (n=40). All agencies provided us with a list of their outputs, including HTA reports, rapid technology assessments, scientific papers, and oral communications (when available). For further analysis, only documents that had been peer-reviewed were included (in English or French). A total of 187 HTA documents were obtained from agencies or downloaded from their Web sites. These documents were coded using a predefined guideline inspired by the one used by Menon and Topfer (31). The general coding scheme included (i) type of document; (ii) type of technology assessed; and (iii) issues addressed. The first item included three mutually exclusive subcategories: full HTA reports, reports jointly published with another organization, and short documents (e.g., technical briefs, rapid assessments, updates). The second item included six mutually exclusive subcategories: diagnostic and/or screening technology, device, procedure, drug, health services, and general essays and/or research tools (e.g., definition of a new diagnostic, reliability of a given database, guidelines for economic assessments). The third item, the assessment scope, was defined through nonmutually exclusive subcategories: costs, effectiveness (including efficacy), cost-effectiveness, quality of life, safety/ethical/social/cultural issues, legal issues, state of current practices, and "other outcomes" (e.g., impact on service utilization, management of a given health problem, impact of a given health reform). The interviews were coded according to a mixed strategy, using both predefined and emerging concepts (36). Annual reports and external assessments (when available) were read by P. Lehoux and S. Tailliez to gauge the specific challenges faced by each agency. When appropriate, information was extracted from the agencies' Web sites to supplement or corroborate certain observations. Both descriptive statistics and quotes from interviews and documents are presented.

RESULTS

We will first describe the types of HTA products and scientific outputs produced by the agencies between 1995 and 2001. We will then examine the types of technology assessed, the spectrum of issues addressed, and the conclusions contained in the HTA documents. Finally, we will summarize the results of our study by looking at the overall characteristics of each agency's portfolio.

What Types of Products and Outputs?

Table 1 shows that the number of documents published by the agencies peaked in 1997–1998, then decreased slightly and remained steady through 2000–2001. Because the agencies differed in annual budget and number of employees, this table should be interpreted with caution. Its purpose is not to rank agencies according to their outputs. The populations of Ontario (11,894,000) and Quebec (7,417,000) are fairly large compared with those of British Columbia (4,101,000), Alberta (3,059,000), and Saskatchewan (1,017,000), and the budget allocated to HTA by each provincial government also varies considerably. The national-level agency receives funding from the provinces because its mandate includes facilitating coordination across the country. In addition, as stressed by several CEOs, agencies face significant challenges in recruiting researchers specialized in HTA. The data show that decision-makers and practitioners across Canada, including those in provinces and territories devoid of an HTA agency, had access to an average of over thirty HTA products per year during the study period.

Table 2 provides details on the types of products delivered by the HTA agencies during the study period. While four agencies still emphasized the "traditional," full HTA report (representing 81-100 percent of their production), two diversified their products: one by publishing joint reports (47 percent for A6), a result of partnerships with key health organizations (e.g., medical associations, workers' compensation boards), and the other by issuing several short documents such as peer-reviewed bulletins, rapid HTAs, and technical briefs (43 percent for A3). Among the non-peer-reviewed documents that were excluded from our more detailed analyses, it is worth mentioning that Agency 4 produced up to ten different publications. In addition, three agencies did not have a newsletter during the period under study, while the other three issued an electronic and/or paper newsletter on a quarterly basis. The format and appearance of reports have substantially improved since 1995, a change that may have improved reader-friendliness. Overall, while the bulk of HTA documents produced in Canada were full HTA reports (76 percent), a few agencies were diversifying their products in an attempt to meet the informational needs of a range of users.

We observed substantial variation in the number of traditional scientific outputs, such as journal publications and conference presentations. Table 3 shows that three agencies actively supported the publication of results in scientific journals (A1; A4; A5), while the majority encouraged presentations at conferences. Agency 1, which relies on several university-affiliated researchers, stands out in this respect. The CEO of this agency also confirmed that researchers are encouraged to apply for external funding to cover expenses for attending scientific meetings. As discussed below, this opportunity has a direct impact on this agency's portfolio. Overall, as recognized by most CEOs, while scientific journals may not be the most appropriate vehicle for reaching decision-makers, such outputs clearly contribute to an agency's visibility and prestige. Indeed, establishing scientific credibility is often viewed as essential for convincing certain audiences, all the more so in the case of physicians who are involved or interested in clinical research and who value peer-reviewed publications. Given that the resources of the six agencies differ significantly, the disparity in scientific outputs is not entirely surprising. It nonetheless shows that conflicting legitimacies are at play—emphasizing the production of traditional scientific knowledge versus user-oriented knowledge. Because the credibility of HTA is intimately linked to its scientific rigor, this tension may affect the agencies' ability to recruit and retain evaluators seeking to build a career according to academic standards. Several interviewees favored a balanced approach—one that encourages peer-reviewed publications yet does not ignore target users of HTA who do not read scientific journals.

Assessing What Kinds of Technologies? Addressing What Issues? And Providing What Types of Conclusions?

Table 4 presents the types of technology assessed by the Canadian agencies. Two agencies clearly focused on health services (76 percent for A1; 56 percent for A6), while Agency 3 focused on drugs (74 percent). Four agencies were only slightly or not at all involved in assessing drugs (A1; A4; A5; A6), perhaps because several provinces had university groups and/or governmental bodies that produced cost-effectiveness analyses of pharmaceuticals. Diagnostic technology and screening tests did not constitute a very large proportion of HTA reports (less than 31 percent), and devices were only infrequently evaluated (less than 19 percent). A limited but significant portion of the production of five agencies consisted of essays and publications concerning research tools. This kind of output, which does not directly respond to users' requests, is nonetheless useful in consolidating the practice and promotion of HTA. Excluding drugs, most agencies presented a fairly balanced portfolio. Overall, only a few agencies focused on certain topics, several did not perform drug assessments, and most assessed several different types of technology. The interviewees believed this reflected a certain institutional specialization of the agencies arising from their specific jurisdictional contexts. As mentioned by a few CEOs, a critical mass of researchers may be required before engaging in a new area of research, and there is a willingness to limit overlap between groups of knowledge producers.

Extending the analysis to examine the content of HTA reports reveals further distinctions between the agencies. HTA should examine not only the efficacy, safety, and costs of health technology, but also the related ethical, social, and legal issues. We were curious to see whether this full range of topics was, in fact, being addressed. Table 5 shows that the most frequently addressed issues were costs and effectiveness. Around a third of the documents of all agencies included a cost component. As one might expect, a large share of the production of four agencies (71 percent for A2; 41 percent for A3; 78 percent for A4; 69 percent for A6) examined effectiveness. However, very few agencies actually related costs to effectiveness by conducting full-fledged cost-effectiveness analyses; only Agency 3 did so in close to half of its reports. We were surprised to find that, despite the growing prevalence of chronic diseases, quality of life was seldom part of assessments. Safety was addressed in less than a third of the agencies' reports.

As expected, ethical and social issues were not frequently addressed. Agency 1 seemed to discuss such issues most frequently (in 40 percent of its reports), while Agencies 3 and 4 rarely provided information on this subject. That the official definition of HTA was not put into practice to its fullest extent is even more striking in the case of legal issues. According to interviews, securing access to staff specialized in ethical and legal issues when resources are limited and the demand great was problematic for several agencies. Of interest, we observed that the state of current practices was documented fairly often by two agencies (68 percent for A1; 69 percent for A6), which is compatible with the "contextualization" of HTA findings for regional or provincial decision-makers. By providing information about current levels of service utilization, surgical rates, or prescription patterns, evaluators were perhaps aiming to better inform decision-makers about the potential gap between evidence and practice and, therefore, to support decisional processes. Similarly, four agencies fairly frequently included in their assessments other outcomes, perhaps in response to specific requests, such as findings about organizational aspects (e.g., collaboration between midwives and obstetricians, funding arrangements) or data about the impact of a recent provincial policy (e.g., hospital lengths of stay, waiting lists).

In summary, close to 50 percent of the HTA reports available in Canada from 1995 to 2001 addressed effectiveness, 37 percent examined costs, and less than 25 percent discussed cost-effectiveness, ethical and social issues, safety, or quality of life. Up to 27 percent of documents conveyed information about current practices, and up to 43 percent reported other outcomes. Table 5 shows that, overall, Canadian agencies were not entirely adhering to the concept of HTA. First, ethical, social, and legal issues were not dealt with very often. Second, the majority of agencies addressed current practices and other outcomes that, while relevant to practice and policy, are not key components of HTA guidelines. Perhaps this could be interpreted as a redefinition of HTA—providing a more contextualized informational base for decision-makers—even though several issues are not being addressed.

We also thought it relevant to consider the conclusions reached in these HTA documents. In general, the documents were clearly written and contained three types of conclusions: (i) negative (24 of 187=13 percent); (ii) neutral (124 of 187=66 percent); and (iii) positive (39 of 187=21 percent; see examples in Table 6). Negative messages included reporting that a technology had not been proven effective or emphasizing the risks associated with its use. Neutral conclusions did not stipulate the actions that should be followed based on the assessment, emphasized the weakness of the evidence, or remained somewhat evasive, stressing that further research was needed. Positive conclusions underscored that a given technology was both effective and safe, or that its cost-effectiveness was reasonable. Formal recommendations were found less often. Except for A5 and A6, only a small proportion of HTAs contained recommendations: A1, 6 of 25=24 percent; A2, 3 of 17=18 percent; A3, 6 of 61=10 percent; A4, 1 of 32=3 percent; A5, 16 of 36=44 percent; A6, 9 of 16=56 percent. This is compatible with how interviewees explained the fragile balance between drawing conclusions about the strength of the evidence and defining what should be done with findings. While the former is definitely part of the evaluators' task, the latter is seen as the responsibility of decision-makers and clinicians.

A Summary: Six Agencies With Different Production Lines?

Table 7 shows how the six agencies, even though they are all assessing health technology and services, are organizational entities with rather different “production lines.” First, some have engaged in product diversification by issuing shorter documents. Second, some have tended to focus more on certain types of topics (drugs, health services), issues, and outcomes. Product diversification may be seen as a response to the imperatives of decision and policy making (timing, readability). The specialization of agencies may result from attempts to attain a critical mass of researchers with similar expertise and to complement the work of other groups (or to avoid stepping on another organization's turf). Agencies, thus, are responding to the expectations of specific users, while at the same time trying to build their own unique mandate or vision statement.

Agency 1 has diversified its products to a limited extent, supports scientific publications, and has specialized in health services assessment. Its high percentage of contextualized documents addressing current practices and other outcomes is highly compatible with the aim of providing "information about utilization of health-care services in the […] population. And this may be information to assess the equity or access to services, it may be information about the quality of care, or about the outcomes" (Researcher). All interviewees from this agency confirmed that, even though the Ministry of Health "informs the agency agenda," most projects were investigator-initiated. Because several of these projects may be funded through national and provincial research funding bodies, the portfolio of this agency is partly shaped by the funding criteria set by these bodies, and its investigators can maintain a high level of scientific autonomy.

Agency 2 has diversified its products, weakly supports scientific publications, and has a balanced portfolio. Its clearly stated mandate is to "explore ethical and social issues raised by the accelerating demand for technological interventions," and it expresses an interest in developing the "theoretical foundations for using large data sets to understand the determinants of health in population groups, and to derive better evidence-based approaches in health care" (Web site). This is compatible with the finding that 24 percent of its documents fit into the "Essay and Research Tool" category. Even though we observed limited contextualization of findings, Agency 2 is explicit about its role in providing decision-makers with guidance: "Research findings are published in detailed reports, serving a principal objective of disseminating results in a form useful to decision-makers in a position to use them. By identifying and communicating how limited health-care resources can be most effectively applied, [the agency] is able to assist the public sector in its policy development and planning efforts" (Web site). In fact, input from several sources is considered when the agency decides on what topics to assess. According to one researcher from this agency, the most important topics are prioritized because they raise significant, province-wide issues—"issues that have been raised from several different perspectives simultaneously." In other words, this agency's portfolio has been shaped by an academic orientation to HTA, and by recurring and convergent suggestions for assessments.

Agency 3 has also diversified its products, weakly supports scientific publications, and specializes in assessing drugs. This specialization is consistent with the finding that it is the agency with the largest proportion of assessments examining cost-effectiveness (51 percent). Its priority-setting process is quite formalized and is significantly influenced by program-specific committees composed of representatives from various regions. As explained by an executive from this agency, a cumulative list of topics is built up throughout the year and then presented to the members: “We had a list of almost fifty possible topics for them and some were clearly not appropriate and some were very appropriate. But we asked them to vote and in the end, we got ten that received multiple votes.” Consequently, the agency's portfolio is the result of a structured process and reflects topics that matter to a wide set of stakeholders.

Agency 4 acts according to a broad mandate: "Support a community of researchers who generate knowledge that improves the health and quality of life of people […] throughout the world" (Web site). This agency has diversified its production to a limited extent, supports scientific publications, and shows a balanced portfolio. Its findings are contextualized as well, but with the goal of "servicing" its population, from decision-makers to members of the public. In the words of one researcher: "We don't select topics, we answer everything that comes to us. What differs is the level of analysis or the comprehensiveness of the assessment." This view was reinforced by the President of the organization of which this agency is part, who stressed that the agency should never decline a request. As a result, according to the agency's head, only between 5 and 10 percent of its projects are investigator-initiated. Overall, this agency's portfolio is oriented by a public service "pull," rather than a scientific "push."

Agency 5 has not diversified its products, supports scientific publications, and has a balanced portfolio. Its findings are contextualized, and recommendations are frequently formulated. Input for priority setting is sought from several sources, and explicit criteria are used to decide which topics to investigate; these include the burden of illness, the number of individuals concerned, the costs, the available evidence, and so on. While the agency believes that requests formulated by the Ministry of Health or officials from the health-care system should not be declined, it bases the depth of its response on available resources and the extent to which a request fits the criteria. Up to a third of this agency's documents are investigator-initiated. Thus, the final portfolio may reflect an interest in the production of both scientific and user-oriented knowledge. Such a balanced approach helps investigators maintain a certain level of scientific autonomy.

Agency 6 has diversified its products to a limited extent and has specialized in health services assessment. Contextualization of its findings is important, and recommendations are frequently formulated. This is compatible with its mandate: To support the "development of strategic health policy by providing evidence on researchable issues" (Web site). The agency has been engaged in an environmental scanning process that informs its priorities and facilitates frequent interactions between researchers and decision-makers. Of interest, because this process relies on bringing together the opinions of diverse stakeholders, one researcher noted that "it's difficult to say how much of that [the topics chosen] is us and how much of it them, there's an interplay there." In other words, the resulting portfolio is seen as responding to both the supply of and demand for HTA.

DISCUSSION AND POLICY RECOMMENDATIONS

Our analyses have attempted, on the one hand, to describe the outputs of the six agencies and, on the other hand, to explore the links between portfolio type and mandate, institutional positioning, and priority-setting mechanisms. This study shows that, despite similar production processes, the six agencies produce outputs that reflect a specific interpretation of what HTA is all about and to whom HTA should be providing guidance. According to our institutional theory framework, agencies can be situated on a spectrum with scientific autonomy (A1) at one end, and user-centered services (A4) at the other (Table 7). These two poles underscore a tension inherent to the production of knowledge aimed at influencing both policy and practice—knowledge needs to meet scientific standards, yet also respond to the needs of its intended users (14). We now wish to focus on four paradoxes that our data have highlighted and that are due to this fundamental tension regarding knowledge production and use (33). This discussion will provide insights into the challenges faced by the international HTA community when responding to an increased or diversified demand, as well as explain why increasing collaboration and coordination among agencies would be a valuable first step in reducing these paradoxical tensions.

Paradox 1. Diversification of Products Versus Streamlining

It is interesting to observe that, after recent efforts to standardize or "streamline" the content of HTA reports (INAHTA), two agencies have diversified their products. This finding is not entirely surprising given the recent emphasis, especially in Canada, on increasing the uptake of research (23;28;29). Full HTA reports may be seen as too lengthy, technical, or detailed to be digestible for certain categories of decision-makers, or in situations where decisions need to be made within a very short time frame. Shorter documents may take the form of detailed, stand-alone summaries of full HTA reports designed to inform busy decision-makers, or rapid technology assessments designed to provide brief and perhaps provisional answers to specific questions. Nonetheless, producing shorter documents requires a systematic and rigorous approach to summarizing and interpreting the evidence in a concise way. Standards and guidelines for developing shorter documents should be established, not only to facilitate the work of the supply side, but also to help the demand side distinguish among the different types of HTA products (16;32).

Paradox 2. Contextualization Versus “One-Size-Fits-All” Assessments

Our results show that Canadian agencies often contextualize their results by adapting the content of their assessments to the jurisdictional context. This contextualization of findings is recognized as key to facilitating the use of research. Jacob and Battista (21) observed that assessments had a greater impact when they were tailored to local conditions (issues, actors, decision processes). However, one methodological assumption in HTA is that all systematic syntheses of the available evidence based on the same methodology should yield similar conclusions (8). Therefore, in principle, any HTA should be applicable to any context. A paradox, thus, lies in the willingness to make explicit the relevance of the findings to a local audience, while still adhering to a universal way of assessing health technologies. We believe that only rarely will "one-size-fits-all" assessments fulfill the needs of decision-makers. Because some interviewees explained how they could use an HTA produced by another agency and develop a specific set of recommendations adapted to the decisional issues of their own jurisdiction, further research should examine how sharing work between agencies could help increase both access to and uptake of HTAs.

Paradox 3. Addressing a Large Scope of Issues Versus One Well-Delineated Question?

HTA is often promoted as an interdisciplinary inquiry that addresses a wide spectrum of issues, from effectiveness to ethics (24). Our findings suggest that, even though this may be true in principle, it is not the case in practice. From the producers' perspective, there may be some obvious reasons why ethical, social, and legal issues are rarely addressed. The staff of HTA agencies are usually trained in epidemiology and may feel less equipped to analyze legal, social, and ethical issues. From the users' perspective, the need to obtain, in concise, reader-friendly language, an answer to a well-defined policy question may preclude a sophisticated analysis of such inherently complex issues (5). Nonetheless, health policy and clinical practice are often shaped by legal incentives (4). Cookson and Maynard (8) also stressed that HTA should examine equity and foster the socially responsible allocation of resources. Thus, collaboration between HTA practitioners and ethicists, sociologists, and jurists should be increased to respond to the very specific requests of decision-makers while maintaining a broad interdisciplinary perspective (22).

Paradox 4. Doing More But Producing Less Measurable Outputs?

Although the budgets of most Canadian agencies have increased modestly but steadily since the late 1990s, the number of HTA documents published has not risen dramatically. Perhaps the staff of HTA agencies are being mobilized in a range of activities that do not translate automatically into measurable outputs. In fact, facilitating the uptake of HTA requires promoting and explaining what HTA is and how it can be used to inform policy and practice. The need for such groundwork is confirmed by a study on technology acquisition in Canadian hospitals, which found: "Respondents had accepted the concept of technology assessment, although they may not have instituted the appropriate organizational mechanisms to put this concept into practice" (10, page 26). Indeed, HTA is at risk of remaining a poor instrument for introducing knowledge-based change if the ability of providers and decision-makers to understand HTA principles and methods is not increased (27). In other words, HTA agencies have to deploy these less measurable but necessary efforts, which should pay off in the long term.

Coordination and Collaboration

While perhaps not a complete surprise to some, these four paradoxes clearly highlight how the HTA community has set a very broad mandate for itself while operating with limited resources. Could coordination and collaboration among agencies within large countries or across European states facilitate the sharing of expertise and assessments and decrease some of the tensions inherent in the HTA production process? To explore this issue, we suggest adopting a global perspective that defines HTA production capacity as depending upon a collective of agencies. The five provincial agencies each receive funding from one major source (usually the provincial Ministry of Health), and each turns to this one source when faced with an increased demand. Nonetheless, it is possible to frame things differently by recognizing that there exists a pool of qualified resources able to produce a pool of HTA documents that can be used by decision-makers in any jurisdiction. Perhaps we could build on the existing strengths and expertise of each agency to better meet the needs of decision-makers across the country by actively supporting access to a range of HTA products covering different kinds of issues. Incentives for collaboration among agencies could be organized according to a broader, countrywide knowledge-brokering strategy. A similar reasoning could apply to collaboration among European states.

Given the four tensions outlined above, collaborative efforts could aim to (i) support a certain diversification of HTA products through the development of common guidelines for both short and full HTA reports that would orient not only the assessment process but also how decision-makers appraise and use such documents; (ii) build a network of evaluators and knowledge-brokers located in each agency who could share their know-how about contextualization and adapt/translate HTA findings produced in other jurisdictions to the concerns of local decision-makers; (iii) facilitate access to experts in ethics, sociology, and law who are familiar with HTA and who can contribute directly to assessments and support a certain specialization of teams of evaluators; and (iv) develop a joint strategy to promote HTA through a series of capacity-building activities targeting providers and decision-makers across the country.

The overall goal of such efforts would be to consolidate assessments, boost production capacity, and increase HTA uptake. Because five of the six Canadian agencies are provincial initiatives, the question of encouraging specialization in agencies has never been analyzed in a systematic manner, although the mandate of the national-level agency includes, in principle, facilitating coordination. According to McDaid, such a coordinating role remains both limited and challenging because “there have been disagreements over the future role of both federal and provincial bodies given the existence of local agencies” (30). However, given that agencies are operating under serious constraints and there is an increasing demand for HTA, it seems more important than ever to fully develop this role.

By combining qualitative and quantitative analyses, this study sought to more fully understand the constraints affecting HTA production. Of course, our findings and analyses should be interpreted with caution. The purpose of this study was not to compare the respective performance of the agencies. Because their mandates, structures, and levels of financial resources vary greatly, the data we analyzed say more about their individual particularities than about what should be a "gold standard" in HTA production. In other words, our intent was not to define the one best way of producing HTA, but rather to gauge how HTA is being practiced and applied in different Canadian jurisdictions. Despite this caveat, this study is valuable in that it extends previous work by Menon and Topfer (31) by documenting three additional years of production and by including the work of two other agencies in the analyses. It also responds to an overall lack of studies on the links between organizational structure, HTA content, and HTA production.

CONCLUSION

This study contributes to knowledge on the production and content of HTA products, an area that has not been sufficiently addressed in the literature. It shows that, while the bulk of HTA documents produced in Canada are full HTA reports, a few agencies have been diversifying their products to meet the informational needs of a range of users. Because scientific credibility remains associated with publishing in peer-reviewed journals, a balanced approach should aim to encourage such publications while providing target users with appropriately formatted reports. Several agencies present a fairly balanced portfolio with respect to the types of technology assessed. Of interest, we observed that agencies are going beyond the traditional concept of HTA by including data on current practices and other outcomes. However, ethical, legal, and social issues are not being addressed as frequently as HTA guidelines require. Additional efforts are needed to make sure assessments lead to clear messages about ways to move forward.

A recent National Commission strongly recommended increasing HTA capacity in Canada (34). Indeed, further resources should be devoted to the dissemination of HTA, and the importance of capacity-building initiatives should be fully recognized (27). In the early years of HTA's development, the risk of needlessly duplicating work done elsewhere was a recurrent issue (31). With the establishment of collaborative networks among agencies and HTA practitioners (through local initiatives and INAHTA, but also through attendance at international meetings), the likelihood of clear duplication has diminished. With the growth of HTA around the world, it now seems urgent to develop collaborative mechanisms to facilitate the sharing of expertise, encourage a certain level of specialization, and enable wider access to all available HTA products. Collaboration works well on a voluntary basis, when different parties share similar values, interests, and methods. It is vital that we reflect further on how established know-how in HTA production and electronic publishing could facilitate the sharing of HTA, as well as its tailoring to the needs of decision-makers.

This research was funded by an operating grant from the Canadian Institutes of Health Research (CIHR; #42499). The first author is a National Scholar with the National Health Research and Development Program (now under CIHR; #6605-5359-48). Jean-Louis Denis holds a Chair jointly sponsored by the Canadian Health Services Research Foundation (CHSRF) and the CIHR. We are grateful to the chief executive officers, evaluators, and communications staff who participated in our study and facilitated access to documents. We wish to thank John Lavis for generous comments on a draft version of this article.

References

Banta HD, Oortwijn WJ, van Beekum WT. 1995. The organization of health care technology assessment in the Netherlands. The Hague: Rathenau Institute.
Battista RN, Banta HD, Jonsson E, et al. 1994. Lessons from eight countries. Health Policy. 30: 397-421.
Battista RN, Feeny DH, Hodge M. 1995. Evaluation of the Canadian Coordinating Office for Health Technology Assessment. Int J Technol Assess Health Care. 11: 102-116.
Battista RN, Lance JM, Lehoux P, et al. 1999. Health technology assessment and the regulation of medical devices and procedures in Quebec: Synergy, collusion or collision? Int J Technol Assess Health Care. 15: 593-601.
Bero LA, Jadad AR. 1997. How consumers and policymakers can use systematic reviews for decision making. Ann Intern Med. 127: 37-42.
Børlum Kristensen F, Gabbay J, Antes G, et al. 2002. Education and support networks for assessment of health interventions. Int J Technol Assess Health Care. 18: 423-446.
Buxton M, Hanney S. 1996. How can payback from health services research be assessed? J Health Serv Res Policy. 1: 35-43.
Cookson R, Maynard A. 2000. Health technology assessment in Europe. Improving clarity and performance. Int J Technol Assess Health Care. 16: 639-650.
Davies E, Littlejohns P. 2002. Views of directors of public health about NICE appraisal guidance: Results of a postal survey. J Public Health Med. 24: 319-325.
Deber R, Wiktorowicz M, Leatt P, et al. 1995. Technology acquisition in Canadian hospitals: How are we doing? Healthcare Manage Forum. 8: 23-28.
Drummond M, Weatherly H. 2000. Implementing the findings of health technology assessments. If the cat got out of the bag, can the tail wag the dog? Int J Technol Assess Health Care. 16: 1-12.
Edquist C, Johnson B. 1997. Institutions and organizations in systems of innovation. In: Edquist C, editor. Systems of innovation. Technologies, institutions and organizations. London: Pinter: 41-63.
Foray D. 1997. Generation and distribution of technological knowledge: Incentives, norms, and institutions. In: Edquist C, editor. Systems of innovation. Technologies, institutions and organizations. London: Pinter: 64-84.
Gibbons M, Limoges C, Nowotny H, et al. 1994. The new production of knowledge. London: Sage Publications.
Grilli R, Lomas J. 1994. Evaluating the message: The relationship between compliance rate and the subject of a practice guideline. Med Care. 32: 202-213.
Hailey D, Corabian P, Harstall C, et al. 2000. The use and impact of rapid health technology assessments. Int J Technol Assess Health Care. 16: 651-656.
Hailey D, Topfer LA, Wills F. 2001. Providing information on emerging health technologies to provincial decision makers: A pilot project. Health Policy. 58: 15-26.
Hailey D. 1993. The influence of technology assessments by advisory bodies on health policy and practice. Health Policy. 25: 243-254.
Hanney SR, Gonzalez-Block MA, Buxton MJ, et al. 2003. The utilization of health research in policy-making: Concepts, examples and methods of assessment. Health Res Policy Syst. 1.
Jacob R, McGregor M. 1997. Assessing the impact of health technology assessment. Int J Technol Assess Health Care. 13: 68-80.
Jacob R, Battista RN. 1993. Assessing technology assessment: Early results of the Quebec experience. Int J Technol Assess Health Care. 9: 564-572.
Johri M, Lehoux P. 2003. The great escape? Health technology assessment as a means of cost control. Int J Technol Assess Health Care. 19: 179-193.
Lavis JN, Ross SE, Hurley JE, et al. 2002. Examining the role of health services research in public policy-making. Milbank Q. 80: 125-154.
Lehoux P, Blume S. 2000. Technology assessment and the sociopolitics of health technologies. J Health Polit Policy Law. 25: 1083-1120.
Lehoux P, Battista RN, Lance JM. 2000. Monitoring health technology assessment agencies. Can J Program Eval. 15: 1-33.
Lehoux P, Denis JL, Tailliez S, Hivon M. Dissemination of HTA in Canada: Do visions match strategies? J Health Polit Policy Law. Submitted.
Lehoux P. 2002. Could new regulatory mechanisms be designed after a critical assessment of the value of headline innovations? Discussion paper number 37. Commission on the Future of Health Care in Canada, chaired by R. Romanow.
Lomas J. 1990. Finding audiences, changing beliefs: The structure of research use in Canadian health policy. J Health Polit Policy Law. 15: 525-542.
Lomas J. 1993. Making clinical policy explicit. Int J Technol Assess Health Care. 9: 11-25.
McDaid D. 2003. Co-ordinating health technology assessment in Canada: A European perspective. Health Policy. 63: 205-213.
Menon D, Topfer LA. 2000. Health technology assessment in Canada. A decade in review. Int J Technol Assess Health Care. 16: 896-902.
Mowatt G, Grant AM, Bower DJ, et al. 2001. Timing of assessment of fast-changing health technologies. In: Stevens A, Abrams K, Brazier J, et al., editors. The advanced handbook of methods in evidence-based health care. London: Sage Publications: 471-484.
Nowotny H, Scott P, Gibbons M. 2001. Re-thinking science: Knowledge and the public in an age of uncertainty. Cambridge: Polity Press.
Romanow RJ. 2002. Building on values: The future of health care in Canada. Saskatoon, Sask.: Commission on the Future of Health Care in Canada.
Smits R, Leyten L. 1988. Key issues in the institutionalization of technology assessment. Development of technology assessment in five European countries and the USA. Futures. February: 19-36.
Strauss A, Corbin J. 1990. Basics of qualitative research. Newbury Park: Sage.
Tables

Table 1. Number of HTA Documents Published by Agencies from 1995 to 2001
Table 2. Types of HTA Documents (1995–2001)
Table 3. Number of Scientific Papers and Conferences (1995–2001)
Table 4. Types of Technology Assessed in HTA Documents (1995–2001)
Table 5. Issues Addressed in HTA Documents (1995–2001)
Table 6. Examples of Negative, Neutral, and Positive Conclusions (1995–2001)
Table 7. Summary of the Portfolio Characteristics of the Six Agencies