
Supporting the use of health technology assessments in policy making about health systems

Published online by Cambridge University Press:  06 October 2010

John N. Lavis
Affiliation:
McMaster University
Michael G. Wilson
Affiliation:
McMaster University
Jeremy M. Grimshaw
Affiliation:
University of Ottawa and Ottawa Hospital Research Institute
R. Brian Haynes
Affiliation:
McMaster University and Hamilton Health Sciences
Mathieu Ouimet
Affiliation:
Université Laval and CHUQ Research Centre
Parminder Raina
Affiliation:
McMaster University
Russell L. Gruen
Affiliation:
Monash University and National Trauma Research Institute
Ian D. Graham
Affiliation:
University of Ottawa and Canadian Institutes of Health Research

Abstract

Objectives: The objectives of this study were to profile the health technology assessments (HTAs) produced in Canada and other selected countries and to assess their potential to inform policy making about health systems in jurisdictions other than the ones for which they were produced, and to develop and pilot test prototypes for packaging and assessing the relevance of HTAs for health system managers and policy makers.

Methods: We compiled an inventory of all HTAs that were produced by nine HTA agencies between September 2003 and August 2006; coded the title and abstract of each HTA according to the technologies assessed, methods used, and whether or not context-specific actionable messages were provided; developed a prototype for a structured, decision-relevant HTA summary and for a relevance-assessment form; and pilot-tested the prototypes using semistructured telephone interviews with a purposive sample of Canadian healthcare managers and policy makers.

Results: Our review of the 223 HTAs identified that: (i) 44 HTAs addressed health system arrangements (20 percent); (ii) 205 incorporated a systematic review (92 percent), whereas only 12 incorporated a sociopolitical assessment using explicit methods (5 percent); and (iii) 50 contained context-specific actionable messages (22 percent). Our interviews identified significant support for both the general idea of an HTA summary and the prototype's specific elements, but mixed views about using peer assessments of relevance.

Conclusions: Those involved in supporting the use of HTAs in policy making about health systems may wish to produce structured decision-relevant summaries for their systematic review-containing HTAs to increase the prospects for their HTAs being used outside the jurisdiction for which they were produced.

Type: POLICIES
Copyright © Cambridge University Press 2010

Supporting evidence-informed policy making about health systems has garnered significant attention over the past 5 years (45–47). Policy making about health systems can be taken to mean policy making about the governance, financial, and delivery arrangements within which clinical (and public health) programs and services are provided. Policy making within health systems, on the other hand, typically means policy making about which programs, services, drugs and devices to fund, cover or deliver. Policy making about health systems requires research evidence to inform problem definition (e.g., administrative database studies that provide comparisons over time or across jurisdictions), option framing and assessment (e.g., systematic reviews of randomized controlled trials about the benefits and harms of each option and economic evaluations about their cost-effectiveness), and implementation plans (e.g., qualitative studies about the barriers to implementation), among other types of research evidence (12;24;27;28).

Health technology assessments (HTAs) have the potential to be an important source of research evidence to inform policy making about health systems (not just within health systems, which has typically been their orientation), particularly research evidence to inform the framing and assessment of options that involve introducing or changing health system arrangements (41;42;44). For example, policy makers may ask how best to involve consumers in decision making about health systems (a question about governance arrangements), what is known about different ways of remunerating physicians (a question about financial arrangements), or whether teams deliver better care than independent providers (a question about delivery arrangements). Most definitions of technology used by HTA producers extend beyond what most lay people consider to be technologies, and also include health system arrangements. A question then is how frequently do HTAs actually focus on health system governance, financial and delivery arrangements?

HTAs also have the potential to be an important source of research evidence to inform policy making outside the jurisdiction for which they were produced. A potential benefit of an HTA to policy makers working in the jurisdiction for which an HTA is produced (who we call local policy makers) is that the assessment and interpretation of research evidence is typically highly context-specific in order to inform local decisions (3;9;14;31;32). This specificity, however, can limit the usefulness of the synthesized research evidence for policy makers in other jurisdictions. For example, the available research evidence about consumer-engagement, physician-remuneration, and team-composition options may be filtered through the lens of the particular professional, social, political, legal, and ethical context of a jurisdiction (22;37), leaving policy makers in other jurisdictions wondering how it might be applied in their own settings. A question then is how frequently do HTAs separate the systematic review of the relevant research literature, which can be used in other jurisdictions, from two HTA elements that typically cannot be used without modification in other jurisdictions: (i) an assessment of local professional, social, political, legal, and ethical factors (which we call a sociopolitical assessment); and (ii) context-specific actionable messages?

HTAs may also have a greater potential for both local and widespread use if they are optimally packaged and independently assessed for relevance to health systems. A structured decision-relevant summary of an HTA could enable rapid scanning by policy makers to determine the technology assessed (e.g., the type of health system arrangement), methods used (e.g., systematic review), and key findings (e.g., benefits and harms) and whether reading the full HTA is warranted. A relevance rating of an HTA could further inform a policy maker's decision whether to invest time in reading the full HTA or even a summary of the HTA. The rapid growth in HTA production over the past 2 decades in Canada (31;36), as in other jurisdictions, coupled with the rapid growth in systematic review production, has necessitated the development of mechanisms like these to reduce the "noise-to-signal ratio" for policy makers.

Our research has the general goal of refining methods for supporting the use of HTAs for policy making about health systems (10;16;30), the impacts of which have been questioned in the past (1;2;5;30;35). The specific objectives of the current study were: (i) to profile the HTAs produced in Canada and other selected countries to assess whether they have the potential to inform policy making about health systems in jurisdictions other than the ones for which they were produced; and (ii) to develop and pilot test prototypes for packaging and assessing the relevance of HTAs for health system managers and policy makers.

METHODS

Profiling HTA Production

We began the process of profiling HTA production by purposively sampling HTA agencies. We selected five high-profile HTA agencies in Canada that focused on different health system "levels": one with a national focus (CADTH), three with a provincial focus (AETMIS—Agence d'évaluation des technologies et des modes d'intervention en santé, AHFMR—Health Technology Assessment Unit at the Alberta Heritage Foundation for Medical Research, and OHTAC—Ontario Health Technology Advisory Committee), and one with an organizational focus (TAU-MCHC—Technology Assessment Unit of the McGill University Health Centre) (33). We selected four high-profile agencies with a national focus from outside Canada: two in the United Kingdom (NHS Research and Development Programme's Health Technology Assessment Programme and the National Institute for Health and Clinical Excellence), one in Denmark (Danish Centre for Evaluation and Health Technology Assessment), and one in the United States (Agency for Healthcare Research and Quality).

We then reviewed the Web sites of the nine HTA agencies to compile a list of HTAs meeting our two inclusion criteria: (i) full HTAs (i.e., not rapid appraisals or briefs on new and emerging technologies); and (ii) published between September 2003 and August 2006. Three of us (J.N.L., J.M.G., and M.G.W.) initially coded ten HTAs from each HTA agency using a simple framework that focused on the type of technology the HTA addressed and the methods used. We revised the coding framework iteratively, and the final version focused on five questions (Table 1). Three of us independently coded all eligible HTA abstracts, with one reviewer (J.N.L.) coding all HTA abstracts and two reviewers (M.G.W. and J.M.G.) coding half of the HTA sample. All coding disagreements were resolved by consensus.

Table 1. HTA Coding Framework

*Arguably, there is no agreed-upon methodology for conducting a sociopolitical assessment, but we looked for any assessment using explicitly described methods.

HTA, health technology assessment.

Developing a Structured Summary Prototype

We developed a prototype for a structured summary for HTAs using input from a purposive sample of twenty-nine Canadian and British managers and policy makers who we interviewed in a previous study about how to make systematic reviews more useful to managers and policy makers (Reference Lavis, Davies and Oxman25). The input suggested the need for a summary with these attributes:

  • Provides graded entry to the full details of an HTA, which would mean something like a 1:3:25 page format (i.e., one page of take-home messages, a three-page executive summary that summarizes the full report, and a 25-page report, as well as a longer technical report if necessary) but with a structured format for one or both of the one- or three-page summaries;

  • Facilitates assessment of decision-relevant information, which would include the benefits, harms and costs of the option under consideration (not just the benefits), the uncertainty associated with estimates, and any differential effects by subgroup (or more generally any equity considerations) (4;15;38–40); and

  • Facilitates assessment of the local applicability of a review, which would include features of the option and the contexts in which it had been studied.

We supplemented this input with a review of the International Network of Agencies for Health Technology Assessment (INAHTA) checklist for HTA reports, which identified the importance of flagging both conflicts of interest and peer review (17;21).

The prototype for the (roughly three-page) executive summary contained three sections:

  • A description of the report itself, which included (i) the type of technology assessed, (ii) the basic elements of the question addressed (i.e., the population, intervention, comparator, and outcomes studied), (iii) whether the report is part of an integrated set that addresses a larger topic, (iv) whether the report is an update of one previously completed, (v) the methods used (and whether the findings from different methods, such as a systematic review, could be used separately), (vi) whether the report has been peer and/or user reviewed, and (vii) whether any conflicts of interest were reported;

  • A description of the findings contained in the report, which included verbatim text from the report about the technology's benefits (categorized into evidence of benefits, no evidence of benefits, and lack of evidence for benefits), harms, and costs (both average costs and cost-effectiveness), as well as related prompts about local applicability (26), equity considerations, and scaling-up considerations; and

  • A description of the take-home messages (i.e., the conclusions and/or recommendations) about the technology.

Prompts for data elements that were not touched on in the report are retained, but in a lighter font, to help readers identify where salient information may be lacking.

The prototype for the one-page version placed the description of the take-home messages on a separate front page.

Developing a Relevance-Assessment Prototype

Our approach to engaging health system managers and policy makers in relevance assessments used as a starting point the approach used successfully for EvidenceUpdates, a physician-targeted evidence service (18). EvidenceUpdates uses the McMaster Online Rating of Evidence (MORE) system (http://hiru.mcmaster.ca/more) to obtain peer assessments of the relevance and newsworthiness of original studies and systematic reviews (19). The prototype for EvidenceUpdates was evaluated in a cluster randomized trial that documented a sustained increase of approximately 58 percent in the use of evidence-based sources by physicians (20).

Our relevance-assessment prototype for health system managers and policy makers contained three sections: (i) immediate relevance (i.e., whether the technology is being discussed and deliberated upon actively), (ii) potential relevance (i.e., whether the technology is likely to be discussed and deliberated upon in the future), and (iii) informativeness. We chose to distinguish immediate relevance (with a scale anchored by "Definitely not relevant right now: completely unrelated content area" and by "Directly and highly relevant right now") from potential relevance (with a scale anchored by "Definitely not relevant in the future: completely unrelated content area" and by "Directly and highly relevant in the future") because health system managers' and policy makers' agendas are far less predictable than physicians' caseloads, and what is not on their agenda today could emerge on it in the future. We chose to use the word "informativeness" to capture the idea that HTAs may or may not add any new information to what is already known (with a scale anchored by "Not of direct decision-making importance" and by "Useful information, most managers and policy makers in my organization or jurisdiction definitely do not know this (unless they have read this article)") rather than the word "newsworthiness" because the latter may have been confused with likelihood of media coverage. The instructions for the form differed slightly depending on whether the health system managers and policy makers were drawn from the same jurisdiction as the one in which the HTA was undertaken.

Pilot Testing the Structured Summary and Relevance-Assessment Prototypes

We pilot tested the structured summary and relevance-assessment prototypes using one-on-one semistructured telephone interviews with a purposive sample of Canadian healthcare managers and policy makers. We selected five provinces from which to recruit managers and policy makers: three were the same provinces for which we profiled HTA production (Alberta, Ontario, and Quebec), one was another populous province (British Columbia), and one was a less populous province that nonetheless has a sizable health research community (Nova Scotia). Within each province, we attempted to identify three potential study participants from positions within government, one participant from a regional health authority, and one participant from a community care organization. We sought to achieve variation in the size of regional health authorities and in the mix of junior and senior positions and areas of content focus (e.g., prescription drugs and devices). We identified potential study participants through key informants in each province, Web sites, and our initial pool of interviewees.

We sent each potential study participant (by both e-mail and post) an invitation letter, a brief description of our research project, and a consent form in their preferred language of correspondence (English or French). We sent each individual who agreed to participate in the study (again in their preferred language): (i) two versions of the structured summary (one with the take-home messages on a separate page up front and a second with the take-home messages at the end of the executive summary); (ii) three completed structured summaries in English and, for French-speakers, one completed structured summary in French; and (iii) the relevance-assessment prototype. We purposively sampled the HTAs that were the focus of the structured summaries to achieve variation in the features on which we focused for our profiles. One of the three HTAs focused on health system arrangements. One individual (M.G.W.) conducted all English-language interviews and another individual (M.O.) conducted all French-language interviews. The interview guide addressed the overall structure of the structured summaries (including whether the take-home messages should appear up front or at the end of the executive summary), the content of the three sections within the structured summary, and the overall structure, questions, and scales in the relevance-assessment prototype. We asked how meaningful each element was and probed to identify missing elements, redundant elements, and unclear wording. All interviews were audio-taped. One individual (M.G.W.) developed structured summaries of all interviews. We followed a constant comparative method and iteratively revised the prototypes, interview questions, and thematic analysis as two individuals independently coded groups of structured summaries.

RESULTS

Profile of HTA Production

We identified and coded 223 HTAs produced by nine HTA agencies between 2003 and 2006 (Table 2). The most frequently assessed technologies were drugs (28 percent), followed by devices (22 percent), diagnostics (16 percent), and healthcare delivery arrangements (15 percent). The clinical “other” category was assigned to 24 percent of HTAs; however, it was uniquely assigned to only 18 percent of assessed technologies. Forty-four HTAs (20 percent) addressed health system arrangements. The most frequently used method was a systematic review (92 percent), often either alone (49 percent) or in combination with an economic evaluation (38 percent). Only 12 HTAs (5 percent) incorporated a systematic sociopolitical assessment using explicit methods. Fifty HTAs (22 percent) provided context-specific actionable messages (e.g., for a specific organization or system).

Table 2. Characteristics of HTAs produced by nine agencies between September 2003 and August 2006

aThese abstracts were originally coded as having a financial arrangement component but the full HTA reports did not.

bFull HTA was defined as including a systematic review, economic evaluation, and a systematic sociopolitical assessment using explicit methods.

cHTA reports could be coded as addressing multiple types of technologies and therefore the total number of HTAs reviewed may not correspond to the total number of HTA reports addressing different types of technologies.

Structured Summary Prototype

We conducted one-on-one semistructured telephone interviews about the structured summary prototype with nineteen Canadian healthcare managers and policy makers, of whom five were from British Columbia, three from Alberta, four from Ontario, four from Quebec, one from Nova Scotia, and two who brought a national perspective. We achieved a good balance across government (n = 12), regional health authorities (n = 4), and community organizations (n = 3), in the mix of junior and senior positions, and in areas of content focus; however, in one jurisdiction (hereafter called jurisdiction A), two interviewees' comments suggested that they felt more aligned with the community of HTA producers than with the community of HTA users.

Overall, the structured summaries were well received by healthcare managers and policy makers, and they responded in the same way to the structured summary of an HTA focused on health system arrangements as they did to the HTA with a clinical focus and the HTA with a public health focus. With the exception of two interviewees from jurisdiction A, all interviewees thought that the prototype provided a very useful way to summarize HTAs in a user-friendly and policy-relevant manner. For instance, one interviewee from jurisdiction B indicated that "what we do in my branch is we do briefing notes on most of the external reports and some HTAs that come in to the ministry. I was struck about how similar the objective is in our briefing notes and the objective of your [HTA summary]." This similarity between our structured summary and documents prepared internally to inform decision making was reiterated by an interviewee from jurisdiction C who said "this stuff would end up getting cut and paste[d] right into our business cases." An interviewee from a regional health authority concluded: "this is helpful stuff. It can really help us make better use of the evidence, for sure." A civil servant added that structured summaries "would make my life a whole lot easier if, let's say, all reports . . . had these things."

With respect to the structure and content of the prototype, all interviewees thought the structured summaries would assist them in scanning for decision-relevant information. As one interviewee stated: “the way it's structured, I think it picks out the highlights that readers will want to know and they look for out of the report.” With the exception of three interviewees (each from a different sector), all interviewees argued that the take-home messages from the HTA should be placed at the beginning of the “friendly front end.” Their justification for placing the take-home messages up front was typically that it is the first place their peers would look for information. All interviewees argued for retaining prompts for data elements that were not touched on in the report (but in a lighter font). Here their justification was that knowing what had not been assessed would help to generate discussion about these issues. Interviewees found useful both the division of findings into benefits, harms and costs and (with the exception of one interviewee in government) the additional separation of benefits into “evidence of benefits,” “no evidence of benefits,” and “lack of evidence for benefits.” (The interviewee with a different view believed that benefits should be placed under one general heading.) While interviewees found the harms and costs sections to be useful, many of them suggested that the language used in these sections could be made more accessible to managers and policy makers and that system-level costs (which are a critical consideration in management and policy making) could be better highlighted. Several interviewees noted that cost-effectiveness information is typically difficult to understand but should still be included because it is useful to some of their peers.

Interviewees provided five specific suggestions about how to improve the prototype. First, most interviewees noted that the structured summaries need to be formatted in a way that makes each of the important headings very prominent and hence makes it easier for managers and policy makers to scan for important information. Second, many interviewees suggested that a scoring of an HTA's quality would be helpful (e.g., using some sort of accepted HTA appraisal instrument or a hierarchy-of-evidence approach). Third, many participants suggested that the intent (and heading) needs to be made clearer for the section entitled "For context-specific HTAs, can the component studies be used separately?" This section generated a great deal of confusion. Fourth, many participants suggested that a short plain-language summary be placed at the beginning of each structured summary (along with the take-home messages) because much of the content is still geared toward those with some background in and understanding of research. Fifth, some participants suggested that it would be helpful to extract from the HTA report and include in the structured summaries a listing of previously completed or soon-to-be-completed HTAs on the same topic (e.g., from HTA agencies in other jurisdictions). On the basis of these interviews we modified the structured summary prototype, and in particular we acted on the first and third suggestions (Supplementary Appendix 1, which can be viewed online at www.journals.cambridge.org/thc2010027).

Relevance-Assessment Prototype

In contrast to the feedback about the structured summary prototype, we received mixed feedback about the relevance-assessment prototype from the same nineteen Canadian healthcare managers and policy makers. While only two interviewees (each from a different sector) indicated that they did not think a relevance-assessment process was worthwhile, the remaining interviewees typically offered suggestions that went beyond minor modifications. Many interviewees found our distinction between immediate and potential relevance confusing. Their suggestions about how to address the confusion ranged from combining the two questions into one to changing the term "potential relevance" to "future relevance" (which seemed more descriptive to some). Most interviewees found the response options to be redundant, burdensome, or both, and many suggested replacing the seven-point scale with a five-point scale and moving as much of each response option's description as possible into the text of the preceding question. Only one interviewee argued for keeping the seven-point scale. One interviewee argued strongly for including a user assessment of each HTA's quality. Other interviewees suggested asking for an assessment of whether or not the HTA would be useful for decision making, providing a tracking system on a Web site showing how many times the HTA has been accessed by other users, and providing a forum for users to comment on the HTA (similar to consumer reviews on store Web sites).

On the basis of these interviews we modified the relevance-assessment prototype in the following ways: (i) we modified the response options to fit within a five-point scale; (ii) we reduced the text in the response options by including more of the text in the questions that precede the response options; (iii) we modified the question about potential relevance to ask about the future relevance of the HTA; (iv) we modified the question about “informativeness” to ask how useful the HTA would be for decision-making; and (v) we added a question that asks the respondent to provide an overall assessment of quality for the HTA using a five-point scale (Supplementary Appendix 2).

DISCUSSION

Principal Findings

Our review of the 223 HTAs produced by nine HTA agencies between 2003 and 2006 identified that: (i) 20 percent addressed health system arrangements; (ii) 92 percent incorporated a systematic review, while only 5 percent incorporated a sociopolitical assessment using explicit methods; and (iii) 22 percent contained context-specific actionable messages. Our interviews with a purposive sample of nineteen Canadian health system managers and policy makers identified significant support for both the general idea of an HTA summary and the prototype's specific elements, but mixed views about using peer assessments of relevance. The revised structured summary prototype describes the focus of the HTA (categorized both according to the nature of the technology and according to the populations, interventions, comparators and outcomes studied) and an overall weighing of benefits, harms, and costs, as well as more details about the benefits, harms, and costs or cost-effectiveness (including equity considerations within a health system and applicability considerations across health systems) and more details about the report itself (e.g., methods used and conflicts of interest reported). The revised prototype addresses immediate and future relevance, overall quality, and usefulness.

Strengths and Weaknesses of Our Study

Our study has two main strengths: (i) we used an iteratively developed coding framework and independent coders to profile HTA production; and (ii) we based the prototypes on design features that had been identified as important in the research literature and then revised these features iteratively in response to feedback from healthcare managers and policy makers. Our study has two main weaknesses. First, we encountered difficulties in securing the participation of health system managers and policy makers from a smaller province so we may have missed some important feedback unique to such jurisdictions. However, the interviewees whose participation we were able to secure provided richly detailed perspectives on the role of structured summaries and relevance-assessment forms and how to enhance this role. Second, we did not assess the use of the summaries in policy making about health systems.

Strengths and Weaknesses as Compared to Other Studies

Our profile of the technologies assessed and methods used by HTA agencies are consistent with those found both internationally and in Canada (Reference Draborg and Gyrd-Hansen8;Reference Jonsson and Banta23;Reference Mears, Taylor, Littlejohns and Dillon34;Reference Menon and Topfer36;Reference Velasco Garrido, Perleth and Drummond43). As one concrete example, others have also noted that, while writings about HTAs typically suggest that they include what we call a systematic sociopolitical assessment (i.e., an assessment of the professional, social, political, legal and ethical context in which an HTA is being considered), most HTAs tend to focus solely on epidemiological (effectiveness) and economic (cost-effectiveness) analyses (Reference DeJean, Giacomini, Schwartz and Miller6;Reference Giacomini13;Reference Lehoux and Blume29;Reference Lehoux, Denis, Tailliez and Hivon30). One notable difference, however, is that a lower proportion (less than a quarter compared with roughly a half) of HTAs were found to provide context-specific actionable messages than had been found in a previous study (Reference Draborg, Gyrd-Hansen, Poulsen and Horder9). Preferences for providing recommendations can vary across jurisdictions (Reference Draborg and Anderson7). Both EUR-ASSESS and INAHTA, for example, recommend that HTAs provide clear conclusions (Reference Granados, Jonsson and Banta16;Reference Hailey17), whereas Canadian authorities often do not want HTAs to provide clear recommendations (Reference Draborg and Anderson7). To our knowledge, no one has developed structured decision-relevant summaries and a relevance-assessment form for HTAs and pilot-tested them with health system managers and policy makers. However, the EUnetHTA project has initiated complementary work on a toolkit for assessing the applicability of findings from HTAs, which will help provide further guidance on how to use the findings from HTAs (11). 
The use of these summaries and this form, as well as the applicability toolkit, could aid the dissemination of HTAs, particularly in Canada, where HTA dissemination efforts have previously been described as "ad hoc in nature" and as using "a piecemeal approach" (30).

IMPLICATIONS

Those involved in supporting the use of HTAs in policy making about health systems may wish to produce structured decision-relevant summaries for their HTAs that contain systematic reviews, to increase the prospects of these HTAs being used outside the jurisdiction for which they were produced. Additional testing of the relevance-assessment form is required, including a psychometric evaluation of its inter-rater reliability and validity and an evaluation of policy makers' perceptions about, and use of, information about relevance.

SUPPLEMENTARY MATERIAL

Supplementary Appendix 1

Supplementary Appendix 2

www.journals.cambridge.org/thc2010027

CONTACT INFORMATION

John N. Lavis, MD, PhD, Professor, Department of Clinical Epidemiology and Biostatistics and Department of Political Science, McMaster University; Director, McMaster Health Forum, 1280 Main Street West, CRL-209, Hamilton, Ontario L8S 4K1, Canada

Michael G. Wilson, BHSc, Doctoral Candidate, Health Research Methodology Program, McMaster University, 1280 Main St. West, CRL-209, Hamilton, Ontario L8S 4K1, Canada

Jeremy M. Grimshaw, MB, ChB, PhD, Professor, Department of Medicine, University of Ottawa; Senior Scientist, Clinical Epidemiology Program, Ottawa Hospital Research Institute, 1053 Carling Avenue, Ottawa, Ontario K1H 5S4, Canada

R. Brian Haynes, MD, PhD, Professor, Department of Clinical Epidemiology and Biostatistics and Department of Medicine, McMaster University; Attending Staff, Department of Medicine, Hamilton Health Sciences, 1200 Main Street, Hamilton, Ontario L8N 3Z5, Canada

Mathieu Ouimet, PhD, Assistant Professor, Department of Political Science, Université Laval, 2325 Rue de l'Université, Québec, Québec, G1V 0A6, Canada; Researcher, CHUQ Research Centre, 10 Rue de l'Espany, Québec, Québec G1L 3L5, Canada

Parminder Raina, PhD, Professor, Department of Clinical Epidemiology and Biostatistics, McMaster University, 1280 Main Street West, DTC-306, Hamilton, Ontario L8S 4L8, Canada

Russell L. Gruen, MBBS, PhD, Professor, Departments of Surgery and Public Health, Monash University; Director, National Trauma Research Institute, Alfred Hospital, 89 Commercial Road, Melbourne, VIC 3004, Australia

Ian D. Graham, PhD, Associate Professor, School of Nursing, University of Ottawa, 451 Smyth Road, Ottawa, Ontario K1H 8M5, Canada; Vice-President, Knowledge Translation, Canadian Institutes of Health Research, 160 Elgin Street, AL 4809A, Ottawa, Ontario K1A 0W9, Canada

CONFLICT OF INTEREST

R.B. Haynes, J.N. Lavis, P. Raina, and M.G. Wilson received a grant to their institutions from the Canadian Agency for Drugs and Technologies in Health for this work. All other authors report no potential conflicts of interest.

REFERENCES

1. Battista RN, Banta HD, Jonsson E, Hodge M, Gelbland H. Lessons from the eight countries. Health Policy. 1994;30:397-421.
2. Battista RN, Lance J-M, Lehoux P, Regnier G. Health technology assessment and the regulation of medical devices and procedures in Quebec: Synergy, collusion, or collision? Int J Technol Assess Health Care. 1999;15:593-601.
3. Canadian Agency for Drugs and Technologies in Health. FAQ. Ottawa, Canada: Canadian Agency for Drugs and Technologies in Health. http://www.cadth.ca/index.php/en/hta/faq (accessed December 23, 2009).
4. Chou R, Helfand M. Challenges in systematic reviews that assess treatment harms. Ann Intern Med. 2005;142 (pt 2):1090-1099.
5. Cookson R, Maynard A. Health technology assessment in Europe: Improving clarity and performance. Int J Technol Assess Health Care. 2000;16:639-650.
6. DeJean D, Giacomini M, Schwartz L, Miller FA. Ethics in Canadian health technology assessment: A descriptive review. Int J Technol Assess Health Care. 2009;25:463-469.
7. Draborg E, Anderson CK. Recommendations in health technology assessments worldwide. Int J Technol Assess Health Care. 2006;22:155-160.
8. Draborg E, Gyrd-Hansen D. Time-trends in health technology assessments: An analysis of developments in composition of international health technology assessments from 1989 to 2002. Int J Technol Assess Health Care. 2005;21:492-498.
9. Draborg E, Gyrd-Hansen D, Poulsen PB, Horder M. International comparison of the definition and the practical application of health technology assessment. Int J Technol Assess Health Care. 2005;21:89-95.
10. Eisenberg JM. Ten lessons for evidence-based technology assessment. JAMA. 1999;282:1865-1869.
11. EUnetHTA Work Package 5 Members. Applicability testing of WP5 toolkit – Round two: Summary report. Southampton, England: NIHR Coordinating Centre for Health Technology Assessment (NCCHTA); 2008.
12. Fretheim A, Munabi-Babigumira S, Oxman AD, Lavis JN, Lewin S. SUPPORT Tools for evidence-informed health Policymaking (STP) 6: Using research evidence to address how an option will be implemented. Health Res Policy Syst. 2009;7 (Suppl 1):S6.
13. Giacomini M. The which-hunt: Assembling health technologies for assessment and rationing. J Health Polit Policy Law. 1999;24:715-758.
14. Goodman CS, Ahn R. Methodological approaches of health technology assessment. Int J Med Inform. 1999;56:97-105.
15. GRADE Working Group. Education and debate: Grading quality of evidence and strength of recommendations. BMJ. 2004;328:1490-1494.
16. Granados A, Jonsson E, Banta HD, et al. EUR-ASSESS project subgroup report on dissemination and impact. Int J Technol Assess Health Care. 1997;13:220-286.
17. Hailey D. Toward transparency in health technology assessment. Int J Technol Assess Health Care. 2003;19:1-7.
18. Haynes RB. bmjupdates+, a new free service for evidence-based clinical practice. Evid Based Nurs. 2005;8:39.
19. Haynes RB, Cotoi C, Holland J, et al. Second-order peer review of the medical literature for clinical practitioners. JAMA. 2006;295:1801-1808.
20. Haynes RB, Holland J, Cotoi C, et al. McMaster PLUS: A cluster randomized clinical trial of an intervention to accelerate clinical use of evidence-based information from digital libraries. J Am Med Inform Assoc. 2006;13:593-600.
21. International Network of Agencies for Health Technology Assessment. A checklist for health technology assessment reports. Stockholm, Sweden: INAHTA Secretariat; 2001.
22. International Network of Agencies for Health Technology Assessment. HTA resources: Definitions – technology assessment. Stockholm, Sweden: International Network of Agencies for Health Technology Assessment. http://www.inahta.org (accessed December 23, 2009).
23. Jonsson E, Banta D. Management of health technologies: An international view. BMJ. 1999;319:1293.
24. Lavis JN. How can we support the use of systematic reviews in policymaking? PLoS Med. 2009;6:e1000141.
25. Lavis JN, Davies HTO, Oxman AD, et al. Towards systematic reviews that inform health care management and policy-making. J Health Serv Res Policy. 2005;10 (Suppl 1):S1:35–S1:48.
26. Lavis JN, Posada FB, Haines A, Osei E. Use of research to inform public policymaking. Lancet. 2004;364:1615-1621.
27. Lavis JN, Wilson MG, Oxman AD, et al. SUPPORT Tools for evidence-informed health Policymaking (STP) 5: Using research evidence to frame options to address a problem. Health Res Policy Syst. 2009;7 (Suppl 1):S5.
28. Lavis JN, Wilson MG, Oxman AD, Lewin S, Fretheim A. SUPPORT Tools for evidence-informed health Policymaking (STP) 4: Using research evidence to clarify a problem. Health Res Policy Syst. 2009;7 (Suppl 1):S4.
29. Lehoux P, Blume S. Technology assessment and the sociopolitics of health technologies. J Health Polit Policy Law. 2000;25:1083-1120.
30. Lehoux P, Denis JL, Tailliez S, Hivon M. Dissemination of health technology assessments: Identifying the visions guiding an evolving policy innovation in Canada. J Health Polit Policy Law. 2005;30:603-641.
31. Lehoux P, Tailliez S, Denis JL, Hivon M. Redefining health technology assessment in Canada: Diversification of products and contextualization of findings. Int J Technol Assess Health Care. 2004;20:325-336.
32. Lomas J, Culyer T, McCutcheon C, McAuley L, Law S. Conceptualizing and combining evidence for health system guidance. Ottawa, Canada: Canadian Health Services Research Foundation; 2005.
33. McGregor M, Brophy JM. End-user involvement in health technology assessment (HTA) development: A way to increase impact. Int J Technol Assess Health Care. 2005;21:263-267.
34. Mears R, Taylor R, Littlejohns P, Dillon A. Review of international health technology assessment. London, England: National Institute for Clinical Excellence; 2000.
35. Menon D. An assessment of health technology assessment in Canada. Can J Public Health. 2000;91:120.
36. Menon D, Topfer LA. Health technology assessment in Canada: A decade in review. Int J Technol Assess Health Care. 2000;16:896-902.
37. Office of Technology Assessment. Development of medical technologies: Opportunities for assessment. Washington, DC: US Government Printing Office; 1976.
38. Oxman AD, Guyatt GH. A consumers' guide to subgroup analyses. Ann Intern Med. 1992;116:78-84.
39. Pignone M, Saha S, Hoerger T, Lohr KN, Teutsch S, Mandelblatt J. Challenges in systematic reviews of economic analyses. Ann Intern Med. 2005;142 (pt 2):1073-1090.
40. Tsikata S, Robinson V, Petticrew M. Do Cochrane systematic reviews contain useful information about health equity? Barcelona, Spain: 11th Cochrane Colloquium; 2003.
41. Velasco Garrido M, Busse R. Health technology assessment: An introduction to objectives, role of evidence and structures in Europe. Copenhagen, Denmark: WHO Regional Office for Europe/European Observatory on Health Systems and Policies; 2005.
42. Velasco Garrido M, Gerhardus A, Røttingen JA, Busse R. Developing health technology assessment to address health care system needs. Health Policy. 2010;94:196-202.
43. Velasco Garrido M, Perleth M, Drummond M, et al. Best practice in undertaking and reporting health technology assessments: Working group 4 report. Int J Technol Assess Health Care. 2002;18:361-422.
44. Velasco Garrido M, Zentner A, Busse R. Health systems, health policy and health technology assessment. In: Velasco Garrido M, Kristensen FB, Nielsen CP, Busse R, eds. Health technology assessment and health policy-making in Europe: Current status, challenges and potential. Copenhagen, Denmark: World Health Organization; 2008:53-79.
45. World Health Organization. The Mexico statement on health research: Knowledge for better health: Strengthening health systems. Geneva, Switzerland: World Health Organization; 2004.
46. World Health Organization. World report on knowledge for better health: Strengthening health systems. Geneva, Switzerland: World Health Organization; 2004.
47. World Health Organization. The Bamako call to action on research for health: Strengthening research for health, development, and equity. Geneva, Switzerland: World Health Organization; 2008.
Table 1. HTA Coding Framework

Table 2. Characteristics of HTAs produced by nine agencies between September 2003 and August 2006
