
Priority setting for health technology assessments: A systematic review of current practical approaches

Published online by Cambridge University Press: 19 June 2007

Hussein Z. Noorani (Canadian Agency for Drugs and Technologies in Health)
Donald R. Husereau (Canadian Agency for Drugs and Technologies in Health)
Rhonda Boudreau (Canadian Agency for Drugs and Technologies in Health)
Becky Skidmore (Society of Obstetricians and Gynaecologists of Canada)

Abstract

Objectives: This study sought to identify and compare current practical approaches to health technology assessment (HTA) priority setting.

Methods: A literature search was performed across PubMed, MEDLINE, EMBASE, BIOSIS, and Cochrane. Given an earlier review conducted by European agencies (EUR-ASSESS project), the search was limited to literature indexed from 1996 onward. We also searched Web sites of HTA agencies as well as HTAi and ISTAHC conference abstracts. Agency representatives were contacted for information about their priority-setting processes. Reports on practical approaches selected through these sources were identified independently by two reviewers.

Results: A total of twelve current priority-setting frameworks from eleven agencies were identified. Ten countries were represented: Canada, Denmark, England, Hungary, Israel, Scotland, Spain, Sweden, The Netherlands, and the United States. Fifty-nine unique HTA priority-setting criteria were divided into eleven categories (alternatives; budget impact; clinical impact; controversial nature of proposed technology; disease burden; economic impact; ethical, legal, or psychosocial implications; evidence; interest; timeliness of review; variation in rates of use). HTA agencies differed in their procedures for categorizing, scoring, and weighting policy criteria.

Conclusions: Variability exists in the methods for priority setting of health technology assessment across HTA agencies. Quantitative rating methods and consideration of cost-benefit were seldom used. These results will assist HTA agencies that are revisiting or developing their prioritization methods.

Type: GENERAL ESSAYS
Copyright: © 2007 Cambridge University Press

The number of health technologies needing evaluation far outstrips the resources available to evaluate them (12;17). Every health technology assessment (HTA) agency therefore faces the problem of setting priorities for its research projects (4;13;15;17;27;32), and many agencies currently use a criteria-based system to prioritize them (4;17;27). There is, however, no consensus among HTA agencies on an appropriate method for priority setting (17).

Several quantitative models for priority setting in HTA were proposed in the early 1990s (10;19;28). These included a process proposed by the Institute of Medicine (1992) in the United States that uses seven criteria, seven steps, a Delphi process, and nominal group techniques with multidisciplinary teams (10). This model has since been adopted by other agencies, including the Basque Office for HTA (OSTEBA) in Spain (4).

More recently, the EUR-ASSESS project (1994–1997), an international effort to coordinate the development of HTA in Europe and to improve decision making about the adoption and use of health technology, produced a set of guidelines for the priority setting of HTA projects (17). European HTA agencies have adopted the EUR-ASSESS recommendations in large part (15).

Despite improved awareness of the principles and processes that can guide priority setting, a 2004 international comparison of technology assessment processes across four HTA agencies revealed “a lack of explicit, quantitative methods to inform the prioritization process of technologies to be assessed using societal criteria” (13). Furthermore, despite long-standing recognition that priority setting has a human and political context (3;17), the report found little explicit mention of political deliberation or stakeholder participation (13).

The purpose of this research was to identify and compare practical approaches to priority setting across HTA agencies since the EUR-ASSESS recommendations. The comparison covers the methods used to identify HTA topics, the criteria used to set topic priorities, and rating and scoring methods.

METHODS

To be included in this study, a report had to describe, in whole or in part, a method for priority setting for the assessment of new (i.e., at point of adoption) or diffused health technologies. We excluded reports that solely described priority setting for emerging technologies. We did not specifically exclude publications from non–International Network of Agencies for HTA (INAHTA) member agencies. An electronic literature search of PubMed, MEDLINE, EMBASE, BIOSIS, and Cochrane was performed on February 15, 2004, covering literature indexed from January 1996 onward; the MEDLINE, EMBASE, and BIOSIS searches were updated to June 23, 2006. The year 1996 was chosen as the starting date because our search updates the EUR-ASSESS project's review of priority setting (literature search covering 1984 to 1996) (17). There were no language restrictions. Web sites of member agencies of INAHTA (as of February 2005) were also searched for descriptions of priority setting.

Citations identified by the electronic searches were reviewed for relevance by two of the coauthors (H.N. and D.H.) on the basis of title and abstract (if available). If at least one reviewer identified a citation as potentially relevant, the published report was obtained. Retrieved reports were then reviewed (by H.N. and D.H.) and selected if both authors judged that they met selection criteria. Reference lists of these selected reports were scanned for further potentially relevant citations.
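
For clarity, the two-stage selection rule just described can be expressed as two simple decision predicates. The sketch below is purely illustrative of the logic; it does not represent any tooling actually used in the review.

```python
def screen_title_abstract(reviewer_a: bool, reviewer_b: bool) -> bool:
    """Stage 1: retrieve the full report if AT LEAST ONE reviewer flags
    the citation as potentially relevant on title/abstract."""
    return reviewer_a or reviewer_b


def select_full_text(reviewer_a: bool, reviewer_b: bool) -> bool:
    """Stage 2: include the retrieved report only if BOTH reviewers
    judge that it meets the selection criteria."""
    return reviewer_a and reviewer_b


# A citation flagged by only one reviewer still proceeds to full-text
# review, but is excluded if the reviewers then disagree at stage 2.
assert screen_title_abstract(True, False) is True
assert select_full_text(True, False) is False
```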

A brief textual description was written for each priority-setting system selected for inclusion, and the following information was extracted: name, setting, and contact information of the HTA agency; organizational details (budget, population served, relationship to government, functions related to and outside of HTA); methods for identifying topics (e.g., committees, research proposals); and priority-setting framework (types of technologies, process, criteria, rating or scoring system). We contacted all identified agencies on several occasions up to November 2006 to gather missing data, to validate the description of their prioritization frameworks, and to confirm that each framework was still in use.
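
The extraction template can be pictured as a simple record type. The field names below paraphrase the items just listed and are illustrative; they are not the actual data-extraction form used in the study.

```python
from dataclasses import dataclass, field


@dataclass
class PrioritySettingRecord:
    """Illustrative extraction record for one priority-setting system."""
    agency_name: str
    setting: str                                  # country/jurisdiction
    contact: str
    budget: str | None = None                     # organizational details
    population_served: str | None = None
    relationship_to_government: str | None = None
    topic_identification: list[str] = field(default_factory=list)  # e.g., committees, proposals
    technology_types: list[str] = field(default_factory=list)
    process: str | None = None
    criteria: list[str] = field(default_factory=list)
    rating_system: str | None = None


# Example record with placeholder values:
record = PrioritySettingRecord(
    agency_name="Example HTA Agency",
    setting="Canada",
    contact="info@example.org",
    topic_identification=["standing committee", "research proposals"],
    criteria=["disease burden", "clinical impact"],
)
```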

Once all priority-setting criteria were identified, two researchers (H.N. and D.H.) grouped them into several key themes. One or several descriptive questions were also created to capture all identified criteria. Disagreements were resolved by consensus (unanimity minus one) with a third researcher (R.B.).

RESULTS

A total of twelve current priority-setting systems from eleven agencies were identified from seventeen reports that met the inclusion criteria for review. These included seven reports related to the approach used by the NCCHTA (8;9;16;23;29;31;32), four reports related to the approach used in The Netherlands (ZonMW) (24–27), two conference abstracts for AHFMR (5;6), and one report each describing the priority-setting approaches of HunHTA (14), ICTAHC (30), NHS QIS (formerly the Health Technology Board for Scotland) (18), OSTEBA (4), and SBU (7). AETMIS has two frameworks for prioritization. In addition, Web sites of INAHTA member agencies yielded further descriptions of the priority-setting approaches of AETMIS (1), NCCHTA (21), MAS (20), and AHRQ (2). Agency representatives provided further details on their respective approaches.

Ten countries were represented: Canada, Denmark, England, Hungary, Israel, Scotland, Spain, Sweden, The Netherlands, and the United States. The characteristics of the various priority-setting frameworks identified are described in Table 1. A majority (7 of 12) of priority-setting frameworks used a panel or committee to provide advice regarding priorities. AETMIS uses two approaches: requests submitted by macrolevel decision makers are prioritized at the Ministry level, while other requests are submitted directly to the agency and prioritized by the Board members.

In all cases, committees contained representatives from healthcare system funders, health professionals, and researchers. Advice from a board of directors was used in four priority-setting systems and in conjunction with a committee in two of these. Other mechanisms to provide advice on priority setting were the use of a stakeholder group by AHRQ (a volunteer group that includes clinicians, researchers, third-party payers, consumers of Federal and State beneficiary programs, and healthcare industry professionals), a prioritization strategy group by NCCHTA (composed of clinicians, medical advisors, and researchers), a medical advisor with internal executive staff for NHS QIS, and direction from the Ministry of Health for ZonMW.

Four of the twelve frameworks identified used a rating system to inform priorities. In all cases, the rating system was used in conjunction with a committee. Two systems explicitly considered the cost-benefit of conducting the assessment when deciding priorities (Table 1).
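
Although none of the agencies' rating instruments are reproduced here, a criteria-based rating system of the kind described typically reduces to a weighted sum of committee scores. The criteria, weights, and 0–3 scale in this sketch are invented for illustration only.

```python
# Invented criteria and weights; no agency's actual instrument is shown.
WEIGHTS = {
    "disease_burden": 0.30,
    "clinical_impact": 0.25,
    "economic_impact": 0.25,
    "quality_of_evidence": 0.20,
}


def priority_score(ratings: dict[str, int]) -> float:
    """Combine committee ratings (0 = low ... 3 = high) into a single
    weighted score used to rank candidate assessment topics."""
    return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())


candidate_topics = {
    "Topic A": {"disease_burden": 3, "clinical_impact": 2,
                "economic_impact": 1, "quality_of_evidence": 2},
    "Topic B": {"disease_burden": 1, "clinical_impact": 3,
                "economic_impact": 3, "quality_of_evidence": 1},
}

# Rank topics for discussion; in all four frameworks the score informed,
# rather than replaced, committee deliberation.
for name in sorted(candidate_topics,
                   key=lambda n: priority_score(candidate_topics[n]),
                   reverse=True):
    print(f"{name}: {priority_score(candidate_topics[name]):.2f}")
```

In such schemes the weights encode the relative importance an agency attaches to each criterion category, which is one place where the cross-agency variability described below can arise.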

Criteria for Priority Setting

We identified fifty-nine unique priority-setting criteria across the eleven agencies found through our search. The median number of criteria reported per agency was five (range, three to ten). Although the descriptions of prioritization criteria differed across agencies, they could be grouped into eleven categories, as shown in Table 2. These criteria were generally applicable to both new and diffused technologies; one agency (HunHTA) listed criteria specific to the assessment of pharmaceuticals. Table 3 shows the frequency of reported criteria among the eleven agencies. The most widely used categories were clinical impact (100 percent of agencies), economic impact (91 percent), and budget impact (55 percent).
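
The prevalence figures in Table 3 amount to a simple tabulation: for each category, the share of the eleven agencies reporting at least one criterion in that category. The agency names and category assignments below are placeholders, not the extracted data behind the table.

```python
from collections import Counter

# agency -> categories covered by its reported criteria (placeholders)
agency_categories = {
    "Agency 1": {"clinical impact", "economic impact", "budget impact"},
    "Agency 2": {"clinical impact", "economic impact"},
    "Agency 3": {"clinical impact", "disease burden"},
}

# Count how many agencies report each category, then express it as a
# percentage of all agencies, as in Table 3.
category_counts = Counter(cat for cats in agency_categories.values() for cat in cats)
n_agencies = len(agency_categories)

for category, count in category_counts.most_common():
    print(f"{category}: {100 * count / n_agencies:.0f} percent of agencies")
```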

DISCUSSION

Our review of the available literature identified twelve descriptions, in whole or in part, of frameworks for the priority setting of health technology assessments. Although we did not specifically exclude published reports from non-INAHTA member agencies, all of the agencies identified were INAHTA members. We did not include CADTH in this study due to an absence of explicit prioritization criteria within the CADTH framework. We were able to separate all identified priority-setting criteria into eleven categories. Although we did not conduct a formal analysis, there did not appear to be extensive overlap in criteria across HTA agencies.

Our study shows that variability exists in the methods for prioritizing technologies for assessment across HTA agencies. This variability may be interpreted as reflecting differences in values, reporting structures, or healthcare priorities among agencies with unique mandates in different sociopolitical contexts.

Observed variability may also reflect previous priority-setting recommendations (10;17), as in the EUR-ASSESS priority-setting subgroup report: “the general approaches to priority setting should reflect the goals of the program, the resources available and the preferred method of working of those who need to be involved” (17). However, no particular pattern emerged when comparing frameworks from HTA programs with larger and smaller budgets: the use of committees, ratings, or consideration of cost-benefit appeared equally common among larger and smaller HTA agencies.

Two reviewers systematically applied the selection criteria to all available literature to identify relevant material. We believe this approach lends accuracy to our findings and reduces the chance of missing relevant information.

One limitation of this study is that the identified agencies represent only one quarter of the member organizations of INAHTA. This suggests that a documented process for deciding which technologies to assess is not readily available in most organizations. A survey of all member organizations on how they prioritize technologies for assessment might have provided more insight into priority setting in HTA. However, because the process of making a final decision on which technologies to assess is implicit within many agencies, such survey results would have limitations (11). Because the original intent of our study was to conduct an environmental scan for the purpose of developing a robust priority-setting framework for CADTH, we assumed the most rigorous systems would be explicit and more likely to be documented.

We are not aware of any other similar recent reviews on diffused technologies. Eddy (12) summarized thirty-eight criteria collected from six programs in the United States and categorized these into just three elements: health importance, economic importance, and expectation that an assessment will make a difference. A recent survey among horizon-scanning systems (11) identified differences with regard to the criteria and actors involved in the final decisions of which emerging technologies to assess. The survey revealed that most agencies use costs and health benefits when prioritizing. Our results were consistent with this finding.

We believe the implications of our findings can be viewed in light of the other EUR-ASSESS Priority Setting Subgroup recommendations. The first two recommendations state that HTA programs should have an explicit, agreed-upon process (17). Although our study did not examine whether these processes were agreed upon, each agency we contacted could readily provide a clear description of its process, including the actors, criteria, and methods involved.

The EUR-ASSESS recommendations also suggest that priorities reflect the likely costs and benefits of the possible health technology assessments being considered (17). Of the twelve frameworks we identified, only two had an explicit process for considering the efficiency of conducting an assessment. Future research may need to focus on why this gap between recommendations and current practice exists, and what standard methods can be adopted. Although it might seem contradictory that a majority of organizations that evaluate the potential economic impact of health technologies do not evaluate the potential impact of their own assessments, there may be legitimate reasons to explain this deficiency.
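
For illustration, a preliminary “payback” calculation in the spirit of the economic evaluation approaches proposed in the literature (16;32) weighs the expected value of conducting an assessment against its cost. Every quantity and the simple linear model in this sketch are assumptions, not a reproduction of any agency's method.

```python
def expected_net_payback(p_change: float, patients_per_year: int,
                         net_benefit_per_patient: float,
                         years_of_relevance: int,
                         assessment_cost: float) -> float:
    """Expected value of doing the assessment: the probability it
    changes practice, times the gain realized per patient if it does,
    over the years the finding stays relevant, minus the cost of
    conducting the assessment itself."""
    gross = (p_change * patients_per_year
             * net_benefit_per_patient * years_of_relevance)
    return gross - assessment_cost


# Example: a 20% chance of changing practice for 50,000 patients/year,
# $40 net benefit per patient, relevant for 5 years, against a $250,000
# assessment: 0.2 * 50,000 * 40 * 5 - 250,000 = 1,750,000.
print(expected_net_payback(0.20, 50_000, 40.0, 5, 250_000))
```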

The EUR-ASSESS recommendations also suggest that cost-benefit be considered in light of a rating using systematically applied criteria (17). Our findings suggest that only one third of frameworks have adopted a rating system, which may partly explain the lack of consideration of cost-benefit noted above. CADTH has not explicitly considered cost-benefit when setting priorities; however, we intend to use our recently developed rating system to do so in the near future.

We anticipate this snapshot of current priority-setting frameworks will stimulate further discussion among HTA researchers. In our experience, this work is applicable and of great interest to researchers and organizations involved in priority setting of other knowledge synthesis endeavors (22). It is our hope that continued thinking in this area will facilitate the final two EUR-ASSESS recommendations: sharing information on priorities and evaluating processes and outcomes of priority setting (17).

CONCLUSIONS

Variability exists in the methods for priority setting of health technology assessment across HTA agencies. Quantitative rating methods and consideration of cost-benefit for priority setting were seldom used. These results will assist HTA agencies developing prioritization methods by improving the timeliness and relevance of topics under evaluation, improving technology tracking, and helping to identify and refine criteria for new and emerging technologies.

CONTACT INFORMATION

Hussein Z. Noorani, MSc, Research Officer; Donald R. Husereau, BScPharm, MSc, Director, Health Technology Assessment Development; Rhonda Boudreau, MA, BEd, Research Officer, Health Technology Assessment Directorate, Canadian Agency for Drugs and Technologies in Health (CADTH), 600-865 Carling Avenue, Ottawa, Ontario K1S 5S8, Canada; Becky Skidmore, BA (Hon), MLS, Medical Research Analyst, Society of Obstetricians and Gynaecologists of Canada, 780 Echo Drive, Ottawa, ON K1S 5R7, Canada

Funding was provided by the Canadian Agency for Drugs and Technologies in Health (formerly the Coordinating Office for Health Technology Assessment) to conduct this project. Hussein Z. Noorani, Don Husereau, Rhonda Boudreau, and Becky Skidmore disclosed no conflicts of interest.

References

1. Agence d'évaluation des technologies et des modes d'intervention en santé. 2006. Selection of assessment topics. Montreal: Agence d'évaluation des technologies et des modes d'intervention en santé. Available at: http://www.aetmis.gouv.qc.ca/site/index.php?en_evaluation_selection. Accessed 11 March 2006.
2. Agency for Healthcare Research and Quality. 2005. EPC topic nomination and selection. Rockville, MD: Agency for Healthcare Research and Quality. Accessed 11 March 2006.
3. Banta DH, Andreasen PB. 1990. The political dimension in health care technology assessment programs. Int J Technol Assess Health Care. 6:115-123.
4. Basque Office for Health Technology Assessment. 1996. The prioritisation of evaluation topics of health: Report. Donostia-San Sebastian: Osteba.
5. Borowski H. 2005. Alberta health technologies decision process: Selecting technologies for provincial review [oral presentation]. CCOHTA Invitational Symposium; 25 April 2005; Ottawa. Accessed 11 March 2006.
6. Borowski H. 2005. The Alberta health technologies decision process: A structure and process under development - early lessons [abstract]. Ital J Public Health. 2:77.
7. Carlsson P. 2004. Health technology assessment and priority setting for health policy in Sweden. Int J Technol Assess Health Care. 20:44-54.
8. Chase D, Milne R, Stein K, Stevens A. 2000. What are the relative merits of the sources used to identify potential research priorities for the NHS HTA programme? Int J Technol Assess Health Care. 16:743-750.
9. Davies L, Drummond M, Papanikolaou P. 2000. Prioritizing investments in health technology assessment. Can we assess potential value for money? Int J Technol Assess Health Care. 16:73-91.
10. Donaldson MS, Sox HC, eds. 1992. Setting priorities for health technology assessment: A model process. Washington, DC: National Academy Press.
11. Douw K, Vondeling H. 2006. Selection of new health technologies for assessment aimed at informing decision making: A survey among horizon scanning systems. Int J Technol Assess Health Care. 22:177-183.
12. Eddy DM. 1989. Selecting technologies for assessment. Int J Technol Assess Health Care. 5:485-501.
13. Garcia-Altes A, Ondategui-Parra S, Neumann PJ. 2004. Cross-national comparison of technology assessment processes. Int J Technol Assess Health Care. 20:300-310.
14. Gulácsi L, Boncz I, Drummond M. 2004. Issues for countries considering introducing the “fourth hurdle”: The case of Hungary. Int J Technol Assess Health Care. 20:337-341.
15. Hagenfeldt K, Asua J, Bellucci S, et al. 2002. Systems for routine information sharing in HTA. Working group 2 report. Int J Technol Assess Health Care. 18:273-320.
16. Harper G, Townsend J, Buxton M. 1998. The preliminary economic evaluation of health technologies for the prioritization of health technology assessments. A discussion. Int J Technol Assess Health Care. 14:652-662.
17. Henshall C, Oortwijn W, Stevens A, Granados A, Banta D. 1997. Priority setting for health technology assessment. Theoretical considerations and practical approaches. A paper produced by the Priority Setting Subgroup of the EUR-ASSESS Project. Int J Technol Assess Health Care. 13:144-185.
18. Kohli H, Hutchens D. 2003. Here's a good idea for a health technology assessment…: An analysis of health technology assessment topics proposed to the Health Technology Board for Scotland [abstract]. In: ISTAHC 2003. Improving outcomes through health technology assessment. Abstracts. 22 June 2003. p. 51.
19. Lara ME, Goodman C, eds. 1990. National priorities for the assessment of clinical conditions and medical technologies: Report of a pilot study [IOM Publication 89-14]. Washington, DC: National Academy Press.
20. Medical Advisory Secretariat. 2006. The application process. Toronto: Medical Advisory Secretariat. Available at: http://www.health.gov.on.ca/english/providers/program/mas/application/app_mn.html. Accessed 11 March 2006.
21. NIHR Health Technology Assessment programme. 2006. Prioritising research. Southampton, UK: National Coordinating Centre for Health Technology Assessment.
22. Noorani H, Boudreau R, Skidmore B, Husereau D. Development of a new prioritization method for health technology assessment [oral presentation]. Melbourne. Abstract available at: http://www.cochrane.org/colloquia/abstracts/melbourne/P-089.htm. Accessed 22 October 2005.
23. Oliver S, Milne R, Bradburn J, et al. 2001. Involving consumers in a needs-led research programme: A pilot project. Health Expect. 4:18-28.
24. Oortwijn W, Banta D, Vondeling H, Bouter L. 1999. Identification and priority setting for health technology assessment in The Netherlands: Actors and activities. Health Policy. 47:241-253.
25. Oortwijn WJ, Vondeling H, Bouter L. 1998. The use of societal criteria in priority setting for health technology assessment in The Netherlands. Initial experiences and future challenges. Int J Technol Assess Health Care. 14:226-236.
26. Oortwijn W, Vondeling H, van Barneveld T, van Vugt C, Bouter L. 2000. Priority setting for HTA in The Netherlands [abstract]. 16th Annual Meeting of the International Society of Technology Assessment in Health Care; 18 June 2000; The Hague.
27. Oortwijn WJ, Vondeling H, van Barneveld T, van Vugt C, Bouter LM. 2002. Priority setting for health technology assessment in The Netherlands: Principles and practice. Health Policy. 62:227-242.
28. Phelps CE, Parente ST. 1990. Priority setting in medical technology and medical practice assessment. Med Care. 28:703-723.
29. Royle J, Oliver S. 2004. Consumer involvement in the health technology assessment program. Int J Technol Assess Health Care. 20:493-497.
30. Shani S, Siebzehner MI, Luxenburg O, Shemer J. 2000. Setting priorities for the adoption of health technologies on a national level - the Israeli experience. Health Policy. 54:169-185.
31. Stevens A, Milne R. 2004. Health technology assessment in England and Wales. Int J Technol Assess Health Care. 20:11-24.
32. Townsend J, Buxton M, Harper G. 2003. Prioritisation of health technology assessment. The PATHS model: Methods and case studies. Health Technol Assess. 7:1-94.
Table 1. Characteristics of Current Priority-Setting Frameworks

Table 2. Criteria Used in Current Priority Setting across HTA Agencies, by Category

Table 3. Prevalence of Criteria across HTA Agencies