The number of health technologies needing evaluation far exceeds the resources available to assess them (12;17). All health technology assessment (HTA) agencies must therefore set priorities for their research projects (4;13;15;17;27;32), and many currently use a criteria-based system to prioritize them (4;17;27). There is, however, no consensus among HTA agencies on an appropriate method for priority setting (17).
Several quantitative models for priority setting in HTA were proposed in the early 1990s (10;19;28). These included a priority-setting process proposed by the Institute of Medicine (1992) in the United States (U.S.) that uses seven criteria, seven steps, a Delphi process, and nominal group techniques with multidisciplinary teams (10). This model has since been adopted by other agencies, including the Basque Office for HTA (OSTEBA) in Spain (4).
More recently, the EUR-ASSESS project (1994–1997), an international initiative to coordinate developments in HTA in Europe and to improve decision making concerning the adoption and use of health technology, produced a set of guidelines for the priority setting of HTA projects (17). The EUR-ASSESS recommendations have been adopted in large part by European HTA agencies (15).
Despite improved awareness of the principles and processes that can guide priority setting, a 2004 international comparison of technology assessment processes across four HTA agencies revealed “a lack of explicit, quantitative methods to inform the prioritization process of technologies to be assessed using societal criteria” (13). The report also found little explicit mention of political deliberation or stakeholder participation, despite long-standing awareness of the need to account for the human and political context (3;17).
The purpose of this research was to identify and compare the practical approaches to priority setting adopted by HTA agencies since the EUR-ASSESS recommendations. This comparison examines the methods used to identify HTA topics, the criteria used for setting topic priorities, and rating and scoring methods.
METHODS
To be included in this study, a report had to describe, in whole or in part, a method of priority setting for the assessment of new (i.e., at the point of adoption) or diffused health technologies. We excluded reports that solely described priority setting for emerging technologies; we did not, however, exclude publications from agencies that are not members of the International Network of Agencies for HTA (INAHTA). An electronic literature search of PubMed, MEDLINE, EMBASE, BIOSIS, and Cochrane, covering January 1996 onward, was performed on February 15, 2004; the MEDLINE, EMBASE, and BIOSIS searches were updated to June 23, 2006. The year 1996 was chosen as the starting date so that the search would update the EUR-ASSESS review of priority setting, whose literature search covered 1984 to 1996 (17). There were no language restrictions. Web sites of INAHTA member agencies (as of February 2005) were also searched for descriptions of priority setting.
Citations identified by the electronic searches were screened for relevance by two of the coauthors (H.N. and D.H.) on the basis of title and abstract (if available). If at least one reviewer identified a citation as potentially relevant, the published report was obtained. Retrieved reports were then reviewed (by H.N. and D.H.) and selected if both authors judged that they met the selection criteria. Reference lists of the selected reports were scanned for further potentially relevant citations.
A brief textual description was written for each priority-setting system selected for inclusion, and the following information was extracted: name, setting, and contact information of the HTA agency; organizational details (budget, population served, relationship to government, functions related to and outside of HTA); methods for identifying topics (e.g., committees, research proposals); and priority-setting framework (types of technologies, process, criteria, rating or scoring system). We contacted all identified agencies on several occasions up to November 2006 to gather missing data, to validate our descriptions of their prioritization frameworks, and to confirm that each framework was still in use.
Once all priority-setting criteria were identified, two researchers (H.N. and D.H.) grouped them into several key themes. One or more descriptive questions were also created to capture all identified criteria. Disagreements were resolved by consensus (unanimity minus one) with a third researcher (R.B.).
RESULTS
A total of twelve current priority-setting systems from eleven agencies were identified from seventeen reports that met the inclusion criteria for review (AETMIS uses two frameworks for prioritization). These included seven reports on the approach used by the NCCHTA (8;9;16;23;29;31;32), four on the approach used in The Netherlands (ZonMW) (24–27), two conference abstracts for AHFMR (5;6), and one report each describing the priority-setting approaches of HunHTA (14), ICTAHC (30), NHS QIS (formerly the Health Technology Board for Scotland) (18), OSTEBA (4), and SBU (7). In addition, Web sites of INAHTA member agencies yielded further descriptions of the priority-setting approaches of AETMIS (1), NCCHTA (21), MAS (20), and AHRQ (2). Agency representatives provided further details on their respective approaches.
Ten countries were represented: Canada, Denmark, England, Hungary, Israel, Scotland, Spain, Sweden, The Netherlands, and the United States. The characteristics of the priority-setting frameworks identified are described in Table 1. A majority (7 of 12) of the frameworks used a panel or committee to provide advice on priorities. AETMIS uses two approaches: requests submitted by macrolevel decision makers are prioritized at the Ministry level, whereas other requests are submitted directly to the agency and prioritized by its Board members.
In all cases, committees included representatives of healthcare system funders, health professionals, and researchers. Advice from a board of directors was used in four priority-setting systems, in conjunction with a committee in two of these. Other mechanisms for providing advice on priority setting were a stakeholder group at AHRQ (a volunteer group that includes clinicians, researchers, third-party payers, consumers of Federal and State beneficiary programs, and healthcare industry professionals), a prioritization strategy group at NCCHTA (composed of clinicians, medical advisors, and researchers), a medical advisor working with internal executive staff at NHS QIS, and direction from the Ministry of Health for ZonMW.
Four of the twelve frameworks identified used a rating system to inform priorities; in all cases, these were used in conjunction with a committee. Two systems explicitly considered the cost-benefit of conducting the assessment itself when deciding priorities (Table 1).
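To illustrate what such a rating system can look like in practice, the sketch below implements a simple weighted-sum scoring model. The criteria, weights, and 0–5 rating scale are hypothetical illustrations; they are not drawn from any of the twelve frameworks reviewed here.

```python
# Hypothetical weighted-sum rating model for prioritizing HTA topics.
# Criteria, weights, and the 0-5 scale are illustrative only and are
# not taken from any agency's actual framework.

CRITERIA_WEIGHTS = {
    "clinical_impact": 0.30,
    "economic_impact": 0.25,
    "budget_impact": 0.20,
    "uncertainty_of_evidence": 0.15,
    "timeliness": 0.10,
}

def priority_score(ratings):
    """Combine per-criterion ratings (0-5) into one weighted score."""
    return sum(CRITERIA_WEIGHTS[criterion] * rating
               for criterion, rating in ratings.items())

candidate_topics = {
    "Topic A": {"clinical_impact": 5, "economic_impact": 3,
                "budget_impact": 4, "uncertainty_of_evidence": 2,
                "timeliness": 3},
    "Topic B": {"clinical_impact": 3, "economic_impact": 4,
                "budget_impact": 2, "uncertainty_of_evidence": 5,
                "timeliness": 4},
}

# Rank topics by descending score; consistent with the frameworks
# reviewed, the ranked list would inform committee deliberation
# rather than replace it.
for name in sorted(candidate_topics,
                   key=lambda t: priority_score(candidate_topics[t]),
                   reverse=True):
    print(f"{name}: {priority_score(candidate_topics[name]):.2f}")
```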

Criteria for Priority Setting
We identified fifty-nine unique priority-setting criteria across the eleven agencies. The median number of criteria reported per agency was five (range, three to ten). Although the descriptions of prioritization criteria differed across agencies, the criteria could be grouped into eleven categories, as shown in Table 2. These criteria were generally applicable to both new and diffused technologies; one agency (HunHTA) listed criteria specific to the assessment of pharmaceuticals. Table 3 shows the frequency with which criteria in each category were reported across the eleven agencies. The most frequently represented categories were clinical impact (100 percent of agencies), economic impact (91 percent), and budget impact (55 percent).
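The category frequencies in Table 3 are simple proportions of agencies reporting at least one criterion in a category (e.g., 10 of 11 agencies is approximately 91 percent). A minimal sketch of this tally follows; the agency names and category assignments are hypothetical placeholders, not the extracted data.

```python
from collections import Counter

# Hypothetical agency-to-category assignments; the actual extracted
# criteria and categories appear in Tables 2 and 3 of the article.
agency_categories = {
    "Agency 1": {"clinical impact", "economic impact", "budget impact"},
    "Agency 2": {"clinical impact", "economic impact"},
    "Agency 3": {"clinical impact"},
}

category_counts = Counter(
    category
    for categories in agency_categories.values()
    for category in categories
)

n_agencies = len(agency_categories)
for category, count in category_counts.most_common():
    print(f"{category}: {count}/{n_agencies} agencies "
          f"({100 * count / n_agencies:.0f} percent)")
```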


DISCUSSION
Our review of the available literature identified twelve frameworks, described in whole or in part, for the priority setting of health technology assessments. Although we did not specifically exclude reports from non-INAHTA member agencies, all of the agencies identified were INAHTA members. We did not include CADTH in this study because the CADTH framework lacked explicit prioritization criteria. We were able to sort all identified priority-setting criteria into eleven categories. Although we did not conduct a formal analysis, there did not appear to be extensive overlap in criteria across HTA agencies.
Our study shows that variability exists in the methods for prioritizing technologies for assessment across HTA agencies. This variability may be interpreted as reflecting differences in values, reporting structures, or healthcare priorities among agencies with unique mandates in different sociopolitical contexts.
The observed variability may also reflect previous priority-setting recommendations (10;17), as expressed in the EUR-ASSESS priority-setting subgroup report: “the general approaches to priority setting should reflect the goals of the program, the resources available and the preferred method of working of those who need to be involved” (17). However, no particular pattern emerged when we compared the frameworks of HTA programs with larger and smaller budgets: the use of committees, ratings, and consideration of cost-benefit appeared equally common among larger and smaller HTA agencies.
Two reviewers systematically applied the selection criteria to all available literature to identify relevant material. We believe this approach adds to the accuracy of our findings and reduces the chance of missing relevant information.
One limitation of this study is that the agencies identified represent only one quarter of the member organizations of INAHTA. This finding suggests that an explicit description of the process for deciding on technologies for assessment is not readily available for most organizations. A survey of all member organizations on how they prioritize technologies for assessment might have provided more insight into priority setting in HTA. However, as the process of making a final decision on which technologies to assess is implicit within many agencies, such survey results would have limitations (11). Because the original intent of our study was to conduct an environmental scan for the purpose of developing a robust priority-setting framework for CADTH, we assumed that the most rigorous systems would be explicit and therefore more likely to be documented.
We are not aware of any other similar recent reviews concerning diffused technologies. Eddy (12) summarized thirty-eight criteria collected from six programs in the United States and categorized them into just three elements: health importance, economic importance, and the expectation that an assessment will make a difference. A recent survey of horizon-scanning systems (11) identified differences in the criteria and actors involved in the final decisions about which emerging technologies to assess; it revealed that most agencies consider costs and health benefits when prioritizing. Our results are consistent with this finding.
We believe the implications of our findings can be viewed in light of the other EUR-ASSESS Priority Setting Subgroup recommendations. The first two recommendations suggest that HTA programs should have an explicit, agreed-upon process (17). Although our study did not examine whether processes were agreed upon, we found that each agency we contacted could readily provide a clear description of its process, including actors, criteria, and methods.
The EUR-ASSESS recommendations also suggest that priorities reflect the likely costs and benefits of the possible health technology assessments being considered (17). Of the twelve frameworks we identified, only two had an explicit process for considering the efficiency of conducting an assessment. Future research may need to address why this gap between recommendation and current practice exists and what standard methods could be adopted. Although it may seem contradictory that most organizations that evaluate the potential economic impact of health technologies do not evaluate the potential impact of their own assessments, there may be legitimate reasons for this deficiency.
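As one illustration of how the efficiency of conducting an assessment might be expressed, a generic "payback of research" formulation could take the following form; this is a sketch for exposition only, not a formula used by any of the agencies reviewed:

\[
\text{Expected net payback} \;=\; p \cdot \Delta B \cdot N \;-\; C_{\text{assessment}},
\]

where \(p\) is the probability that the assessment changes practice, \(\Delta B\) is the per-patient benefit of the changed decision, \(N\) is the size of the affected population, and \(C_{\text{assessment}}\) is the cost of conducting and disseminating the assessment. Topics with higher expected net payback would, under this framing, be better candidates for assessment.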
The EUR-ASSESS recommendations further suggest that cost-benefit be considered in light of a rating based on systematically applied criteria (17). Our findings indicate that only one third of the frameworks have adopted a rating system, which may partly explain the lack of consideration of cost-benefit noted above. CADTH has not explicitly considered cost-benefit when setting priorities; however, we intend to use our recently developed rating system to do so in the near future.
We anticipate that this snapshot of current priority-setting frameworks will stimulate further discussion among HTA researchers. In our experience, this work is applicable and of great interest to researchers and organizations involved in priority setting for other knowledge synthesis endeavors (22). It is our hope that continued thinking in this area will facilitate the final two EUR-ASSESS recommendations: sharing information on priorities and evaluating the processes and outcomes of priority setting (17).
CONCLUSIONS
Variability exists in the methods used across HTA agencies to set priorities for health technology assessment. Quantitative rating methods and consideration of the cost-benefit of assessments were seldom used in priority setting. These results will assist HTA agencies that are developing prioritization methods by increasing the timeliness and relevance of the topics under evaluation, improving technology tracking, and helping to identify and refine criteria for new and emerging technologies.
CONTACT INFORMATION
Hussein Z. Noorani, MSc (husseinz@cadth.ca), Research Officer; Donald R. Husereau, BScPharm, MSc (donh@cadth.ca), Director, Health Technology Assessment Development; Rhonda Boudreau, MA, BEd (rhondab@cadth.ca), Research Officer, Health Technology Assessment Directorate, Canadian Agency for Drugs and Technologies in Health (CADTH), 600-865 Carling Avenue, Ottawa, Ontario K1S 5S8, Canada. Becky Skidmore, BA (Hon), MLS (bskidmore@sogc.com), Medical Research Analyst, Society of Obstetricians and Gynaecologists of Canada, 780 Echo Drive, Ottawa, ON K1S 5R7, Canada.
Funding for this project was provided by the Canadian Agency for Drugs and Technologies in Health (formerly the Coordinating Office for Health Technology Assessment). Hussein Z. Noorani, Don Husereau, Rhonda Boudreau, and Becky Skidmore disclosed no conflicts of interest.