Published online by Cambridge University Press: 01 August 2004
Objectives: To compare methods and results among four health technology assessment organizations in different countries.
Methods: All assessment reports published between 1999 and 2001 by VATAP (United States), NICE (United Kingdom), CCOHTA (Canada), and AETS (Spain) were reviewed. Detailed information about the organization, the technology assessed, the methods used, and the recommendations made was collected. A descriptive analysis of the variables, as well as comparisons of means and proportions, was performed.
Results: Sixty-one reports assessing seventy-six technologies were published: nine (11.8 percent) by VATAP, thirty-nine (51.3 percent) by NICE, twenty (26.3 percent) by CCOHTA, and eight (10.5 percent) by AETS. A total of 64.5 percent of the technologies assessed were related to a high prevalence disease in the corresponding country. Most of the assessments addressed treatments (73.7 percent) and were mostly drugs (56.6 percent) and devices (23.7 percent). Most organizations used reviews of effectiveness and economic evaluations (64.5 percent), systematic reviews (21.1 percent), and original economic evaluations (36.7 percent). In 38.1 percent, the technology was recommended; the rest of the cases had no formal recommendations.
Conclusions: Critical issues for future technology assessment efforts are making assessment processes more consistent, transparent, and evidence-based; formalizing the inclusion of economic and ethical considerations; and making more explicit the prioritization process for selecting technologies for assessment and reassessment.
Formal health technology assessment (HTA) offers an appealing, evidence-based approach to help inform coverage and reimbursement decisions about medical advances. HTA is defined as the evaluation of a medical technology for evidence of its safety, efficacy, cost, and cost-effectiveness, and its ethical and legal implications, both in absolute terms and in comparison with other competing technologies (63). In recent years, organizations conducting technology assessment have proliferated worldwide (35;36;64;66;67). In the United States, rapid growth of health technology assessment activities has occurred in the private sector (64;67). In Europe, HTA activities started in the 1980s, with the creation of formal assessment groups directly related to government decision making, and have been growing continuously (14;58).
Whereas previous investigators have reviewed HTA activities in the United States and abroad (27;64;67), little empirical research has been conducted at the technology assessment level to understand the nature or impacts of different policies. Although investigators have examined aspects of the process in Australia (31;32;65), Europe (12;22), and Canada (37), there is little in the way of cross-national comparisons.
One might expect national technology assessment organizations to have similar assessment processes in terms of the types of technologies assessed and the methods used. The objective of this study was to analyze four health technology assessment organizations in the United States and abroad to investigate the extent to which this is true. In particular, we examined: (i) the types of technologies assessed, (ii) the methods used for the assessment, (iii) the reasons for the assessment, (iv) the degree of stakeholder participation, and (v) the recommendations made. We also discuss health policy implications.
A data collection form was designed to gather the information of interest systematically. The form included variables regarding the technology under assessment and the assessment process (Table 1). The form was pilot tested twice: two trained readers, each with graduate education in technology assessment and economic evaluation, read the same thirteen reports using a draft form, and then convened to review discrepancies in their findings and to improve the form's clarity.
Each technology assessed was characterized in terms of the disease category covered (coded using the ICD-9 classification), the type of technology (drug, device, medical procedure, surgical procedure, or educational/behavioral), its function (prevention, diagnosis, treatment, or rehabilitation), and its novelty (innovation, advance over an already existing technology, use of an already existing technology in a new indication, experimental and not yet approved for use, or already existing technology; Table 1).
We also coded the explicit mention of the reason for assessment, the assessment method used (e.g., randomized controlled trial, systematic review, economic evaluation, etc.), the decision (recommended, recommended with conditions, not recommended), an explicit mention of stakeholders' participation in the report, and mention of funding sources for the project.
We reviewed all reports published between 1999 and 2001 by Veterans Administration–Technology Assessment Program (VATAP, USA), National Institute for Clinical Excellence (NICE, United Kingdom), Canadian Coordinating Office for Health Technology Assessment (CCOHTA, Canada), and Agencia de Evaluación de Tecnologías Sanitarias (AETS, Spain). The time period of analysis was chosen to include all NICE's reports since its creation (1999), and the last complete year before starting the study (2001). The organizations were selected to reflect geographical distribution, and health policy relevance, while maintaining a degree of homogeneity in terms of including publicly funded agencies, with similar missions (Table 2).
Two readers independently read each report and completed the data collection form. A consensus meeting was held for readers to resolve areas of disagreement. The organizations produced a total of sixty-seven reports during this time period: six from VATAP (83–88), thirty-one from NICE (13;17;19;20;23;26;38;40–44;47;49;50;55;56;59;61;62;69;71;76–78;80;81;89–92), eighteen from CCOHTA (16;24;34;45;46;48;51–54;60;68;72–75;93;94), and eleven from AETS (1–11). Almost all were available through their Web pages; one report could not be downloaded (11); two were requested by mail (51;53); and two were excluded because they were not technology assessment reports (i.e., a catalog of publications [9] and guidelines for the elaboration of technology assessment reports [10]). The final sample comprised sixty-one reports. Because some reports (20;41;44;54;69;86;88;91;93) contained the assessment of more than one technology (e.g., drugs for Alzheimer's disease [20]), or the same technology applied to different conditions (i.e., predictive genetic testing for breast and prostate [54]), or updated previous reports (56;77), the unit of analysis was the technology rather than the report per se, resulting in a final sample of eighty units of analysis.
We conducted descriptive analyses of the variables, as well as comparisons of means and proportions (analysis of variance, chi-square statistic). Data were stored and analyzed with SPSS 10.1 for Windows.
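To illustrate the kind of comparison of proportions described above, the following is a minimal sketch of a chi-square test of independence across organizations. The contingency table below uses hypothetical counts for illustration only, not the study's actual data; the function and variable names are likewise illustrative.

```python
# Minimal sketch of a Pearson chi-square test of independence,
# as used to compare proportions across organizations.
# All counts below are HYPOTHETICAL, not the study's data.

def chi_square_statistic(table):
    """Return (chi-square statistic, degrees of freedom) for a
    contingency table given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, dof

# Hypothetical rows = organizations; columns = technology type
# (drug, device, other). Illustrative counts only.
table = [
    [2, 5, 2],    # VATAP
    [30, 5, 4],   # NICE
    [12, 4, 4],   # CCOHTA
    [1, 5, 2],    # AETS
]

chi2, dof = chi_square_statistic(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}")
# The statistic is then compared against the chi-square critical value
# for the given degrees of freedom (e.g., 12.59 for dof = 6 at alpha = 0.05).
```

In practice a statistical package (the study used SPSS) computes the p-value directly; the sketch only shows the statistic underlying the reported comparisons.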
Table 3 shows the technologies assessed by each organization between 1999 and 2001. Only one, Zanamivir for the treatment of influenza, was analyzed by more than one agency (16;17). Assessments were mostly directed to technologies covering neoplasms (31 percent) and mental disorders (14 percent; Table 4).
The organizations most commonly assessed drugs (58.7 percent) and devices (22.5 percent), although there were significant differences in the types of technologies examined across organizations (p<.001). Most assessments focused on treatments (75 percent). In terms of novelty, assessments focused primarily on existing technologies (51 percent) as opposed to innovations or new uses of existing technologies (36 percent; Table 4).
The nature of the process differed across organizations in terms of whether the assessment resulted from a formal prioritization process, whether it included an economic evaluation, and the extent to which stakeholders participated (Table 5): VATAP and NICE always stated the reason for their assessments; NICE and CCOHTA mostly used economic evaluation methods; and CCOHTA made the participation of stakeholders explicit in their reports.
The funding of the project was seldom mentioned: only CCOHTA mentioned funding from public grants, as well as authors with associations with pharmaceutical companies, and in one case that no conflict of interest existed (Table 5). All these differences among organizations were statistically significant.
Organizations also differed in the frequency with which they recommended a technology after an assessment. For example, VATAP, NICE, and AETS recommended the technology, or recommended it with conditions, in 33 percent, 51 percent, and 38 percent of cases, respectively. CCOHTA made general comments in 50 percent of cases and recommended against the technology in 25 percent.
The process of HTA typically includes the identification and prioritization of the technologies for assessment; search, review, synthesis, and production of the scientific evidence; context analysis, including the analysis of the effectiveness, efficiency, and equity and legal aspects of the application of the technology in a specific context; elaboration of policy recommendations; dissemination activities; and impact analysis (29).
However, our analysis reveals significant differences in assessment processes across four large organizations. In particular, there are differences in the diseases covered, the types of technologies assessed, the technology's function and novelty, the assessment methods used, the recommendations made, and the funding of the projects.
First, the results suggest that the types of technologies assessed do not typically depend on the specific characteristics of each country and organization. For example, NICE has assessed many drugs for neoplasms, although age-standardized cancer incidence and mortality rates in the United Kingdom are not higher than those of other countries; the same is true for mental disorders in Canada (95). On the other hand, very few assessments in any of the four countries targeted diseases of the circulatory or respiratory systems, even though these are important causes of death (95). Similarly, the types of technologies assessed also differ across organizations (e.g., VATAP and AETS assess mainly devices, whereas NICE and CCOHTA assess mainly drugs), for reasons that are not readily apparent.
We found only one matching assessment among organizations in the time period analyzed. This finding may, in part, reflect attempts at coordination among European technology assessment organizations through the International Network of Agencies for Health Technology Assessment (INAHTA), a body that, among other things, tries to ensure that assessment efforts are not duplicated.
Second, the data highlight the different way in which recommendations are made, with some organizations issuing general guidance, rather than mandatory decisions.
Third, the organizations generally lack explicit processes for prioritization, and they do not make explicit why they assess what they assess or who participates in the assessment. NICE notes that its selection criteria include health benefit, significant impact on other health-related government policies (e.g., reduction of health inequalities), significant impact on NHS resources, and the added value of issuing a national guideline (79). VATAP cites uncertainty among financing and planning bodies about the worthiness of a technology as the reason for assessment. In general terms, however, there is little in the way of explicit, quantitative methods to inform the prioritization of technologies for assessment using societal criteria such as burden of disease, uncertainty about the effectiveness and cost-effectiveness of the intervention, and the potential benefits and impact of the assessment (33;57;70). Along the same lines, there is no explicit mention of the political deliberation that leads to the assessment of certain technologies, or of stakeholder participation at any step of the process, both of which are key to open, systematic, and unbiased decision making (30;33).
Fourth, organizations differ in the extent to which they include economic evaluation. The idea of using cost-effectiveness to inform coverage and reimbursement decisions has gained popularity (21). But our results showed continued variation in the methods used (18). In particular, only NICE and CCOHTA regularly use economic evaluation studies in their assessments.
The main limitation of this analysis is the small sample of organizations used. Organizations were selected to reflect geographical distribution and health policy relevance, while maintaining a degree of homogeneity in terms of including publicly funded agencies with similar missions. They are not representative of the entire health technology assessment community, although they are well known and play an important role in coverage decisions in their respective countries. Nonetheless, the sample is large enough to reveal substantial variability, and a lack of explicitness, in a process that, apart from the adaptation of the technology to the local context, is supposed to be standard, and that is central to decisions about the inclusion of new technologies in a health-care system.
Researchers have identified a series of relevant issues in the dissemination of HTA results such as barriers to change, timing, assessment of target groups, and credibility of both the message and the messenger (28). There is evidence suggesting that the simple diffusion of information is not sufficient to promote the application of research results in clinical practice (15) and that more research is needed on the effectiveness of different dissemination tools among citizens, politicians, and mass media (28).
Others have emphasized the importance of social, political, and ethical aspects of health technology (39). Often, policy decisions will be made on the basis of a trade-off between the available evidence on clinical and cost-effectiveness and several other considerations, including political pressures, availability of funding, or patient and caregiver opinion. The challenge under these circumstances is to maintain the transparency and consistency of the decision-making process in the face of these factors, in both the public and private sectors (25;29).
We recommend that decision-makers make explicit why a particular technology is assessed, who participates in the assessment process, what determines the decisions, the sources of funding of each project, the prioritization process, and recommendations for further research. Medicare officials in the United States in particular should consider these issues as they seek to improve the coverage process, in terms of the length of time required to make coverage decisions and the explicitness and openness of the process (82).
This study was funded in part by the Robert Wood Johnson Foundation under grant 046071.
Description of the Analyzed Variables
Missions of the Organizations
List of Technologies Assessed by NICE, VATAP, CCOHTA, and AETS between 1999 and 2001a
Frequency Distribution of the Variables Regarding the Technology under Assessment and Its Outcomea
Health Policy Issues Dealt with in the Assessments