
Methods, procedures, and contextual characteristics of health technology assessment and health policy decision making: Comparison of health technology assessment agencies in Germany, United Kingdom, France, and Sweden

Published online by Cambridge University Press: 21 July 2009

Ruth Schwarzer
Affiliation:
UMIT–University of Health Sciences, Medical Informatics and Technology
Uwe Siebert
Affiliation:
UMIT–University of Health Sciences, Medical Informatics and Technology; Harvard School of Public Health; and Harvard Medical School

Abstract

Objectives: The objectives of this study were (i) to develop a systematic framework for describing and comparing different features of health technology assessment (HTA) agencies, (ii) to identify and describe similarities and differences between the agencies, and (iii) to draw conclusions both for producers and users of HTA in research, policy, and practice.

Methods: We performed a systematic literature search, added information from HTA agencies, and developed a conceptual framework comprising eight main domains: organization, scope, processes, methods, dissemination, decision, implementation, and impact. We grouped relevant items of these domains in an evidence table and chose five HTA agencies to test our framework: DAHTA@DIMDI, HAS, IQWiG, NICE, and SBU. Item and domain similarity was assessed using the percentage of identical characteristics in pairwise comparisons across agencies. Results were interpreted across agencies by demonstrating similarities and differences.

Results: Based on 306 included documents, we identified 90 characteristics of eight main domains appropriate for our framework. After applying the framework to the five agencies, we were able to show 40 percent similarities in “dissemination,” 38 percent in “scope,” 35 percent in “organization,” 29 percent in “methods,” 26 percent in “processes,” 23 percent in “impact,” 19 percent in “decision,” and 17 percent in “implementation.”

Conclusion: We found considerably more differences than similarities in HTA features across agencies and countries. Our framework and comparison provide insight into, and clarification of, the need for harmonization. Our findings could serve as a descriptive database facilitating communication between producers and users.

Copyright © Cambridge University Press 2009

Health technology assessment (HTA) is increasingly used to inform health policy decisions. It has become a powerful tool when linked to jurisdictional legislation that determines reimbursement and pricing policies. Because these decisions must be politically and legally defensible, they have hastened the need for recognized “best” practice in HTA (1;Reference Bronner7;Reference Drummond, Schwartz and Jonsson24;Reference Ferguson, Dubinsky and Kirsch30;Reference Francke and Hart31). Additionally, the globalization of “health” means that the decisions taken by, and the fiduciary responsibility of, local health systems have increased global importance (Reference Cox14). As with the institution of science in general, the increasing importance of HTA strongly suggests the need to study it in a transparent and comparative way (Reference Kuhn52).

Despite increased activity worldwide, there is currently a lack of understanding of the differences in the application of HTA, leading to questions about its quality, comparability, generalizability, applicability, and practical usefulness (Reference Chinitz13;Reference Cox14;27;Reference Freeman32). Additionally, a lack of recognized standards in quality assurance has recently been acknowledged, and the need for harmonized HTA methods and processes is under debate (Reference Draborg and Andersen20;27). Despite increased harmonization activity, the need for recommended international standards has been communicated at the European level and by experts in the field (Reference Cox14;27;Reference Petherick, Villanueva, Dumville, Bryan and Dharmage73;84).

Descriptions of HTA methods and processes (Reference Brehaut and Juzwishin6;Reference Gagnon, Sanchez and Pons33;Reference Hailey39;Reference Hivon, Lehoux, Denis and Tailliez45;Reference Hutton, McGrath and Frybourg47;Reference Lafortune, Farand, Mondou, Sicotte and Battista53;Reference Lehoux and Williams-Jones55;Reference Oliver, Mossialos and Robinson64;Reference Philips, Bojke, Sculpher, Claxton and Golder74;Reference Wanke, Juzwishin, Thornley and Chan79) as well as international comparisons (e.g., Reference Banta, Gelband, Jonsson and Battista3;Reference Barbieri, Drummond and Willke4;Reference Chinitz13;Reference Dickson, Hurst and Jacobzone17;Reference Draborg and Andersen19;Reference Draborg, Gyrd-Hansen, Poulsen and Horder22;Reference Garcia-Altes, Ondategui-Parra and Neumann34;Reference Hjelmgren, Berggren and Andersson46;Reference Martelli, Torre and Ghionno57;Reference Oortwijn, Banta and Cranovsky65;Reference Perry, Gardner and Thamer70–Reference Perry and Thamer72;Reference Wild and Gibis83) already exist. The importance of institutional relationships with HTA has also been well recognized (Reference Chinitz13). However, we are not aware of any previous attempt to develop a systematic description and comparison of features across HTA agencies. Because a descriptive framework could be helpful to those studying and developing HTA programs, we sought to develop one using selected European HTA agencies as examples, in a form applicable to any HTA organization internationally.

METHODS AND DATA

Selection of Sample Agencies

The choice of cases relies on conceptual, not representative, grounds (Reference Miles and Huberman59). We based our selection on the following characteristics: the included agencies should be leading institutions in industrialized European countries; they had to have an established agency history, operate nationally, and be mainly publicly financed. We wanted to include at least two contrasting healthcare structures (social health insurance versus national health service), within which potentially particular differences could be distinguished (centralized versus decentralized). Information had to be available in German or English, and contact persons and experts had to be accessible. From a larger list of possibilities, we chose the following agencies: DAHTA@DIMDI – German Agency for HTA at the German Institute for Medical Documentation and Information (Germany); HAS – French National Authority for Health (France); IQWiG – Institute for Quality and Efficiency in Health Care (Germany); NICE – National Institute for Health and Clinical Excellence (England and Wales, UK); and SBU – Swedish Council on Technology Assessment in Health Care (Sweden).

At the time of our research, DAHTA@DIMDI, IQWiG, HAS, and NICE were in transition due to legislation amendments and organizational and financial factors (8–12;Reference Degos16;41;61;62; Goehlen, personal communication [24 January 2008]; Meyer, personal communication [10 April 2008]).

Data and Information Collection

Data were collected from a systematic literature review, handsearch, and survey. We used databases, Web sites, and staff of HTA organizations to source data. A literature search was performed (R.S.) without limits of time or study type in electronic medical and health-economic databases between October 4, 2007, and November 11, 2007, and last updated in spring 2008 (for details see Figure 1). Titles and abstracts were screened (R.S.), data extracted and encoded (R.S.), and the results checked by a second reviewer (U.S.). Differences were resolved by discussion between both authors. We used all search terms describing HTA in the title field and included HTA reports on methods. Information on HTA and HTA agencies not provided by electronic medical databases was searched using Internet search engines.

Figure 1. Flowchart of identification and inclusion of literature and information search. Notes: Dates of systematic database searches: 4 October through 8 November 2007. EMBASE, Excerpta Medica Database; Econlit, Economic Literature Database; MEDLINE, Medical Literature Analysis and Retrieval System Online; SCI/SSCI, Science Citation Index Expanded/Social Sciences Citation Index; IJTAHC, International Journal of Technology Assessment in Health Care; CRD, Centre for Reviews and Dissemination; NCCHTA, National Coordinating Centre for Health Technology Assessment.

We used the selected articles to identify elementary features describing HTA and HTA agencies. Descriptive features were deemed relevant if they were discussed by experts in the field. Experts were defined as authors of respective publications or members of (inter)national working groups on HTA describing HTA (Reference Banta2;Reference Battista5;Reference Drummond, Manca and Sculpher23;Reference Drummond, Schwartz and Jonsson24;Reference Gerhardus and Dintsios35) and with experience from collaboration in large projects such as those financed by the European Commission (Reference Cranovsky, Matillon and Banta15;27–29;Reference Hailey39;Reference Henshall, Oortwijn, Stevens, Granados and Banta44;48;Reference Liberati, Sheldon and Banta56;Reference Velasco-Garrido, Perleth and Drummond77;Reference Werko and Banta82). “Elementary” was defined as disaggregated and qualitatively judged as important, necessary, typical, and constitutive for a meaningful description.

We then searched agency and other relevant Web sites for supplementary data and information if organizational specifics or methodological and procedural aspects of HTA agencies were missing or dated.

Finally, we contacted staff from each agency for particular information for any still-missing data. We used questionnaires describing missing information for each agency and then either sent them by email or used them to conduct telephone interviews. Data were extracted into an evidence table.

Framework Development

From the collected data, we identified elements and characteristics of “HTA” that were not context- or region-specific. We classified these elements using a two-level scheme. Main categories of elements were called “domains” and descriptors of these domains were called “items.” Each item was then classified according to a standardized value called an “indicator.” If we did not find any adequate information for an item, we entered “unknown.”
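To illustrate the two-level classification, the following minimal Python sketch shows how an evidence-table entry could be represented. This is our own illustration: the class, the item names, and the indicator values are hypothetical, not entries from the study's actual evidence table.

```python
from dataclasses import dataclass

@dataclass
class Item:
    domain: str      # one of the eight main domains, e.g., "methods"
    name: str        # descriptor of the domain, e.g., "economic perspective"
    indicator: str   # standardized value; "unknown" if no adequate information was found

# Hypothetical entries for two agencies; the real evidence table holds
# 90 items per agency across the eight domains.
evidence_table = {
    "agency A": [Item("methods", "economic perspective", "societal")],
    "agency B": [Item("methods", "economic perspective", "payer")],
}
```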

Data Extraction and Comparison

We used a cross-case comparison methodology, which involves drawing up a matrix of features found to be present in the cases and marking whether each feature is present in each case. This allows for determinations of difference or similarity between cases (Reference Weed80). To do this, we arranged domains, items, and indicators in a standardized descriptive table format. Information was encoded along the standardized indicators and entered into a descriptive table. At this nonaggregated level, we accumulated a complex qualitative dataset comprising 5 (number of agencies) × 90 (number of items) = 450 “data points.” We used a simple quantitative algorithm to operationalize and assess the similarity between agencies regarding items and domains. For this purpose, for each item, we performed pairwise comparisons of all five agencies (i.e., 10 comparisons: agency 1 versus agency 2, agency 1 versus agency 3, . . ., agency 4 versus agency 5). We assigned 1 point for equal comparisons and 0 points otherwise. We defined item similarity as the percentage of actual points out of the maximum points (i.e., 10 points). For example, if all five agencies used a societal perspective for economic evaluation, then similarity was 100 percent. If three agencies used a societal perspective and two agencies the payer's perspective, then 3 points would be given for the three identical pairs with the societal perspective plus 1 point for the one pair with the payer's perspective, yielding 4 points and 4/10 = 40 percent similarity. Domain similarity across agencies was then calculated as the average of all item similarities of that domain.
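The scoring can be made concrete with a short Python sketch. This is our own illustration of the algorithm described above; the function names and example values are assumptions, not artifacts of the study.

```python
from itertools import combinations

def item_similarity(values):
    """Percentage of identical pairs among all pairwise agency comparisons.

    `values` holds one indicator per agency for a single item; with five
    agencies this yields 10 pairwise comparisons.
    """
    pairs = list(combinations(values, 2))        # 10 pairs for 5 agencies
    points = sum(1 for a, b in pairs if a == b)  # 1 point per identical pair
    return 100 * points / len(pairs)

# Worked example from the text: three agencies use the societal perspective,
# two use the payer's perspective -> 4 of 10 pairs agree = 40 percent.
assert item_similarity(["societal", "societal", "societal", "payer", "payer"]) == 40.0

def domain_similarity(item_values):
    """Average of the item similarities within one domain."""
    return sum(item_similarity(v) for v in item_values) / len(item_values)
```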

We then compared and qualitatively interpreted the table entries across the agencies regarding similarities and differences. Finally, based on all preceding steps, we explored and interpreted similarities and differences across all agencies by taking into account the logic behind the conceptualization of our framework across three areas: the institutional setting; the legal context of the agency; and/or the country.

RESULTS

Search

As shown in Figure 1, a total of 11,662 citations were identified from the literature search. Additional material derived from references and other sources increased this to 12,036. After exclusion of material in languages other than English and German (n = 53) and removal of duplicates (n = 233), 11,803 citations remained. After screening titles and abstracts, we excluded 10,335 publications, leaving 1,468 potentially relevant publications. Full texts of these publications were retrieved, and their methods and results sections were examined for potential relevance. Those not excluded at this stage were screened in detail; ultimately, 306 publications were included.

Framework

Our search for frameworks applicable to HTA revealed some generally suitable examples sourced from public health, (new) public management, and other disciplines and science perspectives (Reference Easton26;Reference Hailey39;Reference Hansson40;Reference Hutton, McGrath and Frybourg47;Reference Jann, Wegrich, Schubert and Bandelow49;Reference Mintzberg60;Reference Reinermann75;Reference von Rosenstiel, Molt and Rüttinger78;Reference Weiss81). From these perspectives, we derived an initial crude generic concept including both static-structural and dynamic-procedural aspects, covering the elements “input” (representing HTA organization, infrastructure, environment), “activity” (performing HTA, maintaining the organization), “output” (reports, results, recommendations), “usage” (decision preparation, decision making, dissemination, implementation), and “impact” (change of different parameters).

From three previously identified frameworks (Reference Hailey39;Reference Jonsson, Banta, Henshall and Sampietro-Colom50;Reference Wanke, Juzwishin, Thornley and Chan79), we found ninety items that fell under the following domains: (a) organization, (b) scope, (c) processes, (d) methods, (e) dissemination, (f) decision, (g) implementation, and (h) impact. Using these domains, we constructed a structure- and sequence-based HTA framework that connects an outcome area (population) to a decision-making area (policy) and back to a production location (science). Figure 2 depicts the domains and the general framework.

Figure 2. Proportion of similarity across domains and areas. Notes: To be read from left to right across three areas: The scope of an organization determines the processes and the methods used. The product of the organization is then disseminated, reaching the policy area for decision making; decisions are intended to be implemented in society and to have an impact. Theoretically, a feedback loop back to the policy and science areas could be assumed.

Application and Interpretation of the Framework

The number of items per domain ranged from twenty in the domain “methods” to six in the domains “implementation” and “impact.” Four domains contained between eleven and twenty items: “methods” (n = 20 items), “processes” (n = 11), “scope” (n = 13), and “organization” (n = 16), whereas the other four domains had between six and nine items. Table 1 shows the percentage similarity at both the item and domain levels. Disaggregated information on all ninety items in the eight domains is provided in a supplementary list available from the authors on request.

Table 1. Percentages of Similarities in Domains

Note. For each item, we performed pairwise comparisons of all five agencies (10 comparisons). We assigned 1 point per pair that showed identical item characteristics and 0 points otherwise. Item similarity was defined as the percentage of actual points out of the 10 possible points. Agreement patterns and resulting similarity scores: 5/5 (all agencies identical regarding this item), score = 10; 4/1 (four agencies identical, one different from all others), score = 6; 3/2 (three agencies identical, the other two identical but different from the three), score = 4; 3/1/1 (three agencies identical, each of the remaining two different from the three and from each other), score = 3; 2/2/1 (two identical pairs differing from each other, one agency different from all others), score = 2; 2/1/1/1 (two agencies identical, the remaining three all different), score = 1; 1/1/1/1/1 (all agencies different), score = 0.
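Equivalently, the similarity score of each agreement pattern is the number of identical pairs it contains: every group of k agencies sharing a characteristic contributes “k choose 2” pairs. The following sketch (our own illustration, not part of the study) reproduces the scores listed in the note:

```python
from math import comb

def pattern_score(group_sizes):
    """Similarity points for an agreement pattern such as 3/2 or 2/2/1:
    each group of k identical agencies contributes C(k, 2) identical pairs."""
    return sum(comb(k, 2) for k in group_sizes)

assert pattern_score([5]) == 10              # 5/5
assert pattern_score([4, 1]) == 6            # 4/1
assert pattern_score([3, 2]) == 4            # 3/2
assert pattern_score([3, 1, 1]) == 3         # 3/1/1
assert pattern_score([2, 2, 1]) == 2         # 2/2/1
assert pattern_score([2, 1, 1, 1]) == 1      # 2/1/1/1
assert pattern_score([1, 1, 1, 1, 1]) == 0   # 1/1/1/1/1
```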

“Dissemination” was the domain with the highest similarity (40 percent), whereas “implementation” showed the lowest similarity at 17 percent (see Table 1). From highest to lowest, the other domains rated as follows: “scope” (38 percent), “organization” (35 percent), “methods” (29 percent), “processes” (26 percent), “impact” (23 percent), and “decision” (19 percent). We observed a higher similarity across HTA agencies in the “science” area, ranging from 26 percent to 40 percent, in contrast to the policy and population areas, which ranged between 17 percent and 23 percent (see Figure 2). When the framework sequence is taken into account, the analysis reveals that the lowest similarity (“policy”) occurs downstream of the highest similarity (“science”). Table 2 provides a descriptive summary of the greatest similarities and differences.

Table 2. Similarities and Differences Across Agencies at Item Level

Note. Read from left to right. The letters and numbers A1 to H6 listed in the columns ‘most similar’ and ‘least similar’ represent the items of domains A–H. DM, decision maker.

DISCUSSION

We developed a universal framework for comparing structural and procedural elements and characteristics of HTA organizations and applied it to five organizations from four countries. A table with a total of ninety items falling into eight domains (“organization,” “scope,” “processes,” “methods,” “dissemination,” “decision,” “implementation,” and “impact”) was constructed.

Our comparison of a sample of five HTA agencies (DAHTA@DIMDI, HAS, IQWiG, NICE, and SBU) revealed considerably more differences (60–83 percent) than similarities across agencies and countries at all levels of the framework structure. The magnitude of similarity, expressed as the percentage of identical characteristics in pairwise comparisons across agencies, was moderate, ranging between 17 and 40 percent across all domains. The greatest similarity was present in the domain “dissemination,” which addresses the distribution of HTA information. Three of the eight domains, “decision,” “implementation,” and “impact,” did not show complete similarity for any of their items and scored below 25 percent.

Strengths and Limitations of the Study

The strength of our work is that we took a more systematic and rigorous approach to developing this framework than had previously been attempted. We also included information current to 2008 and explored important domains in greater depth than previous efforts by delineating their constituent items.

Our work has several limitations. First, we focused on a small sample of national agencies and ignored hospital-based, for-profit, or private HTA agencies, academic HTA units, and units not involved in decision making at the national level.

Second, despite our systematic approach to information gathering and data collection, the qualitative process used can introduce biases leading to misclassification of characteristics, arising, for example, from a lack of standardization of terms, the use of different types and sources of information, and misrepresented information in published or translated documents. Our framework also does not capture pragmatic factors such as resources and hidden politicized processes, social and cultural values, subjective and intuitive reasoning, implicit and nontransparent principles (Reference Dowie18), or psychological group effects from procedural contexts (Reference Dowie18) that may also play a role.

Finally, due to the small number of included institutions at this step we could not perform a comprehensive quantitative analysis to describe the frequencies of HTA features among agencies or, more interestingly, the correlations between different HTA features.

Comparison to Other Work

Our conceptualization of the framework and the choice of its elements are closest to those of Hailey, Jonsson et al., and Wanke et al. (Reference Hailey39;Reference Jonsson, Banta, Henshall and Sampietro-Colom50;Reference Wanke, Juzwishin, Thornley and Chan79). However, we believe this work is more up-to-date and explores the features of HTA agencies in greater depth.

Previous frameworks have been developed but have used fewer descriptive criteria (Reference Garcia-Altes, Ondategui-Parra and Neumann34;Reference Perry, Gardner and Thamer70) or have focused specifically on agency performance (Reference Hailey39) or function (Reference Lafortune, Farand, Mondou, Sicotte and Battista53) and have used different methods. Some studies have examined a wider sample of agencies (but with fewer criteria) (Reference Perry, Gardner and Thamer70), identified agencies after developing a framework (Reference Martelli, Torre and Ghionno57), or identified agencies specific to a single country (Reference Lehoux, Tailliez, Denis and Hivon54). One of the strengths of our study compared with some of these efforts is that agencies were identified a priori and criteria have been disaggregated to allow readers to judge whether criteria have been accurately represented.

Other previous studies have had a narrower focus than ours, with one study focused only on agencies with an explicit connection to pharmaceutical licensing, reimbursement, and pricing (85) and another examining aspects specific to the conduct of health economic evaluation across agencies. In contrast to our findings, Hjelmgren et al. (Reference Hjelmgren, Berggren and Andersson46) found disagreement in the choice of perspective, resources, included costs, and “in methods of evaluating resources used.”

Our study is different from previous publications on key principles in HTA (Reference Drummond, Schwartz and Jonsson24;Reference Drummond, Schwartz and Jonsson25;Reference Liberati, Sheldon and Banta56) in that our work is descriptive and has no normative aspects. Also, as opposed to recent efforts exploring relationships between HTA-relevant issues and their implications, we did not rely on existing HTA reports in a quantitative manner (Reference Draborg and Andersen19Reference Draborg, Gyrd-Hansen, Poulsen and Horder22;Reference Lehoux, Tailliez, Denis and Hivon54). We were, therefore, unable to show time trends and used HTA reports only to illustrate partial aspects like the reporting structure.

We believe these findings shed some light on the question of HTA harmonization and suggest it could be difficult. In particular, we identified differences of up to 83 percent per domain. Moreover, agencies such as SBU have been in operation for almost 20 years, setting them apart from younger agencies. This raises the question of to what extent harmonization or differentiation is needed, for which purpose, and who will profit. In this context, it is worth mentioning that representatives of the EUnetHTA movement (Reference Kristensen51) very recently emphasized that higher similarity is to be expected regarding HTA methods, which could be further standardized. Similarly, Liberati et al. (Reference Liberati, Sheldon and Banta56) stated in the report on methodology of the subgroup of the EUR-ASSESS project that “factors such as the particularities of decisions and the decision-making process, political factors and influences, and cultural variability mean that there can never be one process or method of HTA applicable to all circumstances.”

Like Hutton et al. (Reference Hutton, McGrath and Frybourg47), we attempted to better understand the potential use of HTA and identified nearly sufficient capacity, at least in the largest agencies studied (NICE and HAS). However, we did not find evidence of economies of scale in dissemination, implementation, or impact.

Our findings support those of a previous 3-year OECD analysis that suggested “only limited evidence of the effectiveness of HTA in terms of its influence on decision making, on health technology use or on health outcomes” (66). In line with our analysis, a lack of linkage between HTA and policy making was found.

CONCLUSIONS, POLICY IMPLICATIONS, AND RECOMMENDATIONS

In conclusion, our study presents a detailed, structured, and contextual framework for HTA as a standardized template. Our template can be useful within a single agency, for comparing an agency with other agencies, and across the areas of science, policy, and population.

The application of our framework within a restricted HTA landscape of five HTA agencies in four countries demonstrates that there is great diversity regarding agency characteristics. The fact that considerably more differences than similarities exist when assessing only five agencies shows how difficult harmonization could be, but this must be confirmed when our database is extended with further agencies. Nevertheless, according to our systemic approach and due to the genuinely multidisciplinary nature of “HTA,” we recommend an improved interdisciplinary dialogue between users and producers from different areas.

Our findings suggest some key factors for exploring harmonization, including contextual (i.e., framing) factors and their relevance within a country-specific context. We also found that the characteristics of some agencies were bound to country-specific organizational and procedural views, which could be explained by their obligation to answer primarily national demands.

Our findings also suggest the field of HTA needs to be better studied to better interpret and challenge differences across organizations. Efforts to further examine domains or items within the legal, policy, and healthcare system context should be encouraged.

CONTACT INFORMATION

Ruth Schwarzer, MA, MPH, ScD, Senior Scientist, Institute of Public Health, Medical Decision Making and Health Technology Assessment, UMIT–University for Health Sciences, Medical Informatics and Technology, Eduard Wallnoefer Center 1, A-6060 Hall i.T., Austria

Uwe Siebert, MD, MPH, MSc, ScD, Professor of Public Health (UMIT), Chair, Department of Public Health, Information Systems and Health Technology Assessment, UMIT–University for Health Sciences, Medical Informatics and Technology, Eduard Wallnoefer Center 1, A-6060 Hall i.T., Austria; Adjunct Professor of Health Policy and Management, Center for Health Decision Science, Department of Health Policy and Management, Harvard School of Public Health, 718 Huntington Avenue, Boston, Massachusetts 02115; Director of Cardiovascular Research Program, Institute for Technology Assessment and Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 101 Merrimac Street, Boston, Massachusetts 02114

REFERENCES

1. Anonymous. [Evaluation of the technology employed in health care]. Rev Panam Salud Publica. 1997;2:363–372.
2. Banta, D. A review of health technology assessment methods in the field of pharmaceuticals. Poland: TNO Prevention and Health, Ministry of Health, Office for Foreign Aid Programs in Health Care; 2002. This report is part of a project financed through World Bank Loan 3466-POL.
3. Banta, HD, Gelband, H, Jonsson, E, Battista, RN. Special issue: Health care technology and its assessment in eight countries: Australia, Canada, France, Germany, Netherlands, Sweden, United Kingdom, United States. Health Policy. 1994;30:1–421.
4. Barbieri, M, Drummond, M, Willke, R, et al. Variability of cost-effectiveness estimates for pharmaceuticals in Western Europe: Lessons for inferring generalizability. Value Health. 2005;8:10–23.
5. Battista, RN. Expanding the scientific basis of health technology assessment: A research agenda for the next decade. Int J Technol Assess Health Care. 2006;22:275–280; discussion 280–282.
6. Brehaut, JD, Juzwishin, D. Bridging the gap: The use of research evidence in policy development. Alberta, Canada: Alberta Heritage Foundation of Medical Research (AHFMR); 2005:1–29.
7. Bronner, D, Federal Joint Committee (G-BA). Assessment of benefit: Implementation of medical innovations in Germany (oral presentation). College voor zorgverzekeringen (CVZ) and Gemeinsamer Bundesausschuss (G-BA), January 26, 2007, Amsterdam.
8. Bundesgesetzblatt. Art. §35b Benefit and cost assessment of drugs. [Social Code Book V. Statutory Health Insurance. Based on Art. 1 Social Security Code V on Social Health of 20 December 1988, BGBl. I S. 2477. Last revision based on Art. 5 G of 20 April 2007]. Bundesgesetzblatt.
9. Bundesgesetzblatt. Art. §139a The Institute for Quality and Efficiency in Health Care. [Social Code Book V. Statutory Health Insurance. Based on Art. 1 Social Security Code V on Social Health of 20 December 1988, BGBl. I S. 2477. Last revision based on Art. 5 G of 20 April 2007]. Bundesgesetzblatt.
10. Bundesgesetzblatt. Art. §139b Conduct of tasks. [Social Code Book V. Statutory Health Insurance. Based on Art. 1 Social Security Code V on Social Health of 20 December 1988, BGBl. I S. 2477. Last revision based on Art. 5 G of 20 April 2007]. Bundesgesetzblatt.
11. Bundesgesetzblatt. Art. §139c Funding. [Social Code Book V. Statutory Health Insurance. Based on Art. 1 Social Security Code V on Social Health of 20 December 1988, BGBl. I S. 2477. Last revision based on Art. 5 G of 20 April 2007]. Bundesgesetzblatt.
12. Bundesgesetzblatt. Statutory Health Insurance [SHI] – Act to Promote Competition (GKV-Wettbewerbsstaerkungsgesetz – GKV-WSG) of March 26, 2007. Bundesgesetzblatt Part I No. 11, administered at Bonn March 30, 2007. pp. 378–473.
13. Chinitz, D. Health technology assessment in four countries: Response from political science. Int J Technol Assess Health Care. 2004;20:55–60.
14. Cox, P. Financing sustainable health care in Europe: New approaches for new outcomes. Conclusions from a collaborative investigation into contentious areas of healthcare. Helsinki: www.sustainhealthcare.org; 2007:1–192.
15. Cranovsky, R, Matillon, Y, Banta, D. EUR-ASSESS project subgroup report on coverage. Int J Technol Assess Health Care. 1997;13:287–332.
16. Degos, L. Benefit of health technologies: Where do we come from, where are we now, where do we go? (Oral presentation). IQWiG Herbstsymposium: Wissen als Entscheidungsgrundlage für Patienten und Ärzte (Nov 23); Der finanzielle Wert von Krankheit und Gesundheit (Nov 24). Cologne, Germany, November 24, 2007.
17. Dickson, M, Hurst, J, Jacobzone, S. Survey of pharmacoeconomic assessment activity in eleven countries. Paris: Directorate for Employment, Labour and Social Affairs, Employment, Labour and Social Affairs Committee; 2003:1–44.
18. Dowie, J. Research implications of science-informed, value-based decision making. Int J Occup Med Environ Health. 2004;17:83–90.
19. Draborg, E, Andersen, CK. Recommendations in health technology assessments worldwide. Int J Technol Assess Health Care. 2006;22:155–160.
20. Draborg, E, Andersen, CK. What influences the choice of assessment methods in health technology assessments? Statistical analysis of international health technology assessments from 1989 to 2002. Int J Technol Assess Health Care. 2006;22:19–25.
21. Draborg, E, Gyrd-Hansen, D. Time-trends in health technology assessments: An analysis of developments in composition of international health technology assessments from 1989 to 2002. Int J Technol Assess Health Care. 2005;21:492–498.
22. Draborg, E, Gyrd-Hansen, D, Poulsen, PB, Horder, M. International comparison of the definition and the practical application of health technology assessment. Int J Technol Assess Health Care. 2005;21:89–95.
23. Drummond, M, Manca, A, Sculpher, M. Increasing the generalizability of economic evaluations: Recommendations for the design, analysis, and reporting of studies. Int J Technol Assess Health Care. 2005;21:165–171.
24. Drummond, M, Schwartz, JS, Jonsson, B, et al. The International Working Group for HTA Advancement. Key principles for the improved conduct of health technology assessments for resource allocation decisions. Int J Technol Assess Health Care. 2008;24:244–258.
25. Drummond, M, Schwartz, JS, Jonsson, B, et al. The International Working Group for HTA Advancement. Key principles for the improved conduct of health technology assessments for resource allocation decisions: Authors' reply. Int J Technol Assess Health Care. 2008;24:367–368.
26. Easton, D. The political system: An inquiry into the state of political science. New York: Knopf; 1953.
27. EUnetHTA. European network for Health Technology Assessment (oral presentation). Fourth Annual Meeting of Health Technology Assessment International (HTAi), June 17–20, 2007, Barcelona, Spain.
28. European Commission. The Swedish Council on Technology Assessment in Health Care (SBU). The ECHTA/ECAHI Project. 1999:1–552.
29. European Union (EU) [homepage on the Internet]. Health-EU. The public health portal of the European Union. Medicines and treatment. http://ec.europa.eu/health-eu/care_for_me/medicines_and_treatment/index_en.htm (accessed May 2008).
30. Ferguson, JH, Dubinsky, M, Kirsch, PJ. Court-ordered reimbursement for unproven medical technology. Circumventing technology assessment. JAMA. 1993;269:2116–2121.
31. Francke, R, Hart, D. [HTA in the decision-making processes of health care institutions. Current state and relevant questions of regulatory health law]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz. 2006;49:241–250.
32. Freeman, JM. Beware: The misuse of technology and the law of unintended consequences. Neurotherapeutics. 2007;4:549–554.
33. Gagnon, MP, Sanchez, E, Pons, JM. Integration of health technology assessment recommendations into organizational and clinical practice: A case study in Catalonia. Int J Technol Assess Health Care. 2006;22:169–176.
34. Garcia-Altes, A, Ondategui-Parra, S, Neumann, PJ. Cross-national comparison of technology assessment processes. Int J Technol Assess Health Care. 2004;20:300–310.
35. Gerhardus, A, Dintsios, CM. [Der Einfluss von HTA-Berichten auf die gesundheitspolitische Entscheidungsfindung – Eine systematische Übersichtsarbeit] The impact of HTA reports on decision-making processes in the health sector in Germany. Series of the German Institute for Medical Documentation and Information commissioned by the Federal Ministry of Health. Köln: DIMDI; 2005:1–111.
36. Gibis, B, Rheinberger, P. [Experiences with and impact of health technology assessment on the German Standing Committee of physicians and patients]. Z Arztl Fortbild Qualitatssich. 2002;96:82–90.
37. Goodman, CS. HTA 101: Introduction to health technology assessment [update of 1998, web-published on NLM]. Virginia: The Lewin Group; 2004:1–155.
38. Granados, A, Jonsson, E, Banta, HD, et al. EUR-ASSESS project subgroup report on dissemination and impact. Int J Technol Assess Health Care. 1997;13:220–286.
39. Hailey, D. Elements of effectiveness for health technology assessment programs. Alberta, Canada: Alberta Heritage Foundation of Medical Research; 2003:1–41.
40. Hansson, SO. Decision theory. A brief introduction. Stockholm, Sweden: Department of Philosophy and the History of Technology, Royal Institute of Technology (KTH); 1994:1–94.
41. Haute Autorité de Santé. Code de la sécurité sociale. Version à venir au 1 juin 2008. Chapitre 1 bis. Paris: Haute Autorité de Santé, Article L161-37; 2008.
42. Hemminki, E, Hailey, D, Koivusalo, M. Health care policy – the courts – a challenge to health technology assessment. Science. 1999;285:203–204.
43. Henshall, C, Koch, P, von Below, GC, et al. Health technology assessment in policy and practice – Working group 6 report. Int J Technol Assess Health Care. 2002;18:447–455.
44. Henshall, C, Oortwijn, W, Stevens, A, Granados, A, Banta, D. Priority setting for health technology assessment. Theoretical considerations and practical approaches. Priority Setting Subgroup of the EUR-ASSESS Project. Int J Technol Assess Health Care. 1997;13:144–185.
45. Hivon, M, Lehoux, P, Denis, JL, Tailliez, S. Use of health technology assessment in decision making: Coresponsibility of users and producers? Int J Technol Assess Health Care. 2005;21:268–275.
46. Hjelmgren, J, Berggren, F, Andersson, F. Health economic guidelines – similarities, differences and some implications. Value Health. 2001;4:225–250.
47. Hutton, J, McGrath, C, Frybourg, JM, et al. Framework for describing and classifying decision-making systems using technology assessment to determine the reimbursement of health technologies (fourth hurdle systems). Int J Technol Assess Health Care. 2006;22:10–18.
48. Introduction to the EUR-ASSESS report. Int J Technol Assess Health Care. 1997;13:133–143.
49. Jann, W, Wegrich, K. Phasenmodelle und Politikprozesse: Der Policy Cycle. In: Schubert, K, Bandelow, N, eds. Lehrbuch der Politikfeldanalyse. München/Wien; 2003.
50. Jonsson, E, Banta, D, Henshall, C, Sampietro-Colom, L. The ECHTA/ECAHI project. European Commission; 1999:1–552.
51. Kristensen, FB. First plenary session. Health technology assessment (HTA) in Europe – is harmonization possible? (Oral presentation). 11th Annual Congress of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Athens, Greece, November 8–11, 2008.
52. Kuhn, T. The structure of scientific revolutions. Chicago: University of Chicago Press; 1962.
53. Lafortune, L, Farand, L, Mondou, I, Sicotte, C, Battista, R. Assessing the performance of health technology assessment organizations: A framework. Int J Technol Assess Health Care. 2008;24:76–86.
54. Lehoux, P, Tailliez, S, Denis, JL, Hivon, M. Redefining health technology assessment in Canada: Diversification of products and contextualization of findings. Int J Technol Assess Health Care. 2004;20:325–336.
55. Lehoux, P, Williams-Jones, B. Mapping the integration of social and ethical issues in health technology assessment. Int J Technol Assess Health Care. 2007;23:9–16.
56. Liberati, A, Sheldon, TA, Banta, HD. EUR-ASSESS project subgroup report on methodology. Methodological guidance for the conduct of health technology assessment. Int J Technol Assess Health Care. 1997;13:186–219.
57. Martelli, F, Torre, GL, Ghionno, ED, et al. Health technology assessment agencies: An international overview of organizational aspects. Int J Technol Assess Health Care. 2007;23:414–424.
58. McGivney, WT. Coverage, technology assessment, and the courts. Physician Exec. 1991;17:36–38.
59. Miles, MB, Huberman, AM. Qualitative data analysis. Thousand Oaks, CA: Sage; 1994.
60. Mintzberg, H. Die Mintzberg-Struktur. Organisationen effektiver gestalten. Landsberg/Lech: Verlag Moderne Industrie; 1992.
61. National Institute for Health and Clinical Excellence (NICE). Homepage on the Internet. http://www.nice.org.uk/ (accessed May 2008).
62. National Institute for Health and Clinical Excellence (NICE). Guide to the methods of technology appraisal [issued June 2008]. London, UK: http://www.nice.org.uk (accessed October 19, 2008).
63. Newcomer, LN. Technology assessment, benefit coverage, and the courts. In: Gelijns, AC, Dawkins, HV, eds. Adopting new medical technology, vol. 4. Washington, DC: National Academy Press; 1994:117–124.
64. Oliver, A, Mossialos, E, Robinson, R. Health technology assessment and its influence on health-care priority setting. Int J Technol Assess Health Care. 2004;20:1–10.
65. Oortwijn, W, Banta, HD, Cranovsky, R. Introduction: Mass screening, health technology assessment, and health policy in some European countries. Int J Technol Assess Health Care. 2001;17:269–274.
66. Organisation for Economic Co-operation and Development. The OECD Health Project. Health technology and decision making. Paris: OECD; 2005.
67. Perleth, M, Busse, R, Gerhardus, A, Gibis, BR, Luhmann, D, eds. Health technology assessment. Konzepte, Methoden, Praxis für Wissenschaft und Entscheidungsfindung. Berliner Schriftenreihe Gesundheitswissenschaften. Berlin: Medizinisch Wissenschaftliche Verlagsgesellschaft; 2008:1–260.
68. Perleth, M, Jakubowski, E, Busse, R. [“Best practice” in health care – or why we need evidence-based medicine, guidelines and health technology assessment]. Z Arztl Fortbild Qualitatssich. 2000;94:741–744.
69. Perleth, M, Jakubowski, E, Busse, R. What is ‘best practice’ in health care? State of the art and perspectives in improving the effectiveness and efficiency of the European health care systems. Health Policy. 2001;56:235–250.
70. Perry, S, Gardner, E, Thamer, M. The status of health technology assessment worldwide. Results of an international survey. Int J Technol Assess Health Care. 1997;13:81–98.
71. Perry, S, Thamer, M. Health technology assessment: Decentralized and fragmented in the US compared to other countries (corrected). Health Policy. 1997;42:269–290.
72. Perry, S, Thamer, M. Evaluation of health care technologies in the United States compared to Canada and European countries. J Public Health Policy. 1999;20:168–191.
73. Petherick, ES, Villanueva, EV, Dumville, J, Bryan, EJ, Dharmage, S. An evaluation of methods used in health technology assessments produced for the Medical Services Advisory Committee. Med J Aust. 2007;187:289–292.
74. Philips, Z, Bojke, L, Sculpher, M, Claxton, K, Golder, S. Good practice guidelines for decision-analytic modelling in health technology assessment: A review and consolidation of quality assessment. Pharmacoeconomics. 2006;24:355–371.
75. Reinermann, H. Neues Politik- und Verwaltungsmanagement: Leitbild und theoretische Grundlagen. http://www.dhv-speyer.de/rei/publica/online/spah130.pdf (accessed February 28, 2008).
76. Sassi, F. The European way to health technology assessment. Lessons from an evaluation of EUR-ASSESS. Int J Technol Assess Health Care. 2000;16:282–290.
77. Velasco-Garrido, M, Perleth, M, Drummond, M, et al. Best practice in undertaking and reporting health technology assessments. Working group 4 report. Int J Technol Assess Health Care. 2002;18:361–422.
78. von Rosenstiel, L, Molt, W, Rüttinger, B. Organisationspsychologie. Stuttgart: W. Kohlhammer; 2005.
79. Wanke, M, Juzwishin, D, Thornley, R, Chan, L. An exploratory review of evaluations of health technology assessment agencies. Alberta, Canada: Alberta Heritage Foundation of Medical Research (AHFMR); 2006:1–61.
80. Weed, M. Meta interpretation: A method for the interpretive synthesis of qualitative research [53 paragraphs]. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research. 2005;6:Art. 37. http://nbn-resolving.de/urn:nbn:de:0114-fqs0501375 or http://www.qualitative-research.net/index.php/fqs/article/viewArticle/508/1096.
81. Weiss, CH. The many meanings of research utilization. Public Adm Rev. 1979;39:426–431.
82. Werko, L, Banta, D. Report from the EUR-ASSESS project. Int J Technol Assess Health Care. 1995;11:797–799.
83. Wild, C, Gibis, B. Evaluations of health interventions in social insurance-based countries: Germany, the Netherlands, and Austria. Health Policy. 2003;63:187–196.
84. Working Group on Relative Effectiveness. 5th Meeting of the Working Group on Relative Effectiveness, October 2007, Brussels, Belgium. http://ec.europa.eu/health/ph_overview/other_policies/pharmaceutical/ev_20071002_mi_en.pdf (accessed March 23, 2008).
85. Zentner, A, Velasco-Garrido, M, Busse, R. Methoden zur vergleichenden Bewertung pharmazeutischer Produkte. Eine internationale Bestandsaufnahme zur Arzneimittelevaluation. Köln: Deutsches Institut für Medizinische Dokumentation und Information (DIMDI); 2005:1–158.