
Are Key Principles for improved health technology assessment supported and used by health technology assessment organizations?

Published online by Cambridge University Press:  08 January 2010

Peter J. Neumann
Affiliation:
Tufts Medical Center
Michael F. Drummond
Affiliation:
University of York
Bengt Jönsson
Affiliation:
Stockholm School of Economics
Bryan R. Luce
Affiliation:
United BioSource Corporation
J. Sanford Schwartz
Affiliation:
University of Pennsylvania
Uwe Siebert
Affiliation:
University for Health Sciences, Medical Informatics and Technology
Sean D. Sullivan
Affiliation:
University of Washington

Abstract

Previously, our group—the International Working Group for HTA Advancement—proposed a set of fifteen Key Principles that could be applied to health technology assessment (HTA) programs in different jurisdictions and across a range of organizations and perspectives. In this commentary, we investigate the extent to which these principles are supported and used by fourteen selected HTA organizations worldwide. We find that some principles are broadly supported: examples include being explicit about HTA goals and scope; considering a wide range of evidence and outcomes; and being unbiased and transparent. Other principles receive less widespread support: examples are addressing issues of generalizability and transferability; being transparent about the link between HTA findings and decision-making processes; considering a full societal perspective; and monitoring the implementation of HTA findings. The analysis also suggests a lack of consensus in the field about some principles—for example, considering a societal perspective. Our study highlights differences in the uptake of key principles for HTA and indicates considerable room for improvement in the adoption of principles that reflect good HTA practice. Most HTA organizations espouse certain general concepts of good practice—for example, that assessments should be unbiased and transparent. However, principles that require more intensive follow-up—for example, monitoring the implementation of HTA findings—have received little support and execution.

Type: POLICIES

Copyright © Cambridge University Press 2010

Health technology assessment (HTA) has been defined as “a multi-disciplinary field of policy analysis, studying the medical, economic, social and ethical implications of development, diffusion and use of health technology” (11). A variety of public and private sector organizations, advisory committees, and regulatory bodies now conduct HTA worldwide. Previously, our group—the International Working Group for HTA Advancement—proposed a set of fifteen Key Principles to guide HTA assessments (5). In this commentary, we investigate the extent to which these principles have been supported and implemented at selected HTA organizations around the world.

DATA AND METHODS

The Key Principles

In developing the Key Principles, our group observed that HTA was a dynamic and rapidly evolving process, embracing different types of assessments. We also noted that HTA organizations, which themselves vary in substantial ways, were increasingly undertaking or commissioning HTAs to inform a variety of health policy decisions. We further noted that the landscape for HTA was changing rapidly in the United States, Europe, and parts of Asia and Latin America (5).

The Key Principles were designed to build upon other efforts that sought to describe HTA systems or to identify appropriate and inappropriate practice for the conduct of HTAs (3;6;7;10). In developing the principles, our main focus was on those HTA activities that were linked to, or included, a particular resource allocation decision. We also emphasized that it was important to consider the link between an HTA and the decisions that follow it. A detailed description of the principles and the rationale for each is provided elsewhere (5). Table 1 enumerates the principles, divided into four sections: structure of HTA programs, methods of HTA, processes for conducting HTA, and the use of HTA in decision making.

Table 1. Support and Use of Key HTA Principles Across Selected Organizations

Note. “+” signifies that the organization “supported” the principle in question in written guidelines or other form, regardless of whether the organization actually follows it.

“++” signifies that the organization “implemented” the principle: published reports, and decisions based on these reports, demonstrate adoption of the specific principle.

CMS = Centers for Medicare and Medicaid Services.

DERP = Drug Effectiveness Review Project.

BCBS TEC = Blue Cross Blue Shield Association, Technology Evaluation Center.

NICE = National Institute for Health and Clinical Excellence.

IQWiG = Institute for Quality and Efficiency in Health Care.

DIMDI = German Agency for Health Technology Assessment at the Institute for Medical Documentation and Information.

TLV = The Dental and Pharmaceutical Benefits Agency.

SBU = The Swedish Council on Technology Assessment in Health Care.

CADTH = Canadian Agency for Drugs and Technologies in Health.

HIRA = Department of Health Technology Assessment, Health Insurance Review Agency.

Anvisa = National Health Surveillance Agency (Office of Economic Evaluation of Health Technologies).

PBAC = Pharmaceutical Benefits Advisory Committee.

DHTA = Division of Health Technology Assessment.

a This table refers to CMS's HTA process for national coverage decision making.

b Washington Medicaid is one of 14 participants in the DERP. DERP researchers conduct health technology assessments for drug classes. Participants in the DERP, such as the Washington Medicaid program, retain local authority for interpreting DERP reports and for decision making regarding which drugs to pay for. Though the Medicaid program makes decisions on all technologies, the DERP focuses only on drugs.

c DHTA is within the Center for Drug Evaluation in Taiwan.

d There was disagreement in the group about whether IQWiG warranted a “plus” for principles 1 and 6, and whether its methods for assessing costs and benefits (principle 5) were appropriate. Moreover, at the time of the evaluation, IQWiG had not yet performed a cost-effectiveness assessment, so implementation of this principle could not be judged.

Selecting HTA Organizations for the Exercise

We selected a variety of HTA organizations against which to evaluate support and use of the Key Principles. We attempted to capture a sample of agencies worldwide, and to include examples of both established and emerging entities with different roles and objectives. We included traditional HTA agencies as well as reimbursement agencies focusing mainly on drugs. We included public and private organizations: in most nations, the organizations that perform HTAs are public sector groups, reflecting the host country's public financing and/or provision of health care. However, private sector organizations also undertake HTAs, particularly in the United States, where private health insurance is common (13;16;18). Finally, we focused on HTA organizations of which members of our group had specific knowledge.

Ultimately, we included fourteen HTA organizations for the exercise: the Centers for Medicare and Medicaid Services (CMS) (U.S.); the Washington State Medicaid program/Drug Effectiveness Review Project (DERP) (U.S.); Wellpoint (U.S.); Blue Cross Blue Shield Association, Technology Evaluation Center (BCBS TEC) (U.S.); National Institute for Health and Clinical Excellence (NICE) (England and Wales); Institute for Quality and Efficiency in Health Care (IQWiG) (Germany); German Agency for Health Technology Assessment at the Institute for Medical Documentation and Information (DIMDI) (Germany); Council on Technology Assessment in Health Care (SBU) (Sweden); Dental and Pharmaceutical Benefits Agency (TLV) (Sweden); Canadian Agency for Drugs and Technologies in Health (CADTH) (Canada); Department of Health Technology Assessment, Health Insurance Review Agency (HIRA) (South Korea); National Health Surveillance Agency (Anvisa) (Brazil); Center for Drug Evaluation (CDE) (Taiwan); and Pharmaceutical Benefits Advisory Committee (PBAC) (Australia).

We recognize that these organizations are not necessarily representative of the entire universe of HTA agencies. Other investigators might have selected other representative organizations among the many dozens of existing HTA entities worldwide. Our goal was to produce a list that was diverse with respect to geography, scope, and stage of development; included leading organizations familiar to group participants; and comprised a useful set for assessing the adequacy of the principles.

Evaluating Support and Implementation of the Principles

We investigated the extent to which each of the fifteen principles has been supported and implemented by the fourteen HTA organizations. By “supported,” we meant that the organization embraces the principle in written guidelines or other form, regardless of whether it actually follows it. By “implemented,” we meant that published reports, and decisions based on these reports, demonstrate adoption of the specific principle.

We evaluated each principle for each organization in a two-stage process: we gave one “plus sign” to a principle referenced by an organization in its published charter or guidelines, or if we could infer from other available information that the principle was relevant (i.e., “supported”); we conferred a second plus sign if the principle was actually adhered to by the organization in practice (i.e., “implemented”).

The two-stage process enabled us to capture the extent to which the principles have been supported and the extent to which they have been realized in practice. Our intention was to focus on uptake and use of the principles, rather than to issue a verdict or report card on the HTA entities evaluated.

One of the co-authors took the lead for the evaluation of the principles for each organization. Each evaluator reviewed the Web site, mission, and activities of the organization in question. In many cases, the co-author conducting the evaluation had participated in technology assessments for the organization and/or had written about the HTA process at the organization and about particular decisions. Based on this knowledge, the evaluator issued an overall judgment about the extent to which published reports and decisions based on these reports were in line with the specific principle. As a rule, we chose not to ask the respective organizations either to self-evaluate or to review and comment on our assessments. However, in a few cases, we sought input from individuals affiliated with an HTA organization in question to clarify certain practices. Those individuals are noted in the acknowledgment section of the study. The judgments were then discussed in a group consensus meeting of the working group. All final evaluations are from the group and do not necessarily reflect the views of others. The evaluations are current as of August 15, 2009.

We recognized at the outset that, despite our attempt to provide objective and thorough assessments, our judgments were by nature somewhat impressionistic. We present this exercise in the spirit of a “commentary,” intended as a first-blush effort to analyze support and use of the principles to advance the discussion about good HTA practices and to facilitate their acceptance and adoption. In the discussion section, we expand upon the effort that might be required to undertake a formal benchmarking exercise.

RESULTS

Table 1 indicates that there is considerable variation in support of the principles. Some principles are broadly supported. Examples include being explicit about HTA goals and scope (supported by all fourteen organizations analyzed), being timely in assessments (13 of the 14), considering a wide range of evidence and outcomes (13/14), seeking all available data (12/14), and being unbiased and transparent (11/14). Other principles have received less widespread support. Examples include addressing issues of generalizability and transferability (7/14), being transparent about the link between HTA findings and decision-making processes (7/14), considering a full societal perspective (4/14), and monitoring the implementation of HTA findings (3/14).

There also is variation in the level of implementation of the principles. Examples of principles implemented more widely include being explicit about the goals and scope of HTA (implemented by 9 of the 14 organizations), being unbiased and transparent (9/14), and considering a wide range of evidence and outcomes (9/14). Implementation of other principles is lagging, including being transparent about the link between HTA findings and decision-making processes (3/14), considering a full societal perspective (2/14), considering issues of generalizability and transferability (2/14), and monitoring HTA implementation (1/14).

There is variation in the degree to which the HTA organizations we examined are supporting and implementing the key principles. Some agencies, such as NICE, IQWiG, DIMDI, SBU, CADTH, and PBAC, support twelve or more of the principles. Others, such as Washington Medicaid/DERP, Blue Cross Blue Shield TEC, and DHTA (Taiwan), have supported six or fewer.

DISCUSSION

Support and Use of the Principles

The Key Principles were intended to provide a set of universally applicable guideposts for HTA programs in different jurisdictions and across a range of organizations with different roles, objectives, and perspectives. We previously argued that application of the principles had the potential to improve clinical and policy decisions, to enhance access to clinically effective and cost-effective care, to improve the efficiency of care, and to advance the health of the public (5). We also reasoned that adoption of the principles could help enhance the quality and credibility of HTA organizations, while building greater trust in and support for HTA programs. While other groups have examined differences across HTA entities in terms of organizational aspects and certain methodological practices (4;8;14;17;18), this commentary represents a first examination of support and use of the key principles.

The analysis illustrates a mixed picture, with some principles receiving broad support and others much less so. If there is a pattern here, perhaps it is that most HTA organizations espouse certain general concepts of good practice—for example, that the goals and scope of HTA should be explicit and relevant to its use, and that HTA should be unbiased and transparent. On the other hand, principles that require more intensive follow-up, or that are not regarded as the primary responsibility of the agency—for example, monitoring the implementation of HTA findings or communicating findings to different decision makers—have received little support and execution. The analysis also suggests a lack of consensus in the field about certain principles—for example, considering a societal perspective—or perhaps a need for more guidance about how to implement certain principles—for example, addressing issues of generalizability and transferability.

Cross-National Differences in HTA

Application of the principles also serves to illustrate variations in the experiences and performances of HTA agencies. Perhaps not surprisingly, no single organization supports and applies all of the principles. In general, there appears to be more support and implementation of the key principles among the HTA organizations we evaluated in Europe, Canada, and Australia, compared with those in the United States, Brazil, and Asia. Notably, three of the four large U.S.-based organizations that we examined (CMS, Washington Medicaid/DERP, and Blue Cross Blue Shield) do not generally support the assessment of costs and benefits.

We have pointed out previously that there is no single way to conduct HTAs that will meet all of the needs of all decision makers, stakeholders, and societies (5). Thus, one does not expect all HTA organizations to conduct assessments in the same way given their many differences, including differences across jurisdictions in health systems, institutional structures and governance, statutory authority, data availability and resources, cultures, traditions, incomes, and local practice patterns, prices, and preferences. Moreover, the HTA organizations we considered are in different stages of development and vary considerably in the resources available to them. Some, such as PBAC and NICE (established in 1992 and 1999, respectively), are well established, while others, such as HIRA's HTA unit (2008), are emerging and have only recently adopted a set of publicly available guidelines.

Key Challenges

Undoubtedly, some will take issue with some of our assessments. As was the case in developing our key principles, our goal in this study was to advance the practice of HTA and to stimulate informed discussion through an extended and interactive process (5). We acknowledge that there is subjectivity in our evaluations and that other researchers or the agencies themselves may be more or less strict about whether a particular principle has been supported or implemented. In some cases, we judged our assessments to be more or less straightforward, while in others, the evaluation required greater discretion and caused considerable debate within the group. In particular, there was disagreement around whether IQWiG has supported various principles, including whether its goal and scope were explicit and relevant to its use (principle 1); whether its methods for assessing costs and benefits and Germany's decision not to consider health resource allocation decisions across diseases were appropriate (principle 5); and whether the agency considers a wide range of evidence and outcomes (12).

The two-stage process for judging support and implementation has its pros and cons. Generally, it was relatively easy to agree on support, because these assessments are based mainly on written documentation. On the other hand, agreement on implementation was more difficult, because it depends partly on local knowledge and perceptions, as well as actual evidence of performance. In addition, it is important to recognize that our applications are point-in-time snapshots in a very dynamic field in which organizations and programs are continually evolving. In some instances, such as the case of IQWiG's assessments of costs and benefits, the agency has not yet performed economic evaluations, so it remains to be seen whether it will follow this principle. Yet another issue is that some of the organizations we included (e.g., BCBS TEC, DERP) are not the ones themselves making the resource allocation decisions. Thus, it is not straightforward and perhaps not even fair to judge them against all of the principles included here. Still, we included them as examples of leading HTA entities—and because the exercise serves to highlight challenges in moving HTA organizations to support and use universal principles.

We also recognize that not everyone agrees with the fifteen principles themselves. Indeed, we have received many constructive comments and suggestions since their publication. Some observers have argued that certain principles (e.g., stakeholder involvement) are too general, while other principles (e.g., dissemination of findings) lack clarity as to the intended audience (2;9). Others have questioned whether the principles are realistic and whether it is fair to evaluate organizations relative to all of the principles (2). One reviewer, for example, pointed out that many HTA agencies would find the full process not feasible, opting instead for “quick and dirty” assessments, which would be better than none (2). Several critics emphasized that there is a general tradeoff between rigor and inclusiveness of HTAs on the one hand, and timeliness on the other (1;2;9).

A few observers pointed out that the key principles omit potentially important items, such as the degree to which HTAs incorporate new assessment methods (e.g., for evaluating patient preferences and patient-reported outcomes), or the extent to which HTAs consider social, psychological, or ethical dimensions (2;15). Some argued that the principles focus undue attention on resource allocation as the basis for HTA (2). Finally, we have heard informally from some who take issue with the premise of our initiative itself. Those individuals have argued that a statement of key principles carries an implicit endorsement of a centralized HTA process including assessment of cost-effectiveness as the gold standard. These critics charge that such efforts serve governmental and payer interests and short-term budgetary mandates rather than patient needs, and that such principles should not be championed, even if they incorporate so-called good practices.

To these and other criticisms, we counter that scientifically rigorous and comprehensive HTA is a critical ingredient for improving the effectiveness and efficiency of the healthcare system. Regarding criticisms that the principles are unrealistic, idealistic, or unachievable, we would emphasize that principles are just that: principles. They are not standards or requirements. Furthermore, the principles we put forth are intended to be forward looking and designed to guide the conduct and evaluation of HTA, specifically relative to resource allocation decisions. We believe that the intended audiences of the principles include those working in HTA as well as all decision makers, including patients. We also believe that a focus on resource allocation is justified, given that HTAs are not simply reports disseminated to the public but are increasingly influencing coverage and reimbursement decisions. Moreover, because all clinical decisions affect the use of healthcare resources, an explicit consideration of resource allocation is justified (5).

Next Steps

Our intention is to stimulate debate and to lead policy makers to reflect on ways to improve the concepts and practice of HTA. Ideally, other researchers will conduct their own studies of HTA principles and HTA organizations. It will be useful to have a broader discussion on ways to move HTA organizations to adopt or adapt principles that currently do not receive widespread support, such as considering a societal perspective and monitoring results of HTAs.

In the future, researchers might also consider how to undertake more formal benchmarking exercises. It will be useful to prespecify and quantify more precisely the criteria for achieving a positive verdict on support and use of the principles. For example, an evaluation of whether an HTA organization has successfully implemented principle 13 (“timely HTA”) might stipulate a period (e.g., 6 months) for producing HTA reports, and a criterion (e.g., that 75 percent of reports must have been done within the 6-month window) for an HTA to achieve a favorable evaluation. Such an evaluation was beyond the scope of our exercise. We recognize that some principles will lend themselves more readily to such benchmarking than others. Still, we believe that it will prove a useful endeavor.
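As a purely illustrative sketch, the pass/fail logic of the timeliness benchmark described above could be expressed as follows. The 6-month window and 75 percent threshold are taken from the example in the text; the function name, data format, and the use of a 183-day approximation for 6 months are assumptions made for illustration, not part of any actual benchmarking protocol.

```python
# Hypothetical sketch of the timeliness benchmark for principle 13
# ("timely HTA"): an organization "passes" if at least 75 percent of
# its reports were produced within a 6-month (~183-day) window.
# Names, thresholds, and data format are illustrative assumptions.

def meets_timeliness_benchmark(report_durations_days,
                               window_days=183,
                               required_share=0.75):
    """Return True if the share of reports completed within
    `window_days` meets or exceeds `required_share`."""
    if not report_durations_days:
        return False  # no reports: timeliness cannot be demonstrated
    on_time = sum(1 for d in report_durations_days if d <= window_days)
    return on_time / len(report_durations_days) >= required_share

# Example: 4 of 5 reports (80 percent) finished within the window,
# so this hypothetical organization would receive a favorable verdict.
print(meets_timeliness_benchmark([120, 90, 200, 150, 170]))  # True
```

Analogous prespecified rules could, in principle, be drafted for other principles, although as noted above some principles will lend themselves to quantification far less readily.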

We continue to believe that defining, gaining consensus on, and adopting good HTA principles are essential for consistent, informed, and rational decision making. This study demonstrates considerable variation in uptake of the principles we have proposed, and, to the extent that the principles have merit, there is much room for improvement. We also believe that there is a need for flexibility, as well as a need to revisit these and other Key Principles continually and to apply them to other HTA organizations not evaluated here. Above all, there is a need to engage in thoughtful and constructive dialogue and debate with colleagues across disciplines and those with different viewpoints.

CONTACT INFORMATION

Peter J. Neumann, ScD (), Professor, Tufts University School of Medicine; Director, Center for the Evaluation of Value and Risk in Health, Tufts Medical Center, 800 Washington Street #63, Boston, Massachusetts 02111

Michael Drummond, BSc (), MCom, DPhil, Professor of Health Economics, Centre for Health Economics, University of York, Heslington, York, North Yorkshire YO10 5DD, UK

Bengt Jonsson, PhD (), Professor, Department of Economics, Stockholm School of Economics, 65, Sveavagen, Stockholm, SE 11383 Sweden

Bryan R. Luce, PhD, MBA (), Senior Vice President, Science Policy, United BioSource Corporation, 7101 Wisconsin Avenue, Bethesda, Maryland 20817; Adjunct Senior Fellow, Leonard Davis Institute of Health Economics, University of Pennsylvania, 3641 Locust Walk, Philadelphia, Pennsylvania 19104

J. Sanford Schwartz, MD (), Professor of Medicine, Health Care Management, and Economics, School of Medicine and Wharton School, University of Pennsylvania, Blockley Hall Suite #1120, Philadelphia, Pennsylvania 19104

Uwe Siebert, MD, MPH, MSc, ScD (), Professor of Public Health (UMIT), Chair, Department of Public Health, Information Systems and Health Technology Assessment, UMIT–University for Health Sciences, Medical Informatics and Technology, Eduard Wallnoefer Center 1, Hall i.T., Austria, A-6060; Adjunct Professor of Health Policy and Management, Center for Health Decision Science, Department of Health Policy and Management, Harvard School of Public Health, 718 Huntington Avenue, Boston, Massachusetts 02115

Sean D. Sullivan, PhD (), Professor, Pharmaceutical Outcomes Research and Policy Program, University of Washington, 1959 NE Pacific Avenue, Box 357630, Seattle, Washington 98195

REFERENCES

1. Aronson N. Establishing key principles for health technology assessment. ISPOR 13th Annual International Meeting. 2008.
2. Banta HD. Commentary on the article, “Key principles for the improved conduct of health technology assessments for resource allocation decisions.” Int J Technol Assess Health Care. 2008;24:362-368.
3. Busse R, Orvain J, Velasco M, et al. Best practice in undertaking and reporting health technology assessments. Int J Technol Assess Health Care. 2002;18:361-422.
4. Drummond M, Grubert N. International trends in the use of health economic data. Waltham, MA: Spectrum Report, Decision Resources; 2007.
5. Drummond MF, Schwartz JS, Jonsson B, et al. Key principles for the improved conduct of health technology assessments for resource allocation decisions. Int J Technol Assess Health Care. 2008;24:244-258.
6. Emanuel EJ, Fuchs VR, Garber AM. Essential elements of a technology and outcomes assessment initiative. JAMA. 2007;298:1323-1325.
7. European Federation of Pharmaceutical Industries and Associations. The use of health technology assessments (HTA) to evaluate medicines: EFPIA key principles. Brussels: EFPIA; 2005.
8. Garcia-Altes A, Ondategui-Parra S, Neumann PJ. Cross-national comparison of technology assessment processes. Int J Technol Assess Health Care. 2004;20:300-310.
9. Hailey D. Commentary on the article, “Key principles for the improved conduct of health technology assessments for resource allocation decisions.” Int J Technol Assess Health Care. 2008;24:362-368.
10. Hutton J, McGrath C, Frybourg JM, et al. Framework for describing and classifying decision-making systems using technology assessment to determine the reimbursement of health technologies (fourth hurdle systems). Int J Technol Assess Health Care. 2006;22:10-18.
11. International Network of Agencies for Health Technology Assessment (INAHTA). http://www.inahta.org/HTA (accessed March 31, 2008).
12. Jonsson B. IQWiG: An opportunity lost? Eur J Health Econ. 2008;9:205-207.
13. Luce B, Cohen RS. Health technology assessment in the United States. Int J Technol Assess Health Care. 2009;25(Suppl 1):33-41.
14. Martelli F, La Torre G, Di Ghionno E, et al. Health technology assessment agencies: An international overview of organizational aspects. Int J Technol Assess Health Care. 2007;23:414-424.
15. Neuhauser D. Commentary on the article, “Key principles for the improved conduct of health technology assessments for resource allocation decisions.” Int J Technol Assess Health Care. 2008;24:362-368.
16. Neumann PJ, Sullivan SD. Economic evaluation in the US: What is the missing link? Pharmacoeconomics. 2006;24:1163-1168.
17. Schwarzer R, Siebert U. Methods, procedures, and contextual characteristics of health technology assessment and health policy decision making: Comparison of health technology assessment agencies in Germany, United Kingdom, France, and Sweden. Int J Technol Assess Health Care. 2009;25:305-314.
18. Sullivan SD, Watkins J, Sweet B, Ramsey SD. Health technology assessment in health care decisions in the United States. Value Health. 2009;12:S39-S44.