
Benchmarking health technology assessment agencies—methodological challenges and recommendations

Published online by Cambridge University Press:  08 September 2020

Ting Wang*
Affiliation:
Centre for Innovation in Regulatory Science, London, UK Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, Utrecht, The Netherlands
Iga Lipska
Affiliation:
Centre for Innovation in Regulatory Science, London, UK Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, Utrecht, The Netherlands
Neil McAuslane
Affiliation:
Centre for Innovation in Regulatory Science, London, UK
Lawrence Liberti
Affiliation:
Centre for Innovation in Regulatory Science, London, UK
Anke Hövels
Affiliation:
Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, Utrecht, The Netherlands
Hubert Leufkens
Affiliation:
Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, Utrecht, The Netherlands
*
Author for correspondence: Ting Wang, E-mail: twang@cirsci.org

Abstract

Objectives

The objectives of the study were to establish a benchmarking tool to collect metrics that increase clarity regarding the differences and similarities across health technology assessment (HTA) agencies, to assess performance within and across HTA agencies, to identify where time is spent in HTA processes, and to enable ongoing performance improvement.

Methods

Common steps and milestones in the HTA process were identified for meaningful benchmarking among agencies. A benchmarking tool consisting of eighty-six questions providing information on HTA agency organizational aspects and information on individual new medicine review timelines and outcomes was developed with the input of HTA agencies and validated in a pilot study. Data on 109 HTA reviews from five HTA agencies were analyzed to demonstrate the utility of this tool.

Results

This study developed an HTA benchmarking methodology. Comparative metrics showed considerable differences among the median timelines from assessment and appraisal to final HTA recommendation for the five agencies included in this analysis; these results were interpreted in conjunction with agency characteristics.

Conclusions

It is feasible to find consensus among HTA agencies regarding the common milestones of the review process to map jurisdiction-specific processes against agreed metrics. Data on characteristics of agencies such as their scope and remit enabled results to be interpreted in the appropriate local context. This benchmarking tool has promising potential utility to improve the transparency of the review process and to facilitate both quality assurance and performance improvement in HTA agencies.

Type: Method
Copyright: © The Author(s), 2020. Published by Cambridge University Press

All health technology assessment (HTA) agencies have the same or similar underlying objectives and obligations to ensure that the utilization of health technologies provides the best value for money (Reference Sorenson, Drummond and Kanavos1). As the HTA environment becomes more globalized and newer collaborative and integrated ecosystems develop, there needs to be a clear understanding of how the different processes and practices within the HTA environment are evolving. In order to enable increased collaboration, quantitative and qualitative comparative information on HTA agencies' processes, practices, and performance are needed as the platform on which to build trust in and across agencies.

There is a common understanding and general acceptance that HTA agencies should adhere to certain key principles, including independence, transparency, inclusiveness, scientific basis, timeliness, consistency, and legal framework. Drummond (Reference Drummond2) proposed fifteen key principles to assess HTA activities. Drummond and colleagues (Reference Drummond, Neumann, Jönsson, Luce, JS and Siebert3) suggest that such key principles could be augmented and used to formulate audit questions to measure HTA agencies' performance.

On the other hand, there is also almost full agreement as to the existence of differences among HTA agencies in their national procedural frameworks, as well as methodologies for clinical and economic assessments (4). In particular, one important output from HTA is the recommendation of pharmaceutical products to be listed on the national or local formulary (Reference Drummond2). Therefore, the challenge and the opportunity for agencies, companies, and other stakeholders are the identification of truly comparative metrics to recognize similarities and differences among HTA agencies in order to appropriately interpret different HTA recommendations for pharmaceutical products.

The move toward increased HTA transparency is unavoidable as collaborative networks grow, and independent comparisons of HTA activities are already underway (5;6). HTA organizations should therefore facilitate open discussion of the scientific basis for their decisions, while factoring in the diversity of local contexts, especially when diverse coverage decisions for the same new medicine occur across jurisdictions (7;8). The most recent public consultation by the European Commission on strengthening EU cooperation on HTA, which drew responses from twenty-one member states and from representatives of industry and service providers, public administrators, patients and consumers, healthcare providers, academic or scientific institutions, and payers, revealed that transparency of the HTA process is seen as a relevant factor of very high or high importance (83 and 16 percent of survey replies, respectively) (4). As HTA agencies' processes and practices have been mapped by different stakeholders, the main focus has been on outcomes and timelines.

Agencies have been measured by diverse stakeholders, including academics, pharmaceutical companies, and consultancies. A set of fourteen best-practice principles was constructed by Wilsdon and colleagues (Reference Wilsdon, Fiz and Haderi9) based on a revision of the existing principles developed by Drummond (Reference Drummond2), demonstrating a degree of consensus between academia, payers, and industry. Although the authors concluded that it was a challenge to apply one set of HTA best-practice principles because of the variety of HTA processes and mandates across jurisdictions, they proposed metrics that could be modified for each principle and used to compare the role of HTA in selected healthcare systems (Reference Wilsdon, Fiz and Haderi9). It should be noted that HTA agencies have raised objections (Reference Neumann, Drummond, Jönsson, Luce and Schwartz10) to some of the principles outlined in the studies by Drummond (Reference Drummond2) and Wilsdon and colleagues (Reference Wilsdon, Fiz and Haderi9). However, there was full agreement among agencies that “HTA should be timely” (Reference Drummond2). The results of the European Commission public consultation showed that timely delivery of an assessment report is a relevant factor of very high, high, and medium importance (51, 41, and 8 percent of replies, respectively) (4). However, timely HTA delivery does not depend only on the procedural frameworks and review performance of HTA agencies; it is also affected by companies' practices in terms of both the quality and timing of submissions to HTA agencies.

Although HTA agencies are concerned about cross-agency comparisons because of differences in agency mandates and lexicons as well as in how decisions are made, the assessment and appraisal period for all agencies can be broken into detailed components of the overall process. This breakdown leads to the identification of common stages of the HTA review across agencies and, in turn, to the establishment of comparative milestones at each stage. Quantitative metrics on timelines, together with qualitative information on HTA agencies' procedural frameworks, enable comparisons to be made between agencies; the results could facilitate both quality assurance and performance improvement within the agencies.

Objectives

This paper describes a benchmarking tool that was developed with active HTA agency participation in order to build with the agencies an agreed methodology that enables comparative data to be collected and interpreted. According to the Oxford Dictionary, benchmarking is “evaluating something by a comparison with a standard.” Benchmarking could also be considered as a continuous systematic process for comparing performance indicators across peer organizations for the purpose of organizational improvement. The specific objectives of the benchmarking study were to collect comparative metrics to enable clarity regarding the differences and similarities across HTA agencies, to identify the processes and timing of processes in individual HTA agencies, and to enable comparisons to be made within agencies for quality assurance, as well as between agencies for performance improvement.

Methods

The study was initiated by the Centre for Innovation in Regulatory Science (CIRS, London, UK) in 2012.

The study protocol was designed on the premise that, notwithstanding the apparent variance among the HTA processes of different agencies, these processes are made up of a set of basic stages or building blocks that allow cross-agency comparisons. These steps in the HTA process were identified and common milestones were defined for meaningful benchmarking. Our study was divided into three main phases (Figure 1).

Figure 1. Phases of study development.

Phase I—Identification of Appropriate HTA Agencies and Initiation of Collaboration

First, based on information available in the public domain and on personal communication with individual HTA agencies, process maps for individual jurisdictions were developed to illustrate the relationships between national regulatory authorities, HTA organizations, and pricing and/or reimbursement decision-making bodies, and to identify the appropriate HTA agencies to be benchmarked in this study (11). Second, a call-for-interest proposal for a benchmarking study was developed and sent to eighteen HTA agencies using a purposive sampling method, based on their differences in size, the number of years of HTA experience, and interest in collaboration. The first CIRS–HTA agency meeting was held on 25 June 2012 to discuss the domains of the questionnaire and relevant benchmarking metrics.

Phase II—The Development of the Questionnaire and its Use in the Pilot Phase

Based on the outcome from the first CIRS–HTA meeting and built on prior CIRS work and experience in benchmarking regulatory agencies (Reference Hirako, McAuslane, Salek, Anderson and Walker12), the HTA benchmarking questionnaire was developed. Ten HTA agencies agreed to collaborate in the study to achieve an understanding of the different processes employed by each agency, highlighting areas of similarities and differences that were considered particularly important for benchmarking.

Participating agencies

  • AAZ—Agency for Quality and Accreditation in Health Care and Social Welfare, Croatia

  • CADTH—Canadian Agency for Drugs and Technologies in Health, Canada

  • CONITEC—National Committee for Technology Incorporation, Brazil

  • INESSS—National Institute of Excellence in Health and Social Services, Canada, Quebec

  • INFARMED—National Authority for Medicines and Health Products, Portugal

  • KCE—Belgian Health Care Knowledge Centre, Belgium

  • NICE—National Institute for Health and Care Excellence, UK England

  • PBAC—Pharmaceutical Benefits Advisory Committee, Australia

  • SMC—Scottish Medicines Consortium at NHS National Services, UK Scotland

  • VASPVT—State Health Care Accreditation Agency at the Ministry of Health, Lithuania

Collaborating HTA agencies were consulted through email and face-to-face discussions during the questionnaire development. The questionnaire consisted of two main domains: information on agency organizational aspects and information on individual new medicine review timelines and outcomes. As part of the methodology, a generic process map was developed with common milestones. Although the review processes vary among collaborating HTA agencies, it was agreed by the agencies that individual steps in their review processes could be mapped to milestones common to all the agencies. Therefore, even though the sequence of each milestone during the review may differ, the defined metrics enabled comparison of individual systems and timelines among agencies.
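The mapping of agency-specific steps onto common milestones can be sketched as a simple data structure; the milestone labels, agency names, and local step names below are hypothetical stand-ins for illustration, not the actual questionnaire items.

```python
from datetime import date

# Common milestones agreed across agencies (hypothetical labels).
COMMON_MILESTONES = [
    "submission_received",
    "assessment_start",
    "appraisal_start",
    "recommendation_issued",
]

# Each agency maps its own locally named process steps onto the shared
# milestone set, even though step names and ordering differ by jurisdiction.
AGENCY_STEP_MAP = {
    "Agency A": {
        "dossier validated": "submission_received",
        "evidence review begins": "assessment_start",
        "committee referral": "appraisal_start",
        "final guidance published": "recommendation_issued",
    },
    "Agency E": {
        "application accepted": "submission_received",
        "internal assessment opened": "assessment_start",
        "advisory meeting scheduled": "appraisal_start",
        "advice issued": "recommendation_issued",
    },
}

def to_common_timeline(agency, dated_steps):
    """Translate an agency's dated local steps into the common milestone set."""
    mapping = AGENCY_STEP_MAP[agency]
    return {mapping[step]: d for step, d in dated_steps.items() if step in mapping}

# Once expressed in common milestones, timelines become directly comparable.
timeline = to_common_timeline("Agency E", {
    "application accepted": date(2013, 1, 7),
    "advice issued": date(2013, 4, 16),
})
print((timeline["recommendation_issued"] - timeline["submission_received"]).days)
```

Because the translation normalizes step names rather than step order, the same comparison works even when agencies sequence their milestones differently.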

A pilot study questionnaire in Excel format was distributed in May 2013 to participating agencies to collect organizational information and information on four individual products per agency that underwent single-technology assessment (STA): the two most recent products that received a positive HTA recommendation (including positive recommendations with restrictions) and the two most recent products that received a negative HTA recommendation from each agency.

Phase III—The Development of the Final Version of the Questionnaire and Data Collection for the Full Study

Feedback from the pilot study was discussed at the third CIRS–HTA meeting on 3 October 2013 and amendments were made to the questionnaire. The revised version of the questionnaire was sent to HTA agencies for their comments and feedback and the final version of the questionnaire was discussed at the fourth CIRS–HTA agency meeting on 31 May 2014. The final questionnaire retained the same structure as the original; that is, general information and individual product information.

The Excel questionnaires were distributed to ten HTA agencies for the full study during May–September 2014. In the full study, we collected information on all new active substances (NASs) that had undergone STA and received an HTA recommendation in 2013. In general, HTA agencies provided data by completing the Excel questionnaire; however, some parts of the questionnaire were pre-filled by the study authors based on information available in the public domain to facilitate data collection, and this information was reviewed and verified by the HTA agencies.

In this paper, we provide full details of the benchmarking methodology. To demonstrate the feasibility of this benchmarking tool, we analyzed metrics on timelines and agency characteristics. Timelines were chosen as a focus because of their interest to patients and other healthcare stakeholders as a marker of the availability of new medicines. In addition, timelines have been used by researchers as an overall indicator of agency performance; however, it is important that any time measures are contextualized in order to truly understand process efficiency. We calculated timelines based on data directly provided or verified by HTA agencies. We also focused on the subset of questions on budget and resources to provide the context of individual systems and processes necessary to interpret the timeline results.

The analysis was based on results from five of the ten HTA agencies that agreed to participate in the study, selected on the basis of the completeness of the milestone data provided, in order to assess their timelines during the assessment and appraisal phase. Because the focus of this paper is to demonstrate the validity of the benchmarking tool rather than to assess current agency performance, and to preserve confidentiality, data were collected under the condition of individually anonymized reporting.

The median times of overall processes from HTA submission to recommendation were analyzed to compare the performance across all agencies. In order to understand where time was spent during the process, the median time was further calculated for the common stages (assessment, appraisal, and appraisal to recommendation) at each agency, breaking down by agency time and company response time. The median time, 25th and 75th percentiles for each agency were calculated to show time variance. Finally, in order to explore the different approaches that may be employed by agencies, we further investigated the timeline for products with different HTA recommendations (positive, positive with restrictions, and negative), as well as for oncology versus non-oncology products.
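The median and percentile calculations described above can be sketched as follows; the agency labels, stage names, and day counts are hypothetical values for illustration, not the study's data.

```python
from statistics import median, quantiles

# Hypothetical review records: (agency, stage, days spent at that stage).
reviews = [
    ("A", "assessment", 410), ("A", "assessment", 435), ("A", "assessment", 460),
    ("A", "appraisal", 320),  ("A", "appraisal", 347),  ("A", "appraisal", 380),
    ("E", "assessment", 45),  ("E", "assessment", 50),  ("E", "assessment", 58),
    ("E", "appraisal", 10),   ("E", "appraisal", 12),   ("E", "appraisal", 15),
]

def stage_summary(records):
    """Median and 25th/75th percentiles of days per (agency, stage)."""
    grouped = {}
    for agency, stage, days in records:
        grouped.setdefault((agency, stage), []).append(days)
    summary = {}
    for key, days in grouped.items():
        q1, med, q3 = quantiles(days, n=4)  # quartile cut points
        summary[key] = {"median": med, "p25": q1, "p75": q3}
    return summary

for (agency, stage), stats in sorted(stage_summary(reviews).items()):
    print(agency, stage, stats)
```

The same grouping can be repeated with recommendation type (positive, positive with restrictions, negative) or therapeutic area (oncology versus non-oncology) as the second key, yielding the stratified comparisons reported in the Results.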

Results

A benchmarking tool was developed to systematically compare HTA agencies; the details of the questionnaire are provided in Table 1. The questionnaire included two main domains: a general information domain and an individual product domain. The general information domain covered five main aspects (Scope and remit, Resource and budget, Appraisal/scientific committee, Transparency, and Review procedures and processes) containing fifty-one questions. The individual product domain consisted of four main aspects (Review timelines, Assessment/appraisal process, Outcome, and Scientific advice) containing thirty-five questions. In total, data for 109 HTA reviews from five HTA agencies were analyzed to demonstrate the utility of the tool.

The characteristics of the participating HTA agencies are summarized in Table 2. The size of the HTA agencies varied considerably; four agencies had more than 100 full-time employees (FTEs) and one agency had fewer than 100 FTEs. The total number of FTEs assigned to HTA activities varied from fourteen to eighty-eight, which amounted to less than 25 percent of total FTEs for two of the agencies, between 50 and 75 percent for two agencies, and more than 75 percent for one agency. Total agency budgets ranged from less than 2 million USD to almost 115 million USD at the time of this study. Four of the five agencies indicated that they had experience using external resources for HTA-related activities; among these, three agencies had outsourced to universities or academic groups and four had outsourced to individual independent contractors or consultancy companies. The frequency of outsourcing was not specified. The types of activities outsourced differed across agencies and could include development of the full HTA report, a rapid HTA report, review of manufacturers' submissions, and educational activities.

The median time taken from HTA submission to HTA recommendation (excluding company response time) varied between 99 and 862 days (Table 2).

Table 1. Questionnaire for HTA agency benchmarking study

Table 2. Resources for HTA-related activities versus median time of HTA process

a“Stop the clock” refers to the procedure in which the agency pauses its activity while waiting for a response from the manufacturer providing clarification or additional data

Detailed Timelines

To understand where time was spent in agency processes and to enable cross-agency comparison, a generic map was developed as part of the methodology to show the breakdown of HTA processes at individual agencies. Seven main stages were identified as common to HTA decision-making processes: receipt of data; HTA assessment; sponsor input during assessment; HTA appraisal; sponsor input during appraisal; appraisal to HTA recommendation; and coverage decision for the product. Common milestones for each stage of the process were agreed by the participating agencies. Figure 2 presents the details of the generic map and uses two agencies as examples to show the breakdown of the timeline. The example agencies were selected because of their extreme values for median time from HTA submission to HTA recommendation (862 and 99 days for agencies A and E, respectively). Although the processes used by both selected agencies allowed companies to respond during the assessment and appraisal phase, the time differences were mainly attributable to agency time.

Figure 2. Comparison—where time is spent between HTA submission and final recommendation.

The median time for HTA agencies during the assessment phase was 435 and 50 days for agencies A and E, respectively, and the median time for the appraisal phase also differed substantially, from 347 to 12 days for agencies A and E. These results need to be interpreted with caution as the different systems and processes between the agencies could influence the timelines, as shown in Table 2.

In Figure 3, the time between submission to the HTA agency and final recommendation is presented for individual products and for oncology versus non-oncology products. Three agencies (E, D, and B) had consistent median times across oncology and non-oncology products, varying from 109 to 293 days for oncology products and from 99 to 247 days for non-oncology products. Agency C did not evaluate oncology products within the period of data collection. For agency A, there was a considerable difference between the median times for oncology and non-oncology products (552 and 1,006 days, respectively).

Figure 3. Time spent between submission to HTA agency and recommendation by HTA agency, analyzed by oncology versus non-oncology and by HTA agency.

The timelines between HTA submission and HTA recommendation were analyzed according to HTA outcome (positive, positive with restrictions, and negative). For agencies A and B, there were considerable differences in median time by HTA outcome: 767 and 975 days for positive and negative outcomes, respectively, for agency A; and 208, 260, and 315 days for positive, positive-with-restrictions, and negative outcomes, respectively, for agency B. For agencies C and D, the median times were very consistent across HTA outcomes; however, no positive HTA outcomes for agency C were included in this study. Although agency E showed the shortest timelines overall (median 99 days for all products), its median time for a negative HTA outcome (123 days) was considerably longer than for positive and positive-with-restrictions outcomes (95 and 96 days, respectively).

Discussion

This study presents a benchmarking tool to compare HTA agencies and considers its potential for future use. Despite the variety of healthcare systems and of HTA processes and outcomes, we propose that HTA processes can be mapped with common milestones, identified and agreed, in order to understand and compare HTA agencies. HTA agencies have been compared by external groups (5;6;9); however, these analyses are often criticized by HTA agencies for lacking a comparable basis. The methodology developed for this study could be used by external stakeholders to provide comparative analyses across agencies, as well as within and across HTA agencies for their own improvement.

Benchmarking HTA Agencies: Improving Timeliness and Transparency

Our study shows that participating HTA agencies can agree on common milestones during HTA processes, which enabled comparison of overall time, as well as where time was spent at each stage between HTA submission and recommendation. The generic process map and our study methodology can be taken further to support the design of procedures in newly established HTA agencies and the improvement of processes in existing HTA agencies.

Timelines of HTA processes are measurable but are not a performance measure in themselves and should always be interpreted with a full understanding of the underlying HTA processes. In his key principles of HTA, Drummond (Reference Drummond2) states that “HTA should be timely,” an agreed principle within the broader subgroup of key principles regarding the use of HTA in decision making.

Because time is an indicator that can be measured precisely from data provided by HTA agencies against commonly identified milestones, benchmarking HTA process time can create a valuable baseline for comparing agencies. For HTA agencies, the results could facilitate internal performance improvement and the assessment of adherence to defined review target times for internal quality assurance, as well as improve the transparency of the HTA process for external stakeholders in terms of where time is spent.

Benchmarking HTA Agencies: Understanding Organizational Context and Process

We emphasize that to compare HTA agencies and to measure and interpret timelines, an in-depth understanding of HTA processes across agencies, and of the numerous factors behind those processes, is needed. Our study shows considerable differences among the median timelines from assessment through appraisal to final HTA recommendation for the five participating agencies. We collected answers to fifty-one questions on HTA organizational characteristics to support interpretation of the timelines. The resources allocated to HTA activities are associated with review timelines: of the agencies analyzed in our study, only one has more than 75 percent of its resources dedicated to HTA activities, and this agency has the shortest median timelines. This was the only agency in the study for which HTA processes constitute the core activities of the organization, whereas for the remaining four agencies, HTA activities are only part of a broader scope of activities. This is particularly the case for the two agencies for which the percentage of FTEs dedicated to HTA activities is less than 25 percent and the median timelines of the whole HTA process are the longest.

This interpretation needs to be regarded with caution, as several other organizational factors can affect timelines. First, different median timelines could be explained by the HTA processes in place at agencies; for example, extensive stakeholder involvement (including patients, clinicians, and pharmaceutical companies), public consultation on draft documents, or an appeal procedure available in case of a negative HTA outcome (Reference Rosenberg-Yunger, Thorsteinsdóttir, Daar and Martin13). Second, the frequency of appraisal committee meetings can also affect timelines, especially during the appraisal phase: in some organizations, committees meet several times per month and in others several times per year; in this study, the frequency ranged from twelve to twenty-one meetings per year. Third, delays can also be caused by pharmaceutical company strategy; for example, if a particular market is not a priority for a company, providing additional evidence or clarifications to an HTA agency could take longer.

This study shows that for three of the five studied agencies, the median time of the overall process was not affected by the HTA outcome, whereas for the other two agencies, products that received a positive recommendation took the shortest time and products that received a negative recommendation took the longest. These results may indicate that for these two agencies, the HTA practice for assessing products with a negative outcome differs; for example, the longer timeline could be attributable to the involvement of stakeholders such as patient groups and clinicians, depending on the mechanisms in place. Cai and colleagues (Reference Cai, McAuslane and Liberti14) investigated the time taken for products to receive a first HTA recommendation in six European jurisdictions, revealing that products that received a negative recommendation took longer to receive an HTA recommendation from the time of European Medicines Agency (EMA) approval. Although longer HTA timelines can delay patients' access to medicines, it is worth noting that time can also be spent on pharmaceutical company input, such as additional evidence submission, comments, and communication.

Has an International Standard or HTA Best Practice Already been Set and Implemented?

There has been an impressive number of internationally recognized initiatives to develop standards for best practice in HTA as well as practical HTA tools. Best practice in undertaking and reporting HTA has been proposed by research groups in Europe over recent decades (Reference Velasco, Perleth, Drummond, Gürtner, Jørgensen and Jovell15). Steps have also been taken to establish internationally recognized good practices in HTA (Reference Goodman16). Consensus has been reached around practical tools and methods in the field of HTA in Europe (Reference Kristensen, Lampe, Chase, SH, Wild and Moharra17), including the HTA Core Model (Reference Lampe, Mäkelä, Garrido, Anttila, Autti-Rämö and Hicks18) and rapid relative effectiveness assessments of new pharmaceuticals to be used for European collaboration (Reference Kleijnen, Pasternack, Van de Casteele, Rossi, Cangini and Di Bidino19–Reference Kleijnen, Toenders, de Groot, Huic, George and Wieseler21). Continuous benchmarking of performance will be of great value to capture changes in the system. For example, in light of the EUnetHTA Joint Action 3 Work Package 4 joint production of HTA, milestone metrics at individual HTA agencies could be collected using this methodology for products that underwent joint assessment and used to assess the uptake time of EUnetHTA assessments in member states.

A recent report by the ISPOR HTA council suggested there was a lack of good practices in defining the organizational aspects of HTA and measuring the impact of HTA (Reference Kristensen, Husereau, Huic and Drummond22). The implementation of HTA best practice into real healthcare system settings and thus the objective and reliable comparison of HTA agencies' outcomes and performance has yet to be resolved. This study uses quantitative metrics to measure agencies in terms of where time was spent at each stage of the HTA process, and the timeline can now be interpreted with qualitative information on agencies' process characteristics. This will facilitate a future study on setting a framework of good HTA practice.

Evidence from a regulatory agency benchmarking study showing a long queuing time in one agency led to an increase in resources at the agency to improve the submission validation process (Reference Hirako, McAuslane, Salek, Anderson and Walker12); similarly, HTA agencies could use benchmarking outcomes to improve processes by learning more effective and efficient ways to undertake reviews from other agencies.

Study Limitations

This study has some limitations that are worth noting. First, the number of agencies studied was small, as inclusion was based on data completeness. Second, the data sets used in the analyses were not up to date, as the results were intended to demonstrate the utility of the benchmarking tool rather than to assess the current performance of agencies. Another limitation is the use of a trichotomous system of HTA recommendations (positive, positive with restrictions, and negative), which is a simplified categorization of HTA outcomes. Finer categorizations have been used in research to provide more insight into different types of restrictions, but that detailed classification was used to investigate divergences of decisions within a single HTA agency (Reference O'Neill and Devlin23). To allow comparison of HTA recommendations across agencies, the trichotomous classification has been used in previous studies (Reference Lipska, Hovels and McAuslane24–Reference Allen, Liberti, Walker and Salek26).

The lack of assessment of the quality of industry submissions is another limitation of this study. Benchmarking is commonly associated with measuring quantitative metrics such as time, process, resource, and cost, but it is also possible to use qualitative measures in a systematic fashion to assess more difficult-to-measure parameters such as quality. However, although we consider that quality is an extremely important parameter, as the quality of an industry submission to an HTA agency can substantially impact timeliness of the HTA processes, it was considered to be outside of the scope of this research. Further studies to assess the quality of HTA submissions would be of benefit.

Conclusions

Our study shows that it is feasible to reach consensus among participating HTA agencies on the common milestones of the HTA review process, so that each jurisdiction-specific process can be mapped against an agreed generic process. It is also possible to identify the detailed characteristics of each agency, enabling the results to be interpreted in the appropriate context. Such benchmarking studies should be performed systematically and be based on data provided directly by HTA agencies. Although a number of HTA agencies publish their recommendation dates in the public domain, submission dates to HTA agencies and companies' response times are not available. Because one of the benefits of benchmarking HTA performance is improved transparency and predictability, we recommend that data on common milestones, as well as target timelines, be made available in the public domain.

We observed that this HTA agency benchmarking tool has promising potential; however, timelines cannot serve as a single measure of HTA agency performance and should be interpreted only in combination with an in-depth understanding of jurisdiction-specific HTA processes.

Financial Support

This research received no specific grant from any funding agency in the commercial or not-for-profit sectors.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

* Now with National Health Fund, Warsaw, Poland.

** Now an independent researcher, Bilthoven, The Netherlands.

References

1. Sorenson C, Drummond M, Kanavos P. Ensuring value for money in health care: The role of health technology assessment in the European Union. Brussels, Belgium: European Observatory on Health Systems and Policies; 2008.
2. Drummond M. Key principles for the improved conduct of health technology assessments for resource allocation decisions. Int J Technol Assess Health Care. 2008;24:244–58.
3. Drummond M, Neumann P, Jönsson B, Luce B, Schwartz JS, Siebert U, et al. Can we reliably benchmark health technology assessment organizations? Int J Technol Assess Health Care. 2012;28:159–65.
4. European Commission. Impact assessment: Strengthening of the EU cooperation on Health Technology Assessment (HTA); 2018. Available from: https://ec.europa.eu/health/sites/health/files/technology_assessment/docs/2018_ia_final_en.pdf [Accessed 12 February 2018].
5. Kleijnen S, Lipska I, Leonardo Alves T, Meijboom K, Elsada A, Vervölgyi V, et al. Relative effectiveness assessments of oncology medicines for pricing and reimbursement decisions in European countries. Ann Oncol. 2016;27:1768–75.
6. Nicod E, Kanavos P. Commonalities and differences in HTA outcomes: A comparative analysis of five countries and implications for coverage decisions. Health Policy. 2012;108:167–77.
7. Schelleman H, Dupree R, Kristensen FB, Goettsch W. Why we should have more collaboration on HTA in Europe: The example of sofosbuvir. J Pharm Policy Pract. 2015;8:13.
8. Kristensen FB, Gerhardus A. Health technology assessments: What do differing conclusions tell us? Br Med J. 2010;341:c5236.
9. Wilsdon T, Fiz E, Haderi A, for Charles River Associates. A comparative analysis of the role and impact of health technology assessment: 2013 final report; 2014. Available from: https://www.efpia.eu/media/25706/a-comparative-analysis-of-the-role-and-impact-of-health-technology-assessment-2013.pdf [Accessed 12 February 2018].
10. International Working Group for HTA Advancement, Neumann PJ, Drummond MF, Jönsson B, Luce BR, Schwartz JS, et al. Are key principles for improved health technology assessment supported and used by health technology assessment organizations? Int J Technol Assess Health Care. 2010;26:71–78.
11. Centre for Innovation in Regulatory Science (CIRS). Regulatory and Reimbursement Atlas; 2018. Available from: http://www.cirs-atlas.org/ [Accessed 12 February 2018].
12. Hirako M, McAuslane N, Salek S, Anderson C, Walker S. A comparison of the drug review process at five international regulatory agencies. Drug Inf J. 2007;41:291–308.
13. Rosenberg-Yunger ZR, Thorsteinsdóttir H, Daar AS, Martin DK. Stakeholder involvement in expensive drug recommendation decisions: An international perspective. Health Policy. 2012;105:226–35.
14. Cai J, McAuslane N, Liberti L. R&D briefing 69: Review of HTA outcomes and timelines in Australia, Canada and Europe 2014–2017. London, UK: Centre for Innovation in Regulatory Science; 2018.
15. Velasco M, Perleth M, Drummond M, Gürtner F, Jørgensen T, Jovell A, et al. Best practice in undertaking and reporting health technology assessments. Working group 4 report. Int J Technol Assess Health Care. 2002;18:361–422.
16. Goodman C. Toward international good practices in health technology assessment. Int J Technol Assess Health Care. 2012;28:169–70.
17. Kristensen FB, Lampe K, Chase DL, Lee-Robin SH, Wild C, Moharra M, et al. Practical tools and methods for health technology assessment in Europe: Structures, methodologies, and tools developed by the European Network for Health Technology Assessment, EUnetHTA. Int J Technol Assess Health Care. 2009;25:1–8.
18. Lampe K, Mäkelä M, Garrido MV, Anttila H, Autti-Rämö I, Hicks NJ, et al. The HTA Core Model: A novel method for producing and reporting health technology assessments. Int J Technol Assess Health Care. 2009;25:9–20.
19. Kleijnen S, Pasternack I, Van de Casteele M, Rossi B, Cangini A, Di Bidino R, et al. Standardized reporting for rapid relative effectiveness assessments of pharmaceuticals. Int J Technol Assess Health Care. 2014;30:488–96.
20. Kleijnen S, George E, Goulden S, d'Andon A, Vitré P, Osińska B, et al. Relative effectiveness assessment of pharmaceuticals: Similarities and differences in 29 jurisdictions. Value Health. 2012;15:954–60.
21. Kleijnen S, Toenders W, de Groot F, Huic M, George E, Wieseler B, et al. European collaboration on relative effectiveness assessments: What is needed to be successful? Health Policy. 2015;119:569–76.
22. Kristensen FB, Husereau D, Huic M, Drummond M. Identifying the need for good practices in health technology assessment: Summary of the ISPOR HTA Council Working Group report on good practices in HTA. Value Health. 2019;22:13–20.
23. O'Neill P, Devlin NJ. An analysis of NICE's "restricted" (or "optimized") decisions. Pharmacoeconomics. 2010;28:987.
24. Lipska I, Hovels AM, McAuslane N. The association between European Medicines Agency approval and health technology assessment recommendation. Value Health. 2013;16:A455.
25. Allen N, Liberti L, Walker S, Salek S. A comparison of reimbursement recommendations by European HTA agencies: Is there opportunity for further alignment? Front Pharmacol. 2017;8:834.
26. Allen N, Liberti L, Walker S, Salek S. Health technology assessment (HTA) case studies: Factors influencing divergent HTA reimbursement recommendations in Australia, Canada, England, and Scotland. Value Health. 2017;20:320–28.
Figure 1. Phases of study development.

Table 1. Questionnaire for HTA agency benchmarking study

Table 2. Resources for HTA-related activities versus median time of HTA process

Figure 2. Comparison—where time is spent between HTA submission and final recommendation.

Figure 3. Time spent between submission to HTA agency and recommendation, analyzed by oncology versus non-oncology and by HTA agency.