The use of health technology assessment (HTA) to inform reimbursement and healthcare resource allocation is well integrated into national decision-making processes in the United Kingdom (UK) and other countries (www.inahta.net). The use of HTA has evolved because of an environment of finite and constrained healthcare budgets where difficult choices at national and regional levels must be made regarding how best to use available resources. According to EUnetHTA (www.eunethta.eu), the aim of HTA is to provide a systematic, transparent, unbiased, and robust summary of the available evidence base about the medical, social, economic, and ethical issues related to the use of a health technology to allow informed decision making. In the UK, HTA reports are commissioned by a national body, the National Institute for Health Research (NIHR), and used as a source of information to guide resource allocation from national through to regional and local service commissioning (www.hta.ac.uk). Methods of economic evaluation, and specifically decision-analytic model-based cost-effectiveness analysis, have become an integral component of an HTA (Reference Akehurst1).
Using a decision-analytic model allows the systematic assimilation of all available evidence in a structured format to identify the incremental costs and benefits of proposed technologies compared with current practice (Reference Brennan, Chick and Davies2). A key component in any model-based cost-effectiveness study involves identifying and quantifying the uncertainty associated with the model structure, parameter inputs or methodological assumptions used within the analysis (Reference Claxton3). Probabilistic Sensitivity Analysis (PSA) has now become a standard requirement to understand the joint effect of uncertainty in model input parameters (Reference Claxton3). The inclusion of PSA is now generally viewed as a measure of the inherent quality of any published model-based economic analysis (Reference Claxton3;4). In addition, a key advantage of PSA is that it allows an analyst to use Value of Information (VOI) methods to understand the need for future research. VOI methods first emerged in the health economics literature in 1999 when proposed by Claxton (Reference Claxton5). Subsequently, the first example of the VOI methods being applied in the context of HTA was published in 2004 as a series of pilot case studies (Reference Claxton, Ginnelly, Sculpher, Philips and Palmer6). The key strength of VOI methods is that they make it clear and explicit how current parameter uncertainty translates into the need for future research.
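The PSA described above can be illustrated with a minimal sketch: uncertain model inputs are assigned distributions, sampled jointly, and propagated through a simple decision model to yield a distribution of incremental costs and QALYs. The toy model, its two strategies, and every distribution and value below are illustrative assumptions, not figures from any actual appraisal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # number of PSA simulations

# Hypothetical two-strategy decision-tree model; all distributions are
# illustrative assumptions, not values from any real appraisal.
p_event_current = rng.beta(30, 70, n)               # event probability, current care
rel_risk_new = rng.lognormal(np.log(0.8), 0.1, n)   # treatment effect of new technology
cost_event = rng.gamma(16, 500.0, n)                # cost of an event (GBP)
cost_new_tech = 500.0                               # price of the new technology (GBP)
qaly_loss_event = rng.beta(30, 70, n)               # QALYs lost per event

p_event_new = np.clip(p_event_current * rel_risk_new, 0, 1)

# Incremental cost and QALY gain of the new technology vs. current care,
# one value per sampled parameter set.
inc_cost = cost_new_tech - (p_event_current - p_event_new) * cost_event
inc_qaly = (p_event_current - p_event_new) * qaly_loss_event

# Probability the new technology is cost-effective at a GBP 30,000/QALY
# threshold (the upper bound of the range used by NICE).
wtp = 30_000
inb = wtp * inc_qaly - inc_cost  # incremental net monetary benefit per simulation
prob_cost_effective = (inb > 0).mean()
print(f"P(cost-effective at {wtp}/QALY): {prob_cost_effective:.2f}")
```

The array of simulated net benefits produced here is exactly the input that the VOI calculations discussed below operate on.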
Once an analyst has identified and quantified the uncertainty in costs, probabilities, clinical effectiveness and health state utilities, then decision-makers charged with resource allocation have to consider whether this available evidence is sufficiently robust to recommend the introduction of the new technology into clinical practice. Decision-makers faced with the findings of an HTA report have to appraise the available evidence base and decide if a new technology should be adopted into clinical practice on the basis of existing information. Importantly, there are (opportunity) costs of making the wrong decision and introducing a new technology when the evidence base is not sufficiently certain (Reference Claxton, Ginnelly, Sculpher, Philips and Palmer6). This appraisal of the evidence introduces four possible decisions that could be reached in terms of whether to adopt the new technology: (i) adopt but request more information to allow the technology to be re-assessed at some later date (sometimes called: coverage with evidence development); (ii) adopt with no request for more information; (iii) do not adopt and request more information; and (iv) do not adopt and do not request more information. Two of these decisions indicate the need for more information, which implies a need for further research. This need for further research could be based on subjective (synonymous with qualitative) or objective (synonymous with quantitative) criteria. Subjective criteria are likely to be the result of deliberative discussions with key stakeholders. In contrast, objective criteria are based on formal quantitative analysis such as VOI methods.
Broadly, there are three types of methods that fall under the collective heading of VOI (Reference Claxton, Ginnelly, Sculpher, Philips and Palmer6): (i) Expected Value of Perfect Information (EVPI) analysis; (ii) Expected Value of Partial Perfect Information (EVPPI); and (iii) Expected Value of Sample Information (EVSI). These methods offer a framework with a common aim to inform whether future research is necessary given the identified level of uncertainty in the available evidence base. Specifically, VOI methods offer a structured framework to estimate the amount that a decision-maker should be willing to pay to acquire further information to decrease decision uncertainty. Conducting a VOI analysis involves the calculation of a monetary value of an optimal strategy with further information compared with the optimal strategy without further information (i.e., with current information). Several published papers now clearly explain the key steps and applications of each type of VOI method: EVPI and EVPPI (Reference Felli and Hazen7–Reference Groot Koerkamp, Myriam Hunink, Stijnen and Weinstein9) and EVSI (Reference Ades, Lu and Claxton10). Using VOI methods has been suggested to be both a conceptually and quantitatively sound way for estimating the expected value of future research (Reference Groot Koerkamp, Myriam Hunink, Stijnen and Weinstein9). In 2003, Coyle et al. (Reference Coyle, Buxton and O’Brien11) claimed that VOI is the only method that unequivocally calculates the expected benefit of further research.
The first step for any VOI method involves calculating the EVPI, which estimates the difference between the expected value of a decision with perfect information and the expected value of a decision with the current evidence base. The EVPI represents the maximum possible improvement in the net benefit associated with the decision that could be achieved if the decision were to be made in a situation where there is perfect information rather than with the current level of information (Reference Claxton, Ginnelly, Sculpher, Philips and Palmer6). EVPI can be determined either at the individual patient or population level, but ideally the latter, as the societal value of research should be estimated across the population of future patients for whom the decision is pertinent. A decision maker can then use the population EVPI to decide if further research is potentially worthwhile given the estimated cost of generating the information required in a research study. Further research is potentially worthwhile if population EVPI exceeds the expected cost of conducting further research.
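The EVPI calculation can be sketched directly from PSA output: average the per-simulation maximum net benefit (perfect information) and subtract the maximum of the per-strategy average net benefits (current information). The net-benefit draws, incidence, technology lifetime, and discount rate below are all hypothetical values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical PSA output: net monetary benefit for two strategies
# (current care vs. new technology) across 10,000 parameter draws.
n_sims = 10_000
nb_current = rng.normal(loc=20_000, scale=3_000, size=n_sims)
nb_new = rng.normal(loc=21_000, scale=5_000, size=n_sims)
nb = np.column_stack([nb_current, nb_new])

# Expected value with current information: pick the strategy that is
# best on average across all simulations.
ev_current_info = nb.mean(axis=0).max()

# Expected value with perfect information: pick the best strategy in
# each simulation (i.e., once uncertainty is resolved), then average.
ev_perfect_info = nb.max(axis=1).mean()

evpi_per_person = ev_perfect_info - ev_current_info

# Population EVPI: scale by the discounted number of patients affected
# over the assumed lifetime of the technology (10 years, 3.5 percent
# discount rate, 5,000 incident patients per year -- all assumed values).
incidence, lifetime_years, discount_rate = 5_000, 10, 0.035
discounted_population = sum(
    incidence / (1 + discount_rate) ** t for t in range(1, lifetime_years + 1)
)
population_evpi = evpi_per_person * discounted_population

print(f"Per-person EVPI: {evpi_per_person:,.0f}")
print(f"Population EVPI: {population_evpi:,.0f}")
```

Note that the per-person EVPI is never negative by construction: averaging the per-simulation maximum can only improve on committing to a single strategy, which is why population EVPI serves as an upper bound on the value of any further research.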
Once EVPI has been calculated then it is possible to use EVPPI to determine the most valuable input parameter(s) in terms of prioritizing further research. Importantly, EVPPI can provide information on whether to conduct research to inform a single parameter, or a set of parameters. The outputs of an EVPPI analysis, combined with the results of the EVPI, can also inform the type of research needed. A decision-maker must then be cognizant that the type of study required will impact on the cost of collecting the data. This means that in practice, if further information on clinical effectiveness is required then the EVPPI will have to be higher than if, for example, more data on utility values are needed. If the EVPPI indicates that clinical effectiveness is a key parameter then the gold standard method for assessing clinical effectiveness, the randomized controlled trial (RCT), should be designed and commissioned. However, if the utility value attached to a health state is identified as the key parameter driving EVPPI then a more reasonable study design might be a stated preference study to elicit utility values. If EVPI and EVPPI have indicated the potential worth of future research, then it is possible to use the third type of VOI method: EVSI (Reference Claxton, Ginnelly, Sculpher, Philips and Palmer6). The EVSI puts this into practice by estimating the societal value of specific research designs and establishing the optimal sample size for primary data collection. The assumption here is that further research will be of value if the expected net benefit of sampling exceeds the cost of sampling (Reference Claxton and Thompson12).
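A common way to estimate EVPPI for a single parameter is two-level (nested) Monte Carlo: the outer loop resolves the parameter of interest, the inner loop averages over the remaining uncertainty, and the decision is re-optimized for each resolved value. The sketch below applies this to a toy adopt/reject decision; the model, both input distributions, and the willingness-to-pay value are hypothetical assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
wtp = 30_000  # willingness-to-pay threshold per QALY (NICE upper bound)

# Toy decision model with two uncertain, independent inputs (both assumed):
#   effect -- incremental QALY gain of the new technology
#   cost   -- incremental cost of the new technology
def net_benefit_new(effect, cost):
    return wtp * effect - cost

def draw_effect(n):
    return rng.normal(0.05, 0.03, size=n)

def draw_cost(n):
    return rng.gamma(shape=4.0, scale=300.0, size=n)

# Expected net benefit under current information; the comparator's net
# benefit is normalized to zero, so the decision is adopt vs. reject.
nb_all = net_benefit_new(draw_effect(200_000), draw_cost(200_000))
ev_current = max(nb_all.mean(), 0.0)

# EVPPI for 'effect' via two-level Monte Carlo: the outer loop resolves
# 'effect'; the inner loop averages over the remaining uncertainty (cost).
n_outer, n_inner = 1_000, 1_000
inner_means = np.empty(n_outer)
for i, eff in enumerate(draw_effect(n_outer)):
    inner_means[i] = net_benefit_new(eff, draw_cost(n_inner)).mean()

# With 'effect' known, the better of {adopt, reject} is chosen each time.
ev_partial_perfect = np.maximum(inner_means, 0.0).mean()
evppi_effect = ev_partial_perfect - ev_current
print(f"EVPPI (effect): {evppi_effect:,.0f} per person")
```

If the resulting EVPPI for the effect parameter dominates, the analysis points toward an RCT; a large EVPPI for a utility parameter would instead point toward a preference-elicitation study, mirroring the reasoning in the text above.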
The VOI methodological framework is potentially useful because it provides a quantitative technique to identify evidence gaps and prioritize future research in the context of national HTA, but it is not clear to what extent the priorities identified by this method are translated into commissioned research. VOI methods have been criticized for not being presented in a way that is meaningful for decision-makers, who may have no formal training in the methods being used (Reference Myers, Sanders and Ravi13). Furthermore, it is not clear whether the use of VOI methods is accepted in practice as a feasible approach to prioritize further research. An important first step to understand the practical use and value of VOI methods is to identify if, and how, they are used as part of a national HTA program. This study, therefore, aimed to identify and critically appraise the use of VOI methods in nationally commissioned HTA reports in an example jurisdiction, namely England and Wales.
METHODS
A systematic review of HTA reports, funded by the NIHR on behalf of NICE (National Institute for Health and Care Excellence) to inform its national clinical guidance to the NHS (National Health Service), published between January 2004 and December 2013 was conducted to identify all reports that had used some form of VOI methods. NIHR HTA reports that have included the formal analysis of VOI were identified by using a structured search of the NIHR database of published reports (www.hta.ac.uk/project/htapubs.asp) using the terms “value of information”, “expected value of”, “expected net benefit of sampling”, “VOI”, “EVI”, “EVPI”, “EVPPI”, “PEVPI”, “EVSI”, and “ENBS”.
Two reviewers (S.M. & K.P.) screened titles and executive summaries to identify potentially appropriate HTA reports for inclusion. For the purpose of this review, the definition of VOI methods provided by Claxton et al. (Reference Claxton, Ginnelly, Sculpher, Philips and Palmer6) was used to guide the relevance of the methods used. The review only included completed NIHR HTA reports and excluded reports that were in progress or unpublished. All identified HTA reports that had used VOI method(s) were then summarized in terms of: (i) types of VOI methodology used; (ii) parameters and key assumptions used in the VOI method; and (iii) key findings and conclusions drawn from the VOI framework in terms of the need for further research. A data extraction form was created to systematically summarize the relevant information from each HTA report and the data are presented in tables together with a narrative summary.
RESULTS
A total of 512 NIHR-commissioned HTA reports were identified between January 2004 and December 2013. Of these, 147 reported primary studies, 162 reported methodological studies and 203 (40 percent) reported systematic review and model-based economic evaluation studies. Of the 203 systematic review and model-based studies, 25 (12 percent) had used some form of VOI analyses and were identified as relevant for inclusion in this review. Figure 1 shows the type and number of published HTA reports each year and the number of studies that included a VOI analysis. No studies that included a VOI analysis were published in 2013.
Table 1 summarizes the focus of the model-based economic evaluation and modeling methods used. Table 2 describes the types of VOI analysis conducted together with the key inputs and assumptions used in the analysis, which are needed to interpret the estimated value of information in terms of its relevance to a decision-maker's context. All twenty-five HTA reports aimed to address the question of whether to adopt a proposed new technology based on analytic modeling and VOI considerations from the perspective of the UK NHS. The technologies being evaluated comprised medicines, surgical procedures, diagnostics and medical devices for a range of different conditions and study populations. There was no clear pattern in terms of the application of VOI methods regarding specific interventions or patient populations. The types of decision-analytic models used were either Markov models (eleven studies) (Reference Carlton, Karnon, Czoski-Murray, Smith and Marr14–Reference Thompson Coon, Rogers and Hewson24), decision trees (five studies) (Reference Hewitt, Gilbody and Brealey25–Reference Chen, Madan and Welton29), or a combination of a decision tree and Markov model (five studies) (Reference Castelnuovo, Thompson-Coon and Pitt30–Reference Soares, Welton and Harrison34). In addition, one study (Reference Harris, Felix and Miners35) used a discrete-event simulation model, one study (Reference Stevenson, Lloyd-Jones and Papaioannou37) used an individual patient-based state transition model, one study (Reference Stevenson, Scope and Sutcliffe38) used a simple (linear) mathematical model, and one study (Reference Pandor, Eastham, Beverley, Chilcott and Paisley36) did not specify the model type clearly within the main text.
The majority (n = 19) of the studies assumed a lifetime horizon for the baseline analysis. However, six studies used a shorter time horizon ranging between 12 months and 20 years. A variety of data sources were used including: RCTs; published studies; pooled data; and expert opinion. One study (Reference Hewitt, Gilbody and Brealey25) did not state the data source explicitly. Only eight of the twenty-five HTA reports (Reference Grant, Wileman and Ramsay15;Reference Bhattacharya, Middleton and Tsourapas18;Reference Black, Clar and Henderson19;Reference Colbourn, Asseburg and Bojke26;Reference Chen, Madan and Welton29;Reference Rodgers, McKenna and Palmer32;Reference Harris, Felix and Miners35;Reference Stevenson, Lloyd-Jones and Papaioannou37) used meta-analytic approaches to synthesize selected model parameters. In addition, twenty-two of the studies also stated explicitly that they used expert opinion to populate some of the model parameters.
Three studies (Reference McKenna, McDaid and Suekarran23;Reference Colbourn, Asseburg and Bojke26;Reference Soares, Welton and Harrison34) conducted EVPI, EVPPI and EVSI analyses, thirteen studies conducted EVPI and EVPPI analyses (Reference Grant, Wileman and Ramsay15;Reference Speight, Palmer and Moles17;Reference Black, Clar and Henderson19;Reference Fox, Mealing and Anderson21;Reference Garside, Pitt, Somerville, Stein, Price and Gilbert22;Reference Hewitt, Gilbody and Brealey25;Reference Clegg, Loveman and Gospodarevskaya27;Reference Castelnuovo, Thompson-Coon and Pitt30;Reference Rodgers, McKenna and Palmer32;Reference Rogowski, Burch and Palmer33;Reference Harris, Felix and Miners35;Reference Pandor, Eastham, Beverley, Chilcott and Paisley36;Reference Stevenson, Scope and Sutcliffe38), eight studies (Reference Carlton, Karnon, Czoski-Murray, Smith and Marr14;Reference McKenna, Burch and Suekarran16;Reference Bhattacharya, Middleton and Tsourapas18;Reference Collins, Fenwick and Trowman20;Reference Thompson Coon, Rogers and Hewson24;Reference Brush, Boyd and Chappell28;Reference Chen, Madan and Welton29;Reference Robinson, Palmer and Sculpher31) conducted EVPI analysis alone, and one study (Reference Stevenson, Lloyd-Jones and Papaioannou37) conducted EVSI analysis alone. The description of the VOI method(s) used was then assessed in terms of whether the following key assumptions were reported explicitly: assumed lifetime of the technology, stated size of the relevant population to estimate population EVPI and the stated threshold value of cost per quality-adjusted life-year (QALY) (ceiling cost-effectiveness ratio).
More than half of the studies (n = 16) assumed a 10-year lifetime for the technology in the EVPI calculation. The range of assumed lifetimes for the technologies was between 1.5 and 15 years, and five studies (Reference Bhattacharya, Middleton and Tsourapas18;Reference Collins, Fenwick and Trowman20;Reference Fox, Mealing and Anderson21;Reference Clegg, Loveman and Gospodarevskaya27;Reference Brush, Boyd and Chappell28) gave some reason for the selection of the lifetime used. One study (Reference Thompson Coon, Rogers and Hewson24) did not report the assumed lifetime of the technology. Of the five studies that gave some reason for the selection of the lifetime used, these were based on expert opinion in three cases (Reference Fox, Mealing and Anderson21;Reference Clegg, Loveman and Gospodarevskaya27;Reference Brush, Boyd and Chappell28) and in one case on keeping the analysis in line with a published NICE technology appraisal (Reference Collins, Fenwick and Trowman20). One case did cite a reference in support of the assumed value (Reference Bhattacharya, Middleton and Tsourapas18).
Two studies (Reference Bhattacharya, Middleton and Tsourapas18;Reference Thompson Coon, Rogers and Hewson24) did not report the EVPI at population level. The individual level EVPI was reported explicitly by ten studies. One study (Reference Pandor, Eastham, Beverley, Chilcott and Paisley36) assumed a ceiling ratio of just £1,000 per QALY gained as the willingness to pay threshold, while the majority of the studies (n = 17) assumed a ceiling ratio of £30,000 per additional QALY gained. This value is the upper limit of the threshold range of ceiling ratios (£20,000 to £30,000 per QALY gained) used in the context of NICE decision making to interpret whether an intervention is a cost-effective use of NHS resources. Of interest, only eleven of the twenty-five HTA reports (Reference Grant, Wileman and Ramsay15–Reference Speight, Palmer and Moles17;Reference Garside, Pitt, Somerville, Stein, Price and Gilbert22;Reference McKenna, McDaid and Suekarran23;Reference Colbourn, Asseburg and Bojke26;Reference Chen, Madan and Welton29;Reference Robinson, Palmer and Sculpher31;Reference Soares, Welton and Harrison34;Reference Stevenson, Lloyd-Jones and Papaioannou37;Reference Stevenson, Scope and Sutcliffe38) that had used VOI method(s) reported clear recommendations regarding the interpretation of the analysis presented.
DISCUSSION
This review has shown that VOI methods are used in NIHR-commissioned HTA reports that apply model-based economic evaluation to identify and quantify the incremental costs and benefits of new healthcare technologies. We focused on the application of VOI methods in the context of nationally funded research in the jurisdiction of England and Wales. It would be interesting to compare the identified use of VOI methods with other jurisdictions, across Europe, in the United States and Australasia, but this aim was beyond the scope of this current study and in some instances would involve language-translation of reports to extract the data needed.
Broadly, the identified VOI methods were based on the assumption that quantitative prioritization could help direct research toward identified evidence gaps. The number of HTA reports that had used some form of VOI methods is, however, still very small compared with the number of published reports. In March 2014, when the search was conducted, there were some 512 published HTA reports listed on the HTA Web site, but just two-fifths of these used systematic review and model-based evaluation and were thus potentially eligible for using the VOI method(s).
Of the 203 systematic review and model-based evaluation HTA reports just 25 had used some form of VOI methods. This finding of relatively low levels of use of VOI method(s) in practice has to be interpreted in the context that VOI has only been recognized as a method potentially relevant for use in a national research prioritization context since 2004, when Claxton et al. (Reference Claxton, Ginnelly, Sculpher, Philips and Palmer6) published the first pilot study that explored if, and how, to apply the methods. The frequency of the use of VOI methods in HTA reports in England and Wales does not appear to be increasing over time. The reason for this observed lack of increase is not known and is a possible topic for further research. It could reflect a constant number of VOI studies being funded by existing research groups familiar with the method or, alternatively, could indicate a note of caution on the part of the NIHR funding body in terms of the volume of commissioned studies including VOI.
The most common types of VOI methods used were EVPI and EVPPI. There were very few examples of using EVSI in the context of HTA reports, which is undoubtedly a reflection of the level of computation and technical complexity associated with this analysis. Very few studies reported clear recommendations regarding the interpretation of the VOI analysis presented. The studies differed in their style of reporting, and it was not always transparent which assumptions had been used when conducting the VOI method(s). The omission of explicit reporting of these key assumptions has clear implications for decision makers wanting to understand if the VOI method(s) presented is relevant to their view of clinical practice in their locality in terms of the lifetime of the technology and the assumed population size. Of interest, there seemed to be no standard assumption regarding the lifetime of the technology. This lack of standardization may be appropriate and a reflection of the wide range of technologies included in the analysis, covering medicines, surgical procedures, diagnostics, and devices.
The choice of the £30,000 per QALY gained ceiling ratio is most plausibly grounded in experience of decision making in the context of NICE technology appraisals. The most surprising omission from the majority of the HTA reports was a section that provided decision makers with a clear interpretation of the meaning of the VOI analyses for future research. Only a few studies put the estimated values of information in the context of the predicted cost of future research studies or provided a process by which the VOI analyses could be used to inform future research prioritization. It seems that although VOI methods are being included in HTA reports in the United Kingdom, they are not always being reported explicitly and in a way that decision-makers find accessible. This is a topic for further research requiring qualitative exploration of the relevance and use of VOI methods in clinical and policy practice.
Notwithstanding the interesting results, our analysis has some limitations. First, we excluded economic analyses that proposed priorities for further research based on sensitivity analysis, but which did not include any VOI analysis. Second, we did not attempt to assess the quality of the included HTA reports because we are not aware of any validated system for appraising reports of research prioritization methods. Third, we did not verify whether the research gaps identified through a VOI analysis in any of the HTA reports were subsequently addressed.
CONCLUSION
This review has shown that VOI is used as a method in NIHR-funded HTA reports, albeit in a relatively small number of reports. Importantly, the level of detail of reporting of the VOI methods varied, and in some instances key aspects of the assumptions underpinning the analysis were not reported explicitly. We argue that this may limit the ability of a decision maker reading the published HTA to interpret and assess the relevance of the results of the VOI to their own funding decisions. Further research is necessary to understand if, and how, VOI methods are used in different jurisdictions and whether decision-makers can, and do, interpret the findings of the published VOI analyses. The relevance and use of VOI methods to inform research prioritization in the context of HTA is an important topic for further research.
CONTACT INFORMATION
Syed Mohiuddin, BSc (Hons), MSc, PhD, Research Fellow in Health Economics, Manchester Centre for Health Economics, Institute of Population Health, The University of Manchester, Oxford Road, Manchester M13 9PL, UK
Elisabeth Fenwick, BA, MSc, MSc, PhD, Professor of Health Economics, Health Economics and Health Technology Assessment, Institute of Health and Wellbeing, University of Glasgow, Glasgow G12 8RZ, Scotland
Katherine Payne, BPharm (Hons), DipClinPharm, MSc, PhD (katherine.payne@manchester.ac.uk), Professor of Health Economics, Manchester Centre for Health Economics, Institute of Population Health, The University of Manchester, Oxford Road, Manchester M13 9PL, UK
CONFLICTS OF INTEREST
The authors report no conflicts of interest.