Introduction
In many countries, new medicines seeking publicly funded reimbursement require the submission of a health technology assessment (HTA) (Reference Beletsi, Koutrafouri, Karampli and Pavi1). Common features of a submission dossier include an assessment of effectiveness, safety, relative effectiveness, and economic data (Reference Beletsi, Koutrafouri, Karampli and Pavi1). Economic data include both value for money, determined through cost-effectiveness analysis, and affordability, assessed through budget impact analysis (BIA), both from the perspective of the payer (Reference Beletsi, Koutrafouri, Karampli and Pavi1).
Cost-effectiveness analysis, a type of economic evaluation, is the comparative analysis of both the costs and the outcomes of alternative treatments; it makes the trade-offs between costs and consequences of each alternative under consideration apparent and allows determination of relative cost-effectiveness (Reference Drummond, Sculpher, Torrance, O'Brien and Stoddart2;Reference Shiell, Donaldson, Mitton and Currie3). By calculating the relative cost-effectiveness of alternatives, cost-effectiveness analysis provides information about value for money.
In a cost-effectiveness model, such as a Markov model, decision tree, or discrete event simulation, patients move through biologically or clinically defined health states, with their movement through those states differing due to the technology under consideration (Reference Briggs, Claxton and Sculpher4). Resource use, costs, and outcomes associated with time spent in each state and the movement through health states are tabulated for each comparator. Outcomes from each comparator are then compared quantitatively in the incremental cost-effectiveness ratio(s).
To complement this, BIA estimates the financial consequences of a healthcare decision in a specific healthcare system and patient population (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5). The impact of the new technology, programme, or intervention on the number of eligible patients, resource use by eligible patients, and cost of illness is compared to the current environment, which does not have the technology in question, to determine the budget impact (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5). The budget impact should include the costs of the new technology, hospitalization, physician visits, diagnostic tests, and other supporting therapies that may be affected (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5). The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) best practice guidelines highlight the key features of a BIA: the perspective; the use and cost of current and new technologies, covering the eligible population, current interventions, the uptake and market share of the new technology (including, but not limited to, growth in utilization, changes in prescribing restrictions and guidelines, the distribution of existing treatment modalities at the time of launch, the future distribution of treatment modalities, and diffusion/uptake curves over the BIA follow-up period (Reference Nuijten, Mittendorf and Persson6)), and the cost of the current/new technology mix; the impact on other condition-related or indirect costs; the time horizon; discounting; and uncertainty and scenario analysis (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5). Estimation of the use of current and new technologies, or market share for each comparator, is a defining feature of this type of analysis (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5). BIA should incorporate features of the local healthcare system and provide a computing framework that allows decision-makers to customize inputs to their specific context (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5).
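To make these mechanics concrete, the sketch below illustrates a minimal deterministic budget impact calculation of the kind described above; all inputs (population size, costs, market shares) are hypothetical and are not drawn from any specific analysis.

```python
# Minimal deterministic budget impact sketch (hypothetical inputs throughout):
# budget impact = cost of the projected treatment mix with the new technology
# minus the cost of the current treatment mix, for the eligible population.

eligible_patients = 2_000  # hypothetical number of eligible patients per year

# Hypothetical annual per-patient costs (drug, administration, monitoring,
# affected hospitalizations, and so on) for each treatment option.
annual_cost = {"current_care": 8_000.0, "new_technology": 18_000.0}

# Hypothetical market shares without and with the new technology.
current_mix = {"current_care": 1.0, "new_technology": 0.0}
new_mix = {"current_care": 0.7, "new_technology": 0.3}


def total_cost(mix):
    """Total annual cost for the eligible population under a given treatment mix."""
    return sum(eligible_patients * share * annual_cost[treatment]
               for treatment, share in mix.items())


budget_impact = total_cost(new_mix) - total_cost(current_mix)
print(f"Annual budget impact: ${budget_impact:,.0f}")  # $6,000,000 with these inputs
```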
Inherent to reimbursement decision-making that relies on cost-effectiveness modeling and BIA is uncertainty. There are five key types of uncertainty considered in economic evaluations and cost analyses: variability, parameter uncertainty, decision uncertainty, heterogeneity, and structural uncertainty (Reference Briggs, Claxton and Sculpher4) (Box 1).
Box 1. Types of uncertainty in cost-effectiveness modeling and BIA (Reference Briggs, Claxton and Sculpher4;Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5)
Variability – Individual differences between patients that cannot be reduced through collection of individual data. For example, health-related quality of life associated with clinical events.
Parameter Uncertainty – Imprecision in model inputs introduced through estimation of input parameters for populations on the basis of information available for samples only. Theoretically, parameter uncertainty can be reduced through the collection of additional information.
Decision Uncertainty – The joint implications of parameter uncertainty in a model produce a distribution of possible outcomes for the comparators considered. According to Briggs et al. (Reference Briggs, Claxton and Sculpher4), there is a strong argument for basing decisions on the expectation of this distribution.
Heterogeneity – The extent to which a proportion of the between-patient variability can be explained by one or more patient characteristics. For example, a particular segment of the population may be more likely to experience a clinical event. Conditional upon those characteristics that make the clinical event more likely, input parameters are estimable, although uncertainty will still be present in those parameters.
Structural Uncertainty – The result of assumptions and scientific judgments made to construct and interpret a model. For example, the changes in expected intervention patterns with the availability of a new intervention and restrictions for use.
In economic evaluations, the use of probabilistic sensitivity analysis to capture parameter uncertainty is well described (Reference Briggs, Claxton and Sculpher4). However, parameter uncertainty is also present in the BIA. Current BIA guidance recommends plausible alternative scenarios with point estimates for population costs (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5). This prompts the question: is the eligible population large enough that a decision-maker can be confident in a point estimate? At best, a point estimate is equally likely to be above or below the true value, with no information about the possible range and spread (Reference Anderson7). ISPOR suggests that data limitations may mean that probabilistic sensitivity analysis cannot be conducted fully (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5). In addition, Vreman et al. (Reference Vreman, Naci, Goettsch, Mantel-Teeuwisse, Schneeweiss and Leufkens8) suggest a recent trend toward drugs being approved by regulators on the basis of testing in fewer patients and with fewer pivotal trials, both of which result in increased parameter uncertainty.
There is an appetite among decision-makers for estimates of parameter uncertainty in BIA generated through probabilistic sensitivity analysis, although guidance for how it should be quantified is lacking (Reference Lamrock, McCullagh, Tilson and Barry9;10). In this paper, we propose a method that uses the established approaches for exploring uncertainty in cost-effectiveness models to quantify parameter uncertainty in BIA. Specifically, we focus on BIA in the context of technology submissions for reimbursement or listing decision-making, where the perspective taken is that of the payer. First, similarities and differences between cost-effectiveness models and BIA are highlighted, with a brief discussion of the handling of parameter uncertainty. Second, a method to generate interval estimates of cost outcomes is outlined, relying on the established methods used in cost-effectiveness modeling.
How Cost-Effectiveness Models and Budget Impact Analysis Fit Together
Although BIA would appear to be much more focused on the local context, the local context should influence both the BIA and cost-effectiveness modeling (Reference Drummond, Barbieri, Cook, Glick, Lis and Malik11). ISPOR guidance regarding transferability of economic evaluations across jurisdictions suggests that baseline risk, treatment efficacy, unit prices, resource use, and quality of life/utility are all factors that may differ between jurisdictions (Reference Drummond, Barbieri, Cook, Glick, Lis and Malik11). The local context is an important factor in both the determination of value for money in the cost-effectiveness model, and the determination of affordability in the BIA.
Evidence synthesized in the cost-effectiveness model is adjusted to fit the market share and uptake over time, and patient population size in the BIA. In guidelines from ISPOR, the UK, Belgium, Ireland, and Australia, it is recommended that the BIA is consistent with the clinical and economic assumptions in cost-effectiveness modeling (Reference Foroutan, Tarride, Xie and Levine12). In guidelines from Brazil, the use of a decision tree or Markov model for the BIA is suggested to ensure consistency with cost-effectiveness analysis (Reference Foroutan, Tarride, Xie and Levine12). Despite similarities in guidance for the approach to cost-effectiveness modeling and BIA, the handling of uncertainty remains a key difference between these two methods.
Specific to BIA, ISPOR suggests that parameter uncertainty and structural uncertainty should be considered. Structural uncertainty can be addressed through scenario analysis with model averaging, or through the inclusion of additional parameters that explicitly capture this uncertainty (Reference Bojke, Claxton, Sculpher and Palmer13). Parameter uncertainty, in turn, can be handled with probabilistic sensitivity analysis (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5).
In probabilistic sensitivity analysis, each parameter takes on a random value within its specified distribution (Reference O'Hagan, Stevenson and Madan14). The method of moments is used to convert sample statistics to population-level parameters (Reference Mishra, Datta-Gupta, Mishra and Datta-Gupta15), and all parameters in the model take on random values simultaneously. For each simulated set of parameters, model outcomes are calculated (Reference O'Hagan, Stevenson and Madan14). Using a sufficient number of Monte Carlo simulations, a probability distribution is generated that represents the consequences of input parameter uncertainty (Reference Briggs16). Probabilistic sensitivity analysis is used to understand parameter uncertainty, which is the result of imprecision in input parameters (Reference Briggs, Claxton and Sculpher4). The use of probabilistic sensitivity analysis within cost-effectiveness models is set out as best practice in some economic evaluation guideline statements and in pharmaceutical guidelines for submissions to reimbursement decision-making or advisory bodies (17;18).
For BIA, the ISPOR practice guidelines suggest that, because parameter uncertainty cannot be fully quantified and structural uncertainty is not easily parameterized, scenario analysis should be used to produce point estimates for plausible alternative scenarios (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5). A point estimate implies an unjustified sense of certainty in the presented outcome (Reference Schwartz, Woloshin and Welch19). In BIA, the lack of understanding of parameter uncertainty, and the resulting risk of unanticipated cost outcomes, may seriously limit a decision-maker's ability to quantify uncertainty in the total investment required for the technologies considered. The presentation of cost outcomes as interval estimates would better inform decision-makers about the strength of the information, indicated by the width of the interval.
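As an illustration of these mechanics, the sketch below draws two hypothetical model inputs from distributions parameterized by the method of moments and recomputes a deliberately simplified cost outcome for each Monte Carlo draw; it is not taken from any particular published model.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_sims = 5_000  # number of Monte Carlo simulations

# Hypothetical sample statistics for two inputs: an event probability
# (beta-distributed) and the cost of that event (gamma-distributed).
p_mean, p_sd = 0.25, 0.05          # probability of a clinical event
c_mean, c_sd = 12_000.0, 3_000.0   # cost incurred if the event occurs

# Method of moments: convert mean/SD to beta parameters.
nu = p_mean * (1 - p_mean) / p_sd**2 - 1
beta_a, beta_b = p_mean * nu, (1 - p_mean) * nu

# Method of moments: convert mean/SD to gamma shape and scale.
gamma_shape = (c_mean / c_sd) ** 2
gamma_scale = c_sd**2 / c_mean

# All parameters take random values simultaneously in each simulation.
p_event = rng.beta(beta_a, beta_b, n_sims)
event_cost = rng.gamma(gamma_shape, gamma_scale, n_sims)

# Toy model outcome: expected annual cost per patient for each simulation.
cost_per_patient = p_event * event_cost

print(f"Mean cost per patient: {cost_per_patient.mean():,.2f}")
print("95% interval:", np.percentile(cost_per_patient, [2.5, 97.5]).round(2))
```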
To address this, we propose a solution relying on probabilistic sensitivity analysis to generate confidence intervals for estimates generated with BIA. Our proposed solution to the lack of parameter uncertainty in budget impact estimates is intended to identify where this uncertainty is unacceptable, and further exploration or negotiation is warranted before a reimbursement decision is made (Reference Claxton20).
Proposed Method to Include Parameter Uncertainty in the BIA
We suggest that the simplest way to implement probabilistic sensitivity analysis in a BIA is to rely on the cost outcomes generated in the cost-effectiveness model to also inform estimates of budget impact. Uncertainty present in the cost distribution estimated through probabilistic sensitivity analysis in the cost-effectiveness model should also inform estimates of budget impact. As the method of moments is used to convert sample statistics to population-level parameters, the distribution of outcomes from probabilistic sensitivity analysis reflects the decision uncertainty in population-level outcomes. The mean and variance of population-level outcomes, given the parameter uncertainty present in model inputs, are estimable from model outputs.
The central limit theorem suggests that, when sufficiently large random samples are drawn from a population with replacement, the distribution of sample means will be approximately normal (Reference Kwak and Kim21). Therefore, when drawing samples from the model to inform estimates of budget impact, the assumption of an approximately normal distribution for mean costs is reasonable. We propose that the variance in the estimate of mean costs from the model also reflects the variance in budget impact estimates and should be used to estimate budget impact confidence intervals (Equation 1).
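A plausible form of Equation 1, reconstructed here from the symbol definitions in the following paragraph rather than quoted from the published article, treats the budget impact for n patients as n times the mean per-patient budget impact, with a confidence interval based on the standard error of the mean:

```latex
% Reconstruction of Equation 1 (assumed form, based on the symbol definitions
% given in the text): budget impact for n eligible patients, with a z-based
% confidence interval derived from the standard error of the mean.
\[
  \text{Budget impact} \;=\; n\left(\mu \;\pm\; z\,\frac{\sigma}{\sqrt{n}}\right)
  \;=\; n\mu \;\pm\; z\,\sigma\sqrt{n}
\]
```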
In this equation, n is the number of eligible patients for each comparator (calculated as the product of eligible patients and market share/uptake), μ represents the population mean budget impact obtained from the model (calculated as treatment minus status quo), z is the critical value from the normal distribution for the required level of confidence, and σ is the standard deviation of the budget impact obtained from the model.
Using cost outcomes from probabilistic sensitivity analysis in the BIA would require customization of the model to generate probabilistic outcomes at relevant timepoints. Cost-effectiveness models usually estimate mean costs for a representative individual over a lifetime, but cost outcomes at specified timepoints are calculable. To inform BIA, annual outcomes over a limited time horizon are likely to be the most useful.
In a cost-effectiveness model, uncertainty intervals are generated with percentiles, specifically the α/2 and (1−α/2) percentiles from probabilistic sensitivity analysis (Reference Briggs, Claxton and Sculpher4). These uncertainty intervals are independent of the number of patients considered. In BIA, however, the number of patients likely to receive each treatment is of critical importance. Our proposed method incorporates the number of patients in the estimation of uncertainty intervals. As the number of eligible patients in the context considered for BIA increases, the width of the confidence interval per patient for the estimated budget impact is expected to decrease. As the number of eligible patients decreases, wider confidence intervals per patient are expected. The contrast between the two kinds of interval is sketched in the example below.
Unfortunately, there is often insufficient evidence to estimate market share/uptake as a stochastic parameter, as alluded to in the ISPOR BIA recommendations (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5). However, the presentation of budget impact as a point estimate conditional upon market share merely conceals parameter uncertainty from considerations of affordability in decision-making. Building on the ISPOR recommendation that BIA rely upon plausible alternative scenarios to explore uncertainty, we suggest that parameter uncertainty be considered separately from the structural uncertainty present in scenarios that differ by estimates of uptake/market share. Our proposed method would generate probabilistic estimates of budget impact conditional upon the number of eligible patients implied by the market share/uptake assumptions in each scenario, as illustrated in the sketch below.
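In the sketch that follows, hypothetical per-patient budget impact draws stand in for probabilistic sensitivity analysis output; the population-level interval uses the reconstructed form of Equation 1 above, so the per-patient interval is fixed while the population-level interval depends on n.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)

# Hypothetical per-patient budget impact draws (treatment minus status quo),
# standing in for probabilistic sensitivity analysis output.
psa_draws = rng.normal(loc=9_000.0, scale=2_000.0, size=1_000)

mu = psa_draws.mean()
sigma = psa_draws.std(ddof=1)
z = stats.norm.ppf(0.975)  # critical value for a 95% interval
n = 200                    # hypothetical number of eligible patients

# Conventional percentile interval: per patient, independent of population size.
lo, hi = np.percentile(psa_draws, [2.5, 97.5])
print(f"Per-patient 95% percentile interval: ({lo:,.0f}, {hi:,.0f})")

# Proposed population-level interval: n * (mu +/- z * sigma / sqrt(n)),
# which narrows per patient as n grows.
half_width = z * sigma / np.sqrt(n)
print(f"Budget impact for n = {n}: {n * mu:,.0f} "
      f"(95% CI {n * (mu - half_width):,.0f} to {n * (mu + half_width):,.0f})")
```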
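A brief sketch of how scenario-conditional intervals might look in practice, with uptake scenarios and per-patient summary statistics assumed purely for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient budget impact summary from the cost-effectiveness model.
mu, sigma = 10_000.0, 2_500.0
z = stats.norm.ppf(0.975)
eligible_population = 1_000  # hypothetical total eligible population

# Hypothetical market-share scenarios for the new technology.
scenarios = {"low uptake": 0.10, "base case": 0.30, "high uptake": 0.50}

for name, share in scenarios.items():
    n = round(eligible_population * share)  # treated patients in this scenario
    half_width = z * sigma / np.sqrt(n)
    print(f"{name:>11}: n = {n:>3}, budget impact {n * mu:,.0f} "
          f"(95% CI {n * (mu - half_width):,.0f} to {n * (mu + half_width):,.0f})")
```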
A Case Example
This example is intended to illustrate how cost outcomes from a cost-effectiveness model might be used to inform probabilistic BIA. Simulated cost outcomes from a cost-effectiveness model are provided in the Supplementary Appendix. The cost outcomes at one year represent the mean budget impact per patient at one year (calculated as treatment minus status quo), generated from a cost-effectiveness model. One thousand draws from the gamma distribution, parameterized with the method of moments, were used to generate the distribution of budget impact outcomes at one year. For these simulated outcomes, there is a mean of $10,060.70 and a standard deviation of $2,489.03 (shape alpha = 16, scale beta = 625). As the sample size increases, the expected population budget impact is the product of the sample size and the expected budget impact per patient from the cost-effectiveness model (Figure 1, dashed blue line). Confidence intervals, estimated with the proposed method, are shown with solid blue lines. The dashed black line, indicating the width of the confidence interval divided by the number of patients, shows how uncertainty is affected by including additional patients. Importantly, this suggests that the usefulness of a confidence interval accompanying point estimates of budget impact decreases as the number of eligible patients increases.
Figure 1. Estimated budget impact with confidence intervals by sample size, and budget impact confidence interval width divided by the number of patients.
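The sketch below reproduces the simulation behind this example and the quantities plotted in Figure 1 (expected budget impact, its confidence interval, and the interval width per patient); exact figures will differ slightly from those reported in the text because they depend on the random draws.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2021)

# One thousand draws of the per-patient budget impact at one year: gamma
# distributed with shape 16 and scale 625 (method of moments; mean 10,000, SD 2,500).
draws = rng.gamma(shape=16.0, scale=625.0, size=1_000)

mu = draws.mean()           # reported in the text as $10,060.70 for the authors' draws
sigma = draws.std(ddof=1)   # reported in the text as $2,489.03
z = stats.norm.ppf(0.975)

print(f"Simulated mean {mu:,.2f}, SD {sigma:,.2f}")
for n in (10, 100, 1_000, 10_000):
    expected = n * mu                       # dashed blue line in Figure 1
    half_width = z * sigma * np.sqrt(n)     # solid blue lines: expected +/- half_width
    width_per_patient = 2 * half_width / n  # dashed black line
    print(f"n = {n:>6}: budget impact {expected:,.0f} "
          f"(95% CI {expected - half_width:,.0f} to {expected + half_width:,.0f}), "
          f"CI width per patient {width_per_patient:,.0f}")
```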
Discussion
In the context of health technology assessment for reimbursement decision-making, both cost-effectiveness modeling and BIA are often required, and these two facets, exploring value for money and affordability of a new technology, are closely related. Guidance from ISPOR suggests that a similar approach is used for both types of analysis, although the exploration of uncertainty in economic outcomes differs. Currently, BIA is deterministic. By overlooking parameter uncertainty, decision-makers are unable to consider the precision of cost outcomes for their local context. To remedy this, we propose a solution relying on previously developed methods for probabilistic sensitivity analysis to generate confidence intervals for estimates of budget impact. We propose that the variance in mean costs from the cost-effectiveness model also reflects the variance in budget impact estimates and should be used to estimate budget impact confidence intervals. Our proposed solution to the lack of uncertainty in budget impact estimates is not intended to signify statistical significance, but rather to identify where uncertainty is unacceptable and further exploration or negotiation is warranted before a decision is made (Reference Claxton20).
Consideration of BIA as an extension of cost-effectiveness models, with budget impact estimates conditional upon market share/uptake, would be the simplest approach to including parameter uncertainty in considerations of affordability. Currently, ISPOR identifies the uptake/market share of each intervention as a type of structural uncertainty that is particularly important to consider (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5). Regardless of the inclusion of parameter uncertainty, market share and market uptake remain significant sources of uncertainty, often relying on expert opinion, which is low on the hierarchy of evidence (Reference Concato, Shah and Horwitz22). Scenario analysis examining assumptions about the intervention mix and the changes expected with introduction of a new intervention will continue to be necessary to understand this source of structural uncertainty.
When requested by budget holders, off-label usage may be considered in a BIA intended to capture use outside of the registered indication (Reference Sullivan, Mauskopf, Augustovski, Jaime Caro, Lee and Minchin5;Reference Nuijten, Mittendorf and Persson6). Nuijten et al. (Reference Nuijten, Mittendorf and Persson6) suggest that financial analysis of off-label use cannot rely on existing cost-effectiveness studies and that separate submodels for off-label indications should be developed. ISPOR guidance for BIA also notes that, with little to no effectiveness or safety data on such off-label use, its promotion and its inclusion in BIA are not recommended. For health technology assessment included as part of a submission dossier for reimbursement decision-making, the inclusion of off-label use in BIA falls outside this specific context.
Costs of drugs or devices are often considered to be fixed. In the context of cost-effectiveness modeling and BIA conducted for the purposes of reimbursement decision-making, however, drug costs are negotiable, even if the confidential price paid does not match the list price. When the uncertainty present in a decision exceeds a decision-maker's appetite, uncertainty in cost outcomes could be reduced through negotiation on modifiable sources of uncertainty. For example, adjusting from a price per dose to a price per patient would reduce the uncertainty arising from variation in dose size per patient. As noted above, the intention is not to signify statistical significance, but rather to indicate where uncertainty may be unacceptable and warrant further exploration before a decision is made (Reference Claxton20).
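As a toy illustration of that point, the sketch below compares the spread of per-patient cost under per-dose pricing, where the number of doses is uncertain, with a flat per-patient price; the dosing distribution and prices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Hypothetical uncertain dose count per patient (e.g., weight-based dosing or
# variable treatment duration) versus a flat per-patient price set to match
# the expected cost.
doses_per_patient = rng.poisson(lam=12, size=10_000)
price_per_dose = 1_000.0
price_per_patient = 12_000.0

cost_per_dose_pricing = price_per_dose * doses_per_patient
print(f"Per-dose pricing:    mean {cost_per_dose_pricing.mean():,.0f}, "
      f"SD {cost_per_dose_pricing.std(ddof=1):,.0f}")
print(f"Per-patient pricing: mean {price_per_patient:,.0f}, SD 0")
```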
The method proposed in this paper, or any method to generate confidence intervals for BIA, will be most useful where the risk is greatest. The smallest populations are the most likely to deviate from the mean cost per patient predicted in cost-effectiveness modeling. Rare diseases, small markets or subsets of larger markets (e.g., regional healthcare systems), and subsets of a heterogeneous population are the most likely to experience larger uncertainty in point estimates of budget impact. We would note, however, that for very small populations the assumption of normally distributed sample means for cost outcomes is less reliable. The Supplementary Appendix also suggests that, for very large populations, probabilistic estimates of budget impact add little to point estimates. In addition, this method will not apply to all models. Specifically, we consider the context of submission dossiers that include both cost-effectiveness modeling and BIA for reimbursement decision-making, where the perspective taken is that of the payer. The current presentation of budget impact as a point estimate conditional on market share and market uptake does not address or change the parameter uncertainty present in the decision; it is merely obscured. Embracing probabilistic sensitivity analysis to produce interval estimates, conditional on estimates of market share and uptake, would clarify the impact of parameter uncertainty, providing additional information for decision-makers.
Budget impact analysis is intended to inform the assessment of affordability and sustainability, that is, the financial consequences of the introduction of a new technology for a decision-maker in the short term (Reference van de Vooren, Duranti, Curto and Garattini23). This proposed solution attempts to incorporate budget impact estimates as intervals into submission dossiers, allowing decision-makers to consider the precision of estimates in their decisions. Although Lamrock et al. (Reference Lamrock, McCullagh, Tilson and Barry9) find that probabilistic estimates of budget impact do not improve accuracy, we would stress that probabilistic sensitivity analysis is concerned with precision, or the width of interval estimates, rather than accuracy, or the position of an estimate.
One possibility for improving precision in budget impact estimates is collaboration and data sharing between submitters and decision-makers, or among multiple decision-makers. In the case of small populations, populations could be pooled, for example through group purchasing or cost-sharing agreements, to increase precision and improve understanding of the central tendency in budget impact estimates. There are many other ways in which decision-makers might choose to interpret and use interval estimates of budget impact. How the precision of budget impact estimates should be used by decision-makers remains to be decided.
Conclusion
The current presentation of budget impact as a point estimate conditional on market share and market uptake does not address or change the parameter uncertainty present in the decision. In the context of health technology assessment submitted to support reimbursement decision-making, we propose a method of estimating parameter uncertainty in budget impact analysis that is conditional upon assumptions of market share/uptake and is most likely to be useful for small populations, such as rare diseases, subsets of heterogeneous populations, and small markets. Future efforts are required to determine how uncertainty in epidemiologic parameters should be captured and how decision-makers should use interval estimates of budget impact for small populations.
Supplementary material
The supplementary material for this article can be found at https://doi.org/10.1017/S0266462321001707.
Funding
This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.
Conflict of Interest
The authors declared none.