Across countries the practice of health technology assessment (HTA) varies, but often it comprises a systematic review of the clinical effectiveness evidence and an economic evaluation in the form of a decision model. Technology assessment reports are a key part of the decision-making process used by policy makers such as the National Institute for Health and Clinical Excellence (NICE) in England and Wales (17). Evidence has shown that, although HTA reports are commissioned as one piece of work, often the evidence obtained in the systematic review is not used fully in the decision model (Reference Drummond, Iglesias and Cooper9).
Decision models are a useful tool, providing decision makers with an explicit framework which can be used to help inform decision making under conditions of uncertainty, by synthesizing available evidence and generating estimates of clinical and cost-effectiveness. However, any results can only be considered robust if all relevant inputs have been included. A recent study found that the evidence used to inform decision model parameter estimation was extremely variable and obtained from a variety of sources which ranged from randomized controlled trials to expert opinion. In addition to concerns raised about bias due to confounding, patient selection, and methods of analysis, as well as poor reporting of how evidence (clinical effectiveness, adverse effects, resource use, utilities, etc.) was identified, there was also a lack of justification for how evidence for the model was selected and a lack of quality assessment of the evidence used (Reference Cooper, Coyle and Abrams4).
In this study, we report the findings from a survey focusing on the inclusion of adverse effects in technology assessment reports. It is clear that the benefits of a treatment must not be outweighed by the adverse effects (Reference Asscher, Parr and Whitmarsh1;Reference Cuervo and Clarke6;Reference Ernst and Pittler10), and although it is widely accepted that all drugs are associated with potential adverse effects, it cannot be forgotten that procedural interventions, psychosocial interventions, and diagnostic tests are not necessarily free of unwanted adverse effects. If researchers fail to incorporate adverse effects adequately in models, this could limit the validity of the results obtained and diminish the recommendations made. In the worst case, interventions found to be cost-effective may be shown to be not cost-effective, or less cost-effective than the relevant comparator. Research has found that adverse effects are often underreported in journal articles (Reference Bernal-Delgado and Fisher2) and in systematic reviews, particularly in systematic reviews of nondrug interventions (Reference Hopewell, Wolfenden and Clarke14). These facts may impact on researchers attempting to identify and include adverse effects in both systematic reviews and decision models. The complex nature of dealing with adverse effect evidence makes it extremely unlikely that a single framework could provide adequate guidance. We believe that further work is required to establish, if possible, what can be considered “best practice” for a variety of situations for the inclusion of adverse effects. Currently, although there appears to be an implicit assumption within modeling guidance that adverse effects are very important, there appears to be a lack of clarity regarding how they should be dealt with and considered in modeling (Reference Philips, Ginnelly and Sculpher18). 
The aim of this research project was to review current practice regarding the incorporation of adverse effects in economic models.
METHODS
Studies were included in the survey if they were HTA reports commissioned by the National Institute for Health Research (NIHR) Health Technology Assessment programme, were published between 2004 and 2007, and investigated the clinical and cost-effectiveness of a health technology using a systematic review and an economic model framework. Searches were conducted at the end of 2007. All HTA monographs (total of 186) dated from 2004 to 2007 were identified from the HTA Web site (http://www.hta.ac.uk/project/htapubs.asp). Two researchers independently screened all reports against the inclusion criteria. Any discrepancies were resolved by consensus, or where consensus could not be reached, a third researcher was consulted.
Data were extracted by one researcher using a standardized data extraction form in EPPI-Reviewer (Reference Thomas and Brunton22) and checked by a second. Discrepancies were resolved by discussion and, if necessary, a third opinion was sought. The topics investigated by studies were categorized according to the Health Research Classification System, developed by the UK Clinical Research Collaboration (http://www.hrcsonline.net). For the purposes of this survey, adverse effects were defined as an undesirable or unintended effect of the intervention. Information pertaining to a failure to prevent "adverse events" such as death or stroke, when prevention was the intended effect of the intervention, was not extracted. The main focus of this survey was to identify and examine those reports that explicitly reported on the inclusion of adverse effects in the decision model. Although we acknowledge that in many instances there is likely to be an implicit capturing of adverse effects in the decision model, a decision model was only scored as having incorporated adverse effects if the report explicitly stated that adverse effects were included in the model structure, through clinical parameters, costs, health-related quality of life measures (by means of utilities), or patient withdrawals. If a study had captured adverse effects through the use of a clinical or cost parameter, no attempt was made to ascertain if adverse effects had also been captured by other means. Those models which did not appear to include a clinical or cost parameter for adverse effects were investigated to establish if adverse events had been explicitly captured through withdrawals or utilities. As a secondary analysis, to highlight the potential issue of implicit incorporation of adverse effects, the use of utilities and withdrawals was examined for all reports which met the inclusion criteria. The data were summarized in a narrative synthesis.
RESULTS
Of the 186 HTA reports published between 2004 and 2007, 80 met our inclusion criteria and were included in the survey. Of the eighty HTA reports, forty-seven (59 percent) were assessments conducted to inform NICE appraisals, whereby NICE makes recommendations on the use of new and existing medicines and treatments within the NHS in England and Wales. This differs from the role of other HTA reports, which are independent research about the effectiveness of healthcare treatments and tests for those who use, manage, and provide care in the NHS. Some reports encompassed more than one research area, for example, both diagnosis and treatment. The majority of the reports (sixty-one of eighty, 76 percent) were evaluations of therapeutic interventions.
Adverse Effects in the HTA Reports
Sixty-eight of eighty (85 percent) reports explicitly included adverse effects as an outcome of interest in the clinical review and forty-three of eighty (54 percent) included adverse effects in the economic model (see Table 1). Overall, thirty-nine (49 percent) included adverse effects in both the clinical review and the model, whereas eight (10 percent) did neither. Twenty-nine reports (36 percent) included adverse effects in the clinical review alone and four reports (5 percent) included adverse effects in the decision model alone (Reference Dretzke, Cummins and Sandercock8;Reference Garside, Pitt and Somerville11;Reference Goodacre, Sampson and Stevenson12;Reference Wardlaw, Chappell and Stevenson23). All four of these reports were of diagnostic interventions: two cardiovascular, one cancer, and one metabolic.
aTotals >80 because some reports had more than one time horizon.
bIncludes lifetime horizon.
AEs, adverse effects; NICE, National Institute for Health and Clinical Excellence.
Comparison of the forty-three reports that included adverse effects in the economic model with those that did not found that slightly more of those including adverse effects in the model were assessments conducted for the NICE appraisal program (67 percent compared with 59 percent), were of therapeutic technologies (79 percent versus 70 percent), and used a 20-year-plus time horizon (51 percent compared with 24 percent). In addition, many more models of interventions for cardiovascular indications included adverse effects than did not (28 percent compared with 3 percent) (Table 1). Only a limited number were diagnostic technologies. These differences may reflect a greater experience of decision modeling in cardiovascular medicine, as well as the decision-maker focus of assessments for NICE and the more comprehensive nature of long-term models, respectively.
Explicit Consideration of Adverse Effects
Those reports that included clinical or cost parameters as explicit indicators that adverse effects had been captured in the model are summarized in Table 2.
AE, adverse effect.
A total of 67 percent of the decision models that included adverse effects incorporated them through the use of a clinical parameter, and 79 percent incorporated them through the use of a cost parameter.
Further investigation found that three models appeared to include a clinical parameter but no cost/resource parameter, suggesting that the clinical effect had no impact on resource use; eight appeared to incorporate cost parameters but no clinical parameter, suggesting that although the adverse effect had little clinical impact, it did affect resource use, which was accounted for in the cost.
In total, there were six models that explicitly captured adverse effects by neither a cost nor clinical parameter. Two of these six reports were classified as having captured adverse effects solely through the use of utilities (Reference Dretzke, Cummins and Sandercock8;Reference Shepherd, Brodin and Cave20), three explicitly through the use of withdrawals (Reference Clark, Jobanputra and Barton3;Reference King, Griffin and Hodges15;Reference Woolacott, Hawkins and Mason25), and one through both utilities and withdrawals (Reference Yao, Albon and Adi26).
Of the two decision models that explicitly included adverse effects solely through the utility, one (Reference Woolacott, Hawkins and Mason25) appears to have derived utilities directly from patients on treatment and it is likely that some, if not all, of the relevant adverse effects may have been captured. The second report (Reference Dretzke, Cummins and Sandercock8) is less clear. The utilities appear to have been derived using the authors’ or expert judgment. This method was used due to a lack of available empirical evidence.
Further Investigation of Utilities
In total, 66 of 80 (83 percent) of the decision models incorporated a utility. An attempt was made to classify the utilities used in the models by their source. The most common method used in the HTA assessments (53 percent) was to derive utilities from patients on treatment, either directly as part of the analysis or through the use of a published study that appeared to have elicited utilities from the appropriate patient population. If one can infer that utilities derived from patients on treatment are likely to encompass adverse effects, then one could surmise that almost 53 percent of models incorporated adverse effects through utilities. However, due to the lack of detailed reporting on the derivation of utilities, it was not possible to be sure that in every case, or even in the majority of cases, the utilities were derived in a manner that would ensure that all of the relevant adverse effects had been captured. A further 29 percent of the reports used utility estimates based on judgment or opinion, and 32 percent used secondary sources, which we also considered to include utilities elicited through public preferences. This distinction was made in the belief that a utility elicited from a patient on treatment might capture adverse effects, whereas utilities elicited from individuals representing the patients are less likely to do so.
Further Investigation of Withdrawals
A total of sixteen of eighty reports (20 percent) had a model which incorporated withdrawals into the structure. Three explicitly incorporated withdrawals in this manner to reflect compliance with screening or monitoring rather than adverse effects. The remaining thirteen reports were technology assessments of therapeutic interventions, and, as is usually the case, the withdrawals appeared to be due at least in part to toxicity. Of these thirteen, four explicitly incorporated adverse effects through a cost/resource parameter, and five explicitly incorporated both a cost and a clinical adverse effect parameter. The remaining four all included an explicit statement that adverse effects had been captured in the utility valuation (Reference Tappenden, Chilcott and Ward21) or through the use of withdrawals (Reference Clark, Jobanputra and Barton3;Reference King, Griffin and Hodges15;Reference Woolacott, Hawkins and Mason25).
Source of Adverse Effect Data
To allow the link between the treatment of adverse effects in the systematic review and in the decision model to be evaluated, the sources of the clinical parameters for adverse effects in the model were examined (Table 3).
AE, adverse effect.
Eighteen models (42 percent) used some adverse effects data from the accompanying review. Most of the rest used other literature-based sources; very few relied solely on expert opinion. Fourteen reports had clinical reviews that reported a meta-analysis of adverse effects data, but of these, eight included a clinical parameter in the model, and only three of the models took their model input parameter for adverse effects from the accompanying review. However, even for these three models, the link with the clinical review's meta-analysis of adverse effects data was not without some complication: in one (Reference Hartwell, Colquitt and Loveman13), the differentiation between what was an efficacy outcome and what could be considered an adverse effect was blurred; in another (Reference Main, Bojke and Griffin16), the data were derived from the systematic review but the method of meta-analysis was different for the model; and in the third (Reference Davies, Brown and Haynes7), the results of the meta-analysis comprised only some of the model input for adverse effects.
Reported Rationale for not Including Adverse Effects in the Model
Of the thirty-seven reports that did not include adverse effects in the decision model, eighteen reported a rationale for this approach. These fell into six main categories (Table 4). Most commonly, these were a lack of relevant adverse effect data and researcher knowledge that adverse effects had only a minimal effect on health-related quality of life. Thus, of the eighty HTA reports included, forty-three explicitly included adverse effects and eighteen provided justification for not including them; only nineteen of eighty (24 percent) HTA technology assessments had no explicit consideration of adverse effects in the decision model.
AE, adverse effect; HRQoL, health-related quality of life.
DISCUSSION
It is clear from available guidance relating to decision modeling, and widely accepted, that all relevant clinical outcomes should be included in any decision model. There also appears to be a general, if less clearly stated, consensus that this should include adverse effects (Reference Philips, Ginnelly and Sculpher18;Reference Rovira and Antonanzas19;Reference Weinstein, O'Brien and Hornberger24). Our research has systematically mapped the variety of ways in which adverse effects data have been explicitly incorporated into decision modeling.
Our survey may be limited in that it focused on NIHR HTA funded health technology assessments and the findings may not be generalizable to the broader HTA field. Furthermore, to make the work manageable and to avoid duplication of earlier research, we limited the sample of HTA reports included to those published from 2004 onward. This decision was based on three factors: 2004 onward would reflect current practice; 2004 was the year the NICE Methods Guide was first issued (17); and Cooper et al. (Reference Cooper, Coyle and Abrams5) had published a study appraising the use of evidence in decision models, including adverse effects, in reports published up to and including 2003.
Despite these limitations, the results of our survey were, at face value, reassuring, with just over three-quarters of all models including an explicit consideration of adverse effects. However, there are several areas within a decision analytic framework where adverse effects might be incorporated or captured. When they are included and how they are incorporated is heavily dependent on the intervention being evaluated, the impact of the adverse effect, and the scope of the decision problem. The inclusion of adverse effects through a clinical parameter may seem the most obvious and easily verifiable method, and our survey found that in practice, 67 percent of models used a clinical parameter. However, use of a clinical parameter does not guarantee that all the relevant adverse effects have been captured, nor that the clinical parameter used was a relevant one. Detailed analysis of these issues was beyond the scope of the present survey, but further research into this important area is warranted.
Our survey found that 79 percent of models incorporated a cost parameter for adverse effects, but only 10 percent incorporated cost parameters without explicit inclusion of a clinical parameter. The latter may be justifiable, as some adverse effects may have no significant or measurable impact on quality of life or health benefit but may lead to an increase in cost. Regardless, a justification should be included in the report. Similarly, whether the reports' authors anticipated that withdrawals captured adverse effects in the decision model was not explicitly reported in all of the HTAs. Although in the evaluation of some interventions, particularly pharmaceutical ones, adverse effects may be incorporated into the model structure through withdrawals, our survey found that few reports explicitly stated that withdrawals captured adverse effects.
A high proportion of the reports surveyed derived a utility outcome. This is not surprising given that this is recommended within the current NICE Methods Guide (17). However, very few reports made any explicit statement to suggest that the utility valuation captured adverse effects. This may in some part be due to the assumption on the modelers’ part that readers will understand what a utility comprises, what it captures, and what it reflects. It may well be that in many cases modelers assume that adverse effects are captured as part of health-related quality of life (Reference Tappenden, Chilcott and Ward21). Where such an assumption is correct, then there is simply a need, as with cost and withdrawals, for more explicit reporting. However, in many cases such an assumption is not correct, and there needs to be careful consideration of how adverse effects are to be captured. An expectation that consideration of adverse effects in models will be explicit is one way to promote this. How best to ensure that any adverse effects of interventions are captured within the utility needs further investigation and it is likely that more rigorous methods will need to be adhered to.
The results of the survey show that the links between the review and modeling components were not strong for adverse effects: the source of the adverse effects parameter was often informed by the results of the systematic review, but only 21 percent of models including adverse effects relied solely on the accompanying review for the required data. Our survey did not investigate in detail the other sources of adverse effects data, although it is clear that nonsystematically derived literature-based data were the most commonly used. In their 2003 study, Cooper et al. (Reference Cooper, Coyle and Abrams5) found that at best only 14 percent of adverse effects outcome data were sourced from the highest-quality source, that is, a meta-analysis of randomized controlled trials.
It is accepted in the health economics community that more formal, transparent, and replicable approaches to the identification and assessment of the quality of model inputs may reduce the "black box" nature of decision models and lead to less skepticism regarding model outputs (Reference Cooper, Coyle and Abrams5). It was essential to the reliability of the present survey that the determination of whether or not a model had included adverse effects in the decision was made correctly. This proved to be more difficult than had been anticipated and raises important issues regarding the transparency of the reporting of models. In particular, there was often a lack of explicit reporting with regard to which adverse effects had been considered in the model and how they had been captured and evaluated. As the target audience of these reports is likely to comprise nonhealth economists, it is important that consideration of all parameters should be explicit. With specific reference to adverse effects, this includes reporting in sufficient detail to allow a reader to understand why adverse effects are important to the decision problem, how and where the included adverse effects were identified, and what methods were used to incorporate the relevant adverse effects into the model. We suggest that, where appropriate, clear justification for the noninclusion of adverse effects should be provided. There are legitimate justifications for not including adverse effects, such as their having a negligible impact on health outcomes, or no impact on costs and resources, and we recommend that these should be explicitly reported. As a minimum, we would advocate that separate sections on adverse effects could be included in the clinical effectiveness and modeling chapters of every technology assessment report.
CONCLUSIONS
The survey found that most HTAs commissioned by the National Institute for Health Research (NIHR) HTA Programme do include, or at least consider, adverse effects in the decision model. The inclusion of adverse effects in the decision model did not appear to be dictated by the therapeutic area, type of intervention or type of model, nor how adverse effects were dealt with in the clinical review. In most cases, the link between the adverse effects data used in the model and that presented in the systematic review appeared to be weak. Clearer and more explicit reporting of how adverse effects are considered in decision models is required.
CONTACT INFORMATION
Dawn Craig, MSc (dc19@york.ac.uk), Research Fellow, Catriona McDaid, PhD (cm36@york.ac.uk), Research Fellow, Tiago Fonseca, MSc (tf511@york.ac.uk), Research Fellow, Christian Stock, MSc (Christian.stock@gmx.ge), Research Fellow, Steven Duffy, PgDip (sbd4@york.ac.uk), Information Specialist, Nerys Woolacott, PhD (nw11@york.ac.uk), Senior Research Fellow, Centre for Reviews and Dissemination, University of York, Heslington, York YO10 5DD, United Kingdom
CONFLICT OF INTEREST
D. Craig, C. McDaid, T. Fonseca, C. Stock, S. Duffy, and N. Woolacott have received funding for this work from the NIHR HTA Programme.