The impact of economic evaluation studies on healthcare decision makers has been shown to be rather limited. However, decision makers do generally recognize that economic considerations must be taken into account when decisions regarding healthcare resource allocation are being made (Reference Ross18).
Hoffmann and Graf von der Schulenburg (Reference Hoffmann and Graf von der Schulenburg11) surveyed decision makers from nine European countries. A key finding was that health economics decision makers typically obtain information from several sources; making studies accessible through several channels (for example, scientific journals, bulletins, reports, and working papers) is therefore intrinsic to communicating their results to decision makers. In addition, because many economic studies are sponsored by the healthcare industry, many decision makers believe the results to be biased and see this lack of credibility as a barrier to the use of such information in the decision-making process.
Several surveys have been conducted in the United Kingdom (Reference Drummond, Cooke and Walley4;Reference Duthie, Trueman, Chancellor and Diez6;Reference Hoffmann, Stoykova and Nixon12). One (Reference Drummond, Cooke and Walley4) identified several barriers to the use of economic evaluations in decision making, including decision makers' concerns about the validity of economic studies. The study also found that articles in peer-reviewed clinical journals were the most influential source of economic evaluation results for healthcare decision makers. When asked what would encourage them to make more use of economic evaluations, respondents' priorities were to make studies more accessible and to have them validated by a trusted source.
The results from a study by Duthie et al. (Reference Duthie, Trueman, Chancellor and Diez6) indicated that the majority of UK National Health Service (NHS) decision makers either do not understand health economics outcome statements such as incremental cost-effectiveness ratios (ICERs), willingness-to-pay, and quality-adjusted life-years (QALYs), or they consider these outcomes to be irrelevant. Duthie et al. concluded that different individuals seek different outcomes, health economics studies should report actionable conclusions, and any cost savings must be applicable.
Hoffmann et al. (Reference Hoffmann, Stoykova and Nixon12) found that, although NHS decision makers believe that economic evaluations are useful in principle, in practice their usefulness may be limited as results of published studies do not always apply to the decision-makers' settings. The focus group also suggested that criticisms of economic evaluations should be more explicit and that more detail should be given on interventions. The research found that an improved layout with interactive interfaces and enhanced searching filters would also be beneficial. The findings of this research also indicate that an overall quality score might help decision makers to focus on the most important studies.
Bryan et al. (Reference Bryan, Williams and Mciver1) concluded that there are two barriers to the use of economic analyses in healthcare decision making: the accessibility of research evidence and its acceptability.
Databases such as the NHS Economic Evaluation Database (NHS EED) are helping to improve decision-makers' access to and understanding of economic analyses. However, some of the concerns raised by decision makers could be addressed by changes in the ways economic evaluations are accessed and presented. Several studies have indicated that databases such as NHS EED may need to be further developed and adapted to tailor abstract records more closely to users' needs (Reference Hoffmann and Graf von der Schulenburg11;Reference Nixon, Duffy and Armstrong14;Reference Nixon, Phipps, Glanville, Mugford and Drummond15).
The purpose of this study was to elicit user preferences for several possible formats for presenting economic evaluation information.
METHODS
The aim of the survey was to elicit respondents' preferences for the presentation of economic evaluation information. Four presentational formats were incorporated into the survey in incremental order: a summary score, a sixty-word summary, a short abstract, and a long abstract, the format currently used on NHS EED, which contains structured abstracts of economic evaluations (2). (Examples of the summary and the short and long abstracts for a study can be found on the Journal's Web site, which can be viewed online at http://www.journals.cambridge.org/jid_thc.)
To identify an appropriate scoring system, a literature review was conducted. A search of published studies from 1996 onward was performed to identify, and subsequently appraise, studies that had developed a scoring system for economic evaluations. Literature databases and key economic textbooks were searched (Reference Drummond, Sculpher, Torrance and O'Brien5;Reference Fox-Rushby and Cairns8;13;16;17;20). Reference searches and hand searching were also carried out. After removing duplicates, 140 unique papers were retrieved from the search.
For those papers eligible for inclusion in the literature review, data relating to the methodological approach taken in constructing the scoring system, including whether or not the system was validated, were extracted by the reviewer (S.T.). Table 1 outlines some of the methodological points assessed by the reviewer. Where multiple papers referred to the same scoring system, data were extracted and reported once only.
The quality of the scoring systems was assessed and checked by the reviewer. Quality was assessed through the criteria addressed by the scoring system, the methodological approach, the weightings of different criteria in the scoring system (i.e., whether more influential criteria had been assigned weights to reflect this), whether or not the system had been validated, whether or not it had been demonstrated in practice, and any limitations inherent in the scoring system. Such limitations included assigning ratings to studies without methodological explanation and failing to calculate an overall quality score. Systems that scored any interventions or concepts other than economic evaluations were excluded; therefore, the criteria outlined in Table 1 do not take factors relating to costs and effects into account. Four papers were judged suitable for further review (Reference Chiou, Hay and Wallace3;Reference Gerard, Seymour and Smoker9;Reference Gonzalez-Perez10;Reference Wallace, Weingarten and Chiou19). One of these reports (Reference Gonzalez-Perez10) presented three scoring systems; therefore, in total, six different scoring systems were reviewed.
A checklist of points covering the content of each scoring system, its derivation methods, and its validity was developed by the researcher to aid in determining the quality of each scoring system. This was achieved by taking points from similar checklists and incorporating criteria relevant to economic evaluations (Reference Fink7). The resulting checklist is presented in Table 1.
The Chiou et al. scoring system (Reference Chiou, Hay and Wallace3) (presented in Table 2) rated highest based on the points highlighted in the literature review checklist. This scoring system was, therefore, selected to score the economic evaluations used in this research project.
Note. Taken from Chiou et al. Development and validation of a grading system for the quality of cost-effectiveness studies. Medical Care 2003;41(1):32-44.
The score was not descriptive; it simply took into account several items relating to the study design, the inclusion of costs and effects, the analysis of costs and effects, the limitations of the study, and other factors to create a numerical representation of the quality of the paper. The second presentational format, the summary, was restricted to a length of 60 (±5) words. Each summary described the study objective, the authors' conclusions, and whether or not the conclusions were reliable.
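To make the mechanics of such a checklist-based score concrete, the sketch below (in Python) shows one way a weighted yes/no checklist could be turned into a single numerical quality score. It is a minimal illustration only: the item names and weights are hypothetical placeholders and do not reproduce the published Chiou et al. instrument.

    # A minimal sketch, assuming a weighted yes/no checklist summed to a total score.
    # The item names and weights below are illustrative placeholders, not the
    # published Chiou et al. grading system.
    ILLUSTRATIVE_ITEMS = {
        "study_design_and_perspective_stated": 10,
        "relevant_costs_and_effects_included": 25,
        "analysis_of_costs_and_effects_appropriate": 25,
        "limitations_acknowledged": 20,
        "other_reporting_criteria_met": 20,
    }

    def quality_score(responses):
        """Sum the weights of items judged 'yes'; a higher total indicates higher quality."""
        return sum(weight for item, weight in ILLUSTRATIVE_ITEMS.items()
                   if responses.get(item, False))

    # Example: a study meeting every criterion except the acknowledgement of limitations.
    example = {
        "study_design_and_perspective_stated": True,
        "relevant_costs_and_effects_included": True,
        "analysis_of_costs_and_effects_appropriate": True,
        "limitations_acknowledged": False,
        "other_reporting_criteria_met": True,
    }
    print(quality_score(example))  # prints 80 (out of a possible 100)

Under this kind of scheme, a single number summarizes how many of the weighted quality criteria a published study satisfies, which is what allows the score to act as a rapid screening tool.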
The short abstract was a more compact version of the full abstract currently available on NHS EED. It was designed to provide enough detail on the original paper to help users decide if published studies are relevant and of sufficient quality to be of use in their decision-making processes. The long abstract was the abstract that is currently available on NHS EED and provided a significantly more detailed description of the methodology used in the published studies, particularly regarding the source and reporting of the clinical data.
SURVEY
The survey aimed to elicit the preferences of existing NHS EED users. The sample comprised individuals who were high users of NHS EED, or who had responsibilities that would cause them to use the database. Groups invited to participate were (i) registered users of NHS EED (9.5 percent), (ii) those who had requested priority abstracts from the database (50 percent), (iii) members of Lis-Medical (a moderated discussion group for health librarians and information workers) (1 percent), (iv) the Cochrane Library Users Group (CLUG) (1.5 percent), (v) the Health Technology Assessment (HTA) Information Professionals and the International Network of Agencies for Health Technology Assessment (INAHTA) members (3.5 percent), and (vi) economists who were members of committees of the National Institute for Health and Clinical Excellence (NICE) in the United Kingdom (22.5 percent). In assembling the sample in this way, we were conscious that we were likely to obtain an informed view, rather than an average view, of the usefulness of different presentational formats.
During discussions relating to the content of the survey, it was decided that, ideally, the questionnaire should focus on more than one existing economic evaluation to avoid the results being biased by factors specific to the chosen study. It was, therefore, agreed that the sample should be randomly split so that each respondent received one of two possible economic evaluations. Two evaluations were selected to allow both an economic evaluation conducted alongside a clinical trial (type A) and an economic evaluation conducted using modeling techniques (type B) to be considered.
In the survey, respondents were asked about their professional background and their main reasons for using economic evaluations. This allowed questions to be included that focused on the different purposes for which economic evaluations may be used. The four formats were then presented, progressing from the least to the most detailed format to capture the incremental benefit of the extra information. The survey was designed to focus on the elements of the presentational formats of interest, such as the degree to which each format was useful in determining the quality, reliability, and relevance of a study. Taking an incremental approach was important, as it permitted the added benefit of each format over the next least detailed format to be determined. For example, this approach allowed the researchers to gauge the benefit of the additional information provided in the long abstract compared with the short abstract, which would in turn allow the presentational formats to be amended to create the most beneficial combination of information.
RESULTS
We aimed to access key users of economic evaluations by asking individuals from a variety of sources to volunteer to participate in the survey. In total, approximately 2,400 individuals were accessible through the lists and databases described earlier. The greatest proportion of volunteers came from sources that were more focused on the use of economic evaluations. For example, 50 percent of individuals accessed through the priority abstract request database agreed to participate in the survey. Furthermore, 22.5 percent of health economics decision makers and 9.5 percent of NHS EED subscribers we asked to volunteer agreed to take part. Lower proportions of the remaining groups chose to answer the survey, possibly because the individuals accessed through these means were not the key economic evaluation users targeted by this research.
A total of eighty-four individuals volunteered to participate in the study. Each was sent a randomly allocated survey (either type A or type B based), and given a 2-week deadline to return the completed questionnaire. A reminder was sent to nonrespondents after 1 week. In total, fifty (60 percent) individuals returned completed surveys. A further six individuals (7 percent) replied saying that they could no longer participate in the survey. The remaining twenty-eight individuals (33 percent) gave no response.
Of the completed surveys, twenty-eight (56 percent) related to the type A study, and twenty-two (44 percent) to the type B study. Figure 1 illustrates the type of presentational formats selected by respondents when asked to select one only, that is, their first preference.
Survey respondents were also given the opportunity to state which of the formats (other than that chosen as their first preference) they would like to have available to them as additional sources of information. Figure 2 illustrates the combinations of formats that respondents thought would be of most benefit to them.
Some survey respondents provided narrative comments on why they thought each presentational format was beneficial, and whether any important factors were missing from the four formats. The comments suggested that decision makers have two different needs from economic evaluations. There is a need for a screening tool to decide whether the study is of potential interest, and then a need for a comprehensive description of detail for those studies that appear interesting. Figures 3 and 4 illustrate the incremental benefit gained from the format combinations referred to in the survey. The charts are presented to represent the two different needs of decision makers, as suggested by comments given in the survey responses.
As Figure 3 shows, 82 percent of respondents were content that the score gave them enough information as a screening tool. The score plus summary was found to be useful by 92 percent of respondents, indicating a 10 percentage point incremental benefit of the summary over the score alone.
For the comparison of more detailed information sources illustrated in Figure 4, the short abstract coupled with the score and summary is compared with the long abstract coupled with the same two screening mechanisms, to obtain the incremental benefit of the long abstract over the shortened version. From this, and the fact that the short abstract was the preferred first choice overall, we can infer that the long abstract yields no extra benefit, in terms of useful information, over the short abstract. Respondents' preferences for presentational format did not appear to vary with study type, A or B.
The analyses comparing uses of economic evaluations produced similar results for all the options presented in the survey, with the short abstract being the preferred format in most cases. The one exception was the health economics decision makers, who preferred the greater detail offered by the long abstract.
DISCUSSION
The short abstract was clearly preferred over the other three formats offered, with the long abstract also being the first preference for a substantial proportion of the remaining sample. However, once given the opportunity to express preferences for combinations of the formats, the majority of respondents chose to include the score and/or the summary.
The reasoning behind this, judging from the comments made on the surveys, seems to be that, although the score and summary are both useful as screening tools or as instruments for comparing studies, they do not, in themselves, offer the decision maker enough information on the content of the study for the purpose in hand. This finding suggests that respondents have two different needs from the information presented to them. In the first instance, the majority appear to require a format, such as the score or summary, which permits studies to be rapidly scanned and compared. They then require more detailed information, such as that presented in the short and long abstracts, on those studies that they consider to be relevant. However, it should be noted that the results may have been influenced by the presentational format of the survey itself: presenting the score and summary on more occasions than the more detailed formats may have encouraged familiarity with the shorter formats.
Individuals seemed to gain no additional benefit, in terms of useful information, from the long abstract over the short one (as illustrated in Figure 4), and, therefore, the majority opted for the short abstract as it is easier to read and digest. Once an individual had access to one or more screening tools plus the short abstract, the requirement for the longer abstract seemed, for most individuals, to disappear.
One possible reason for the low number of respondents opting for the long abstract may be the incremental approach taken by the survey, which meant that respondents were reading the same information repeatedly. The effects of this repetition may have been twofold: the long abstract may have seemed even longer, and individuals may have formed the impression that much of the information presented in the long abstract was simply being repeated. However, the incremental approach did pose the right question, namely, “what extra does the user gain from the long abstract, over and above the short abstract?” An alternative design, involving random allocation of subjects to receive either the short or long abstract, would have required a much larger sample size and would not have guaranteed the elimination of confounding factors.
Health economics decision makers showed a preference for the long abstract, yet in all other groups the short abstract was the most preferred format. Of those health economics decision makers who participated in the survey, 85 percent used economic evaluations to determine the cost-effectiveness of interventions. This group commented that the short abstract lacked detail on the costs used in studies and would benefit from a more comprehensive analysis of the costs and benefits.
Further research could focus on face-to-face interviews with health economics decision makers in an attempt to determine what factors make them opt for the long over the shorter abstract. This research could also focus on ways of improving the short abstract to make it more useful to this group of respondents.
There are several limitations to this research. Fifty (60 percent) individuals returned the survey. Several factors may have contributed to the relatively low number of volunteers who actually participated: some may have believed, having seen the survey, that the research we were carrying out was not relevant to them, or that they would be unable to offer responses relevant to the factors the survey sought to address.
Using an opportunistic sample meant that we had no control over the sample composition. Also, recruiting volunteers may result in some degree of volunteer bias, in that those who chose to participate in the study may hold different opinions, or require different information from economic evaluations, than those who did not volunteer. This means that those responding were probably more interested in using economic evaluations than the population of decision makers at large. However, it is not necessarily the case that the use of an opportunistic sample had an impact on the choice of presentational format.
It is also possible that a learning effect was present due to the incremental approach adopted. For example, by the time the respondent reached section D, where they were presented with the long abstract for the first time, they had already seen the score in three previous sections and the summary in two. They were, therefore, not approaching each new section afresh as they had already seen some of the information before. However, the scale of this learning-effect bias is not clear from the survey results.
CONCLUSIONS
The choice of presentational format for communicating the methodology and results of economic evaluations is important to users. Overall, the findings from this study indicate that the combination of a summary or score, coupled with a short abstract, would be of most use to decision makers needing information on the quality and relevance of economic evaluations. The brief and structured format of the short abstract is popular with decision makers because it allows them to quickly find the information relevant to them without losing essential detail. In relation to previous research in this area, it is possible that the long abstract is too complex for the majority of decision makers and so the simpler, short abstract format is better suited to their needs.
Ultimately, it may be impossible to please everyone by presenting one integrated set of information. Presenting a combination of several formats may be the most beneficial way to deliver economic evaluation information to decision makers.
CONTACT INFORMATION
Stephanie J. Thurston, MSc (sthurston@pharmerit.com), Research Consultant, Pharmerit Ltd., Tower House, Fishergate, York, North Yorkshire, YO10 4UA, UK
Dawn Craig, MSc (dc19@york.ac.uk), Research Fellow in Health Economics, Paul Wilson, BA (hons) (pmw7@york.ac.uk), Research Fellow in Dissemination, Centre for Reviews and Dissemination, University of York, Heslington, York, YO10 3BQ, UK
Michael F. Drummond, DPhil (md18@york.ac.uk), Professor, Centre for Health Economics, The University of York, Alcuin A Block, Heslington, York, YO10 5DD, UK