
Assessing the impact of England's National Health Service R&D Health Technology Assessment program using the “payback” approach

Published online by Cambridge University Press:  06 January 2009

James Raftery
Affiliation:
University of Southampton
Stephen Hanney
Affiliation:
Brunel University
Colin Green
Affiliation:
University of Exeter
Martin Buxton
Affiliation:
Brunel University

Abstract

Objectives: This study assesses the impact of the English National Health Service (NHS) Health Technology Assessment (HTA) program using the “payback” framework.

Methods: A survey of lead investigators of all research projects funded by the HTA program 1993–2003 supplemented by more detailed case studies of sixteen projects.

Results: Of 204 eligible projects, replies were received from 133 (65 percent). The mean number of peer-reviewed publications per project was 2.9. Seventy-three percent of projects claimed to have had an impact on policy and 42 percent on behavior. Technology Assessment Reports for the National Institute for Health and Clinical Excellence (NICE) had fewer publications than average but greater impact on policy. Half of all projects went on to secure further funding. The case studies confirmed the survey findings and indicated factors associated with impact.

Conclusions: The HTA program performed relatively well in terms of “payback.” Facilitating factors included the program's emphasis on topics that matter to the NHS, its rigorous methods, and the existence of “policy customers” such as NICE.

Type
General Essays
Copyright
Copyright © Cambridge University Press 2009

Given the considerable investment in health-related research, assessing its impact could help increase accountability, identify ways to maximize impact, and justify funding (1;4). Questions about these issues are increasingly being asked in the United Kingdom and internationally.

A comprehensive review of studies of the impact of health research programs was linked to the work reported here (8). Several studies examined the payback from diverse research programs (as opposed to individual projects), generally indicating a higher level of impact than is often thought to exist. Examples covered some health technology assessment programs, such as that in Quebec (12;13), and other programs broader in scope, including from the United Kingdom (7), the United States (21), and Australia (20). Since that review was completed, further studies have revealed considerable levels of impact (14;17;19).

The review (8) confirmed that the “payback” framework pioneered by Buxton and Hanney (4) was the most commonly used approach to assessing the impact of health research. The framework consists of a multi-dimensional categorization of benefits (including the contribution to knowledge and the influence on policy, practice, and health gain) and a model of how best to assess these impacts (10). The present study describes the application of the payback framework to the largest program within NHS R&D, the Health Technology Assessment (HTA) program. The full report (8) describes the background to the program, including unique features such as the HTA monograph series, and includes the research protocol.

The aim of the project was to assess the “payback” from the HTA program. We report the results of a survey of 204 principal investigators (PIs) who had completed projects funded by the HTA program in its first 10 years (1993–2003), supplemented by more detailed case studies of sixteen research projects.

METHODS

The eligible population comprised researchers who had been funded by the HTA Program and had submitted a final draft report between the beginning of the Program in 1993 and 30 June 2003. The National Coordinating Centre for HTA (NCCHTA), which manages the HTA program, provided a full list of projects. Methodology Reviews were excluded, as these constituted a separate program from 2000. Projects that had been discontinued, or that had not required a publication, were also excluded, as these are considered elsewhere (the former under “failure” rate, the latter under feasibility studies) (8). Of the 258 projects potentially eligible, 38 were excluded as methodology, 10 as “discontinued,” and 6 more because no publication was required. Some projects that had been completed but whose reports had not yet been accepted by NCCHTA were included. These exclusions reduced the sampling frame to 204.

The survey was organized around the “payback” framework, which required data on publications, presentations, further linked research, and impact on policy and behavior (4). Policy impact was broadly defined, covering both national and local levels. A questionnaire previously used by Hanney et al. (11) was piloted and slightly amended. The survey was carried out in mid-2005. Questionnaires were mailed to all eligible researchers, with follow-up reminders by mail and email. Where researchers were reported to have moved, questionnaires were sent to their current addresses.

Three types of project were distinguished: (a) primary research, mainly randomized controlled trials; (b) secondary research, including systematic reviews, meta-analyses, and modeling of cost-effectiveness; and (c) Technology Assessment Reports (TARs) for NICE.

Sixteen case studies provided more detail on impact, on the factors associated with it, and on the best way to assess it. Nine primary studies, four secondary studies, and three NICE TARs were selected as case studies by stratified random selection (8) from the 204 projects, the first time, to our knowledge, that case studies have been randomly selected.

The case studies consisted of interviews with principal investigators, analysis of documents referred to by the principal investigators, analysis of key citations to the main papers, and review of studies of the impact of NICE. The case studies were written up using all the data available, organized according to the stages of the payback framework.

RESULTS

A total of 133 replies (65 percent) were received, with a slightly higher response rate of 74 percent for the NICE TARs. Analysis of NCCHTA routine data showed that nonresponders tended to have fewer peer-reviewed publications, fewer Web “hits” (NCCHTA maintains data on the number of internet connections made to each published report), and to be from the earlier years of the program (8).

Publications

The total number of publications was 574, with peer-reviewed journal articles, at 263, constituting 45 percent of the total. The only other large group was published presentations, at 240; the most significant of the remaining publications were eight editorials, four books, and two book chapters.

The mean number of peer-reviewed publications was 2.93 per project, including the HTA Program monographs. Higher ratios applied to primary research (3.82) and secondary research (3.36) than to TARs (1.81). Excluding the HTA Program monographs would reduce these values by close to one, because almost all projects led to a monograph.

A mean of 5.2 presentations were made per project. Most (55 percent) were to academic audiences, followed by those to practitioners, with relatively few to service users.

Almost half of the projects (46 percent) went on to receive further funding. This was more likely for primary and secondary research than for the technology assessments for NICE.

Respondents were asked to indicate whether, in their view, their project had had an impact on policy or behavior. Approximately three quarters of the respondents claimed that their project had had an impact on policy and just over half on behavior (Table 1). Similar figures applied to expected future impact: slightly lower than past impact for policy and slightly higher for behavior. When past and future impacts were combined (excluding double counting), 85 percent of projects claimed an impact on policy and 64 percent on behavior. The totals were higher for NICE TARs, with 96 percent claiming an impact on policy, as might be expected given their role.

Table 1. Opinion of Lead Researchers about Existing and Potential Impact on Policy and Behavior

a Combined indicates the number claiming an impact “already” plus the number, with no entry under “already,” claiming a future impact.

The timeliness and quality of the research, and liaison with stakeholders, were factors which respondents linked to the impact of their work. Some referred to the importance of having a clear policy “customer” such as NICE or the National Screening Committee. Some who had led systematic reviews emphasized the importance of their study in identifying the need for further research such as randomized controlled trials.

The most common reason for lack of impact was timing. Some respondents thought it was too soon for their report to have had an impact, and some made critical comments about the slowness of aspects of the HTA process. Two respondents referred to difficulties arising from their findings being contrary to current government policy. Two others referred to the problems of negative findings. The extent to which these perceptions are well founded has not been addressed.

Case Studies

The case studies indicated the range of policies that have been informed (e.g., NICE guidance and guidelines; decisions by the National Screening Committee; National Service Frameworks; guidelines from the Scottish Intercollegiate Guidelines Network and many other national and international bodies).

Eleven of the sixteen case studies claimed some impact, sometimes substantial, on policy at the level of a national professional body or policy-making body. Some of the impact was international, including impact on guidelines in the United States for the treatment of stroke (6) and of dyspepsia (2). (See Box 1.)

Box 1. Case Study 1: A randomized controlled comparison of alternative strategies in stroke care. (HTA study 93/03/26)

Stroke is the single most expensive disorder managed in general hospitals, with a burden likely to increase. Debates about how it should best be managed in hospitals led the HTA to invite tenders to compare different approaches.

The study received £0.5 million from the HTA Program to conduct a prospective, single-blind, randomized controlled trial. Between October 1995 and March 1998, patients were recruited from a community-based stroke register; those with severe stroke were excluded. The study had three arms: the stroke unit, providing 24-hour care from a specialist multidisciplinary team with clear guidelines for acute care, prevention of complications, rehabilitation, and secondary prevention; the stroke team, involving management on general wards with specialist team support to provide stroke assessments; and domiciliary care, consisting of management at home under the supervision of a GP and a stroke specialist, with support from the specialist team and community services for a maximum of 3 months.

In their HTA monograph, the authors concluded that “[m]anagement of stroke patients on general medical wards, even with specialist team support, cannot be recommended because of the high mortality and dependence rate . . . a role for specialist domiciliary services for acute stroke was not supported . . . the stroke unit is a more cost-effective intervention than either the stroke team or home care” (16).

The quality of the study and its importance are indicated by publications in several major journals. Two papers were published in the very high-impact Lancet, including the main clinical paper (15), which has been cited over 60 times. Two other papers were published in Stroke, a major specialist journal, including the cost-effectiveness paper, whose importance was highlighted in an accompanying editorial.

The papers are cited in several systematic reviews, including some Cochrane reviews, especially that by the Stroke Unit Trialists' Collaboration on organised inpatient (stroke unit) care for stroke. In that review, it was one of only five studies for which outcome data were available for a comparison of different forms of organized stroke unit care, and it was given the top grade for its methods.

The study seems to have had a considerable impact on policy at various levels. The National Clinical Guidelines for Stroke from the Royal College of Physicians cite both the 2000 paper by Kalra et al. (15) directly and the Cochrane review, again noting the strength of evidence and stressing that the recommendation that patients be admitted under the care of a specialist team for their acute care and rehabilitation should be the highest priority. Guidelines in several countries also cite papers from this study, including one from SIGN. The Stroke Council of the American Heart Association recently endorsed the guidelines from the Veterans Affairs/Department of Defense, which cited both Lancet articles as important evidence on the organization of stroke care (6).

The study showed various gains, especially reduced mortality, from the provision of care in specialist stroke units. It is therefore reasonable to suggest that the widespread adoption of stroke units has been followed by a health gain. There will also have been reduced morbidity and increased patient satisfaction from the move away from care on general wards and the increased provision of specialist units. The difficulty comes in relation to the counterfactual: how far would these changes have come about without the study? Policy and practice were probably moving in the direction indicated by the study's findings, but it provided high-quality evidence that seems to have been influential in promoting the changes.

Even with such clear examples of impact, it was not possible to specify the counterfactual. Some of the changes in policy and practice might have come about because of pressure from other sources. In some instances, however, the evidence produced by a study was the main reference given in support of a policy (8); for example, a paper from the HTA-funded trial of treatments for depression in general practice (22) was cited as the sole evidence to support the statement in the National Service Framework for Older People that counseling in primary care may also be effective for depression (5).

DISCUSSION

One potential bias in this study arises from its having been commissioned by the HTA program and carried out by a team, some of whose members have previous or ongoing funding from that program. This danger, in addition to being fully acknowledged, was mitigated by supervision from an independent Advisory Group (8). Neither the HTA Program nor NCCHTA had any influence on the conduct of the research or its interpretation. Another possible bias could arise from the reliance on self-reporting by the lead researchers, who might exaggerate the importance of their study. The case studies used documentary analysis to check the claims made in both the relevant surveys and interviews. The survey could also be biased by its response rate, with around one third not responding. However, given the 1993–2003 time frame, loss (death, retirement, emigration) of some lead investigators was inevitable. Nonresponders tended to have had less successful projects (fewer publications or web hits) and to be from the earlier years of the program (8). Similar response rates have applied to comparable surveys (8).

There are genuine concerns about how far research can have an impact on health policy (3), but the review (8) suggests that higher levels of research impact on policy and practice can sometimes be identified than is often thought to be the case. In particular, impact was identified in studies that worked forward from research projects to analyze the impact made. Although tracing impact forward from particular studies, as in the “payback” approach, may exaggerate their effects, it does help indicate the existence of impact, which can then be explored in greater detail in case studies such as those described here.

The review also highlighted the potential importance of the context in which a program of research is conducted (12;13), especially the existence of “customers” or “receptor bodies” (9;18). The HTA program commissions work for such bodies, including NICE and the National Screening Committee, both of which issue or recommend policy.

Both the nature and context of the HTA Program are unique, limiting comparison between it and other research programs. No other HTA program provides comparable input to a NICE-type decision-making agency, and few other HTA programs include clinical trials. Other distinctive characteristics include the program's emphasis on funding scientific research on topics that matter to the NHS. Its success with publications may reflect the program's emphasis on rigorous science, which is often path-breaking in relation to the topics selected. The specified use of rigorous methods (systematic reviews, meta-analyses, randomized controlled clinical trials), particularly when coupled with rigorous peer review, largely ensured high-quality research.

The survey revealed dissatisfaction with some aspects of the program, particularly the length of time taken to agree funding and to publish the monographs. Some researchers considered that these delays reduced the impact of the research.

CONCLUSIONS

Overall, the HTA Program has had considerable impact as measured by the “payback” approach. The number of peer-reviewed publications per project compared well with other programs. Similarly, the impact on policy and behavior was considerable, particularly where clear policy “customers” existed.

CONTACT INFORMATION

James Raftery, PhD (), Professor of Health Technology Assessment, Director, Wessex Institute, School of Medicine, Southampton University, Mailpoint 728, Southampton S016 7PX, UK

Stephen Hanney, PhD (), Senior Research Fellow, Health Economics Research Group, Brunel University, Uxbridge, Middlesex UB8 3PH, UK

Colin Green, PhD (), Senior Lecturer, Peninsula Technology Assessment Group, University of Exeter, Noy Scott House, Barrack Road, Exeter EX2 5DW, UK

Martin Buxton, BA (), Professor of Health Economics, Director, Health Economics Research Group, Brunel University, Uxbridge, Middlesex UB8 3PH, UK

REFERENCES

1. Academy of Medical Sciences/MRC/Wellcome Trust. Medical research: Assessing the benefits to society. London: Academy of Medical Sciences; 2006.
2. American Gastroenterological Association. American Gastroenterological Association medical position statement: Evaluation of dyspepsia. Gastroenterology. 2005;129:1753-1755.
3. Black, N. Evidence based policy: Proceed with care. BMJ. 2001;323:275-279.
4. Buxton, M, Hanney, S. How can payback from health services research be assessed? J Health Serv Res Policy. 1996;1:35-43.
5. Department of Health. National service framework for older people. London: Department of Health; 2001. http://www.dh.gov.uk/assetRoot/04/07/12/83/04071283.pdf.
6. Duncan, PW, Zorowitz, R, Bates, B, et al. Management of adult stroke rehabilitation care: A clinical practice guideline. Stroke. 2005;36:e100-e143.
7. Ferguson, B, Kelly, P, Georgiou, A, et al. Assessing payback from NHS reactive research programmes. J Manag Med. 2000;14:25-36.
8. Hanney, S, Buxton, M, Green, C, Coulson, D, Raftery, J. An assessment of the impact of the NHS Health Technology Assessment Programme. Health Technol Assess. 2007;11:iii-iv, ix-xi, 1-180. http://www.ncchta.org/fullmono/mon1153.pdf.
9. Hanney, S, Gonzalez-Block, M, Buxton, M, Kogan, M. The utilisation of health research in policy-making: Concepts, examples and methods of assessment. Health Res Policy Syst. 2003;1:2.
10. Hanney, S, Grant, J, Wooding, S, Buxton, MJ. Proposed methods for reviewing the outcomes of research: The impact of funding by the UK's ‘Arthritis Research Campaign’. Health Res Policy Syst. 2004;2:4.
11. Hanney, S, Soper, B, Buxton, M. Evaluation of the NHS R&D Methods Programme. HERG Report No. 29. Uxbridge: Brunel University; 2003.
12. Jacob, R, Battista, R. Assessing technology assessment. Int J Technol Assess Health Care. 1993;9:564-572.
13. Jacob, R, McGregor, M. Assessing the impact of health technology assessment. Int J Technol Assess Health Care. 1997;13:68-80.
14. Johnston, SC, Rootenberg, JD, Katrak, S, Smith, WS, Elkins, JS. Effects of a US National Institutes of Health programme of clinical trials on public health and costs. Lancet. 2006;367:1319-1327.
15. Kalra, L, Evans, A, Perez, I, et al. Alternative strategies for stroke care: A prospective randomised controlled trial. Lancet. 2000;356:894-899.
16. Kalra, L, Evans, A, Perez, I, et al. A randomised controlled comparison of alternative strategies for stroke care. Health Technol Assess. 2005;9:iii-iv, 1-79.
17. Kingwell, BA, Anderson, GP, Duckett, SJ, et al. Evaluation of NHMRC funded research completed in 1992, 1997 and 2003: Gains in knowledge, health and wealth. Med J Aust. 2006;184:282-286.
18. Kogan, M, Henkel, M, Hanney, S. Government and research: Thirty years of evolution. 2nd ed. Dordrecht: Springer; 2003.
19. Kwan, P, Johnston, J, Fung, AY, et al. A systematic evaluation of payback of publicly funded health and health services research in Hong Kong. BMC Health Serv Res. 2007;7:121.
20. Shah, S, Ward, JE. Outcomes from NHMRC public health research project grants awarded in 1993. Aust N Z J Public Health. 2001;25:556-560.
21. Stryer, D, Tunis, S, Hubbard, H, Clancy, C. The outcomes of outcomes and effectiveness research: Impacts and lessons from the first decade. Health Serv Res. 2000;35 (pt 1):977-993.
22. Ward, E, King, M, Lloyd, M, et al. Randomised controlled trial of nondirective counselling, cognitive-behaviour therapy and usual general practitioner care for patients with depression. I: Clinical effectiveness. BMJ. 2000;321:1383-1388.