
When the decision is what to decide: Using evidence inventory reports to focus health technology assessments

Published online by Cambridge University Press:  30 March 2011

Matthew D. Mitchell
Affiliation:
University of Pennsylvania Health System
Kendal Williams
Affiliation:
University of Pennsylvania School of Medicine and University of Pennsylvania Health System
Gretchen Kuntz
Affiliation:
University of Pennsylvania
Craig A. Umscheid
Affiliation:
University of Pennsylvania School of Medicine and University of Pennsylvania Health System

Abstract

Objectives: Health systems frequently make decisions regarding the acquisition and use of new technologies. It is desirable to base these decisions on clinical evidence, but these technologies are often used for multiple indications, and evidence of effectiveness for one indication does not prove effectiveness for all. Here, we describe two examples of evidence inventory reports performed to identify how much and what type of published clinical evidence was available for a given technology, and the contexts in which those technologies were studied.

Methods: The evidence inventory reports included literature searches for systematic reviews and health technology assessment (HTA) reports, and systematic searches of the primary literature intended to count and categorize published clinical studies. The reports did not include analysis of the primary literature.

Results: The inventory reports were completed in 3 to 4 days each and were approximately ten pages in length, including references. Reports included tables listing the number of reported studies by specific indication for use, and whether or not there were randomized trials. Reports also summarized findings of existing systematic reviews and HTA reports, when available. Committees used the inventory reports to decide for which indications they wanted a full HTA report.

Conclusions: Evidence inventory reports are a form of rapid HTA that can give decision makers a timely understanding of the available evidence upon which they can base a decision. They can help HTA providers focus subsequent reports on topics that will have the most influence on healthcare decision making.

Type: METHODS

Copyright © Cambridge University Press 2011

Clinical departments and administrative committees at hospitals and health systems frequently must make decisions regarding the acquisition and use of new technologies, and new indications for use of existing technologies. It is desirable to base these decisions on clinical evidence, but often these technologies are used for multiple indications, and evidence of effectiveness for one indication does not necessarily prove effectiveness for all indications. Moreover, researchers often gather more evidence for some indications than for others, because some indications occur more frequently or their outcomes are easier to measure (6).

For this reason, defining the key questions and scope of a report has been recognized as a critical part of the health technology assessment (HTA) process. If questions are too narrow, less evidence will apply and conclusions will be weaker. If questions are too broad, HTA reports will take longer to complete and incorrect conclusions may be reached with regard to critical patient subgroups.

HTA centers typically approach this problem in a conservative manner, preparing comprehensive reports on a technology and looking for possible distinctions in the safety and effectiveness of an intervention between patient groups and among different indications for the intervention. This ensures high-quality HTA reports, but is time-consuming. Six months is an oft-quoted time frame for completion of a full HTA report (4;5). However, HTA end-users want information more rapidly, and will make decisions about use of new technologies without evidence reviews if those reviews cannot be completed by the time the decision must be made.

In this article, we present two examples of short-form evidence inventory reports that our center completed to advise decision makers on the quantity and nature of published evidence, and assist them with determining whether a more complete evidence report on a topic would yield sufficient evidence on which to base a technology utilization decision.

THE SETTING

The University of Pennsylvania Health System (UPHS) is an academic medical center comprising three acute care hospitals, a rehabilitation and long-term acute care facility, a home care and hospice service, three suburban primary care and specialty centers, and outpatient practices both on the hospital campuses and in the surrounding communities. Governance of the system, including technology acquisition and drug formulary decision making, is done on both corporate (system-wide) and hospital levels.

To improve the quality, safety, and cost-effectiveness of patient care, the system's chief medical officer established the UPHS Center for Evidence-based Practice (CEP) in 2006. CEP functions as an in-house consulting service gathering and summarizing scientific evidence to support decision making at all levels of the health system: from physicians and nurses in inpatient and outpatient units to top health system administrators.

Case 1: Robot-Assisted Surgery

Faced with a growing number of requests from surgeons at our three hospitals for robot-assisted surgery privileges, the health system's supply chain committee asked CEP for an evidence report on the topic. Robotic surgery systems use smaller instruments than a surgeon can manipulate by hand, and can reach some parts of the body through smaller incisions. Proponents say the technology reduces surgical complications and speeds patient recovery. However, the cost of the equipment is high and the clinical benefits are uncertain. The supply chain committee therefore wanted to focus growth of robotic surgery at UPHS on indications where the evidence shows it improves patient outcomes or makes surgical procedures more efficient.

The relative advantages and disadvantages of robotic surgery depend on considerations such as the proximity of the surgical target to sensitive tissues and the relative safety and efficacy of conventional surgery. Because these differ by surgical site, we advised the committee to avoid drawing general conclusions about the benefits and risks of robotic surgery from studies of individual surgical procedures. Instead, we recommended that the committee select a small set of indications it was most interested in, so CEP could perform systematic literature reviews on those specific topics.

To help the committee identify these topics, we prepared an evidence inventory report on robotic surgery (8). The purpose of this report was to identify which surgical robot indications had a large enough evidence base to make a full review of the literature worthwhile. To expedite completion of the report, we limited our literature searches to secondary sources only: the inclusion criteria were article type (systematic reviews and technology assessment reports) and subject keyword (surgical robots). Searches were done and reviewed by a single HTA analyst. We did not do a systematic search for primary clinical studies of this technology, because it would have taken weeks to complete and there was no guarantee we would find enough trials to support an evidence-based decision on use of the technology.

To find technology assessment reports, we searched the Health Technology Assessment (HTA) database. This database catalogs reports published by HTA agencies around the world. Because this was a device-oriented topic, we also searched the publication database of the ECRI Institute for relevant technology assessment reports.

We searched multiple databases for systematic reviews on surgical robots. The complete list of databases is shown in Table 1, along with the number of hits from our searches and the number of those articles referenced in our finished report. The EMBASE and Medline searches used filters specific to systematic reviews and meta-analyses, to screen out narrative reviews and tutorial articles that would lack information on the quality and quantity of evidence. This narrowed the results sufficiently that the searches could be completed and the search results reviewed in a day.

Table 1. Literature Search Results for Technology Assessments and Systematic Reviews on Robotic Surgery

A total of twenty-six articles and reports were found by our searches. We were able to obtain the full text of twenty-five of them. Once the reports were retrieved, we compiled tables to present the findings.

Three of the four HTA reports pertained to single indications for robotic surgery (two to prostate cancer, one to orthopedic surgery). The other, published by the Belgian Health Care Knowledge Centre (KCE) in 2009, summarized the evidence on a variety of robotic surgery indications (1). The largest evidence base was on prostatectomy: the KCE concluded that robotic surgery reduced blood loss, but other benefits of the technology were “harder to substantiate.” For most other indications, the evidence base included only uncontrolled case series, so the comparative effectiveness of robotic and conventional surgery could not be determined.

One systematic review of robotic surgery was found in the Cochrane Library (3). It was specific to laparoscopic cholecystectomy and was last updated in January 2009. Five randomized trials comparing robotic and conventional surgery were analyzed by the Cochrane reviewers. Sixteen systematic reviews were found in the Medline and EMBASE literature (Table 2), six of which were on robot-assisted prostatectomy.

Table 2. Robotic Surgery Reviews in the EMBASE and Medline Literature

The completed report was twelve pages long and included eleven tables and eighteen references. It took 4 days to complete, from the time it was decided to prepare an inventory report instead of a full evidence report (some of the initial searches had been completed and some background documents had already been retrieved at that point) to the time the first draft was ready for internal review. The results tables included in the evidence inventory are listed in Table 3.

Table 3. Results Tables Presented in the Robotic Surgery Overview

The report was circulated to members of the supply chain committee before their meeting. At the meeting, the CEP research analyst presented the results of the literature search, explained the different types of evidence and their strengths and weaknesses, and explained why evidence on one robotic surgery application could not be used to draw conclusions about other applications.

Seeing the lack of evidence on robotic surgery for the indications of greatest interest to the medical center, the committee decided not to commission any additional evidence reports on this topic. Preparing the inventory report saved our center from having to do a comprehensive search for robotic surgery trials. While we cannot know exactly how much time and effort this saved, other CEP evidence reports typically take 2 to 3 months to complete.

Case 2: Dexmedetomidine

Dexmedetomidine is a sedative medication that is less likely to cause respiratory depression than benzodiazepines and other sedatives. At our medical center, it was added to the formulary in 2007 for a limited number of indications, particularly awake craniotomy. Our center completed an evidence review on that subtopic for the formulary committee before that decision. The review found no randomized trials of dexmedetomidine for awake craniotomy, only nine uncontrolled case series. It concluded that the patients in those series successfully completed neurocognitive testing during the procedures and did not suffer respiratory depression.

In 2009, the formulary committee received another request from the anesthesia department to approve use of the drug beyond the initial set of indications. One request was for sedation of patients in critical care units. We completed a typical evidence review on this topic, with a systematic literature search that found three relevant RCTs measuring safety and efficacy outcomes and no relevant practice guidelines.

The other request to the formulary committee concerned use of dexmedetomidine in surgical procedures. The committee did not specify which procedures it wanted reviewed, though. Searching for and analyzing evidence from all of the clinical trials of dexmedetomidine would have taken more time than was available before the committee's scheduled discussion of this topic, so we used an evidence inventory report (7) to prioritize indications for review.

Because the inventory report would include all indications for use of the drug in surgery, our search strategy did not need to include any terms referring to indications. An initial Medline search on dexmedetomidine alone yielded 662 hits. Although adding search terms for key indications or for publication type would have reduced this number, the time saved by screening fewer hits would have been offset by the added time spent searching. We therefore screened titles and abstracts of the 662 hits, identifying 82 randomized trials or other comparative studies of dexmedetomidine. The initial EMBASE search yielded considerably more hits: 970. We narrowed that search with the EMBASE limit operator for controlled clinical trials and randomized trials, obtaining 181 hits, which yielded 122 relevant studies. Duplicate references to 57 articles included in both sets of search results were deleted, leaving 157 references in the database. Searches were done and reviewed by a single HTA analyst.
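The merge-and-deduplicate step described above can be sketched in a few lines of code. This is an illustrative sketch only: the record fields and matching key are hypothetical (the actual work was done in a reference manager, which would typically key on PMID or DOI when available).

```python
# Merge two sets of database search results and drop duplicates,
# keying on a normalized (first author, year, title) tuple.
# Field names here are hypothetical, for illustration only.

def norm_key(rec):
    """Normalize a record into a case- and whitespace-insensitive key."""
    return (rec["first_author"].strip().lower(),
            rec["year"],
            " ".join(rec["title"].lower().split()))

def merge_results(medline, embase):
    """Concatenate two result sets, keeping the first copy of each article."""
    seen = set()
    merged = []
    for rec in medline + embase:
        key = norm_key(rec)
        if key not in seen:
            seen.add(key)
            merged.append(rec)
    return merged

# Toy example: one article appears in both databases with minor formatting
# differences, so three raw records collapse to two unique references.
medline = [{"first_author": "Smith", "year": 2008,
            "title": "Dexmedetomidine in cardiac surgery"}]
embase = [{"first_author": "smith", "year": 2008,
           "title": "Dexmedetomidine in  cardiac surgery"},
          {"first_author": "Lee", "year": 2009,
           "title": "Sedation for awake craniotomy"}]
unique = merge_results(medline, embase)
assert len(unique) == 2
```

In practice, title-based matching of this kind is only a first pass; ambiguous near-duplicates still need a human reviewer, which is one reason a single analyst screened the merged set.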

We then compiled a table of those controlled trials, sorted by indication. The table included only references to the trials; it did not report their results. The number of articles of each type found is shown in Table 4. The rows of the table sum to more than the total number of references cited, because some articles reported on multiple indications. As the table shows, there is a substantial evidence base of randomized trials on dexmedetomidine for numerous applications.
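The tabulation just described can be expressed as a simple tally; this sketch (with hypothetical field names and made-up indications) shows why the per-indication counts can sum to more than the number of distinct articles.

```python
from collections import Counter

# Count studies per indication. An article tagged with several
# indications is counted once in each matching row, so the row
# totals can exceed the number of distinct articles.

def tally_by_indication(articles):
    counts = Counter()
    for art in articles:
        for indication in art["indications"]:
            counts[indication] += 1
    return counts

# Toy example: article 2 covers two indications, so 3 articles
# produce 4 row entries.
articles = [
    {"id": 1, "indications": ["cardiac surgery"]},
    {"id": 2, "indications": ["cardiac surgery", "pediatric surgery"]},
    {"id": 3, "indications": ["awake craniotomy"]},
]
counts = tally_by_indication(articles)
assert sum(counts.values()) == 4   # row total across the table
assert len(articles) == 3          # distinct articles cited
```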

Table 4. Controlled Studies of Dexmedetomidine in Surgical Procedures

In the table, we marked those studies published in languages other than English, so the formulary committee would know if obtaining all the evidence for a particular indication would take longer due to time needed to translate articles. The table also identified those studies relating to specific patient populations, such as patients with pulmonary hypertension. This allowed the committee to consider limiting dexmedetomidine use to certain patient groups if evidence was available to support such limits. We also were able to determine that evidence specific to the use of dexmedetomidine in high-risk patients was lacking.

Another table in the evidence inventory report identified published systematic reviews on dexmedetomidine for specific procedures. It excluded general reviews on the drug to focus on reviews that could be the basis for a follow-up evidence report specific to one or two indications. Only four such reviews were found: two on craniotomy and intracranial surgery, one on functional neurosurgery, and one on middle ear surgery.

The finished evidence inventory report was three pages long. We attached to it the entire bibliography of 224 references (12 pages). The overview itself was quite short because we did not attempt to abstract or analyze any of the clinical data. That also meant we were able to complete the report in 3 days.

The inventory report was designed to make it easy for the formulary committee and the requesting physicians to obtain and review the cited evidence for themselves, without requiring them to order it through our center. All articles cited in the report were entered into a RefWorks database (ProQuest, LLC; Bethesda, MD) and the report users were given access to the database so they could see the references and download full text articles if desired. To expedite this, we used the RefWorks ID numbers to identify articles in the evidence overview rather than numbering references in the order they were cited.

After we completed the inventory report and submitted it to the committee and the physicians, we invited them to suggest topics for a full evidence review. They agreed on intubation as an indication where the evidence might influence a formulary decision on dexmedetomidine, so we proceeded with that review. This full review, with deeper database searching and full analysis of clinical data, took approximately 5 weeks to complete.

Based on the evidence in the follow-up report, the formulary committee approved use of dexmedetomidine for intubation at our hospitals. It was not approved for continuous sedation of critical care patients or for other surgical indications. Approval for awake craniotomy and related procedures was reaffirmed by the committee.

DISCUSSION

By limiting the scope of a report to assessing the quantity and type of evidence on an HTA topic and not presenting or analyzing the evidence itself, we were able to complete these inventory reports in a matter of days rather than weeks or months. In the case of surgical robots, the overview was sufficient to permit the hospital committee to proceed with a decision immediately instead of waiting for a more complete report which would likely have concluded there was little evidence on which to base a decision. In the case of dexmedetomidine, the inventory report helped the committee decide on which applications to seek a full review of the evidence.

To help understand the difference between evidence inventories and evidence reviews, consider the PICO (10) framework for defining key questions. In an evidence inventory report, we leave the “patients” element unspecified and then assess how much evidence there is for the intervention in question, stratifying by patient group (e.g., hysterectomy, prostate surgery). The inventory process would work equally well assessing the quantity of evidence on different variations of a particular intervention for a particular patient group.
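One way to picture this distinction is as a PICO question with the “patients” slot left open. The small sketch below is purely illustrative (the class and field names are our own, not part of any cited framework): a full review fixes all four elements, while an inventory leaves patients unspecified and stratifies the retrieved studies afterward.

```python
from dataclasses import dataclass, field
from typing import Optional, List

# A PICO question whose "patients" element may be left open.
# patients=None models an evidence inventory: the question is posed
# for the intervention alone, and studies are stratified by patient
# group after retrieval rather than filtered up front.

@dataclass
class PicoQuestion:
    intervention: str
    comparator: Optional[str] = None
    outcomes: List[str] = field(default_factory=list)
    patients: Optional[str] = None  # None => inventory-style question

inventory_q = PicoQuestion(
    intervention="robot-assisted surgery",
    comparator="conventional surgery",
    outcomes=["complications", "recovery time"],
    patients=None,
)
assert inventory_q.patients is None
```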

In preparing these reports, we learned that a flexible approach to selecting sources helped expedite our work. Because we did not intend to search all possible evidence sources, we chose databases most likely to yield existing systematic reviews and technology assessment reports from which we could learn about the state of the evidence base. If those were not sufficient to ascertain how many clinical studies had been published, we proceeded to searches for primary literature. Medline and EMBASE searches relied on existing indexing terms even though they may not have captured every relevant article. In contrast, when we prepare a full evidence review, we test alternate search strategies to ensure articles are not missed.

Researching and writing the reports also gave the CEP staff a quick working knowledge of new topics and the controversies surrounding them. This made subsequent discussions with the committees more productive. At the meetings where we presented our preliminary findings, we could explain the strengths and weaknesses of the evidence base and predict what questions might not be answerable in an evidence-based manner. We could also explain why it would not be possible to conduct a comprehensive review of the evidence in a short period of time, illustrating the point by showing all the different indications that would have to be considered and how much evidence would have to be analyzed.

The committee members were satisfied with the quick turnaround time of the evidence inventory reports and have expressed interest in commissioning more reports of this type when they are presented with a technology with multiple indications. Furthermore, the committees gained a new understanding of evidence-based medicine and of the limitations on what evidence can tell us. In addition, because the inventory reports identified gaps in the evidence base, they encouraged one of our committees to seek out local evidence on a new technology.

Evidence inventories are different from the horizon scanning reports sometimes published by HTA organizations (2;9). Horizon scanning reports analyze limited primary data, while evidence inventories report primarily the quantity and type of evidence on hand. Horizon scans also take into consideration the regulatory and commercial status of a technology and the potential market for its use. Those factors are considered by the committees that use our evidence inventories, and may be the basis for their request for an inventory, but we do not include them in the reports themselves.

The Euroscan HTA prioritization process (11) takes the availability of evidence into account, but is based primarily on cost and diffusion factors. HTA providers surveyed by Douw and Vondeling (2) most frequently reported cost, potential benefits, and organizational consequences as the criteria used for prioritizing HTA topics; availability of evidence was not widely considered.

CONCLUSIONS

Evidence inventory reports do not provide the final word on emerging technologies, but can help HTA users decide whether to proceed with full reviews of evidence on a topic of interest or to make an interim decision without that evidence. The reports are brief and easy to understand, and they can be completed in a matter of days, fitting into the decision-makers’ busy schedules. The time and other resources invested in these reports can potentially pay off in time saved searching for evidence that does not exist or obtaining and analyzing evidence that is unlikely to affect clinical decision making. Both hospital-level and national HTA centers should consider developing their own inventory products to meet their clients’ particular needs.

CONTACT INFORMATION

Matthew D. Mitchell, PhD, Research Coordinator, Center for Evidence-based Practice, University of Pennsylvania Health System, 3535 Market Street, Suite 50, Philadelphia, Pennsylvania 19104

Kendal Williams, MD, MPH, Assistant Professor of Clinical Medicine, University of Pennsylvania School of Medicine, 39th and Market Streets, Philadelphia, Pennsylvania 19104; Co-director, Center for Evidence-based Practice, University of Pennsylvania Health System, 3535 Market Street, Suite 50, Philadelphia, Pennsylvania 19104

Gretchen Kuntz, MSW, MSLIS, Clinical Liaison Librarian, Biomedical Library, University of Pennsylvania, 3610 Hamilton Walk, Philadelphia, Pennsylvania 19104

Craig A. Umscheid, MD, MSCE, Assistant Professor of Medicine and Epidemiology, Senior Scholar, Center for Clinical Epidemiology and Biostatistics, Senior Fellow, Leonard Davis Institute of Health Economics, University of Pennsylvania School of Medicine, 3400 Spruce Street, Philadelphia, Pennsylvania 19104; Director, Center for Evidence-based Practice, University of Pennsylvania Health System, 3535 Market Street, Suite 50, Philadelphia, Pennsylvania 19104

CONFLICT OF INTEREST

The authors do not report any potential conflicts of interest.

REFERENCES

1. Camberlin C, Senn A, Leys M, De Laet C. Robot-assisted surgery: Health technology assessment. Report No. 104C. Brussels: Belgian Federal Health Care Knowledge Centre (KCE); 2009.
2. Douw K, Vondeling H. Selection of new health technologies for assessment aimed at informing decision making: A survey among horizon scanning systems. Int J Technol Assess Health Care. 2006;22:177-183.
3. Gurusamy KS, Samraj K, Fusai G, Davidson BR. Robot assistant for laparoscopic cholecystectomy. Cochrane Database Syst Rev. 2009;(1):CD006578.
4. Hailey D. A preliminary survey on the influence of rapid health technology assessments. Int J Technol Assess Health Care. 2009;25:415-418.
5. International Working Group for HTA Advancement, Neumann PJ, Drummond MF, Jonsson B, et al. Are key principles for improved health technology assessment supported and used by health technology assessment organizations? Int J Technol Assess Health Care. 2010;26:71-78.
6. Mitchell MD. Efficacy studies of low-field-strength MR imaging: Feast or famine. Radiology. 1996;200:284-285.
7. Mitchell M, Williams K. Annotated search: Clinical trials of dexmedetomidine in surgery and other invasive procedures. Philadelphia: University of Pennsylvania Health System Center for Evidence-based Practice; 2009.
8. Mitchell M, Williams K. Systematic reviews of robotic-assisted surgery. Philadelphia: University of Pennsylvania Health System Center for Evidence-based Practice; 2009.
9. O'Malley SP, Jordan E. Horizon scanning of new and emerging medical technology in Australia: Its relevance to Medical Services Advisory Committee health technology assessments and public funding. Int J Technol Assess Health Care. 2009;25:374-382.
10. Sackett DL. Evidence-based medicine: How to practice and teach EBM. 2nd ed. Edinburgh/New York: Churchill Livingstone; 2000.
11. Wild C, Simpson S, Douw K, Geiger-Gritsch S, Mathis S, Langer T. Information service on new and emerging health technologies: Identification and prioritization processes for a European Union-wide newsletter. Int J Technol Assess Health Care. 2009;25(Suppl S2):48-55.