
Searching for and use of conference abstracts in health technology assessments: Policy and practice

Published online by Cambridge University Press:  09 August 2006

Yenal Dundar
Affiliation:
University of Liverpool
Susanna Dodd
Affiliation:
University of Liverpool
Paula Williamson
Affiliation:
University of Liverpool
Tom Walley
Affiliation:
University of Liverpool
Rumona Dickson
Affiliation:
University of Liverpool

Abstract

Objectives: This study examines current policy and practice regarding the identification and extent of use of data from conference abstracts in health technology assessment reviews (TARs).

Methods: The methods used were (i) survey of TAR groups to identify general policy and experience related to use of abstract data, and (ii) audit of TARs commissioned by the National Institute for Health and Clinical Excellence (NICE) and published between January 2000 and October 2004.

Results: Five of seven TAR groups reported a general policy that included searching for and including studies available as conference abstracts and presentations. A total of sixty-three TARs published for NICE were identified. Of these reports, thirty-eight identified at least one randomized controlled trial available as an abstract/presentation. Twenty-six (68 percent) of these thirty-eight TARs included studies available as abstracts.

Conclusions: There are variations in policy and practice across TAR groups regarding the searching for and inclusion of studies available as conference abstracts. There is a need for clarity and transparency for review teams regarding how abstract data are managed. If conference abstracts are to be included, reviewers need to allocate additional time for searching and managing data from these sources. Review teams should also be encouraged to state explicitly their search strategies for identifying conference abstracts, their methods for assessing these abstracts for inclusion and, where appropriate, how the data were used and their effect on the results.

Type
GENERAL ESSAYS
Copyright
© 2006 Cambridge University Press

The National Institute for Health and Clinical Excellence (NICE) undertakes appraisals of the clinical benefits and cost-effectiveness of new and established health technologies to produce national guidance with recommendations for their appropriate use in England and Wales. The guidance is based on appraisal of such technologies involving several sources, including a technology assessment review (TAR). The type of evidence used in TARs is determined pragmatically by the quantity and quality of evidence available. Evidence from various sources, including published and unpublished clinical trials and trials published only in abstract form, may be relevant to the appraisal considerations (9;10).

There is debate as to whether data from unpublished studies available only as conference abstracts and presentations should be included in high-quality systematic reviews (3). Accepted gold standard data sources historically have required that the reviewer be able to judge the quality of the research process and extract data from the final analysis of the results. Within this standard, evidence from conference abstracts, presentations, or interim reports of research studies traditionally has not been accepted for inclusion in reviews.

However, national institutions responsible for providing recommendations are required to make decisions early, before the integration of these technologies into clinical practice and possibly before full publication of results from clinical trials. The TAR teams, therefore, may rely on evidence from studies available only in conference abstracts or presentations in their decision process.

The evaluation of rapidly evolving health technologies, where evidence is published quickly to inform policy decisions, is a challenge for those conducting TARs. It is argued that inclusion of unpublished data from conference abstracts and presentations could assist in the generation of a more comprehensive data set (8). It could also reduce the risk of publication bias, whereby an entire study is either published or not depending on the significance of its results, which has been recognized as a potential threat to the validity of any meta-analysis (4;6;14). However, conference abstracts and presentations are poorly indexed, or not indexed at all, in the standard bibliographic databases typically searched in systematic reviews. Extended search strategies, therefore, are required to identify such sources (e.g., handsearching of journal supplements, meeting abstract books, and conference sites) (11;12). These strategies may be time-consuming and difficult to design and may increase the resources required to complete a TAR (15).

Inclusion of studies available as conference abstracts often creates challenges for reviewers, particularly in the areas of quality assessment and data extraction. The quality of reporting in such sources may be inadequate, as they do not always contain the same methodological detail as a full-length article. Data reported in abstracts or presentations may be incomplete, as they may report only interim analyses, results of short-term follow-up, or selected outcome data. There is also evidence that inconsistencies in results, as well as in the reporting of primary outcome measures, may occur between conference abstracts/presentations and subsequent full reports (1;2;7;16;17). The aim of this study was to examine current policy and practice regarding the identification and extent of use of data from conference abstracts and presentations in TARs.

METHODS

Evidence for this research was obtained from a survey of TAR groups and an audit of TARs published between January 2000 and October 2004. The term “abstract” in this study refers to conference abstracts and presentations (oral or poster) given at conferences, meetings, workshops, and symposia.

In August 2004, we surveyed all seven TAR groups in the United Kingdom through the Technology Assessment Services Collaboration (InterTASC) regarding their practices of identification, inclusion, and assessment of data from conference abstracts in TARs. The audit included all the reviews commissioned by the HTA program on behalf of NICE and published between January 2000 and October 2004. Reports were obtained from the National Coordinating Centre for Health Technology Assessment (NCCHTA) Web site. Only data involving the clinical effectiveness component of the review were considered. Individual TAR data relating to the types of interventions evaluated and the identification, inclusion, quality assessment, and analysis of randomized controlled trial (RCT) data from conference abstracts were extracted using pretested data extraction forms.

Search strategies were defined as explicit if a decision to search for conference abstracts to inform TARs (by handsearching journal supplements or searching for conference sites) was clearly stated in the review methods and/or reported separately in the search strategy. Search strategies were described as not explicit if intention to search for abstracts was not clearly stated in the methods but the search strategy included a search for abstracts indexed by electronic databases.

RESULTS

All seven TAR groups completed and returned the survey.

Identification of Abstracts

Five of seven TAR groups reported a general policy regarding searching for abstracts. Studies available only as abstracts were identified through both general and explicit search strategies in four groups and through general searches alone in one group. Comments from three groups identified problems related to inadequate indexing of abstracts, difficulties in finding appropriate sites to search for studies available only as abstracts, and the cost involved in obtaining such studies.

In the audit, forty-seven of sixty-three TARs (75 percent) included a search to identify abstracts. Seventeen TARs (27 percent) carried out an explicit search for trials published as abstracts. This search was generally achieved by searching conference Web sites or those of professional societies, or hand searching online or print copies of journals or supplements. Thirty-eight TARs (60 percent) searched electronic databases for abstracts as part of the general search strategy. Eight (13 percent) TARs used both general and explicit searches. The remaining sixteen TARs (25 percent) did not include a search strategy for abstracts in the review. A total of thirty-eight TARs (60 percent) identified at least one trial available in abstract/presentation form (i.e., available only as an abstract [twenty-two TARs] or as both abstracts and subsequent full publications [sixteen TARs]).

Inclusion of Abstracts

Five of seven groups reported that they had a policy for inclusion of studies available only as abstracts, four of which were contingent on the availability of data provided in the abstract. Three of these groups stated that they would exclude abstracts unless adequate information was provided regarding the trial; the fourth would always include abstracts if any data on study results were available. One group would refer to abstracts only as a guide to forthcoming research. Two groups stated that they had no policy, although one of these would include abstracts if other available evidence was limited. All groups responded that, where abstracts were included in the review, the same inclusion criteria would be applied to both abstracts and full publications.

If relevant outcome data were reported only in abstracts, most TAR teams (five of seven) would extract and use these data. If data were reported in both published and abstract form, most groups (five of seven) used the data only from full publications. This finding is consistent with the results obtained from the audit. Of the thirty-eight TARs that identified at least one trial in abstract form only, twenty-six (68 percent) included trials available as abstracts.

Quality Assessment

Five groups responded that they would carry out methodological quality assessment of studies obtainable only as abstracts using the same assessment tools as for full publications. In the audit, of the twenty-six TARs that included RCTs in abstract/presentation form, twenty (77 percent) carried out an assessment of the methodological quality of such abstracts. In sixteen TARs, where both abstracts and subsequent full publications were available, full reports of these studies (published or unpublished) were used for quality assessment.

Data Extraction

One group reported that they would not normally extract data from abstracts unless no other evidence was available, and one group would only extract data if there was sufficient information to assess the methodological quality of the trial. All other groups stated that data from abstracts were managed in the same way as full publications.

Impact Assessment

Two groups reported that they would assess the effect of including data from abstracts that differed from the subsequent full publications, or would discuss the effect of including abstracts, but neither specified how they would do this.

DISCUSSION

Several issues identified in this study involving the identification and use of conference abstracts are particularly challenging for review teams (see Table 1). Responses to the survey and results of the audit indicate that approaches adopted by TAR groups regarding searching for and inclusion of abstracts in reviews vary considerably, both across and within groups. Most TAR groups appear to have a policy concerning inclusion of studies published as abstracts, including (i) listing abstracts in an appendix but excluding them from meta-analysis (MA); (ii) including abstracts in MA; and (iii) including abstracts in the review, depending on the availability of data from fully reported RCTs. However, all TAR groups reported difficulties related to inclusion of data available only from abstracts. These difficulties included the inability to carry out a methodological quality assessment of the study due to insufficient data and limited reporting of outcome data.

In the audit, conference abstracts were identified in a substantial number of TARs (approximately two thirds). Development of extensive search strategies to identify abstracts requires additional time and resources and may not be achievable easily in a strict, predefined, and limited time period. Furthermore, obtaining these sources can be costly, especially if found in obscure journals. Currently, there are no specific search strategies available for identification of studies available as abstracts.

As shown in the survey and audit, reviewers apply the same quality assessment tools to conference abstracts as to full reports. However, conference abstracts often do not contain the same methodological detail as a full journal article; therefore, it is not always possible to assess study quality appropriately. Despite this limitation, it is rare for any source of evidence to be excluded purely because of a poor quality assessment. There is thus a possibility that bias could be introduced into the review by including these studies, which may in fact be of poor methodological quality (5).

There is also the issue of publication bias, as unpublished abstracts have been shown to be more likely to have negative or inconclusive effects compared with published trials in some reviews (4;13). If very few published trials are identified, exclusion of data from trials available only as abstracts could potentially present a misleading picture of the efficacy of an intervention. In addition, conference abstracts are important sources of information regarding planned or ongoing trials, as they may present information regarding the design of a study and initial findings, as well as giving an indication as to when data from such studies will be available.

Limitations of the Study

This study has looked only at searching for and inclusion of RCTs available as abstracts for the clinical effectiveness part of the review and has not considered other study designs identified as conference abstracts and included in TARs. The findings of this report, therefore, may not be generalizable to TARs that include data from conference abstracts of studies other than RCTs. Although data for this research were obtained exclusively from TARs associated with the NICE appraisal process, it is reasonable to believe that these results are also generalizable to the preparation of HTAs in general and, thus, may have broader implications for the general conduct of systematic reviews.

POLICY AND PRACTICE IMPLICATIONS

Comprehensive searching for trials available as conference abstracts is time consuming and may be of questionable value, particularly where there are published studies with sufficient data available. Review teams should take into account the time constraints and difficulties involved in locating and retrieving these sources and should carefully consider for each TAR whether exhaustive searching for abstracts is likely to provide data that can be integrated into the report. If reviewers decide to include abstracts, they should state explicitly their rationale for doing so in the methods section of the review (see Figure 1).

Figure 1. Decision process regarding searching for conference abstracts.

Conference abstracts tend to provide limited detail on study methodology and outcome reporting. Review teams, therefore, should increase their efforts to obtain further study details by contacting trialists when studies available as abstracts are being considered for inclusion in the review.

Research Recommendations

There is a need for research into development of search strategies specific to identification of studies available as conference abstracts in TARs. This approach would include, for example, guidance with regard to identification of relevant electronic databases and finding appropriate conference sites relevant to certain clinical areas.

CONCLUSIONS

There are variations in policy and practice across TAR groups regarding searching for and inclusion of studies available as conference abstracts. There is a need for clarity and transparency for review teams regarding how abstract data are managed. If conference abstracts are to be included, reviewers need to allocate additional time for searching and managing data from abstracts. Review teams should also be encouraged to state explicitly their search strategies for identifying conference abstracts, their methods for assessing these abstracts for inclusion, and, where appropriate, how the data were used and their effect on the results.

CONTACT INFORMATION

Yenal Dundar, Dr, Research Fellow, Liverpool Reviews and Implementation Group, Faculty of Medicine, The University of Liverpool, Ashton St. Sherrington Buildings, Liverpool L69 3GE, UK

Susanna Dodd, MSc, Research Associate, and Paula Williamson, Professor and Director, Centre for Medical Statistics and Health Evaluation, The University of Liverpool, Shelley Cottage, Brownlow Street, Liverpool L69 3GS, UK; Rumona Dickson, MSc, Director, and Tom Walley, MD, Professor of Clinical Pharmacology, Liverpool Reviews and Implementation Group, Faculty of Medicine, The University of Liverpool, Ashton St. Sherrington Buildings, Liverpool L69 3GE, UK

The evidence described in this article is based on research (project reference: 04/05/01) commissioned by the UK National Health Service National Co-ordinating Centre for Health Technology Assessment (NCCHTA) programme. The authors gratefully acknowledge the support and contributions of the colleagues involved in the larger health technology assessment project, J Critchley and A Haycox, as well as the experts who commented on drafts of the assessment report.

References

Bhandari M, Devereaux PJ, Guyatt GH, et al. 2002. An observational study of orthopaedic abstracts and subsequent full-text publications. J Bone Joint Surg Am. 84: 615-621.
Chokkalingam A, Scherer R, Dickersin K. 1998. Concordance of data between conference abstracts and full reports. Baltimore, MD: Cochrane Colloquium.
Cook DJ, Guyatt GH, Ryan G, et al. 1993. Should unpublished data be included in meta-analyses? Current convictions and controversies. JAMA. 269: 2749-2753.
Dickersin K. 1997. How important is publication bias? A synthesis of available data. AIDS Educ Prev. 9: 15-21.
Egger M, Juni P, Bartlett C, et al. 2003. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technol Assess. 7: 1-76.
Hopewell S. 2001. Time to publication for results of clinical trials. Cochrane Database Syst Rev.
Hopewell S, Clarke M, Askie L. 2004. Trials reported as abstracts and full publications: How do they compare? 12th Cochrane Colloquium, October 2-6. Program and abstract book. Ontario: Cochrane Colloquium; 77.
McAuley L, Pham B, Tugwell P, et al. 2000. Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? Lancet. 356: 1228-1231.
National Institute for Clinical Excellence. 2004. Guide to the methods of technology appraisal (reference N0515). Available at: http://www.nice.org.uk/pdf/TAP_Methods.pdf.
National Institute for Clinical Excellence. 2004. Guide to the technology appraisal process (reference N0514). Available at: http://www.nice.org.uk/pdf/TAP.pdf.
Royle P, Bain L, Waugh N. 2005. Systematic reviews of epidemiology in diabetes: Finding the evidence. BMC Med Res Methodol. 5: 2.
Royle P, Waugh N. 2003. Literature searching for clinical and cost-effectiveness studies used in health technology assessment reports carried out for the National Institute for Clinical Excellence appraisal system. Health Technol Assess. 7: 1-51.
Scherer RW, Langenberg P, von Elm E. 2005. Full publication of results initially presented in abstracts (Cochrane Review). Cochrane Database Syst Rev.
Song F, Eastwood AJ, Gilbody S, et al. 2000. Publication and related biases. Health Technol Assess. 4: 1-115.
The Cochrane Library. 2005. Planning the meta-analysis: Methods of identifying trials. Appendix 11A. Chichester, UK: John Wiley & Sons, Ltd; 218.
Tooher R, Middleton P, Griffin T, et al. 2004. How different are conference abstracts of surgical RCTs from the subsequent full publication? Ottawa: Cochrane Collaboration Colloquia; 57.
Weintraub WH. 1987. Are published manuscripts representative of the surgical meeting abstracts? An objective appraisal. J Pediatr Surg. 22: 11-13.
Table 1. Outline of the Pros and Cons of Searching for and Inclusion of Abstracts