
Impact of the National Institutes of Health Consensus Development Program on stimulating National Institutes of Health–funded research, 1998 to 2001

Published online by Cambridge University Press:  19 June 2007

Barry Portnoy, National Institutes of Health
Jennifer Miller, National Institutes of Health
Kathryn Brown-Huamani, Scientific Consulting Group
Emily DeVoto, Independent Consultant

Abstract

Objectives: The National Institutes of Health (NIH) Consensus Development Conference (CDC) was instituted to provide evidence-based guidance on controversial medical issues to researchers, health practitioners, and the public; however, the degree of impact this activity has on stimulating relevant research is unclear. This study examines the impact of CDC statements on the initiation of related NIH-funded research projects.

Methods: Six CDCs from 1998 to 2001 were examined. Research initiatives related to the conferences' topics were collected through two discrete methods: (i) the overall number of relevant pre- and postconference research activities was compiled using NIH's Information for Management, Planning, Analysis, and Coordination II (IMPAC II) and the Department of Health and Human Services' (DHHS) Computer Retrieval of Information on Scientific Projects (CRISP) grant application and award databases; (ii) for each CDC, the sponsoring institute's conference coordinator and other identified program directors were queried for their knowledge of new conference-specific research initiatives sponsored by their institute. The main outcome measure was the total number of requests for applications, requests for proposals, program announcements, broad agency announcements, notices, and funded investigator-initiated research project grants (R01s) for a given consensus topic in the 3 years before (baseline measure) and following (measure of impact) a CDC.

Results: As identified through NIH's IMPAC II and DHHS' CRISP grants and announcements databases, the total number of relevant postconference research initiatives increased for five of six CDCs when compared with baseline activity levels; research activities remained constant for the sixth. When inclusion criteria were restricted to institute-identified research initiatives, two of six CDC topics had overall increases in relevant research activity in the postconference period.

Conclusions: CDCs appear to have a positive impact on the stimulation of related NIH-funded research initiatives. Future outcomes evaluations using prospective data collection methods and more robust participation by sponsoring and cosponsoring institutes should strengthen the reliability of the association between new research initiatives on a given topic and their causal relationship to a given CDC.

Type: GENERAL ESSAYS
Copyright: © 2007 Cambridge University Press

Since 1977, Congress has charged the National Institutes of Health (NIH) with assisting policy makers, clinicians, and the public in evaluating the risks, benefits, and appropriate applications of emerging and/or unproven clinical practices and technologies. This mission arose from the observation that new medical discoveries were increasingly being used “without sufficient information about their health benefits, clinical risks, cost effectiveness, and societal side effects” (14). As a direct response, the NIH Director established an Office of Medical Applications of Research (OMAR), within the Office of the Director, to create and manage a Consensus Development Program (CDP). Since that time, more than 140 Consensus Development Conferences (CDCs) or State-of-the-Science Conferences (SOSs) have been convened (1). CDCs are undertaken where there is a strong body of high-quality evidence (randomized trials, well-designed observational studies) and it is reasonable to expect that the panel will be able to give clinical direction. SOSs are used in cases where the evidence base is weaker and the sponsoring NIH institute or center is seeking the panel's opinion on future research priorities.

As stated above, the primary mission of the CDP is to produce unbiased, evidence-based assessments of controversial medical issues to advance understanding for health professionals and the public. One critical product of this process is the systematic identification of research gaps in the body of evidence on a given subject; this information is consolidated into formal recommendations for future clinical research. As OMAR bears responsibility for the planning and production of CDCs, the Office seeks to ensure that the Program is effectively achieving its objectives. This study focuses on quantifying the degree of impact these published recommendations have on directing future research endeavors.

Several evaluations of the impact of CDCs on clinicians' practices have previously been performed (2–4;6;7;15). Similarly, multiple authors have examined and commented upon the CDP process as a whole (5;10–12;16). However, to our knowledge, there has been only one previous attempt to examine the direct impact of CDCs on research activities, and it was restricted to a single CDC (13). For this reason, OMAR decided to formally examine the effect a series of CDCs had on shaping new NIH-funded research initiatives. The results of this study will assist OMAR in understanding the strengths and weaknesses of the CDP in generating new research activities relevant to topics covered by consensus conferences.

METHODS

Inclusion/Exclusion Criteria

Six CDCs occurring between 1998 and 2001 (a 4-year span) were included in this analysis. The conferences and their primary institute sponsors were as follows:

(i) Diagnosis and Treatment of Attention Deficit Hyperactivity Disorder (November 16–18, 1998). Lead sponsors: National Institute on Drug Abuse (NIDA) and National Institute of Mental Health (NIMH).
(ii) Rehabilitation of Persons with Traumatic Brain Injury (October 26–28, 1998). Lead sponsor: National Institute of Child Health and Human Development (NICHD).
(iii) Phenylketonuria: Screening and Management (October 16–18, 2000). Lead sponsor: National Institute of Child Health and Human Development (NICHD).
(iv) Adjuvant Therapy for Breast Cancer (November 1–3, 2000). Lead sponsor: National Cancer Institute (NCI).
(v) Osteoporosis Prevention, Diagnosis, and Treatment (March 27–29, 2000). Lead sponsor: National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS).
(vi) Diagnosis and Management of Dental Caries Throughout Life (March 26–28, 2001). Lead sponsor: National Institute of Dental and Craniofacial Research (NIDCR).

No CDCs were held in 1999. SOSs were excluded from our study as there were too few during the time period under consideration to allow for appropriate comparison to the CDCs. Conferences were also selected to allow enough lag time between the release of the consensus statement and the development of requests for applications (RFAs), program announcements (PAs), or the receipt of unsolicited, investigator-initiated research. This lag time in most instances is approximately 2 years.

Definitions/Measures of Research Impact

This study limited its search of NIH research activities to RFAs, Requests for Proposals (RFPs), PAs, broad agency announcements (BAAs), notices, and investigator-initiated research project grants (R01s). This broad range of research activities was expected to serve as a good indicator of impact; we did not include other NIH-funded activities, such as training grants (K awards) and program projects (P01s), which were unlikely to be associated with specific research topics identified by CDCs.

An RFP is an initiative sponsored by an NIH institute that requests proposals for a contract to meet a specific agency need. An RFA is the official announcement of an opportunity to apply for a funded NIH grant with a specific program purpose. A PA is an announcement by an NIH institute requesting applications in a stated scientific area; however, generally, money has not specifically been set aside to pay for it (8). A BAA is an announcement of an NIH institute's general research interests that invites proposals and specifies the general terms and conditions under which an award may be made. For the purposes of this study, a notice is any announcement related to an institute's release of an RFP, RFA, PA, BAA, or other grant mechanism. An investigator-initiated R01 is a grant to support a discrete, investigator-specified project in an area representing the investigator's specific interests and competencies; it must be related to the broad stated program objectives of one or more of the NIH institutes and centers, based on descriptions of their programs (9).

For each CDC, NIH-funded research activities related to the conference topic were retrospectively collected for the 3 years immediately preceding each conference. This was done to establish a baseline level of NIH research interest and activity for each CDC topic. Similarly, NIH-funded research activities related to the conference topic were collected for the 3 years following each conference. The resulting numbers of RFAs, RFPs, PAs, BAAs, notices, and R01s initiated after the CDC were then compared with baseline numbers; this comparison serves as our primary outcome measure of whether CDCs stimulate increases in NIH-funded research initiatives. We also grouped findings into NIH-issued research (RFPs, RFAs, PAs, BAAs, and notices), investigator-initiated research (R01s), and all research activities combined.
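To make the outcome measure concrete, the minimal sketch below shows how pre- and postconference tallies of this kind might be computed and grouped. The record structure, field names, and example entries are hypothetical illustrations; they are not drawn from IMPAC II or CRISP.

```python
from datetime import date

# Hypothetical activity records: (topic, mechanism, start date); not real IMPAC II/CRISP data.
activities = [
    ("osteoporosis", "RFA", date(1998, 5, 1)),
    ("osteoporosis", "R01", date(2001, 9, 30)),
    ("adhd", "PA", date(1999, 7, 15)),
]

NIH_ISSUED = {"RFA", "RFP", "PA", "BAA", "notice"}  # grouped separately from R01s

def tally(topic, conference_date, window_years=3):
    """Count relevant activities in the 3 years before and after a CDC, by group."""
    counts = {"pre": {"nih_issued": 0, "r01": 0}, "post": {"nih_issued": 0, "r01": 0}}
    for activity_topic, mechanism, start in activities:
        if activity_topic != topic:
            continue
        offset_years = (start - conference_date).days / 365.25
        group = "nih_issued" if mechanism in NIH_ISSUED else "r01"
        if -window_years <= offset_years < 0:
            counts["pre"][group] += 1
        elif 0 <= offset_years <= window_years:
            counts["post"][group] += 1
    return counts

counts = tally("osteoporosis", date(2000, 3, 27))
change = sum(counts["post"].values()) - sum(counts["pre"].values())
print(counts, "change in all activities:", change)
```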

Data Collection Methods

We investigated relevant research projects for each conference topic using the NIH Information for Management, Planning, Analysis, and Coordination II (IMPAC II) database, the Department of Health and Human Services (DHHS) Computer Retrieval of Information on Scientific Projects (CRISP) database, and information available on individual NIH institute and center Web sites. IMPAC II is an internal NIH database of all funded grant applications and awards, searchable by key terms. CRISP is a public-access database of extramural research projects, grants, contracts, and cooperative agreements funded by the DHHS and conducted by universities, hospitals, and other nongovernment research institutions. Key terms related to each CDC topic were used to confirm the information supplied by program directors. The key search terms used for the individual CDCs were as follows: “adhd;” “rehabilitation + brain + injury;” “phenylketonuria;” “adjuvant + breast + neoplasm;” “osteoporosis;” and “dental + caries.” In several instances, this confirmatory process identified solicitations that had not originally been reported by the program directors; these were included in the final analysis.
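The confirmatory keyword search can be illustrated with a small filter over project records. The matching rule (every “+”-joined term must appear in the record text) and the record format are assumptions made for illustration only; in the study, IMPAC II and CRISP were queried through their own search interfaces rather than programmatically.

```python
# Key search terms as listed above; "+" joins terms that must all be present.
SEARCH_TERMS = {
    "ADHD": "adhd",
    "Traumatic brain injury": "rehabilitation + brain + injury",
    "Phenylketonuria": "phenylketonuria",
    "Adjuvant therapy for breast cancer": "adjuvant + breast + neoplasm",
    "Osteoporosis": "osteoporosis",
    "Dental caries": "dental + caries",
}

def matches(query: str, text: str) -> bool:
    """Return True if every '+'-joined term appears in the text (case-insensitive)."""
    terms = [term.strip().lower() for term in query.split("+")]
    text = text.lower()
    return all(term in text for term in terms)

# Hypothetical project records (identifier, title or abstract text).
projects = [
    ("R01-0001", "Rehabilitation outcomes after traumatic brain injury in adults"),
    ("R01-0002", "Adjuvant chemotherapy for breast neoplasm recurrence"),
]

for project_id, text in projects:
    topics = [topic for topic, query in SEARCH_TERMS.items() if matches(query, text)]
    print(project_id, topics)
```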

Ascribing causality between a CDC statement and a research initiative solely on the basis of both addressing an overlapping topic is a tenuous assertion; therefore, we sought to maximize confidence in the association through direct confirmation by the institute or center funding the new research. There were six institutes that served as CDC sponsors during our 4-year study period; all were contacted and included in this analysis.

For each CDC, the sponsoring institute's conference coordinator was contacted and made aware of the outcomes evaluation effort. The coordinator was sent a detailed questionnaire designed to facilitate identification of pre- and post-CDC research initiatives for each individual CDC research recommendation; the questionnaire also asked for the identification of other institute program directors with knowledge of relevant research initiatives. We also invited the coordinator to discuss these issues in a telephone interview. Additional program directors identified by interview or questionnaire were contacted and taken through the same process. Postconference research activities compiled from the IMPAC II and CRISP systems were then separated into Specified (institute-identified) and Unspecified (identified only through IMPAC II/CRISP) categories.

RESULTS

Institute Participation

Response rates varied by institute, ranging from a low of 20 percent of individuals identified as possessing knowledge of relevant research activities to a maximum of 67 percent, with a mean response rate of 41 percent. The most robust response occurred for the CDC on Attention Deficit Hyperactivity Disorder (ADHD), with twenty-three relevant institute representatives identified and eight questionnaires completed. The participating institutes were NIDA, NIMH, NICHD, NCI, and NIDCR (Table 1).

Baseline Research Activities (3 years before a CDC)

No RFPs, PAs, or notices were identified for any CDC topic in the 3 years before each conference. A total of six relevant RFAs (one each for dental caries and ADHD, four for osteoporosis), plus one BAA (osteoporosis) were identified. The greatest amount of research activity was identified through the R01 mechanism, with a total of ninety-six awards. The total number of related preconference research activities was 103 (Table 2).

Post-CDC Research Activities (3 years after a CDC)

Research activities remained stable or increased across each category in the postconference period. The number of NIH-issued research initiatives occurring postconference across all CDC topics was twenty-three: one RFP (ADHD), one BAA (osteoporosis), ten RFAs, two notices, and nine PAs. The overall number of investigator-initiated projects (R01s) was 148. The total number of new postconference research activities was 171; however, when post-CDC research initiatives were limited to those identified as causally linked by an institute representative, the total was 67. Under these more restrictive inclusion criteria, there were sixteen NIH-issued research activities and fifty-one investigator-initiated research initiatives (Table 2).

Overall Comparison: Pre- and Postconference Research Activities

Research activities in the topic area generally increased after a given CDC. ADHD saw the greatest increase in new initiatives, with the addition of thirty-eight new projects postconference. Research initiatives also climbed for the following conferences: dental caries and osteoporosis (+9 activities each), adjuvant therapy for breast cancer (+1 initiative), and traumatic brain injury (TBI) (+11 activities). Phenylketonuria (PKU) initiatives remained unchanged. When the numbers were restricted to institute-identified research (that is, research identified by institute representatives as directly attributable to the CDC), there was an apparent decrease in research activities for dental caries (−6 total), osteoporosis (−33), PKU (−1), and adjuvant therapy for breast cancer (−14). Both TBI and ADHD retained increases in research initiatives under these refined criteria (+5 and +10, respectively) (Table 2).
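As a simple consistency check on the unrestricted counts (a brief illustration using only the figures already reported above), the per-topic changes sum to the difference between the postconference and baseline totals:

```python
# Per-topic changes in total research activities (unrestricted counts, from the text above).
changes = {"ADHD": 38, "dental caries": 9, "osteoporosis": 9,
           "breast cancer": 1, "TBI": 11, "PKU": 0}

baseline_total, postconference_total = 103, 171
assert sum(changes.values()) == postconference_total - baseline_total  # 68 == 68
```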

DISCUSSION

CDCs appear to stimulate new, relevant NIH research activities. An overall increase in NIH-funded initiatives was seen for five of six CDCs.

Five of six CDCs examined in this study were also associated with a postconference increase in NIH-issued research initiatives (RFAs, RFPs, BAAs, PAs, and notices). When evaluating IMPAC/CRISP-identified relevant R01s, an increase in investigator-initiated grants was observed for four of six CDCs.

Limiting inclusion of postconference activities to those institute-identified as causally related to a CDC produced an apparent reduction in the impact of CDCs on new research. Post-CDC, NIH-issued research initiatives still increased for five of six conference topics. However, increases in investigator-initiated grants were only observed for two of six CDCs, and when all research modalities were combined, only two of six CDC topics had overall increases in relevant research activity.

There are several potential causes for the shift in results produced by the more stringent inclusion criteria. First, institute response rates for this study were somewhat low, ranging from 20 percent to a maximum of 67 percent, with a mean response rate of 41 percent of individuals identified as possessing knowledge of relevant research activities. Because not all program directors participated, the research initiatives for which these individuals were primarily responsible were likely not captured by questionnaire or telephone interview. There may also have been staff changes between the time of the CDC and when we queried relevant NIH staff, resulting in incomplete recall. The IMPAC II/CRISP searches, however, which cover all NIH research activity, would still have identified those additional initiatives.

Additionally, recall bias among institute representatives likely accounts for much of the observed difference, which is concentrated in the area of investigator-initiated (or external) R01 grants. Institute experts were likely primarily responsible for the creation and dissemination of the RFPs, RFAs, BAAs, PAs, and notices related to conference topics; when questioned, they would likely be able to identify most or all of these NIH-initiated activities. However, whereas R01s must be approved by NIH, the ideas for these projects (and thus the burden of work) come from external researchers. Individual program directors may or may not have had knowledge of a given R01, depending on whether that specific grant was included in their research portfolio. In other words, an institute representative was far more likely to be aware of research actively solicited by his or her institute than of externally generated research proposals. It is, therefore, not altogether surprising that the total number of relevant R01s identified through IMPAC II or CRISP would be higher than the number identified by institute representatives.

Institute representatives were queried in an attempt to ensure that causal relationships between a CDC and a research activity were known, rather than simply inferred. However, the wide gap between the amount of research identified through IMPAC II/CRISP and that identified by institute representatives highlights the very real difficulties inherent in outcomes evaluation of information dissemination activities. Similar studies should help further elucidate the best approaches to overcoming these challenges in dissemination study design.

Given that one of the primary goals of SOSs is the development of specific research agendas, an important next step would be an ancillary study examining the impact of SOSs alone on NIH-funded research agendas. Future studies of the impact of the NIH CDP on new research might also increase the total number of conferences included in the analysis, as well as include new initiatives not just for the primary institute sponsor of the conference but for all of the cosponsoring institutes, centers, and agencies as well. Finally, invaluable information about the impact of NIH CDCs could come from an examination of postconference changes in the delivery of health services: for example, CDC statements might be linked to Medicare administrative data to examine changes in practice patterns and reimbursement rates.

This study has provided evidence that NIH CDCs do directly stimulate new research initiatives. New examinations using prospective data collection and higher survey response rates will be crucial to better quantify the ultimate impact.

POLICY IMPLICATIONS

NIH CDCs, although primarily designed to be knowledge transfer tools for health policy and clinical practice, have the potential for broader impact. As each conference contains a systematic review, the body of evidence surrounding a particular topic is evaluated in sum; this process explicitly highlights the remaining gaps in knowledge. By making concrete in a high-visibility forum those areas in which evidence is still lacking, consensus conferences serve as fertile ground for the stimulation of new research initiatives.

CONTACT INFORMATION

Barry Portnoy, PhD, Senior Advisor for Disease Prevention, and Jennifer Miller, MD, Senior Advisor to the Consensus Development Program, Office of Medical Applications of Research, Office of the Director, National Institutes of Health, Office of Disease Prevention, 6100 Executive Boulevard, Room 2B03, Bethesda, Maryland 20892

Kathryn Brown-Huamani, MS, The Scientific Consulting Group, Inc., 656 Quince Orchard Rd., Suite 210, Gaithersburg, Maryland 20878-1409

Emily DeVoto, PhD, Independent Consultant, 5112 Connecticut Avenue, Washington, DC 20008

This study was funded by the NIH Evaluation Set-Aside Program EB 03-122 OD-ODP administered by the Evaluation Branch, Division of Evaluation and Systematic Assessment, OPASI, Office of the Director, National Institutes of Health. The authors also acknowledge the contributions of Barnett S. Kramer, MD, MPH, in the development and preparation of the manuscript. The views expressed in this article are those of the authors and do not necessarily represent the views of the U.S. Federal Government or the National Institutes of Health.

References

1. Consensus Development Program: About us. Available at: http://consensus.nih.gov/ABOUTCDP.htm. Accessed 12 December 2006.
2. Doksum T, Bernhardt BA, Holtzman NA. 2001. Carrier screening for cystic fibrosis among Maryland obstetricians before and after the 1997 NIH Consensus Conference. Genet Test. 5:111-116.
3. Du X, Freeman DH Jr, Syblik DA. 2000. What drove changes in the use of breast conserving surgery since the early 1980s? The role of the clinical trial, celebrity action and an NIH consensus statement. Breast Cancer Res Treat. 62:71-79.
4. Ferguson JH. 1993. NIH consensus conferences: Dissemination and impact. Ann N Y Acad Sci. 703:180-199.
5. Ferguson JH. 1995. The NIH consensus development program. Jt Comm J Qual Improv. 21:332-336.
6. Lazovich D, Solomon CC, Thomas DB, et al. 1999. Breast conservation therapy in the United States following the 1990 National Institutes of Health Consensus Development Conference on the treatment of patients with early stage invasive breast carcinoma. Cancer. 86:628-637.
7. Lazovich D, White E, Thomas DB, et al. 1997. Change in the use of breast-conserving surgery in western Washington after the 1990 NIH Consensus Development Conference. Arch Surg. 132:418-423.
8. NIH OER. Glossary of NIH terms. Available at: http://grants.nih.gov/grants/glossary.htm. Accessed 12 December 2006.
9. NIH OER. Types of grant programs: NIH Research Project Grant Program (R01). Available at: http://grants.nih.gov/grants/funding/r01.htm. Accessed 12 December 2006.
10. No authors listed. 1980. Pros and cons on NIH consensus conferences. JAMA. 244:1413-1414.
11. Perry S. 1987. The NIH Consensus Development Program. A decade later. N Engl J Med. 317:485-488.
12. Perry S, Kalberer JT Jr. 1980. The NIH consensus-development program and the assessment of health-care technologies: The first two years. N Engl J Med. 303:169-172.
13. Ragnarsson KT. 2006. Traumatic brain injury research since the 1998 NIH Consensus Conference: Accomplishments and unmet goals. J Head Trauma Rehabil. 21:379-387.
14. Richmond JB. 1978. Statement before the Subcommittee on Domestic and International Scientific Planning, Analysis, and Cooperation. Congressional Record 124.
15. Thamer M, Ray NF, Henderson SC, et al. 1998. Influence of the NIH Consensus Conference on Helicobacter pylori on physician prescribing among a Medicaid population. Med Care. 36:646-660.
16. Wortman PM, Vinokur A, Sechrest L. 1988. Do consensus conferences work? A process evaluation of the NIH Consensus Development Program. J Health Polit Policy Law. 13:469-498.
Table 1. NIH Institute Response Rates to Research Initiative Inquiries

Table 2. Pre- and Post-CDC Research Initiatives