
Primary data collection in health technology assessment

Published online by Cambridge University Press:  18 January 2007

Michelle L. McIsaac
Affiliation:
McGill University and University of Montreal Health Centres and University of Calgary
Ron Goeree
Affiliation:
McMaster University and St. Joseph's Healthcare
James M. Brophy
Affiliation:
McGill University and University of Montreal

Abstract

This study discusses the value of primary data collection as part of health technology assessment (HTA). Primary data collection can help reduce uncertainty in HTA and better inform evidence-based decision making. However, methodological issues such as choosing appropriate study design and practical concerns such as the value of collecting additional information need to be addressed. The authors emphasize the conditions required for successful primary data collection in HTA: experienced researchers, sufficient funding, and coordination among stakeholders, government, and researchers. The authors conclude that, under specific conditions, primary data collection is a worthwhile endeavor in the HTA process.

Type
GENERAL ESSAYS
Copyright
© 2007 Cambridge University Press

There is growing interest in evidence-based approaches to inform health policy decision making. Health technology assessment (HTA) assists decision making by systematically evaluating the efficacy, effectiveness, and cost-effectiveness of health services and technologies. Policy recommendations are then made based on evidence drawn from diverse sources, including clinical trials, observational studies, economic evaluations, and meta-analyses.

Most HTAs involve synthesis of existing data (often called secondary data). This is the most efficient HTA method if high quality pertinent data are available. However, when the evidence available is limited, conflicting, or too uncertain, the production of primary (original) data may be considered. This study describes the key methodological and practical issues involved in collecting primary data in the context of an HTA. To address these issues, we begin with a short description of the traditional HTA process and introduce an HTA agency with a proven capacity to collect and analyze primary data. A simplified decision algorithm is also presented to help conceptualize the evidence-based HTA decision-making framework.

BACKGROUND

HTAs in Canada service three levels of decision makers: national, provincial, and local. There are consequently HTA agencies that conduct assessments from each of these perspectives: national agencies such as CADTH (Canadian Agency for Drugs and Technologies in Health); provincial agencies such as AETMIS (Agence d'Évaluation des Technologies et des Modes d'Intervention en Santé) in Quebec, or MAS (Medical Advisory Secretariat) and OHTAC (Ontario Health Technology Advisory Committee) in Ontario; and local agencies, often hospital and university units, such as the TAU (Technology Assessment Unit of the McGill University and University of Montreal Health Centers) in Montreal, and PATH (Program for Assessment of Technology in Health) in conjunction with McMaster University and St. Joseph's Hospital in Hamilton. There are also contract research firms, and private HTA agencies that are occasionally used on an ad hoc basis.

HTA methodology typically involves request for assessments, horizon scanning, systematic literature search, review and synthesis, modeling and economic analyses, assessment of the ethical and social implications, policy recommendations, and more recently value of information (VOI) assessments.

VOI assessments evaluate the benefit of collecting additional primary data to reduce decision uncertainty. This can be accomplished by calculating the expected value of perfect information (EVPI; or a variant such as the expected value of sample information or the expected value of imperfect information) by Monte Carlo simulation or Bayesian methods (10). In a utility-maximizing framework, EVPI is the difference between the expected maximum utility given perfect information and the maximum expected utility given current information; put another way, it is the difference between the expected utility of posterior information and the utility of prior information. If the cost of collecting primary data exceeds the EVPI, then collecting primary data is not considered a worthwhile activity (2).
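
To make the Monte Carlo calculation concrete, the sketch below estimates per-patient EVPI for a hypothetical two-option decision (standard care versus a new technology). All distributions, parameter values, and the willingness-to-pay threshold are invented for illustration and do not come from the article or from any PATH evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000  # Monte Carlo draws from a hypothetical posterior

# Net benefit NB = wtp * QALYs - cost, at an assumed willingness-to-pay.
wtp = 50_000  # $ per QALY (illustrative)

# Column 0: standard care; column 1: new technology (more uncertain).
qalys = np.column_stack([
    rng.normal(1.00, 0.05, n_sim),
    rng.normal(1.02, 0.08, n_sim),
])
costs = np.column_stack([
    rng.normal(2_000, 200, n_sim),
    rng.normal(3_500, 600, n_sim),
])
nb = wtp * qalys - costs  # net benefit per draw, per decision

# EVPI = E[max over decisions of NB] - max over decisions of E[NB]:
# the expected gain from always choosing the truly best option
# versus committing now to the option that is best on average.
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"Per-patient EVPI: ${evpi:,.0f}")
```

If this per-patient EVPI, multiplied by the affected population, falls below the cost of the proposed field evaluation, the framework above would counsel against collecting the primary data.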

Currently, when evidence drawn from the literature is insufficient to make recommendations, decision makers and researchers often resort to expert opinion. Even if a VOI assessment or expert opinion indicates that further data should be collected, most HTA agencies do not have the capacity, in terms of infrastructure, personnel, and financial support, to routinely and extensively collect and evaluate primary data.

Role for Primary Data Collection in HTA

There is always some uncertainty when making health policy decisions. Uncertainty can arise from incomplete or ambiguous data on the efficacy, effectiveness, safety, or cost-effectiveness of the health service or technology. Uncertainty can lead to health policy decisions that yield adverse health outcomes and/or inappropriate allocation of public funds. Identifying the areas of uncertainty and collecting primary data to address them can increase the probability of making appropriate healthcare decisions. The pragmatic collection of primary data is often referred to as “field evaluation,” and can directly address the needs of decision makers (5). Conditional technology implementation and primary data collection can also increase knowledge on the contextual effectiveness of a technology, thus yielding optimal site-specific recommendations.

PATH and Primary Data Collection

The role of primary data collection in HTA is subject to many theoretical and practical constraints. However, primary data collection for HTA can be and has been a successful endeavor. It can provide a solution to many complex evaluation obstacles that researchers and decision makers face. This section presents an example of a program in Canada that has been successful in implementing primary data collection into the HTA process.

PATH (Program for Assessment of Technology in Health) is an example of an HTA agency with the capacity to collect and evaluate primary data to aid the Ontario government's decision-making processes. PATH's primary data collection framework, called PRUFE (PATH's Reduction in Uncertainty through Field Evaluation), is an example of a functioning iterative, evidence-based primary data collection process and is presented in Figure 1. The Medical Advisory Secretariat (MAS) is the provincial agency that conducts evidence-based health technology analyses, which are reviewed by the Ontario Health Technology Advisory Committee (OHTAC). If OHTAC concludes that there is insufficient evidence to make a decision, it can request that additional local data be collected to better inform decision making. The first box in Figure 1 is where PATH becomes involved in the provincial HTA process. PATH, OHTAC, and MAS determine the sources of the informational uncertainties. PATH's first step is to construct a preliminary economic model to assess the cost-effectiveness of the technology. If the information is still insufficient, PATH will design, collect, and analyze primary data on effectiveness, short- or medium-term clinical outcomes, patient preference outcomes, safety, adverse events, and/or cost. The primary data (either interim or final results) are synthesized along with prior published evidence on the technology, and a full HTA report, including an updated cost-effectiveness study, is produced. Finally, as shown in the last box of Figure 1, each full HTA includes a VOI assessment to assess the value of collecting further primary data on the technology (5).

PATH's Reduction in Uncertainty through Field Evaluation iterative evidence-based framework. MAS, Medical Advisory Secretariat; OHTAC, Ontario Health Technology Advisory Committee; PATH, Program for Assessment of Technology in Health; HTA, health technology assessment.

A particularly illuminating example of this process involved drug-eluting stents (DES) in coronary artery disease. The efficacy of DES compared with bare-metal stents (BMS) had been demonstrated in controlled clinical trials, but controversy existed over the cost and the number of restenoses avoided in a real-world setting. Consequently, a well-performed field evaluation would permit a less biased estimate of the effectiveness of this technology in the local environment. Despite superior efficacy data from early randomized controlled trials (RCTs), the PATH field evaluation of this technology demonstrated at best marginal cost-effectiveness. The policy decision to continue to restrict dissemination of this technology was justified transparently to decision makers, physicians, and patients.

Decision Algorithm

A simplified but useful decision algorithm for evidence-based decision making within the HTA framework is presented in Figure 2. Implementation decisions must be made even in the absence of sufficient efficacy evidence. When adequate evidence is not available, the best techniques to inform tentative decisions should be used. Occasionally imperfect secondary data may be manipulated using indirect or Bayesian methods to make reasonable decisions while awaiting more definitive information (1). Decision makers then have three choices: not implement the technology, fully implement the technology, or conditionally implement the technology. If the decision is to not implement the technology due to lack of evidence, the technology may be re-evaluated at a later date when the evidence base has increased. In this case, the decision to wait for other secondary data is de facto judged as more attractive than investing resources to collect these data. Such a case could occur when there is a complete lack of efficacy evidence (i.e., a full RCT would be required), the technology is not expected to have a major impact on population health, or additional secondary efficacy evidence is expected to become available shortly.

Decision algorithm for evidence-based decision making within the health technology assessment framework. This algorithm sets out a simplified version of the decision-making process; it does not depict time, as many of the stages in the decision-making process are concurrent.

Alternatively, if the decision is to tentatively fully implement the technology, primary data collection becomes a population-level priority. Full tentative implementation is likely to occur when there are some, but perhaps not definitive, efficacy data; the technology is expected to have a considerable effect on population health; or there are sufficient efficacy data but some uncertainty about safety, rare adverse events, or cost-effectiveness. Primary data can still be collected from a representative sample of the population. Data collection in the case of a tentative full implementation cannot be used to assess efficacy, as there is no longer a true comparator. Moreover, if future primary data collection on effectiveness, safety, or costs for the new technology should prove unfavorable, it may be difficult to rescind its widespread utilization.

Finally, the decision of constrained initial implementation is often the optimal occasion for primary data collection. Constrained initial implementation is suitable when there is uncertainty about efficacy, effectiveness, safety, or cost-effectiveness. Using the right method of primary data collection and analysis can provide insight into these areas of uncertainty, and re-evaluation of a constrained implementation decision is likely more feasible than for fully implemented technologies. It may not be socially or financially desirable to allocate resources to collect primary data on all technologies; therefore, data collection should primarily be undertaken for technologies where a VOI assessment shows a clear benefit to increased information, a re-evaluation of a policy decision appears feasible, and the clinical and financial stakes are high.
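
The three implementation branches just described can be caricatured as a small rule-based function. The input flags and the rules below are an illustrative simplification invented for this sketch; the actual decision process weighs many more factors, and concurrently.

```python
from enum import Enum

class Decision(Enum):
    DO_NOT_IMPLEMENT = "do not implement; re-evaluate when more secondary evidence accrues"
    TENTATIVE_FULL = "tentative full implementation, with primary data collection"
    CONSTRAINED = "constrained initial implementation, with field evaluation"

def recommend(efficacy_evidence: str, population_impact: str) -> Decision:
    """Toy rendering of the simplified decision algorithm.

    efficacy_evidence: "none", "some", or "strong" (hypothetical levels)
    population_impact: "low" or "high" (hypothetical levels)
    """
    if efficacy_evidence == "none" and population_impact == "low":
        # Waiting for forthcoming secondary evidence is judged more
        # attractive than investing resources in primary data collection.
        return Decision.DO_NOT_IMPLEMENT
    if efficacy_evidence == "strong" and population_impact == "high":
        # Withholding is hard to justify, but residual uncertainty about
        # safety or cost-effectiveness still warrants data collection.
        return Decision.TENTATIVE_FULL
    # Remaining uncertainty about efficacy, effectiveness, safety, or
    # cost: the optimal occasion for primary data collection.
    return Decision.CONSTRAINED
```

For example, `recommend("some", "high")` returns the constrained option, mirroring the case where a field evaluation is most valuable.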

Overall, primary data collection for HTA is likely to be most beneficial when idealized efficacy has already been demonstrated in the controlled, but artificial, environment of a randomized clinical trial, and data on effectiveness, feasibility, and cost are needed from a real-world setting. As mentioned previously, both data collection and subsequent policy decisions are most easily performed if the proposed initial technology implementation is constrained.

Methodological Considerations

When collecting primary data, the choice of study design is a central issue. Primary data can be collected from RCTs, other controlled/uncontrolled trials, observational studies (case-control, cross-sectional, or surveillance), administrative database analyses, case series, or case reports (6). As with any study, the first and perhaps most important decision is choosing the appropriate study design to address the questions at hand, the goal being to ensure validity by minimizing bias and confounding.

The RCT is widely considered the best methodology to address efficacy. It is considered internally valid (the association between exposure and disease is valid) in large samples, as it controls for confounding due not only to measured but also unmeasured patient differences. However, the use of RCTs is limited by their high costs, the ethical difficulties in justifying randomization, and the difficulty of maintaining high quality with respect to blinding and collecting detailed, complete follow-up data. Moreover, RCTs tend to have more restricted external validity: the population studied in the RCT may differ from the population that uses the technology (in terms of age, gender, disease history and severity, and comorbidity), thus limiting the generalizability of the study results (7). Other factors that may influence the translation of theoretical efficacy into routine effectiveness include the learning curve that many clinicians face when using a new technology, rapid technology advances, and frequent tweaking of initial equipment (9). RCTs also have to be very large to capture uncommon side effects, which can be a major concern, especially when assessing exceptionally expensive technologies.

Observational studies have been criticized on the basis of their internal validity. Bias and confounding are much harder to control for with observational methods than with experimental designs (4). A common error is to propose the use of an observational design to demonstrate the efficacy of a new technology. Because selection bias and confounding errors can be of the same order of magnitude as the proposed clinical benefit, this design may prevent meaningful inferences about efficacy. If primary data are required to support the evidence base for efficacy, randomized trials must be favored. Nonetheless, observational studies may provide a good source of evidence on adverse events that are rare or have delayed onset. There is evidence that, in observational studies, unintended side effects are less susceptible to indication bias than intended effects. Observational studies are also a good source of complementary data, as they provide important information on regional hospital resource use. Observational studies can provide insight on effectiveness and can be used to monitor other important outcomes such as diffusion equity and cost (8). Observational studies also cost considerably less than an RCT (3).

Case series and case reports tend to be less robust and are criticized on the basis of both internal and external validity; however, these studies can help flag issues of safety and rare side effects. Large nested case-control studies may be better suited to measure rare adverse events than an RCT (3;6). Surveillance and case studies can more efficiently identify rare events, and accuracy of diagnostic procedures may be best assessed by cross-sectional studies (6).

For primary data collection to be worthwhile, it is crucial that the choice of study match the question at hand and limit potential bias. As mentioned above, attempting to provide efficacy data from an observational study where previous RCTs either have not been performed or are inconclusive may be misleading and should be strongly discouraged. Appropriate study design can improve the value of the information that primary data provide to decision makers and add credibility to such studies.

Practical Considerations

To ensure that primary data collection and evaluation are done properly, the appropriate personnel, infrastructure, and financial and governmental support must be in place. Because primary data collection for the purpose of decision making is a multidisciplinary activity, a wide range of people with diverse skill sets will be required, including healthcare workers, economists, epidemiologists, statisticians, and ethicists. Office space, equipment, resources, and continuing education must also be available to support the HTA team. Funding must be available to attract, retain, and provide resources to these highly trained individuals. Primary data collection is a resource-intensive activity and requires a high degree of knowledge and collaboration among government, researchers, clinicians, and industry.

The decision to collect primary data is both important and complex. Primary data should only be collected when the expected value of the additional information outweighs the cost of collecting it. VOI assessments are a useful method of systematically evaluating the value of investing in primary data collection (2). These situations are more likely to occur when the technology has a potentially significant impact on patient outcomes, the available information does not address the appropriate population, the evidence available is conflicting or of low quality, or a large multicenter RCT is not feasible or ethical or would not capture important outcomes (such as rare or delayed side effects). Another important issue is the length of the study. The study should be short enough to address the needs of decision makers in a timely manner, but long enough to capture meaningful health outcomes.

A potential downside of primary data collection is that, once a technology is diffused (even under stringent conditions), it may be politically difficult to withdraw access, even if the primary data do not confirm the utility of the technology. An example is again the PATH experience with DES. Although the new evidence showed an overestimation of benefit and a high incremental cost-effectiveness ratio, and consequently has moderated dissemination of this technology, it has not been possible to completely withdraw the initial baseline DES implementation that was allowed. Again, this finding illustrates that primary data collection should be undertaken when the information accumulated will directly influence decision making.

Framework for Primary Data Collection

The collection of primary data involves important methodological and practical issues, which should be addressed before such an initiative is undertaken. The appropriate stakeholders (government, epidemiologists, health economists, hospitals, clinicians, and industry) need to be involved so that appropriate questions are addressed and research results are used to influence health policy decision making. Talented researchers need to be recruited and retained to ensure that appropriate methodology is used to address uncertainties. Financial support also needs to be in place, as primary data collection is often a resource-intensive process. Finally, there must be a willingness on the part of decision makers to adopt such a technique into their decision-making process, as this will clearly increase the value of collecting primary data and justify investment in this activity. When informational uncertainty is high, national, regional, or local decision makers could conduct or contract primary research to address these uncertainties, and thus make more informed decisions.

DISCUSSION

Overall, primary data collection in HTA can be a valuable process in advising evidence-based decision making. For such a process to be a success, the right infrastructure, research design, people, partnerships, and funding need to be in place. Above all, an exceptional group of researchers will be needed to tackle the methodological issues involved in primary data collection and analysis. Such attributes will help create a credible evaluative framework that will lead to better healthcare decisions and patient outcomes. With proper consideration and implementation of resources, primary data collection is likely to be a successful and valuable undertaking.

CONTACT INFORMATION

Michelle L. McIsaac, MA (), Research Associate, Department of Community Health Sciences, University of Calgary, 1403-29 Street NW, Calgary, Alberta T2N 2T9, Canada; Health Economist (Research), Technology Assessment Unit of the McGill University and University of Montreal Health Centres, Royal Victoria Hospital, 687 Pine Avenue W, Montreal, Quebec H3A 1A1, Canada

Ron Goeree, MA (), Assistant Professor, Clinical Epidemiology and Biostatistics, McMaster University, 1200 Main Street West, Hamilton, Ontario L8N 3Z5, Canada; Director, PATH, Program for Assessment of Technology in Health, St. Joseph's Healthcare, Hamilton, 25 Main Street West, Suite 2000, Hamilton, Ontario L8P 1H1, Canada

James M. Brophy, MEng, MD, PhD (), Associate Professor, Department of Medicine, McGill University and University of Montreal; Technology Assessment Unit, McGill University Health Centre and Centre Hospitalier de l'Université de Montréal, 687 Pine Avenue W, Montreal, Quebec H3A 1A1, Canada

References

1. Brophy JM, Joseph L. Medical decision making with incomplete evidence—choosing a platelet glycoprotein IIb/IIIa receptor inhibitor for percutaneous coronary interventions. Med Decis Making. 2005;25:222-228.
2. Claxton K, Eggington S, Ginnelly L, et al. A pilot study of value of information analysis to support research recommendations for the National Institute for Health and Clinical Excellence. Centre for Health Economics; 2005. Research Paper 4.
3. Eisenberg JM. Ten lessons for evidence-based technology assessment. JAMA. 1999;282:1865-1869.
4. Fahey T. Randomised controlled trials in general practice. BMJ. 1996;312:779.
5. Goeree R, Levin L. Building bridges between academic research and policy formulation: the PRUFE framework—an integral part of Ontario's evidenced based HTPA process. Pharmacoeconomics. 2006;24(11).
6. Goodman C. HTA 101: Introduction to health technology assessment. January 2004. Available at: http://www.nlm.nih.gov/nichsr/hta101/hta101.pdf.
7. Hennekens CH, Buring JE. Epidemiology in medicine. Boston: Little, Brown and Company; 1987.
8. Raftery J, Roderick P, Stevens A. Potential use of routine databases in health technology assessment. Health Technol Assess. 2005;9:1-106.
9. Stables RH. Observational research in the evidence based environment: eclipsed by the randomized controlled trial? Heart. 2002;87:101-102.
10. Yokota F, Thompson KM. Value of information literature analysis: a review of applications in health risk management. Med Decis Making. 2004;24:287-298.