Many countries have taken initiatives toward integration in chronic care by developing and implementing disease management programs (DMPs), which are expected to improve the effectiveness and efficiency of chronic care delivery (Reference Miller, Randolph, Forkner, Smith and Galbreath1;Reference Tsiachristas, Hipple-Walters, Lemmens, Nieboer and Rutten-van Molken2). Schrijvers (2009) defines these programs as “a group of coherent interventions designed to prevent or manage one or more chronic conditions using a systematic, multidisciplinary approach and potentially using multiple treatment modalities” (Reference Schrijvers3). Health technology assessment (HTA) could have a leading role in informing decision makers around the world about the extent to which DMPs meet these expectations (Reference Greiner4). However, despite the attention that DMPs have received over the last decade, the evidence from the relatively few HTA studies (Reference Greiner4;Reference Busse, Blümel, Scheller-Kreinsen and Zentner5) is inconclusive. This can largely be explained by the variation in designs, outcome measures, and costing methods used in the economic evaluations of DMPs (Reference Miller, Randolph, Forkner, Smith and Galbreath1;Reference Steuten, Vrijhoef, van Merode, Severens and Spreeuwenberg6).
Evidently, the lack of a methodological framework for the HTA of DMPs has contributed to this variation and has made the comparison of results hardly possible, meaning that much time and many financial resources have been spent inefficiently (Reference Steuten, Vrijhoef, Severens, van Merode and Spreeuwenberg7). Current decision making relies on traditional cost-effectiveness studies that may not be suitable for the comparison between DMPs and usual care. This is because DMPs are complex, multifaceted interventions with multiple effects, such as improved self-management capabilities, better coordination and continuity of care, reduced risk factors and complication rates, and improved quality of life. These effects cannot be expressed in a single unit of effect, such as the quality-adjusted life-year (QALY) traditionally used in economic evaluations of health care interventions. Therefore, a methodological framework for an analysis that incorporates the most relevant costs and effects is needed to make economic evaluations of DMPs valuable to decision making.
Multi-Criteria Decision Analysis (MCDA) was developed to support decision making by allowing a systematic trade-off between multiple, and sometimes conflicting, effects and costs simultaneously in an explicit, transparent, and consistent way (Reference Hummel, van Rossum, Verkerke and Rakhorst8;Reference Danny, Hummel and Volz9). Usually, policy makers implicitly consider and weigh these outcomes (criteria) and incorporate them in decision making in a deliberate way. However, such “intuitive” decision making may not be transparent, and it may be complicated by conflicting criteria or by differing opinions among stakeholders regarding the importance of the criteria (Reference Peacock, Mitton, Bate, McCoy and Donaldson10), which jeopardizes the accountability of decision makers to patients, insurance payers, and professionals.
The aim of this study is to develop a methodological framework to facilitate the application of MCDA in a broader economic evaluation of DMPs including the most relevant outcomes and cost categories.
METHODS
We developed a methodological framework for the application of MCDA in a large study in which we evaluate twenty-two DMPs for several chronic diseases in the Netherlands (Reference Lemmens, Rutten-Van Molken, Cramm, Huijsman, Bal and Nieboer11). To develop the framework, we studied the literature to identify and understand MCDA techniques that could be applied to the evaluation of DMPs. To identify the objectives and criteria, which are the first and most important steps in MCDA, we studied frameworks that had previously been used to evaluate DMPs. The frameworks of Steuten et al. (Reference Steuten, Vrijhoef, Severens, van Merode and Spreeuwenberg7) and Lemmens et al. (Reference Lemmens, Nieboer and Rutten-Van Molken12) were the most relevant in this respect. We also drew on practical field experience from the ongoing broad HTA of the twenty-two DMPs mentioned above and on personal discussions with integrated care providers, health insurers, and scientific and practical experts in the integrated chronic care field, to understand the complexities of DMPs and to conceive the idea of applying MCDA in their evaluation. Finally, we conducted an MCDA of a hypothetical DMP versus usual care based on our framework to illustrate its application.
AN INTRODUCTION TO MCDA
MCDA has been successfully applied in other areas of public decision making, such as environmental management (Reference Huang, Keisler and Linkov13). Interest in MCDA for priority setting in health care has grown rapidly over the last decade (Reference Baltussen, Youngkong, Paolucci and Niessen14–Reference Goetghebeur, Wagner, Khoury, Levitt, Erickson and Rindress16). This is because MCDA overcomes the limitations of other priority-setting techniques, such as cost-effectiveness, burden-of-disease, or equity analyses, which concentrate on a single criterion (Reference Baltussen and Niessen17). MCDA elicits preferences for alternative interventions by assessing the extent to which the objectives have been achieved using measurable criteria (Reference Baltussen, Youngkong, Paolucci and Niessen14). In this process, the different criteria are weighted according to their relative importance to the decision. Hence, MCDA is a sophisticated method for comparing complex interventions such as DMPs while incorporating all relevant categories of outcomes and costs (Reference Goetghebeur, Wagner, Khoury, Levitt, Erickson and Rindress16;Reference Belton and Stewart18;Reference Bots and Hulshof19).
The main steps in conducting an MCDA are: (i) establishing the decision context and identifying the options to be appraised, (ii) identifying objectives and criteria, (iii) scoring by measuring the performance of each option on each criterion, (iv) assigning weights to each criterion, (v) combining the weights and scores to obtain the overall value, and (vi) examining the results and performing sensitivity analysis (Reference Devlin and Sussex15). There are numerous techniques for performing MCDA, and their selection depends on the decision situation and the familiarity of the researchers and decision makers with a particular technique. However, the techniques that have proven to be most feasible and suitable are Multi-Attribute Value Theory (MAVT) and the Analytic Hierarchy Process (AHP) (Reference Huang, Keisler and Linkov13).
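As a minimal illustration of steps (iii) to (vi), the sketch below scores two options on three criteria, aggregates the standardized scores with weights, and varies one weight as a simple sensitivity analysis; all option names, criteria, scores, and weights are hypothetical and serve only to show the mechanics.

```python
# Minimal MCDA sketch of steps (iii)-(vi); all names and numbers are hypothetical.
criteria = ["process_quality", "health_gain", "total_cost"]
weights = {"process_quality": 0.25, "health_gain": 0.50, "total_cost": 0.25}  # step (iv), sum to 1

# Step (iii): performance scores, here assumed to be already standardized
# to a 0 (worst) - 1 (best) scale.
scores = {
    "option_A": {"process_quality": 0.8, "health_gain": 0.6, "total_cost": 0.4},
    "option_B": {"process_quality": 0.5, "health_gain": 0.7, "total_cost": 0.9},
}

def overall(option: str, w: dict) -> float:
    """Step (v): weighted sum of standardized scores."""
    return sum(w[c] * scores[option][c] for c in criteria)

# Step (vi): inspect results and vary the weight of one criterion.
print({o: round(overall(o, weights), 2) for o in scores})
for w_health in (0.3, 0.5, 0.7):
    w_rest = (1 - w_health) / 2
    w = {"process_quality": w_rest, "health_gain": w_health, "total_cost": w_rest}
    print(f"health_gain weight {w_health}:",
          {o: round(overall(o, w), 2) for o in scores})
```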
EVALUATION FRAMEWORKS RELEVANT TO DMPS
Steuten et al. (Reference Steuten, Vrijhoef, Severens, van Merode and Spreeuwenberg7) and Lemmens et al. (Reference Lemmens, Nieboer and Rutten-Van Molken12) developed frameworks to evaluate DMPs. These frameworks identified structure, process, and outcome indicators used in the evaluation of DMPs. The structure indicators (e.g., method of reimbursement, presence of an ICT system) are not relevant to HTA because they cannot be used to assess and quantify the performance of a DMP. Rather, they are conditions for a DMP to perform well and therefore influence the process and outcome indicators. Lemmens et al. (Reference Lemmens, Nieboer and Rutten-Van Molken12) distinguished two mechanisms underlying the effects of DMPs on processes and final outcomes: the patient's learning and behavioral change mechanism and the professional support and behavioral change mechanism. These mechanisms lead to changes in process indicators such as disease-specific knowledge and self-care behavior, and adherence to evidence-based guidelines and use of monitoring systems, respectively. Both frameworks relate changes in processes to changes in outcomes such as health-related quality of life (HR-QoL), mortality, clinical health status, and all relevant costs, and identify these as important factors in the evaluation of DMPs.
IDENTIFYING OBJECTIVES, CRITERIA, AND MEASUREMENTS
Using the previously mentioned frameworks, we identified objectives of DMPs that can be included in the second step of performing an MCDA. The extent to which these objectives are achieved can be assessed by introducing a set of criteria similar to the process and outcome indicators included in the frameworks of Steuten et al. (Reference Steuten, Vrijhoef, Severens, van Merode and Spreeuwenberg7) and Lemmens et al. (Reference Lemmens, Nieboer and Rutten-Van Molken12). In the next sections, we discuss different criteria per objective and provide some examples of indicators that could be used to measure the performance scores on each criterion (step iii in the MCDA).
Criteria to Assess the Performance of DMPs
The effects of DMPs cover a wide range of outcomes, influencing aspects of the delivery process as well as intermediate and final health outcomes. Although the ultimate objective may be to improve health outcomes, it should be kept in mind that it can take a long time before quality improvements in structure and process translate into changes in health outcomes (Reference Steuten, Palmer, Vrijhoef, van Merode, Spreeuwenberg and Severens20). Thus, changes in the process of care delivery and changes in intermediate outcomes may become goals in themselves.
Changes in the Process of Care Delivery
Because one of the main objectives of DMPs is to change care delivery toward integrated care, measurements of this process change should be used. The Assessment of Chronic Illness Care (ACIC) and the Patient Assessment of Chronic Illness Care (PACIC) could be used as process indicators of improvement in integrated care from the care professional and the patient perspective, respectively (Reference Wagner, Austin, Davis, Hindmarsh, Schaefer and Bonomi21). These instruments cover many of the process indicators related to the performance of care providers and the continuity of care described in the framework by Steuten et al. (Reference Steuten, Vrijhoef, Severens, van Merode and Spreeuwenberg7). Moreover, as coordination between professionals of different disciplines is a crucial element of effective disease management (Reference Provan and Milward22), measurements of the level of coordination, such as the relational coordination survey (Reference Gittell23), could be used in the evaluation. In addition, performance indicators such as the proportion of patients receiving care according to evidence-based guidelines would also be suitable for measuring changes in care delivery (Reference Mattke, Bergamo, Balakrishnan, Martino and Vakkur24). Examples are the proportion of participants who receive smoking cessation support in a COPD-DMP, the proportion of participants receiving podiatric care and annual eye examinations in a diabetes-DMP, and the proportion of participants receiving statins in a cardiovascular-DMP.
Changes in Patient Lifestyle and Self-management Behavior
Because lifestyle improvement of people with chronic conditions is an important objective of DMPs, measurements of patients’ lifestyle behavior, such as smoking, exercise, and nutrition, should be part of the evaluation (Reference Nolte and Mckee25). There are numerous instruments to measure physical activity, including self-report questionnaires such as the EPIC-Norfolk Physical Activity Questionnaire (EPAQ) (Reference Wareham, Jakes, Rennie, Mitchell, Hennings and Day26) and activity monitors such as pedometers and accelerometers. Lifestyle changes are part of self-management, but self-management includes much more. It refers to any behavioral change that enables patients to make conscious decisions about the many aspects of everyday life with a chronic disease. It includes accepting the disease, maintaining social contacts and support, keeping emotional balance, and improving self-efficacy and adaptation to the disease, for example by applying energy-saving techniques, managing stress, and working on adequate illness perceptions. It also refers to teaching patients to comply adequately with therapy and to act appropriately when the disease worsens (Reference Nolte and Mckee25). Part of this is measured by the Self-Management Ability Scale (SMAS) (Reference Schuurmans, Steverink, Frieswijk, Buunk, Slaets and Lindenberg27).
Changes in Biomedical, Physiological, and Clinical Health Outcomes
Depending on the disease that is targeted, changes in biomedical, physiological, and clinical health outcomes, such as blood pressure, cholesterol, forced expiratory volume in 1 second (FEV1), glycated hemoglobin (HbA1c), exacerbations, and complications, are crucial outcome measurements of DMPs, because they may alter disease progression and predict long-term changes in the health status of a patient. These outcomes are also informative to providers and contractors of DMPs (e.g., health insurers) who negotiate the quantity, price, and quality of care (Reference Tsiachristas, Hipple-Walters, Lemmens, Nieboer and Rutten-van Molken2;Reference de Bakker, Struijs and Baan28).
Changes in Health-Related Quality of Life
Health-related quality of life should be incorporated in the evaluation of DMPs to assess improvements in the quality of life of the participating patients. Although disease-specific questionnaires may be more sensitive to change, it can be argued that generic measurements of HR-QoL, such as the Short Form 36 (SF-36) or the EuroQol five-dimension questionnaire (EQ-5D), might be the most suitable for the evaluation of DMPs because a significant proportion of patients with a chronic disease suffers from multiple morbidities. A DMP targeted at one disease may have spill-over effects on other diseases (e.g., multiple diseases may benefit from more physical activity or a better nutritional status). However, disease-specific measures such as the St George's Respiratory Questionnaire (SGRQ) and domain-specific measures such as the Barthel scale (measuring activities of daily living) (Reference Drummond, Sculpher, Torrance, O'Brien and Stoddart29) could also be included, depending on the purposes of the study.
Changes in Final Health Outcomes
The time horizon of empirical (economic) evaluations of DMPs is often not long enough to actually observe a change in (quality-adjusted) life-years. When the association between the previously mentioned categories of health outcomes and changes in the quality and length of life is clear, it may be possible to extrapolate the outcomes observed within a shorter time period into life-years or QALYs gained, using decision-analytic disease models (Reference Steuten, Palmer, Vrijhoef, van Merode, Spreeuwenberg and Severens20). Although there have been some attempts to include self-management and patient perceptions in such models (Reference Cobden, Niessen, Barr, Rutten and Redekop30), extensive applications suitable for DMPs do not yet exist.
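The sketch below illustrates the general idea of such an extrapolation with a deliberately simple three-state Markov cohort model; the states, transition probabilities, utilities, time horizon, and discount rate are hypothetical assumptions chosen for illustration and are not taken from any published DMP model.

```python
import numpy as np

# Hypothetical three-state Markov cohort model ("controlled", "uncontrolled",
# "dead") extrapolating a short-term shift toward better disease control into
# QALYs. All parameters below are illustrative assumptions.
utilities = np.array([0.80, 0.60, 0.00])      # utility per state per year

P_usual = np.array([[0.75, 0.20, 0.05],       # annual transition probabilities,
                    [0.10, 0.80, 0.10],       # rows sum to 1
                    [0.00, 0.00, 1.00]])
P_dmp = np.array([[0.82, 0.14, 0.04],         # DMP assumed to keep more patients controlled
                  [0.15, 0.76, 0.09],
                  [0.00, 0.00, 1.00]])

def expected_qalys(P, start, years=10, discount=0.04):
    """Discounted QALYs per patient over the model horizon."""
    dist, total = start.copy(), 0.0
    for t in range(years):
        dist = dist @ P                        # cohort distribution after year t+1
        total += (dist @ utilities) / (1 + discount) ** (t + 1)
    return total

start = np.array([0.5, 0.5, 0.0])              # initial cohort distribution
print("usual care:", round(expected_qalys(P_usual, start), 2),
      "QALYs; DMP:", round(expected_qalys(P_dmp, start), 2), "QALYs")
```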
Related Costs
The costs in the evaluation of DMPs can be divided into direct costs within the health care sector: (i) costs of development, (ii) costs of implementation, and (iii) costs of diagnosis and treatment; direct costs outside the health care sector: (iv) costs borne by the patient/family and (v) costs of informal care; and indirect costs: (vi) costs of productivity losses. The measurement of health care usage costs (e.g., outpatient and inpatient care and medication costs), costs borne by the patient/family, costs of informal care, and costs of productivity loss is similar to that for conventional medical technologies and is extensively discussed in the literature (Reference Drummond, Sculpher, Torrance, O'Brien and Stoddart29). Therefore, we only discuss the measurement of development and implementation costs hereafter.
Development and Implementation Costs
Development costs include all costs incurred during the preparation phase of a DMP, such as the labor time of personnel who participated in brainstorming sessions and logistic arrangements, training costs, and the costs of software that supports audit and feedback. These costs could be estimated using information that can generally be obtained from the managers of DMPs. The DMP implementation costs begin when the provision of DMP interventions to patients starts. Examples of implementation costs include the costs of managing the DMP, the costs of multidisciplinary team meetings, the costs associated with collecting quality-of-care indicators for audit and feedback, the costs of materials used for patient education, and the costs of keeping the ICT systems operational. Costing instruments such as the one developed by the World Health Organization (Reference Johns, Baltussen and Hutubessy31) could be adapted to systematically collect the development and implementation costs of DMPs.
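A rough sketch of how such a cost tally could be organized is given below; the cost items, volumes, unit prices, patient numbers, and time horizon are hypothetical placeholders, not figures collected in the study.

```python
# Rough tally of DMP development and implementation costs from (volume, unit cost)
# pairs; all items and figures below are hypothetical placeholders.
development_items = {
    "staff hours for design and brainstorming sessions": (120, 55.0),  # hours x hourly wage
    "training of professionals": (1, 8000.0),
    "audit-and-feedback software": (1, 5000.0),
}
implementation_items_per_year = {
    "programme management hours": (200, 60.0),
    "multidisciplinary team meetings (person-hours)": (300, 55.0),
    "collection of quality-of-care indicators": (1, 4000.0),
    "patient education materials (per patient)": (500, 6.0),
    "ICT maintenance": (1, 3000.0),
}

def total(items: dict) -> float:
    """Sum volume x unit cost over all cost items (in euros)."""
    return sum(volume * unit_cost for volume, unit_cost in items.values())

n_patients, years = 500, 2
print(f"development cost per patient: {total(development_items) / n_patients:.2f} euro")
print(f"implementation cost per patient over {years} years: "
      f"{total(implementation_items_per_year) * years / n_patients:.2f} euro")
```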
ASSEMBLING THE FRAMEWORK FOR THE MCDA OF DMPS
Having identified the most important objectives and criteria for assessing the achievement of each objective, we developed a framework for the application of MCDA in the economic evaluation of DMPs. The framework (see Figure 1) distinguishes between the development phase and the implementation phase of DMPs. In the development phase, a mixture of patient-directed (e.g., self-management training), professional-directed (e.g., education and training), and organizational interventions (e.g., electronic patient records) (Reference Lemmens, Nieboer and Rutten-Van Molken12) is usually selected, designed, and prepared for implementation. The development costs accumulated in this phase are also incorporated in our framework.
Figure 1. The framework.
In the implementation phase, the interventions are introduced into the organization that provides the DMP to patients with a single chronic disease or with multiple chronic diseases. One of the short-term outcomes can be a change in patients’ and providers’ knowledge, skills, and attitudes, which can affect the uptake of patient- and professional-directed interventions and influence changes in other outcome measures (Reference Lemmens, Nieboer and Rutten-Van Molken12). Hence, in our framework, changes in knowledge, skills, and attitudes are seen as part of the mechanism through which the changes in the outcome categories can be achieved, rather than as separate outcome categories. The framework shows that DMP interventions can aim to influence the process of care delivery and patient lifestyle and self-management behavior directly or through professional and patient knowledge, skills, and attitudes. Suggested criteria to assess the changes in patient lifestyle and self-management behavior are also part of the framework; they include self-management abilities, smoking behavior, physical activity, and nutrition. Likewise, the framework includes suggested criteria to assess changes in the care delivery process, for example, the level of disease management, the level of coordination, and indicators of the extent to which care is provided according to guidelines and care standards. In many cases, changes in the process of care delivery would trigger changes in patient behavior and vice versa.
Changes in both patient behavior and care delivery lead to changes in biomedical, physiological, and clinical health outcomes and to changes in HR-QoL. Furthermore, changes in biomedical, physiological, and clinical health outcomes may influence HR-QoL. Together, these changes result in changes in final health outcomes (QALYs and/or life expectancy) in the medium term but, more likely, in the long term.
We acknowledge that disease management is an iterative process in which changes in biomedical, physiological, and clinical outcomes and changes in HR-QoL may trigger new changes in the process of care delivery and patient behavior. This is indicated in our framework by the circular arrow at the top of Figure 1.
Finally, the costs that occur during the implementation phase (i.e., DMP implementation costs, treatment costs, costs borne by the patient/family, costs of informal care, and costs of productivity loss) are added to the framework to ensure the calculation of the total costs of the DMPs.
A HYPOTHETICAL CASE STUDY
To illustrate how MCDA can be performed in the evaluation of DMPs, a hypothetical example is given of a COPD-DMP that is compared with usual care. Usual care is defined as “care most commonly provided by organizations without a DMP” (Reference Steuten, Vrijhoef, van Merode, Severens and Spreeuwenberg6). A list of interventions, such as the one included in Supplementary Table 1 (which can be viewed online at www.journals.cambridge.org/thc2013120), may be used as a checklist to distinguish between the interventions in usual care and the interventions in the DMP. The DMP aims to achieve improvements in all outcome categories of the framework (i.e., the six bold squares in Figure 1), plus a measure that relates the costs to the final outcomes, that is, the cost-effectiveness. For each objective, one or more criteria are chosen from the framework to compare the performance of the two treatment alternatives.
In this example, the self-management support criterion (measured with the ACIC) is chosen for changes in the process of care delivery. Smoking and self-efficacy score are chosen as criteria for changes in patient lifestyle and self-management; they are measured as the percentage of patients who had successfully quit smoking at 12 months and by the self-efficacy domain of the SMAS, respectively. The criterion for biomedical health outcomes is lung function, measured as FEV1% predicted, and the criterion for changes in HR-QoL is disease-specific quality of life, measured with the SGRQ. Moreover, the total costs per patient are selected as the criterion for the costs, and the QALY is the criterion for changes in final health outcomes. In addition to these criteria, the cost-effectiveness ratio, calculated by dividing the total costs per patient by the QALYs per patient, is also taken into account. To bring our hypothetical DMP closer to reality, we used findings from existing studies that evaluated DMPs and usual care on these criteria. We used three studies (Reference Johns, Baltussen and Hutubessy31–Reference Lemmens, Nieboer, Rutten-Van Molken, van Schayck, Asin and Huijsman33) because we could not find a single study reporting results for the whole spectrum of the selected criteria. The results are shown in the performance matrix in Table 1.
Table 1. Example of a performance matrix
*Source: (14); **Source: (16); #Source: (11); Note: Total costs per patient and QALYs are estimated over a two-year period.
The next step in implementing MCDA is to standardize the performance measures (i.e., to transform the results on the different criteria onto the same scale). In this example, we have chosen a method that standardizes performance measures with different ranges:
$$s_{kj} = \frac{x_{kj} - \text{worst}_k}{\text{best}_k - \text{worst}_k}$$

where $x_{kj}$ is the performance of alternative $j$ on criterion $k$, and $\text{best}_k$ and $\text{worst}_k$ are the best and worst performances on criterion $k$ among the alternatives compared.
Table 2. Example of scoring the two treatment alternatives
*Total score was calculated as a weighted sum of the standardized performance scores.
Following the steps of the MCDA, weights have been attached to each criterion in our example. These weights reflect the relative importance of each criterion in the decision making. As mentioned earlier, there are several methods to obtain these weights, but their application is outside the scope of this study. In line with MCDA guidance, the hypothetical weights sum to 1 (Reference Belton and Stewart18).
In the next step, the standardized performance values are combined with the criteria weights to estimate the total scores. These are calculated using:
$$\text{total score}_j = \sum_{k=1}^{K} w_k \, s_{kj}$$

where $w_k$ is the weight assigned to criterion $k$, $s_{kj}$ is the standardized performance score of alternative $j$ on criterion $k$, and $K$ is the number of criteria.
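A brief sketch of how these two formulas could be applied to the hypothetical COPD case is shown below; the criterion values and weights are illustrative placeholders rather than the figures reported in Tables 1 and 2, and the cost-effectiveness ratio criterion is omitted for brevity.

```python
# Apply the two formulas above to a hypothetical COPD-DMP versus usual care:
# standardize each criterion to 0 (worst) - 1 (best), then take the weighted sum.
# All values and weights are illustrative placeholders.
criteria = {
    # criterion: (DMP value, usual-care value, higher_is_better)
    "ACIC self-management support": (7.5, 5.0, True),
    "% quit smoking at 12 months":  (20.0, 12.0, True),
    "SMAS self-efficacy":           (70.0, 64.0, True),
    "FEV1 % predicted":             (62.0, 60.0, True),
    "SGRQ total score":             (38.0, 42.0, False),   # lower score = better QoL
    "total costs per patient":      (5200.0, 4800.0, False),
    "QALYs per patient":            (1.45, 1.40, True),
}
weights = {"ACIC self-management support": 0.10, "% quit smoking at 12 months": 0.10,
           "SMAS self-efficacy": 0.10, "FEV1 % predicted": 0.15, "SGRQ total score": 0.15,
           "total costs per patient": 0.20, "QALYs per patient": 0.20}   # sum to 1

def standardize(value: float, best: float, worst: float) -> float:
    """(value - worst) / (best - worst); 0 = worst observed, 1 = best observed."""
    return (value - worst) / (best - worst) if best != worst else 0.0

totals = {"DMP": 0.0, "usual care": 0.0}
for c, (dmp, usual, higher_is_better) in criteria.items():
    values = (dmp, usual)
    best, worst = (max(values), min(values)) if higher_is_better else (min(values), max(values))
    totals["DMP"] += weights[c] * standardize(dmp, best, worst)
    totals["usual care"] += weights[c] * standardize(usual, best, worst)

print(totals)   # the alternative with the higher total score is preferred
```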
DISCUSSION AND CONCLUSION
Our framework contributes to the methodology of HTA of DMPs by providing an analytical structure for setting up an MCDA in the complex field of disease management, where multiple comparisons of different interventions and of different outcomes and costs have to be made simultaneously. By valuing a broader set of outcomes than just the QALY, this method may overcome the limitations of a conventional cost-effectiveness analysis. Moreover, even when a DMP evaluation has a time horizon that is not long enough to capture changes in final health outcomes, this framework can still support decision making because it considers a wide range of objectives and outcomes. The framework can also be used to evaluate DMPs targeted at patients with multi-morbidity by selecting criteria that are relevant to such a population.
It is important to note that our framework does not intend to explain the mechanisms through which changes can be achieved, as many theories of behavioral change do; it focuses on the decision criteria used in evaluating a DMP. Nevertheless, one of the major challenges is the selection of the criteria to be included in the MCDA, because it is impossible to include all aspects that possibly influence decision making. Should the criteria be restricted to the objectives (outcome categories and costs) included in our framework, or are there other, wider criteria that need to be incorporated, such as the size of the target population or the difficulty of motivating the target population? Should the average cost/QALY ratio be one of the criteria, considering that decision making is commonly based on the incremental ratio of the additional costs compared with current care divided by the gain in QALYs? Also, for the same objective, different and multiple criteria can be chosen that may be equally relevant, and for each criterion multiple indicators and measurements may be available. To overcome these challenges, previous studies have mostly used literature reviews, semi-structured interviews, and expert opinions to restrict the criteria to a manageable number (Reference Goetghebeur, Wagner, Khoury, Levitt, Erickson and Rindress16;Reference Steuten and Buxton34). One way forward would be for researchers and decision makers to agree on a core or minimal set of criteria and indicators for each objective that would be used for a certain period (see Supplementary Table 2, which can be viewed online at www.journals.cambridge.org/thc2013121). Additional criteria could be added to this set where relevant.
Obtaining the weights is another challenge. Different methods are available, which can be roughly categorized into value-based methods, outranking methods, and goal-achievement methods (Reference Huang, Keisler and Linkov13). A discrete-choice experiment is an example of a value-based method, but it requires independence between the criteria. The Analytic Network Process (ANP), an extension of the AHP, may be the most suitable because it overcomes concerns about dependence between criteria (Reference Saaty and Vargas35). AHP is a value-based approach that uses pair-wise comparisons between criteria and between DMPs to derive numerical weights and performance scores. It is called hierarchical because criteria can be divided into sub-criteria. The scores and weights are first derived for each individual criterion and then aggregated assuming multiplicative preferences. The essence of ANP is the possibility of including dependence between the criteria in a decision. This advantage seems important for an MCDA of DMPs because the outcomes of DMPs may interact, so the independence between criteria required by other MCDA methods cannot be guaranteed.
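For illustration, the sketch below shows only the basic AHP weighting step (the ANP extension with dependence between criteria is more involved and not shown): criterion weights are derived as the principal eigenvector of a pairwise comparison matrix. The criteria and the judgements on Saaty's 1 to 9 scale are hypothetical.

```python
import numpy as np

# Minimal AHP weighting sketch: derive criterion weights from a pairwise
# comparison matrix via its principal eigenvector. Judgements are hypothetical.
criteria = ["care delivery", "lifestyle/self-management", "clinical outcomes", "costs"]

# A[i, j] = judged importance of criterion i relative to criterion j (1-9 scale).
A = np.array([[1.0, 2.0, 1/2, 1.0],
              [1/2, 1.0, 1/3, 1/2],
              [2.0, 3.0, 1.0, 2.0],
              [1.0, 2.0, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)                 # largest eigenvalue (Perron root)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()                            # normalize so the weights sum to 1

# Consistency ratio: how far the judgements deviate from perfect consistency.
n = A.shape[0]
consistency_index = (eigvals.real[principal] - n) / (n - 1)
consistency_ratio = consistency_index / 0.90        # 0.90 = Saaty's random index for n = 4

print(dict(zip(criteria, weights.round(3))))
print("consistency ratio:", round(consistency_ratio, 3))   # < 0.10 is usually acceptable
```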
In conclusion, we have presented a framework for the application of MCDA to assess the broader outcomes and costs of DMPs simultaneously. This methodology may stimulate and facilitate a much broader economic evaluation of DMPs than is currently done. It is desirable to further explore the applicability of MCDA approaches to DMPs. Therefore, we have planned empirical applications of this framework within the context of a large study in which we evaluate twenty-two different DMPs (Reference Lemmens, Rutten-Van Molken, Cramm, Huijsman, Bal and Nieboer11). The framework could be used in reimbursement decisions for DMPs or in negotiation processes between DMP providers and health insurers, once the necessary information on the selected criteria has been collected. Using this framework, decision makers at the governmental and organizational levels, as well as health insurers and other payers, could be provided with comprehensive information about what DMPs actually deliver at the patient, professional caregiver, and organizational levels and at what costs. This would improve transparency about which criteria play a role in the decision-making process and to what extent (Reference Peacock, Mitton, Bate, McCoy and Donaldson10). In this way, the results of MCDA could help decision makers improve consistency in decision making and accountability to patients and professionals, with the ultimate aim of improving the quality and efficiency of chronic disease care.
SUPPLEMENTARY MATERIAL
Supplementary Table 1: www.journals.cambridge.org/thc2013120
Supplementary Table 2: www.journals.cambridge.org/thc2013121
CONTACT INFORMATION
Apostolos Tsiachristas, MSc Researcher in financing and economic evaluation of integrated care, Institute for Medical Technology Assessment, Department of Health Policy and Management, Erasmus University Rotterdam, NL
Jane Murray Cramm, PhD Senior researcher in care delivery to, and the well-being of, vulnerable populations, Department of Health Policy and Management, Erasmus University Rotterdam, NL
Anna Nieboer, PhD Professor of Socio-Medical Sciences, Department of Health Policy and Management, Erasmus University Rotterdam, NL
Maureen Rutten-van Mölken, PhD Professor of economic evaluations of innovative health care for chronic diseases, Institute for Medical Technology Assessment, Department of Health Policy and Management, Erasmus University Rotterdam, NL
CONFLICTS OF INTEREST
All authors report a grant to their institution from The Netherlands Organization for Health Research and Development (ZonMw project number 300030201).