
Single Conversations Expand Practitioners’ Use of Research: Evidence from a Field Experiment

Published online by Cambridge University Press:  23 February 2021

Adam Seth Levine*
Affiliation: Johns Hopkins University

Abstract

Many people seek to increase practitioners’ use of research evidence in decision making. Two common strategies are dissemination and interaction. Dissemination can reach a wide audience at once, yet interactive strategies can be beneficial because they entail back-and-forth conversations to clarify how research evidence applies in a particular context. To date, however, we lack much direct evidence of the impact of interaction beyond dissemination. Partnering with an international sustainability-oriented NGO, I conducted a field experiment to test the impact of an interactive strategy (i.e., a single conversation) on practitioners’ use of research evidence in a pending decision. I find that the conversation had a substantial impact on research use relative to only receiving disseminated materials, which likely was due to increased self-efficacy. I also provide practical guidance on how researchers can apply this finding close to home by strengthening linkages with local decision makers.

Copyright © The Author(s), 2021. Published by Cambridge University Press on behalf of the American Political Science Association

Although scientific research rarely dictates the path that practitioners and policy makers should pursue, it is helpful when decisions depend on having reliable knowledge about material and social conditions and/or what will happen if a particular action is taken. At times, research evidence has influenced policies with direct consequences for human well-being, as in the case of seatbelts and secondhand smoke (Brownson et al. 2006). At other times, it has helped to build a new political constituency and work toward a more inclusive democracy (Levine 2019).

For these and many other reasons, academics and other research experts frequently seek to increase decision makers’ use of research.[1] Two types of strategies are common: dissemination and interaction (Nutley, Walter, and Davies 2007).


Dissemination entails circulating research-based evidence—for example, original papers, guidelines, and/or syntheses—to a target audience. It is a one-way transfer of information that can reach numerous decision makers simultaneously.

In contrast, interaction entails dialogue between research experts and decision makers. It typically occurs one-on-one or in a small group. It may involve formal collaborations, in which they work together on projects with shared ownership and decision-making authority. Or it may entail informal collaborations—dynamic exchanges in which they enter with a mindset that is open to learning from one another and are mindful of the boundaries of what they know (Murray 1998). Informal collaborations can be as brief as a single conversation (Levine 2020a).

Past work focusing on a diverse set of decision makers—including nonprofit practitioners, elected policy makers, and civil servants—and spanning several countries found that both dissemination and interaction can increase research use (Haynes et al. 2011; Hird 2005; Jewell and Bero 2008; Knott and Wildavsky 1980; Lomas 2005; Peterson 2018; Weaver and Stares 2001). This work also argues that, despite its smaller reach, interaction often can be more beneficial than dissemination. The reason is that using research to inform decisions entails context-dependent considerations, and dialogue makes it easier to leverage research- and context-based expertise to decide on the most effective path forward (Haynes et al. 2011; Nutley, Walter, and Davies 2007).

Although past work identifies the benefits of interaction, we still have much to learn about its direct impact on research use.[2] Indeed, Peterson (2018, 344) recently noted that “[although] some studies [on the use of research evidence] have systematically acquired empirical information to support their conclusions…the state of understanding in the field remains remarkably impressionistic.”

With that in mind, this article compares research use among a set of practitioners who received disseminated research evidence in the form of written materials with those who received the same written materials and had a short one-on-one conversation with a research expert in which they spoke about how to apply the ideas in their local context. I define “research use” as practitioners directly applying the evidence to a pending decision (Weiss 1979).[3] With an organizational partner, I conducted a field experiment in which we found that the conversation had a significant positive effect on research use. A supplemental survey reveals that this likely is due to increased self-efficacy.


Overall, this study makes two contributions. For academics and other research experts, it contributes to our understanding of how to increase decision makers’ use of scientific research. In many ways, this study is the flip side of work on anti-intellectualism in countries around the world (Gallup 2019; Hofstadter 1966; Merkley 2020; Motta 2018; Zhang and Mildenberger 2020). That is, my focus is on testing a practical strategy for bridging science and society as opposed to helping us understand why such bridges are needed. In addition, this study demonstrates one way that organizations can benefit from formal collaborations with researchers. Our project enabled my organizational partner to calculate a credible return on investment for adding short, research-based conversations to its work, thereby greatly enhancing impact.

WHY MIGHT A CONVERSATION BE IMPACTFUL?

One common method for increasing practitioners’ use of research evidence entails disseminating written materials. Dissemination strategies are most likely to be successful when they use accessible language; cite timely, actionable, and relevant research; and are shared by sources that the audience views as credible (Nutley, Walter, and Davies 2007).

However, even well-crafted dissemination strategies will not always increase research use (Knott and Wildavsky 1980). Two common barriers are limited attention and low self-efficacy.[4] First, like everyone else, practitioners can pay attention to only a limited number of stimuli at once. Due to competing demands, they may be unable to devote attention to disseminated information (Lupia 2013). Second, using research may entail doing something new, and with innovation comes risk (Knott and Wildavsky 1980). Even if practitioners pay attention, they may not yet feel efficacious about successfully applying the evidence to their work.

I expect that a conversation about how to apply evidence in their local context will increase practitioners’ research use relative to only receiving disseminated written materials, and that it may do so by overcoming one or both of these barriers. First, the back-and-forth nature of a conversation, including the need to respond to questions, may increase the likelihood that they actively process the material (Petty, Haugtvedt, and Smith 1995). If so, we would expect the conversation to increase knowledge, measured either objectively or subjectively. Second, the conversation may increase self-efficacy—that is, a judgment of their capability to successfully apply the new information (Bandura 2006). The following study examines both the behavioral and the attitudinal consequences.

FIELD EXPERIMENT SETUP

When I was designing this study, several constraints had to be satisfied. I needed to identify numerous practitioners who faced comparable decisions that entailed clearly using or not using research evidence. Given the varying nature of many practitioners’ work, these constraints typically are difficult to overcome. I decided that one promising approach was to focus on a population that attends a workshop to learn about research evidence relevant to their work and then afterwards faces a concrete moment in which they must decide whether to use what they learned. (Similarly, Jewell and Bero’s 2008 study of the use of research evidence focused on a group of workshop attendees.)

Fortunately, I was able to partner with an international NGO to implement this approach. My partner is based in the United States and employs research experts who lead multiday workshops in countries around the world. Workshop participants typically work at small nonprofit organizations whose mission is to promote environmentally sustainable behavior and public health in their local community. The workshops teach participants how (and why) to conduct issue-awareness campaigns to achieve these goals. Workshop leaders discuss relevant research evidence and guidelines for implementation, while also providing many examples. International NGOs like my partner, and the local nonprofits with whom they work, often are powerful voices for increasing awareness of community problems around the globe (Davis, Murdie, and Steinmetz 2012). Although this experiment (like any experiment) occurred in a particular context and with a particular set of decision makers, the workshop participants shared a number of attributes and constraints common in the nonprofit world more generally (see the online appendix for more details).

Based on previous workshops, my organizational partner was concerned that many participants did not ultimately use the research evidence they learned about (i.e., they did not conduct an issue-awareness campaign in their local community). One possible reason was that the workshop did not include any follow-up.[5] Thus, for this experiment, we decided to add a follow-up component to several workshops in 2018.

The experiment included workshop attendees in Kenya and Mexico in March 2018 and in Ecuador and Nepal in June 2018. Eight weeks after workshops concluded, all participants had to decide whether to conduct an issue-awareness campaign. The reason for the simultaneity was that my partner offered competitive grants to cover the cost; therefore, the application deadline provided a comparable decision point. In this context, applying for a grant was equivalent to committing to conduct an issue-awareness campaign, for two reasons. First, none of the participants in these four workshops reported that they could afford to conduct a campaign without the grant (based on pre-workshop surveys). Second, submitting an application entailed a promise to conduct the campaign if awarded grant money. Thus, whether they applied for a grant was a concrete behavioral outcome that effectively corresponded to research use. Data from one year after the grant deadlines further justified this equivalence claim.

Experimental Procedure

Immediately after each workshop, I randomly assigned participants to receive one of two types of follow-up. Those randomly assigned to the control group (i.e., dissemination only) received a personalized email with additional written materials from their workshop leader. The email noted that these materials were important for completing a successful grant application and conducting an issue-awareness campaign. The materials described in more depth two research-based topics covered during the workshop and therefore were related to but not duplicative of that content. Participants randomly assigned to the treatment group also received a personalized email from their workshop leader with the same written materials, as well as a request to schedule a 30-minute Skype conversation to discuss how to apply them in a campaign in their local community. The online appendix provides more details on the substance of the written materials, as well as the conversation script.[6]

After the treatment-group conversations were complete, all participants in both the control and the treatment group received another personalized email from their workshop leader requesting that they take a check-in survey and inviting them to ask any questions before the grant deadline. This “check-in” email and survey served important experimental design purposes. It ensured that participants in both the control and the treatment group felt that they had received personalized attention near the deadline. It also allowed us to collect measures of knowledge and self-efficacy.

Figure 1 is a summary of the experimental procedure. Two other points are worth noting. First, all workshop participants were told in advance that their workshop leader would not have the final say on who received the competitive grants. Second, in some cases, more than one person from a given nonprofit attended a workshop, so we implemented a nonprofit-level clustered random assignment, blocked on the workshop location (see the online appendix for more details).

Figure 1 Timeline of Field Experiment
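To make the assignment procedure concrete, the sketch below shows one way a nonprofit-level clustered assignment blocked on workshop location could be implemented. It is purely illustrative: the function and variable names are my own and are not taken from the study's replication materials, although the rule of sending the “extra” cluster in an odd-sized block to treatment mirrors the rule described in the notes to table 1.

```python
import random
from collections import defaultdict

def blocked_cluster_assignment(participants, seed=2018):
    """Illustrative blocked, cluster-level random assignment.

    `participants` is a list of dicts with 'id', 'nonprofit', and
    'workshop' keys. All attendees from the same nonprofit (cluster)
    receive the same condition, and randomization is carried out
    separately within each workshop location (block). When a block
    contains an odd number of clusters, the extra cluster is assigned
    to treatment.
    """
    rng = random.Random(seed)

    # Group clusters (nonprofits) by block (workshop location).
    blocks = defaultdict(set)
    for p in participants:
        blocks[p["workshop"]].add(p["nonprofit"])

    cluster_condition = {}
    for workshop, clusters in blocks.items():
        clusters = sorted(clusters)
        rng.shuffle(clusters)
        n_treat = (len(clusters) + 1) // 2  # extra cluster goes to treatment
        for i, nonprofit in enumerate(clusters):
            cluster_condition[nonprofit] = "treatment" if i < n_treat else "control"

    # Every participant inherits their nonprofit's assignment.
    return {p["id"]: cluster_condition[p["nonprofit"]] for p in participants}

if __name__ == "__main__":
    toy = [
        {"id": 1, "nonprofit": "A", "workshop": "Kenya"},
        {"id": 2, "nonprofit": "A", "workshop": "Kenya"},
        {"id": 3, "nonprofit": "B", "workshop": "Kenya"},
        {"id": 4, "nonprofit": "C", "workshop": "Mexico"},
        {"id": 5, "nonprofit": "D", "workshop": "Mexico"},
    ]
    print(blocked_cluster_assignment(toy))
```

Assigning at the cluster level avoids contamination between colleagues at the same nonprofit, and blocking on workshop location keeps the treatment and control groups balanced within each country.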

FIELD EXPERIMENT RESULTS

In total, the experiment involved 59 participants: 23 from the Kenya workshop, 13 from the Mexico workshop, 16 from the Ecuador workshop, and 7 from the Nepal workshop (Levine 2020c).[7] The compliance rate among those randomly assigned to the treatment group (i.e., the share who were assigned to a Skype conversation and with whom workshop leaders were able to conduct it) was high: 82.4%. To the best of our knowledge, noncompliance was unrelated to the content of the experiment or how people might respond to the conversation (Gerber and Green 2012). Instead, it was due to idiosyncratic factors such as weather and personal family emergencies. None of the treatment-group participants refused to take part in a conversation due to lack of interest.

Behavioral Results

Here I present the behavioral results: that is, the percentage of people who applied for a grant to conduct an issue-awareness campaign (table 1). The “intent-to-treat effect” measures the effect of receiving the request to have a conversation. This answers the question: “What is the overall effect in the real world where the intervention is made available yet some people take advantage of it whereas others do not?” Overall, 12% of people in the control group (i.e., three of 25) submitted an application, compared to 59% in the treatment group (i.e., 20 of 34). Therefore, the intent-to-treat effect was a substantial 47 percentage points.

Table 1 Impact of Conversation on Practitioner Behavior

Notes: The number of participants randomly assigned to the treatment group was higher because each block contained an odd number of clusters. In advance, we adopted a rule that the “extra” cluster would always be assigned to the treatment group. P-values are two-tailed, with estimates produced using randomization inference (Aronow and Samii 2012).
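As a rough illustration of how randomization inference works, the snippet below runs a simple permutation test on the top-line counts reported above (3 of 25 control versus 20 of 34 treatment applicants). This is only a sketch: the published estimates use the ri package in R and respect the blocked, clustered design, whereas this simplified version pools all participants and ignores blocks and clusters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Top-line counts reported in the text: 3 of 25 control participants
# applied for a grant versus 20 of 34 treatment participants.
control = np.array([1] * 3 + [0] * 22)
treatment = np.array([1] * 20 + [0] * 14)

observed_itt = treatment.mean() - control.mean()  # roughly 0.47

# Permutation (randomization) test of the sharp null of no effect:
# repeatedly reshuffle the pooled outcomes across the two groups and
# see how often a difference at least as large arises by chance.
pooled = np.concatenate([treatment, control])
n_treat = len(treatment)

perm_effects = []
for _ in range(10_000):
    shuffled = rng.permutation(pooled)
    perm_effects.append(shuffled[:n_treat].mean() - shuffled[n_treat:].mean())

p_two_tailed = np.mean(np.abs(perm_effects) >= abs(observed_itt))
print(f"ITT = {observed_itt:.2f}, two-tailed p = {p_two_tailed:.4f}")
```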

The “complier average causal effect” accounts for the fact that some participants who received the request to schedule a conversation did not do so. It estimates the causal effect of actually receiving the intervention—that is, of actually having the conversation about how to apply the research evidence. As noted previously, workshop leaders had conversations with 82.4% of the treatment-group participants. Thus, the effect of actually having the conversation also was quite substantial: 57 percentage points.
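As a back-of-the-envelope check (assuming the standard instrumental-variables logic with one-sided noncompliance), the complier average causal effect is approximately the intent-to-treat effect divided by the compliance rate:

$$
\widehat{\text{CACE}} \approx \frac{\widehat{\text{ITT}}}{\text{compliance rate}} = \frac{0.47}{0.824} \approx 0.57,
$$

which matches the roughly 57-percentage-point effect reported above; any small discrepancy reflects rounding and the blocked, clustered estimation.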

Overall, examining both the intent-to-treat effect and the complier average causal effect, I found strong evidence that having a conversation greatly increased research use beyond dissemination. In addition, follow-up data with all workshop participants one year after the grant deadline justify my assumption that applying for a grant is equivalent to committing to conduct an issue-awareness campaign. At that time, all but one participant who applied for the grant and received money was actively conducting a campaign (unfortunately, he had lost his job and returned the money). In addition, none of those who did not receive grant money (because they did not apply or applied but were not awarded funding) reported actively conducting campaigns.

Survey Results

As mentioned previously, before the grant deadline, workshop leaders emailed a brief survey to all participants (see the online appendix for the wording of questions). The survey measured objective knowledge (i.e., Do they answer questions correctly?) and subjective knowledge (i.e., Do they feel uncertain about what they know?). We also assessed perceptions of self-efficacy. Due to feasibility constraints, it was not possible to design the field experiment in such a way that would satisfy all of the assumptions required for a formal mediation test (Bullock, Green, and Ha 2010). Therefore, I treat these survey responses as a suggestive but not dispositive test of the underlying mechanism(s).

We collected usable survey responses from 47 people. Nine people did not answer the survey at all (i.e., five in the control group, four in the treatment group). In addition, there were three participants who did not include their contact information in responding to the survey, and so we were unable to match their responses to experimental assignments. In total, I did not have complete survey data from 12 participants (i.e., six in the control group and six in the treatment group). I verified that this omission was not related to treatment assignment—that is, the difference in attrition rates between the control and the treatment groups was not statistically significant (p=0.65). In addition, I did not find evidence that attrition was related to compliance status (p=0.56).

Table 2 summarizes the survey results. Each line in the table represents the average of a short battery of questions (described in the online appendix). Overall, I found no evidence that the conversation affected either (1) factual knowledge about aspects of conducting an issue-awareness campaign, or (2) participants’ perceptions of how uncertain they felt about what to do. This likely reflects the fact that both the control and the treatment groups received exactly the same disseminated materials written in clear and accessible language. Yet, I observed evidence (bolded in table 2) suggesting that the conversation boosted self-efficacy—participants personally felt more capable of successfully conducting a campaign. The conversation did not seek to rehash the basic principles discussed in the disseminated materials but instead entailed dialogue on how to apply that information locally. Therefore, this difference in self-efficacy suggests that it was the content of the conversation that mattered as opposed to simply the fact that participants in the treatment group had an “extra” interaction.

Table 2 Impact of Conversation on Practitioner Attitudes

Notes: All variables are measured on a 0–1 scale. Each entry displays the difference between the treatment group and the control group. P-values are two-tailed, with estimates produced using randomization inference.

Finally, during the latter two data collections (i.e., after the workshops in Ecuador and Nepal), I added a question at the end of the survey to measure outcome expectations: participants’ perceptions of the likelihood of receiving the grant if they applied. This question assessed whether the treatment-group conversations may have unintentionally boosted participants’ expectations about receiving the grant money. Admittedly, the number of respondents who were asked this question was low (N=15); however, with that caveat in mind, I found no evidence that the conversation increased expectations. In fact, the estimated effect was in the opposite direction: ITT: -0.05 (p=0.81); CACE: -0.06 (p=0.81).

CONCLUSION

My results suggest that although interactive strategies for increasing research use often are more costly than disseminating information to a large audience at once, their impact can be significant. Future work is needed to better understand how this impact may vary depending on the nature of decision makers’ values and political considerations within nonprofits, bureaucracies, and/or legislatures. We also should examine other facets of potential research use. For instance, elected policy makers may not directly use new research evidence in a pending decision, but it is possible that a conversation with a research expert will change which problems they prioritize, how they conceptualize the nature of those problems, how they build coalitions, and/or whether they use research evidence to bolster preexisting decisions (Bogenschneider and Corbett 2010; Weiss 1979).

I close on a practical note. One way that readers can apply these findings close to home is by initiating new informal collaborations directly with local decision makers. An accumulating body of work shows how to conduct such outreach, including the range of goals that policy makers (Bogenschneider and Corbett 2010) and nonprofit practitioners (Levine 2020a) may have, as well as the importance of being not only credible but also relational (Levine 2020b). Surveys of policy makers (especially at the subnational level) reveal that they are open to this type of cold outreach and do not regularly receive it (Bogenschneider and Corbett 2010). Past work targeting nonprofit practitioners also uncovers demand (Levine 2020a). To be sure, not everyone will be interested, but when connections do happen, decision makers can gain relevant information tailored to their local context, and research experts can gain new insights about limits in an existing body of research as well as context-dependent implementation challenges. In addition to these private benefits, increasing the prevalence of these connections establishes norms of interaction, a public benefit.


ACKNOWLEDGMENTS

I greatly appreciate the opportunity to collaborate with my partner organization. For feedback and intellectual inspiration on this project, I owe many thanks to Jake Bowers, Ali Cirone, Bryce Corrigan, Don Green, Guy Grossman, Sabrina Karim, Bruce Lewenstein, Tom Pepinsky, and Mark Peterson.

DATA AVAILABILITY STATEMENT

Replication materials are available on Harvard Dataverse at https://doi.org/10.7910/DVN/PXLWBZ.

SUPPLEMENTARY MATERIALS

To view supplementary material for this article, please visit http://dx.doi.org/10.1017/S1049096520002000.

Footnotes

1. I use the term “research experts” to encompass researchers, knowledge brokers, and research translators.

2. To my knowledge, Dobbins et al. (2009) is the only other study that assesses the impact of interaction beyond dissemination, although using a very different type of intervention and sample than the present investigation.

3. This corresponds to what Weiss (1979) referred to as “instrumental” use.

4. Other reasons that decision makers may resist research evidence are that it conflicts with personal values or professional incentives. I do not discuss these herein because they are unlikely to apply in the context of my field experiment.

5. Cost also was a barrier, which we addressed in the context of our study.

6. The script used relationship-building techniques to make participants feel comfortable sharing information during the back-and-forth conversation (Leary 2010).

7. This experiment did not include all participants in each workshop. See the online appendix for more details about inclusion criteria.

REFERENCES

Aronow, Peter, and Samii, Cyrus. 2012. “Ri: R Package for Performing Randomization-Based Inference for Experiments.” http://cran.r-project.org/web/packages/ri.
Bandura, Albert. 2006. “Guide for Constructing Self-Efficacy Scales.” In Self-Efficacy Beliefs of Adolescents, Vol. 5, ed. Pajares, Frank and Urdan, Tim, 307–37. Greenwich, CT: Information Age Publishing.
Bogenschneider, Karen, and Corbett, Thomas J. 2010. Evidence-Based Policymaking. New York: Routledge.
Brownson, Ross C., Royer, Charles, Ewing, Reid, and McBride, Timothy D. 2006. “Researchers and Policymakers: Travelers in Parallel Universes.” American Journal of Preventive Medicine 30:164–72.
Bullock, John G., Green, Donald P., and Ha, Shang E. 2010. “Yes, But What’s the Mechanism? (Don’t Expect an Easy Answer).” Journal of Personality and Social Psychology 98:550–58.
Davis, David R., Murdie, Amanda, and Steinmetz, Coty Garnett. 2012. “‘Makers and Shapers’: Human INGOs and Public Opinion.” Human Rights Quarterly 34:199–224.
Dobbins, Maureen, Hanna, Steven E., Ciliska, Donna, Manske, Steve, Cameron, Roy, Mercer, Shawna L., O’Mara, Linda, DeCorby, Kara, and Robeson, Paula. 2009. “A Randomized Controlled Trial Evaluating the Impact of Knowledge Translation and Exchange Strategies.” Implementation Science 4 (1): 61.
Gallup. 2019. “Wellcome Global Monitor—First Wave Findings.” https://wellcome.ac.uk/sites/default/files/wellcome-global-monitor-2018.pdf. Accessed June 18, 2020.
Gerber, Alan S., and Green, Donald P. 2012. Field Experiments. New York: W. W. Norton.
Haynes, Abby S., Gillespie, James A., Derrick, Gemma E., Hall, Wayne D., Redman, Sally, Chapman, Simon, and Sturk, Heidi. 2011. “Galvanizers, Guides, Champions, and Shields: The Many Ways That Policymakers Use Public Health Researchers.” The Milbank Quarterly 89:564–98.
Hird, John A. 2005. Power, Knowledge, and Politics. Washington, DC: Georgetown University Press.
Hofstadter, Richard. 1966. Anti-Intellectualism in American Life. New York: Knopf Publishing.
Jewell, Christopher J., and Bero, Lisa A. 2008. “‘Developing Good Taste in Evidence’: Facilitators of and Hindrances to Evidence-Informed Health Policymaking in State Government.” The Milbank Quarterly 86:177–208.
Knott, Jack, and Wildavsky, Aaron. 1980. “If Dissemination Is the Solution, What Is the Problem?” Knowledge: Creation, Diffusion, Utilization 1:537–78.
Leary, Mark R. 2010. “Affiliation, Acceptance, and Belonging: The Pursuit of Interpersonal Connection.” In Handbook of Social Psychology, Vol. 2, ed. Fiske, Susan T., Gilbert, Daniel T., and Lindzey, Gardner, 864–97. Hoboken, NJ: John Wiley & Sons, Inc.
Levine, Adam Seth. 2019. “Why Social Science? Because It Tells Us How to Create More Engaged Citizens.” www.whysocialscience.com/blog/2019/9/24/because-it-tells-us-how-to-create-more-engaged-citizens.
Levine, Adam Seth. 2020a. “Research Impact Through Matchmaking (RITM): Why and How to Connect Researchers and Practitioners.” PS: Political Science & Politics 53:265–69.
Levine, Adam Seth. 2020b. “Why Do Practitioners Want to Connect with Researchers? Evidence from a Field Experiment.” PS: Political Science & Politics 53:712–16.
Levine, Adam Seth. 2020c. “Replication Data for: Single Conversations Expand Practitioners’ Use of Research: Evidence from a Field Experiment.” Harvard Dataverse. https://doi.org/10.7910/DVN/PXLWBZ.
Lomas, Jonathan. 2005. “Using Research to Inform Healthcare Managers’ and Policy Makers’ Questions: From Summative to Interpretive Synthesis.” Healthcare Policy 1:55–71.
Lupia, Arthur. 2013. “Communicating Science in Politicized Environments.” PNAS 110:14048–54.
Merkley, Eric. 2020. “Anti-Intellectualism, Populism, and Motivated Resistance to Expert Consensus.” Public Opinion Quarterly 84 (1): 24–48.
Motta, Matthew. 2018. “The Polarizing Effect of the March for Science on Attitudes toward Scientists.” PS: Political Science & Politics 51:782–88.
Murray, Vic. 1998. “Interorganizational Collaborations in the Nonprofit Sector.” In International Encyclopedia of Public Policy and Administration, Vol. 2, ed. Shafritz, Jay M., 1192–96. Boulder, CO: Westview.
Nutley, Sandra M., Walter, Isabel, and Davies, Huw T. O. 2007. Using Evidence: How Research Can Inform Public Services. Bristol, England: The Policy Press.
Peterson, Mark A. 2018. “In the Shadow of Politics: The Pathways of Research Evidence to Health Policy Making.” Journal of Health Politics, Policy, and Law 43:341–76.
Petty, Richard E., Haugtvedt, C. P., and Smith, S. M. 1995. “Elaboration as a Determinant of Attitude Strength: Creating Attitudes That Are Persistent, Resistant, and Predictive of Behavior.” In Attitude Strength: Antecedents and Consequences, ed. Petty, Richard E. and Krosnick, Jon A., 93–130. Hillsdale, NJ: Lawrence Erlbaum Associates.
Weaver, R. Kent, and Stares, Paul B. (eds.). 2001. Guidance for Governance: Comparing Alternative Sources of Public Policy Advice. Tokyo: Japan Center for International Exchange.
Weiss, Carol H. 1979. “The Many Meanings of Research Utilization.” Public Administration Review 39:426–31.
Zhang, Baobao, and Mildenberger, Matto. 2020. “Scientists’ Political Behaviors Are Not Driven by Individual-Level Government Benefits.” PLOS One, May 6.