
The Effects of Interactive Requests on the Quantity and Quality of Survey Responses: An International Methodological Experiment

Published online by Cambridge University Press:  13 February 2025

Farsan Ghassim*
Affiliation:
University of Oxford, UK, and Lund University, Sweden

Abstract

A perennial issue of survey research is that some participants do not answer all questions. Interactive follow-up requests are a novel approach to this problem. However, research on their effectiveness is scarce. I present the most comprehensive study yet on the effects of interactive requests on item non-responses. Theoretically, I outline different pathways whereby follow-up requests may effectively increase response rates and improve data quality: reminding, motivating, instructing, monitoring, and sanctioning. To test my hypothesis that interactive requests increase item response rates, I conducted an online survey experiment in 2021 on diverse samples of around 3,100 respondents in each of ten countries worldwide. I find that follow-up requests generally increase response rates, although effects vary by country. Depending on the question and survey design, interactive requests reduce item non-responses by up to 47 per cent across countries, while not adversely affecting data quality. I thus recommend response requests to increase survey data efficiency.

Type
Letter
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Introduction

In recent years, public opinion researchers have increasingly turned to web surveys (Tourangeau, Conrad, and Couper 2013). While they are highly effective and efficient in gathering primary data, online surveys are also plagued by specific problems due to their self-administered nature (Evans and Mathur 2005). Item non-response is one of the central issues affecting survey efficiency and quality (Berinsky 2007) – both in traditional modes and web surveys (Denscombe 2009; Gooch and Vavreck 2019). Participants who do not respond to certain questions reduce the cost-efficiency of the data-gathering process. Moreover, if item non-responses are biased – that is, if certain types of participants are less likely to answer particular questions – this may affect the results’ representativeness for the underlying population of interest. Depending on the kind of question, significant proportions of a sample may decide not to respond on certain occasions. Addressing the issue of item non-response is thus of utmost importance to survey-based work.

Research on feedback as a way of getting higher-quality responses dates back decades (Cannell, Oksenberg, and Converse 1977; Miller and Cannell 1982). The onset of the Internet survey age brought the possibility of interacting with participants in novel ways. Scholars have thus taken a renewed interest in feedback as a way of improving survey data quality (Conrad et al. 2017), among others, in terms of response rates (Vicente and Reis 2010, 259–260). Interactive requests are one such method; for example, follow-up prompts in pop-up windows asking respondents to answer questions that they previously left unanswered (see Image 1).

Image 1: Example response request.

In an early study of a few hundred American respondents, DeRouvray and Couper (2002) found that prompts decreased question-skipping rates significantly. Holland and Christian (2009) studied American university students and found that interactive prompts do not improve response rates to open-ended questions. In another study limited to the US, Sun et al. (2023, 109) show that prompts on speeding and straightlining do not affect response rates. Conducting a representative survey of the Dutch population, de Leeuw, Hox, and Boevé (2016) discovered that follow-up probes are effective at increasing response rates, particularly in combination with not offering an explicit ‘I don’t know’ option but allowing respondents to skip questions. Exploring the underlying mechanisms, Zhang and Conrad (2018) argue that feedback prompts can lead to genuine improvements in participant behaviours, but also run the risk of strengthening social desirability bias in responses. While these are valuable first steps in evaluating the promise of interactive requests in the age of web surveys, we still lack comprehensive multinational research assessing the effects of response requests on different types of questions.

Contributing to the literature, the present study provides a novel theoretical framework for the potential effectiveness of interactive requests in increasing response rates. I test my central hypothesis of response requests’ effectiveness in an original online survey experiment on diverse population-based samples in ten countries across the global South, North, East, and West, thus providing the most comprehensive study yet in this regard. I find that follow-up requests can reduce item non-responses substantially (depending on the question, survey design, and country context), while not adversely affecting response quality. I thus recommend response requests to increase survey data efficiency.

Theoretical framework

Let us begin by acknowledging that there are instances in which item non-responses are valid; for example, when respondents truly do not know the answer or find a question too sensitive. Given the existence of such valid item non-responses, why should we expect interactive follow-up requests to affect item response rates and quality? While prior studies have advanced our knowledge of the effects of interactive requests on survey responses, we lack a theoretical framework for understanding why and how such interactive requests work. Theoretically, response requests may either increase or decrease item non-responses. The former effect may materialize if participants perceive (repeated) response requests as burdensome (Crawford, Couper, and Lamias 2001), for example by increasing survey breakoff rates. However, in line with prior research, I expect that interactive requests reduce item non-response rates. This section concentrates on the theoretical reasons why the latter effect may materialize.

To begin with, survey participants may genuinely forget to answer questions. Consider web surveys that often feature many different questions on one page, sometimes in grids. Even conscientious participants may overlook questions that they would have liked to answer. In such cases, interactive requests can serve as useful reminders, helping respondents to complete the survey. Reminder strategies for online surveys have mostly been studied with regard to unit non-response (Barnhart, Reddy, and Arnold 2021; Cook et al. 2016), while the focus here is on item non-response.

Not all cases of skipped questions are merely accidental omissions by participants, however. From existing methodological research, we know that item non-response is a common satisficing phenomenon (Berinsky 2007; Krosnick et al. 2002). That is, some participants skip questions in order to avoid the effort associated with answering them. In the present context, I am also interested in the potential effects of interactive requests on such respondents. To this end, let us now consider different ways in which participants may satisfice and how follow-up requests can intervene in such cases.

For one, participants may have the impression that their answers do not really matter and thus decide to invest little effort into completing a survey. On such occasions, follow-up requests can motivate respondents to provide genuine substantive answers, meaning responses that truly reflect their underlying attitude or knowledge. Interactive feedback may indicate to respondents that the researchers are sincerely interested in obtaining participants’ answers, which can motivate them to respond when they would have skipped a question otherwise. Scholars have examined the motivating effects of other survey methods such as self-commitment (Cibelli Hibben, Felderer, and Conrad 2022), while my focus here is on the potential motivational effects of response requests.

Another avenue whereby follow-up requests can effectively reduce item non-response rates is by serving as instructions. While explicit instructions are a commonly discussed feature of survey design to reduce non-responses (Vicente and Reis 2010, 253), I concentrate on the potentially implicit instructional effects of response requests. In particular, participants may believe that, because skipping questions is possible, it is also legitimate under any circumstances. In such cases, follow-up requests may affect participants’ understanding of the desirability of different response behaviours. Specifically, such prompts may signal to respondents that skipping questions is less legitimate than answering them. Requests may thus serve as instructions to some participants about desired behaviours and thereby reduce item non-response rates.

Next, participants may be aware that skipping questions is not a desirable behaviour from researchers’ perspectives but still decide to do so – at least as long as they feel unobserved. In such instances, response requests can lead participants to believe that the researchers are monitoring their behaviour. This may cause respondents to provide more genuine substantive answers (that is, responses reflecting their true attitudes or knowledge), to provide more disingenuous substantive responses (answers that do not reflect their true underlying positions), or it may not change their answers at all – perhaps because they understand that skipping questions is not desired by researchers but nonetheless accepted. Monitoring respondents’ behaviour has been studied in other survey contexts (Miura and Kobayashi 2015; Van Selm and Jankowski 2006, 449), whereas my focus is on the possible monitoring effects of interactive requests in cases of attempted item non-response.

Lastly, follow-up requests may effectively increase response rates by making item non-responses more time-consuming. Interactive feedback prompts appear only when survey participants attempt to proceed without answering a question. The time that it takes to answer questions conscientiously may be one factor leading to item non-response. Assuming that some participants want to complete surveys as quickly as possible, interactive prompts may effectively prevent them from doing so by simply skipping questions. Instead, potential shirkers may decide that it is, in fact, more time-efficient to respond to (subsequent) questions than to attempt to skip them and effectively add additional questions (that is, the interactive prompts) to the survey. Thus, the final pathway whereby follow-up requests could increase response rates may be called sanctioning. Like monitoring, sanctioning can improve data quality by leading to more genuine substantive responses, reduce data quality by producing more non-genuine substantive responses, or even lead to increased breakoff rates (Peytchev 2009). Whether response quality decreases or increases as a result of response requests is a matter for empirical exploration.

In sum, there are different avenues whereby interactive follow-up requests may increase the quantity and quality of responses: reminding, motivating, instructing, monitoring, and sanctioning. Based on these theoretical considerations, as well as past research that has found interactive requests to be effective at reducing item non-response (DeRouvray and Couper 2002), my ex-ante hypothesis was that follow-up requests reduce item non-responses in surveys.[1] Moreover, building on prior research that found positive effects (Sun et al. 2023), I explore interactive requests’ potential impact on response quality.

Research design

In a multi-country survey experiment, I tested to what extent and how follow-up requests affect the quantity and quality of responses to different questions. In order to ensure the generalizability of results to different cultural contexts, I included a highly diverse sample of ten countries from the global South, North, East, and West: Australia, Canada, Colombia, Egypt, France, Hungary, Indonesia, Kenya, South Korea, and Turkey. My English questionnaire was translated into the survey countries’ primary languages by native speakers.[2]

In collaboration with Qualtrics (acting as both sample aggregator and survey platform) and its sample suppliers (Dynata, Cint, Lucid, and Toluna), I fielded the experiments between May and October 2021. My respondents were recruited from opt-in panels in the target countries based on quotas for gender, age, region, and education. This quota-based approach resulted in highly diverse samples of 3,100 respondents or more in each country. The aggregate cross-country sample included 32,319 respondents, randomized by Qualtrics into control and treatment conditions of approximately equal sizes (control: n = 16,150; treatment: n = 16,169). While the aggregate sample size is relatively large, it should be noted that item non-responses only affect small fractions of responses.[3]

My experiment focused on six newly developed questions, which differed in terms of their presumed knowledge requirements, as this factor could conceivably influence the frequency of item non-responses and how interactive requests affect response behaviour. Supplementary Material section 5 presents my dependent variable questions, which may be approximately ordered from low to high knowledge requirements as follows: self-reported interest in world politics (which, as a mere subjective statement of interest, requires essentially no knowledge); prioritization of the environment or the economy; expected adherence to cultural norms by fellow citizens, including immigrants; opinion on the home country’s global responsibility; views on governmental market intervention versus self-regulation (all attitudinal questions, but ones that appear to require some prior knowledge for the formation of valid views); and knowledge of the IMF headquarters’ location (to which there is a correct answer, which is arguably not common knowledge). My dependent variable in each case is whether respondents skipped the respective question by proceeding without providing an answer or by selecting the ‘I don’t know’ (DK) option that was randomly provided to around half of the respondents in both the control and treatment groups.[4]
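
For concreteness, this dependent variable can be coded directly from a raw survey export. The following Python sketch is purely illustrative: the column names, answer labels, and the 'DK' coding are assumptions for the example, not the study's actual export format.

```python
import pandas as pd

# Hypothetical raw export: one row per respondent, one column per question.
# Blank cells mark skipped questions; 'DK' marks the explicit 'I don't know'
# option that a random half of respondents saw. Column names are invented.
raw = pd.DataFrame({
    "interest_world_politics": ["Very interested", None, "DK", "Not at all interested"],
    "imf_headquarters": ["Washington, DC", "DK", None, "London"],
})

# Dependent variable: 1 if the respondent skipped the question or chose DK,
# 0 if they gave a substantive answer.
item_nonresponse = (raw.isna() | raw.eq("DK")).astype(int)

print(item_nonresponse.mean())  # item non-response rate per question
```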

The control group in my experiment received no response requests, while treatment group participants saw such a request if they tried to proceed to the next survey page without providing an answer to at least one of the questions on that page. Each of my dependent variable questions was posed on a separate page and in random order. A standardized interactive response request is offered by Qualtrics as a question-design option on its survey platform (see Image 1 above). I used this prompt as my experimental treatment. Both the control and treatment groups could skip questions; the treatment group just had to confirm that they wanted to continue without answering.

Results and discussion

I first present results aggregated across all ten survey countries before turning to heterogeneous treatment effect analyses by country. In order to evaluate the effect of interactive requests, I examined the changes in item non-response rates to the different questions outlined above. As expected, the effects are negative and statistically significant (at least p<0.01) for four of the six dependent variables. Among these, the effects range between 0.5 and 0.8 percentage points of all responses. Item non-response rates to the cultural norms and IMF knowledge questions are not significantly affected by the interactive request in the cross-country sample. Figure 1 illustrates these results.

Figure 1. Effect of interactive requests on item non-response rates.

Note: The figure shows changes in item non-response rates due to interactive requests (see Image 1) as a proportion of all responses. For comparison of effect sizes and better legibility, the different control group means have been normalized to zero. The dots illustrate the difference-in-means between the control and treatment groups. The lines indicate 95 per cent confidence intervals. Supplementary Material section 6.1 provides detailed data.

While the effects may not seem large at first sight, they are quite substantial when expressed as reductions in the percentage of item non-responses (rather than as percentage points among all answers, of which item non-responses are only a small part, with a median rate of 2.7 per cent in the control group – see Supplementary Material section 6.1). Figure 2 below shows that, for the questions with significant effects above, roughly one-fifth (specifically, between 19 and 22 per cent) of item non-responses were discouraged by the follow-up request.

Figure 2. Percentages of discouraged item non-responses due to response request.

Note: The figure shows the percentages of remaining and discouraged item non-responses as a percentage of item non-responses in the control groups of the dependent variables. Supplementary Material section 6.1 provides detailed data.
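
To make the arithmetic behind these two ways of expressing the effect explicit, the sketch below computes the difference in non-response rates (in percentage points), a conventional 95 per cent confidence interval, and the relative reduction as a share of the control group's non-response rate. The inputs are simulated and only chosen to roughly mirror the magnitudes reported here; this is not the paper's replication code.

```python
import numpy as np
from scipy import stats

def request_effect(control, treatment, alpha=0.05):
    """Difference in item non-response rates (treatment minus control),
    a normal-approximation confidence interval, and the relative reduction
    expressed as a share of the control group's non-response rate."""
    p_c, p_t = control.mean(), treatment.mean()
    diff = p_t - p_c
    se = np.sqrt(p_c * (1 - p_c) / len(control) + p_t * (1 - p_t) / len(treatment))
    z = stats.norm.ppf(1 - alpha / 2)
    ci = (diff - z * se, diff + z * se)
    discouraged_share = -diff / p_c  # e.g. 0.006 / 0.027 is roughly 22 per cent
    return diff, ci, discouraged_share

# Simulated 0/1 non-response indicators with magnitudes similar to those reported:
# a control rate around 2.7 per cent and a treatment rate around 2.1 per cent.
rng = np.random.default_rng(1)
control = rng.binomial(1, 0.027, 16150)
treatment = rng.binomial(1, 0.021, 16169)
print(request_effect(control, treatment))
```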

The result of an effective reduction of item non-responses is especially noteworthy since a random half of respondents were given an explicit ‘I don’t know’ choice (for another experiment). While choosing this option did not trigger an interactive response request, it was counted as having effectively skipped the question, which potentially biased my estimates of item non-responses – in the control and treatment groups – upward (Krosnick et al. 2002), and thus may have limited the response request’s apparent effectiveness.

Similar to de Leeuw and colleagues (2016), I find that, when concentrating on respondents who were not given an explicit DK choice, the rate of item non-responses decreases significantly for almost all dependent variable questions, albeit only at p<0.1 for the IMF knowledge variable. The remaining exception is the question on cultural norms (see Figure 3 and Supplementary Material section 6.2), which may be due to the higher sensitivity of that question. The substantive effects in terms of reductions of item non-responses (as a percentage of total responses) range between 0.6 and 0.8 points. Focusing only on the item non-responses themselves, Figure 4 shows that between 10 and 47 per cent are discouraged as a result of the request. Note that these differences-in-means are all substantively larger than in the analyses above, which included the response conditions with explicit DK options.

Figure 3. Effect of interactive requests on item non-responses when no ‘I don’t know’ offered.

Note: See notes below Figure 1. This analysis is limited to the randomly selected respondents who did not see an explicit DK option. Supplementary Material section 6.2 provides detailed data.

Figure 4. Discouraged item non-responses due to requests when no ‘I don’t know’ offered.

Note: See notes below Figure 2. This analysis is limited to the randomly selected respondents who did not see an explicit DK option. Supplementary Material section 6.2 provides detailed data.

Having established that interactive prompts effectively increase the quantity of answers, let us now turn to their effects on response quality: Are prompted responses better than, worse than, or as good as non-prompted responses? While many methods have been proposed to evaluate the validity and reliability of responses relating to interests and attitudes (for example, Campbell and Fiske 1959; Fink and Litwin 1995; Marsden and Wright 2010), this is hardly possible for single items as in my survey, since only participants themselves can ascertain whether their responses reflect the underlying truth. However, in the case of knowledge questions, we can investigate the extent to which interactive requests affect response quality. An insignificant difference in correct responses between the control and treatment groups would indicate that response requests have neither a negative nor a positive effect on data quality. If response requests lead to more random guesses, we would expect the proportion of correct responses among substantive responses to decrease, indicating increased noise and lower data quality. Conversely, if interactive prompts lead more knowledgeable participants to complete the question, we would expect the proportion of correct responses among all substantive answers to increase.

As illustrated in Figure 5, we observe a null effect. The proportion of correct answers does not change significantly – neither among all responses (including non-substantive answers) nor among substantive responses. Therefore, it appears that, while response requests may lead to more respondents attempting to answer knowledge questions (at least when no explicit DK option is offered), their guesses are about as likely to be right as those of respondents who answer the questions without being prompted. In sum, interactive requests can increase the number of responses to knowledge questions without affecting their quality.

Figure 5. Effects on correct answers to the IMF knowledge question.

Note: Across survey countries, the figure shows differences in correct responses to the IMF knowledge question due to the response request (see Image 1). The dots illustrate the difference-in-means between the control and treatment groups. The lines indicate 95 per cent confidence intervals. Supplementary Material section 6.3 provides the underlying data.
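
As a minimal illustration of this quality check, one could compare the share of correct answers among substantive responses across conditions with a two-proportion z-test. The counts below are invented for illustration and do not come from the study.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts for the IMF knowledge question, substantive answers only:
# number of correct answers and number of substantive responses per condition.
correct = [410, 430]          # control, treatment
substantive = [1500, 1560]    # control, treatment

stat, pval = proportions_ztest(count=correct, nobs=substantive)
print(f"z = {stat:.2f}, p = {pval:.3f}")  # a large p-value is consistent with no change in quality
```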

Finally, let us explore the heterogeneous treatment effects in different survey countries. To this end, I construct Bayesian multi-level models with random country effects to benefit from partial pooling, while avoiding issues with frequentist approaches such as minute confidence intervals (Bürkner 2018; Sroka 2020). For the full sample, Table 1 shows that almost all parameters across all variables and countries are negative. However, only political interest, environmentalism, global responsibility, and market intervention have coefficients that are statistically significant at 95 per cent confidence in my ten survey countries – most consistently so in Hungary, Indonesia, and Kenya. Conversely, in Australia and Egypt, not a single parameter is significant for any variable.

Table 1. Summary of treatment effects in countries - Full sample

Note: The numbers present the treatment parameters, while the asterisks indicate statistically significant differences from zero at the 95 per cent confidence level. Supplementary Material section 7 provides detailed data.
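
The paper estimates these models with the brms package in R (Bürkner 2018). For readers who want a sense of the partial-pooling structure, here is a rough Python/PyMC sketch of a comparable varying-intercept, varying-slope logit model on simulated data; the priors, variable names, and simulated effect sizes are all assumptions, not the study's specification.

```python
import numpy as np
import pymc as pm

# Simulated stand-in data for one question: a 0/1 non-response indicator,
# a 0/1 treatment indicator, and a country index for ten countries.
rng = np.random.default_rng(0)
n, n_countries = 20000, 10
treat = rng.integers(0, 2, n)
country = rng.integers(0, n_countries, n)
y = rng.binomial(1, 0.03 - 0.006 * treat)

with pm.Model() as model:
    # Population-level intercept and average treatment effect (logit scale)
    alpha = pm.Normal("alpha", 0.0, 2.0)
    beta = pm.Normal("beta", 0.0, 1.0)
    # Country-level deviations, partially pooled toward the population values
    sigma_a = pm.HalfNormal("sigma_a", 1.0)
    sigma_b = pm.HalfNormal("sigma_b", 1.0)
    a_c = pm.Normal("a_c", 0.0, sigma_a, shape=n_countries)
    b_c = pm.Normal("b_c", 0.0, sigma_b, shape=n_countries)

    logit_p = alpha + a_c[country] + (beta + b_c[country]) * treat
    pm.Bernoulli("nonresponse", logit_p=logit_p, observed=y)

    idata = pm.sample(1000, tune=1000, target_accept=0.9, random_seed=0)

# Country-specific treatment effects correspond to beta + b_c; their posterior
# intervals indicate where the request reduces item non-response.
```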

Table 2 largely backs the analysis in Table 1 but generally – as above – shows even more pronounced effects for the sample without explicit ‘I don’t know’ options. For instance, while Table 1 shows seventeen statistically significant coefficients at the 95 per cent level, there are twenty-four such parameters in Table 2. Note also that there is now at least one significant effect in each survey country. Moreover, Table 2 shows a significant effect in Indonesia for the IMF knowledge question, while Supplementary Material section 7.6.2 shows that the results in France and Turkey are only marginally insignificant at the 95 per cent level. Consistent with the full sample analysis in individual countries (Table 1) and the aggregate analysis across countries (Figures 1 and 3), the country coefficients for cultural norms are insignificant. Similarly, in line with my findings above, the coefficients for correct IMF knowledge responses – among all responses and among substantive responses – are not significant in any surveyed country, indicating that response quality does not change as a result of interactive requests. Overall, these findings reveal the country-level drivers of treatment effects in the aggregate sample above, including the generally stronger effects of response requests on item non-responses when no explicit ‘I don’t know’ option is offered.

Table 2. Summary of treatment effects in countries - No DK sample

Note: See notes below Table 1.

Conclusion

My study contributes to the existing literature in several ways. In addition to theoretical contributions by Zhang and Conrad (2018), I theorize how interactive follow-up requests may decrease item non-response rates through the mechanisms of reminding, motivating, instructing, monitoring, and/or sanctioning. Expanding significantly on the study by DeRouvray and Couper (2002) – and in contrast to Holland and Christian (2009) as well as Sun et al. (2023), whose focus differs somewhat from mine – I find that response requests are generally effective at reducing item non-response rates, although their effectiveness depends on questionnaire design, the specific question at hand, and the survey country context. Confirming the results of de Leeuw and colleagues (2016) in other country contexts, I find that response requests tend to be more effective in reducing item non-responses when no explicit ‘I don’t know’ option is offered.

Contributing to this literature, I provide the most comprehensive study yet of the effects of interactive requests on item non-responses. Based on my ten-country survey experiment in 2021, I find that follow-up requests are effective at increasing response rates conditional upon question type, survey design, and country context – without affecting data quality as measured by the proportion of correct responses to knowledge questions. My study’s central implication is thus that researchers should use interactive response requests in their surveys to increase the number of responses to different types of questions, thereby improving data efficiency.

As the use of online survey sample providers for research is on the rise, and given that large population-based survey projects such as the American National Election Studies (2024) and the European Social Survey (2024) are increasingly moving toward web surveys as the primary mode of data collection, my findings have the potential to improve survey data efficiency on a large scale if they hold across different types of samples (Coppock, Leeper, and Mullinix 2018). Nonetheless, due to differences in the compositions of self-selection-based commercial online samples and randomly selected population-based samples, one fruitful extension of my research would be to evaluate the extent to which interactive response requests affect data quantity and quality in the context of such large-scale survey projects.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/S0007123424000747

Data availability statement

Replication data for this article can be found in Harvard Dataverse at: https://doi.org/10.7910/DVN/BHLOJA.

Acknowledgments

I thank my research assistants (Ian Neidel and Shawn Thacker), translators (Benedek Paskuj, Demirkan Coker, Grace Baghdadi, Ismail Hmitti, Jenny Lee, Jesse Kimotho, Kevin Misaro, Laura Palacio Londoño, Léo Bureau-Blouin, Leonhard Tannesia, Marwan Jalani, Maxine Setiawan, Miklós Szabó, Pinar Aldan, Ricardo Aguilar, Sapheya Elhadi, and Victory Lee), as well as various pre-testers. I would also like to thank Qualtrics (especially Fergal Connolly and Joanne Dufficy) and its partners for their excellent work and contributions. Last but not least, I am grateful to the British Journal of Political Science Lead Editor, Lucas Leemann, and the anonymous reviewers for their constructive comments.

Financial support

I thank Jonas Tallberg and the Legitimacy in Global Governance (LegGov) project, funded by The Bank of Sweden Tercentenary Foundation (grant number M15-0048:1). Moreover, I am grateful to Bernd Schlipphak and the University of Münster for a WWU Fellowship.

Competing interests

There are none to report.

Footnotes

[1] My pre-registration is available online (Ghassim 2021).

[2] Section 1 of the Supplementary Material elaborates on my translation and pre-testing procedures. It also presents the languages in which my survey was shown to respondents in each country. The full questionnaires for each survey country are available online (Ghassim 2025).

[3] The target quotas and realized samples for each country are summarized in Supplementary Material section 2. Section 3 presents details on the procedures employed to obtain participants’ informed consent. Section 4 provides information on the compensation of survey participants.

[4] To prevent participant confusion and frustration, I did not prompt respondents who selected DK in the treatment group, since the inclusion of a DK option signals its acceptability as an answer.

References

American National Election Studies (2024) 2024 Time Series Study. Available from https://electionstudies.org/data-center/2024-time-series-study/ (accessed 16 August 2024).
Barnhart, BJ, Reddy, SG and Arnold, GK (2021) Remind Me Again: Physician Response to Web Surveys: The Effect of Email Reminders Across 11 Opinion Survey Efforts at the American Board of Internal Medicine from 2017 to 2019. Evaluation & the Health Professions 44, 245–259.
Berinsky, AJ (2007) Survey Non-Response. In Donsbach, W and Traugott, MW (eds), The SAGE Handbook of Public Opinion Research. London, UK: Sage Publications, pp. 309–321.
Bürkner, P-C (2018) Advanced Bayesian Multilevel Modeling with the R Package brms. The R Journal 10, 395.
Campbell, DT and Fiske, DW (1959) Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix. Psychological Bulletin 56, 81–105.
Cannell, CF, Oksenberg, L and Converse, JM (1977) Striving for Response Accuracy: Experiments in New Interviewing Techniques. Journal of Marketing Research 14, 306–315.
Cibelli Hibben, K, Felderer, B and Conrad, FG (2022) Respondent Commitment: Applying Techniques from Face-to-Face Interviewing to Online Collection of Employment Data. International Journal of Social Research Methodology 25, 15–27.
Conrad, FG, Couper, MP, Tourangeau, R and Zhang, C (2017) Reducing Speeding in Web Surveys by Providing Immediate Feedback. Survey Research Methods 11, 45–61.
Cook, DA et al. (2016) Incentive and Reminder Strategies to Improve Response Rate for Internet-Based Physician Surveys: A Randomized Experiment. Journal of Medical Internet Research 18, e244.
Coppock, A, Leeper, TJ and Mullinix, KJ (2018) Generalizability of Heterogeneous Treatment Effect Estimates across Samples. Proceedings of the National Academy of Sciences 115, 12441–12446.
Crawford, SD, Couper, MP and Lamias, MJ (2001) Web Surveys: Perceptions of Burden. Social Science Computer Review 19, 146–162.
De Leeuw, ED, Hox, JJ and Boevé, A (2016) Handling Do-Not-Know Answers: Exploring New Approaches in Online and Mixed-Mode Surveys. Social Science Computer Review 34, 116–132.
Denscombe, M (2009) Item Non-response Rates: A Comparison of Online and Paper Questionnaires. International Journal of Social Research Methodology 12, 281–291.
DeRouvray, C and Couper, MP (2002) Designing a Strategy for Reducing ‘No Opinion’ Responses in Web-Based Surveys. Social Science Computer Review 20, 3–9.
European Social Survey (2024) Methodology Overview. Available from https://www.europeansocialsurvey.org/methodology/methodology-overview (accessed 16 August 2024).
Evans, JR and Mathur, A (2005) The Value of Online Surveys. Internet Research 15, 195–219.
Fink, A and Litwin, MS (1995) How to Measure Survey Reliability and Validity. London, UK: Sage Publications.
Ghassim, F (2021) Response Requests in Web Surveys Lead to More Substantive Responses (AsPredicted #65088). AsPredicted, Wharton Credibility Lab, University of Pennsylvania. Available from https://aspredicted.org/rjhf-p6xc.pdf (accessed 17 October 2024).
Ghassim, F (2025) Replication Data for ‘The Effects of Interactive Requests on the Quantity and Quality of Survey Responses: An International Methodological Experiment’. Harvard Dataverse, V1. https://doi.org/10.7910/DVN/BHLOJA.
Gooch, A and Vavreck, L (2019) How Face-to-Face Interviews and Cognitive Skill Affect Item Non-Response: A Randomized Experiment Assigning Mode of Interview. Political Science Research and Methods 7, 143–162.
Holland, JL and Christian, LM (2009) The Influence of Topic Interest and Interactive Probing on Responses to Open-Ended Questions in Web Surveys. Social Science Computer Review 27, 196–212.
Krosnick, JA, Holbrook, AL, Berent, MK, Carson, RT, Hanemann, WM, Kopp, RJ, Mitchell, RC, Presser, S, Ruud, PA, Smith, VK, Moody, WR, Green, MC and Conaway, M (2002) The Impact of ‘No Opinion’ Response Options on Data Quality: Non-Attitude Reduction or an Invitation to Satisfice? Public Opinion Quarterly 66, 371–403.
Marsden, PV and Wright, JD (eds) (2010) Handbook of Survey Research, 2nd ed. Bingley, UK: Emerald.
Miller, PV and Cannell, CF (1982) A Study of Experimental Techniques for Telephone Interviewing. Public Opinion Quarterly 46, 250–269.
Miura, A and Kobayashi, T (2015) Monitors Are Not Monitored: How Satisficing among Online Survey Monitors Can Distort Empirical Findings. Research in Social Psychology 31, 120–127.
Peytchev, A (2009) Survey Breakoff. Public Opinion Quarterly 73, 74–97.
Sroka, EC (2020) When Mixed Effects (Hierarchical) Models Fail: Pooling and Uncertainty. Medium. Available from https://towardsdatascience.com/when-mixed-effects-hierarchical-models-fail-pooling-and-uncertainty-77e667823ae8 (accessed 6 September 2024).
Sun, H, Caporaso, A, Cantor, D, Davis, T and Blake, K (2023) The Effects of Prompt Interventions on Web Survey Response Rate and Data Quality Measures. Field Methods 35, 100–116.
Tourangeau, R, Conrad, FG and Couper, MP (2013) The Science of Web Surveys. Oxford, UK: Oxford University Press.
Van Selm, M and Jankowski, NW (2006) Conducting Online Surveys. Quality and Quantity 40, 435–456.
Vicente, P and Reis, E (2010) Using Questionnaire Design to Fight Nonresponse Bias in Web Surveys. Social Science Computer Review 28, 251–267.
Zhang, C and Conrad, FG (2018) Intervening to Reduce Satisficing Behaviors in Web Surveys: Evidence from Two Experiments on How It Works. Social Science Computer Review 36, 57–81.