Introduction
In recent years, public opinion researchers have increasingly turned to web surveys (Tourangeau, Conrad, and Couper 2013). While they are highly effective and efficient in gathering primary data, online surveys are also plagued by specific problems due to their self-administered nature (Evans and Mathur 2005). Item non-response is one of the central issues affecting survey efficiency and quality (Berinsky 2007) – both in traditional modes and in web surveys (Denscombe 2009; Gooch and Vavreck 2019). Participants who do not respond to certain questions reduce the cost-efficiency of the data-gathering process. Moreover, if item non-responses are biased – that is, if certain types of participants are less likely to answer particular questions – this may compromise the results’ representativeness for the underlying population of interest. Depending on the kind of question, a substantial share of a sample may decline to respond. Addressing item non-response is thus of utmost importance to survey-based work.
Research on feedback as a way of eliciting higher-quality responses dates back decades (Cannell, Oksenberg, and Converse 1977; Miller and Cannell 1982). The onset of the Internet survey age brought the possibility of interacting with participants in novel ways. Scholars have thus taken a renewed interest in feedback as a way of improving survey data quality (Conrad et al. 2017), including with respect to response rates (Vicente and Reis 2010, 259–60). Interactive requests are one such method: for example, follow-up prompts in pop-up windows asking respondents to answer questions that they previously left unanswered (see Image 1).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250212120105403-0498:S0007123424000747:S0007123424000747_fig6.png?pub-status=live)
Image 1: Example response request.
In an early study of a few hundred American respondents, DeRouvray and Couper (2002) found that prompts decreased question-skipping rates significantly. Holland and Christian (2009) studied American university students and found that interactive prompts do not improve response rates to open-ended questions. In another study limited to the US, Sun et al. (2023, 109) show that prompts on speeding and straightlining do not affect response rates. Conducting a representative survey of the Dutch population, de Leeuw, Hox, and Boevé (2016) discovered that follow-up probes are effective at increasing response rates, particularly in combination with not offering an explicit ‘I don’t know’ option but allowing respondents to skip questions. Exploring the underlying mechanisms, Zhang and Conrad (2018) argue that feedback prompts can lead to genuine improvements in participant behaviours, but also run the risk of strengthening social desirability bias in responses. While these are valuable first steps in evaluating the promise of interactive requests in the age of web surveys, we still lack comprehensive multinational research assessing the effects of response requests on different types of questions.
Contributing to this literature, the present study provides a novel theoretical framework for why interactive requests may increase response rates. I test my central hypothesis of response requests’ effectiveness in an original online survey experiment on diverse population-based samples in ten countries across the global South, North, East, and West, thus providing the most comprehensive study of its kind to date. I find that follow-up requests can reduce item non-responses substantially (depending on the question, survey design, and country context) while not adversely affecting response quality. I thus recommend response requests as a means of increasing survey data efficiency.
Theoretical framework
Let us begin by acknowledging that there are instances in which item non-responses are valid; for example, when respondents truly do not know the answer or find a question too sensitive. Given the existence of such valid item non-responses, why should we expect interactive follow-up requests to affect item response rates and quality? While prior studies have advanced our knowledge of the effects of interactive requests on survey responses, we lack a theoretical framework for understanding why and how such interactive requests work. Theoretically, response requests may either increase or decrease item non-responses. The former effect may materialize if participants perceive (repeated) response requests as burdensome (Crawford, Couper, and Lamias 2001), for example by increasing survey breakoff rates. However, in line with prior research, I expect interactive requests to reduce item non-response rates. This section concentrates on the theoretical reasons why that effect may materialize.
To begin with, survey participants may genuinely forget to answer questions. Web surveys often feature many different questions on one page, sometimes in grids. Even conscientious participants may overlook questions that they would have liked to answer. In such cases, interactive requests can serve as useful reminders, helping respondents to complete the survey. Reminder strategies for online surveys have mostly been studied with regard to unit non-response (Barnhart, Reddy, and Arnold 2021; Cook et al. 2016), while the focus here is on item non-response.
Not all cases of skipped questions are merely accidental omissions by participants, however. From existing methodological research, we know that item non-response is a common satisficing phenomenon (Berinsky 2007; Krosnick et al. 2002). That is, some participants skip questions in order to avoid the effort associated with answering them. In the present context, I am also interested in the potential effects of interactive requests on such respondents. To this end, let us now consider different ways in which participants may satisfice and how follow-up requests can intervene in such cases.
For one, participants may have the impression that their answers do not really matter and thus decide to invest little effort into completing a survey. On such occasions, follow-up requests can motivate respondents to provide genuine substantive answers, meaning responses that truly reflect their underlying attitude or knowledge. Interactive feedback may indicate to respondents that the researchers are sincerely interested in obtaining participants’ answers, which can motivate them to respond when they would have skipped a question otherwise. Scholars have examined the motivating effects of other survey methods such as self-commitment (Cibelli Hibben, Felderer, and Conrad 2022), while my focus here is on the potential motivational effects of response requests.
Another avenue whereby follow-up requests can effectively reduce item non-response rates is by serving as instructions. While explicit instructions are a commonly discussed feature of survey design to reduce non-responses (Vicente and Reis 2010, 253), I concentrate on the potentially implicit instructional effects of response requests. In particular, participants may believe that, because skipping questions is possible, it is also legitimate under any circumstances. In such cases, follow-up requests may affect participants’ understanding of the desirability of different response behaviours. Specifically, such prompts may signal to respondents that skipping questions is less legitimate than answering them. Requests may thus serve as instructions to some participants about desired behaviours and thereby reduce item non-response rates.
Next, participants may be aware that skipping questions is not a desirable behaviour from researchers’ perspectives but still decide to do so – at least as long as they feel unobserved. In such instances, response requests can lead participants to believe that the researchers are monitoring their behaviour. This may cause respondents to provide more genuine substantive answers, that is, responses reflecting their true attitudes or knowledge; to provide more disingenuous substantive responses, namely answers that do not reflect their true underlying positions; or to leave their answers unchanged – perhaps because they understand that skipping questions is not desired by researchers but is nonetheless accepted. Monitoring respondents’ behaviour has been studied in other survey contexts (Miura and Kobayashi 2015; Van Selm and Jankowski 2006, 449), whereas my focus is on the possible monitoring effects of interactive requests in cases of attempted item non-response.
Lastly, follow-up requests may effectively increase response rates by making item non-responses more time-consuming. Interactive feedback prompts appear only when survey participants attempt to proceed without answering a question. The time that it takes to answer questions conscientiously may be one factor leading to item non-response. Assuming that some participants want to complete surveys as quickly as possible, interactive prompts may effectively prevent them from doing so by simply skipping questions. Instead, potential shirkers may decide that it is, in fact, more time-efficient to respond to (subsequent) questions than to attempt to skip them and effectively add additional questions (that is, the interactive prompts) to the survey. Thus, the final pathway whereby follow-up requests could increase response rates may be called sanctioning. Like monitoring, sanctioning can improve data quality by leading to more genuine substantive responses, reduce data quality by producing more non-genuine substantive responses, or even lead to increased breakoff rates (Peytchev 2009). Whether response quality decreases or increases as a result of response requests is a matter for empirical exploration.
In sum, there are different avenues whereby interactive follow-up requests may increase the quantity and quality of responses: reminding, motivating, instructing, monitoring, and sanctioning. Based on these theoretical considerations, as well as past research that has found interactive requests to be effective at reducing item non-response (DeRouvray and Couper 2002), my ex-ante hypothesis was that follow-up requests reduce item non-responses in surveys.[1] Moreover, building on prior research that found positive effects on data quality (Sun et al. 2023), I explore interactive requests’ potential impact on response quality.
Research design
In a multi-country survey experiment, I tested to what extent and how follow-up requests affect the quantity and quality of responses to different questions. In order to ensure the generalizability of results to different cultural contexts, I included a highly diverse sample of ten countries from the global South, North, East, and West: Australia, Canada, Colombia, Egypt, France, Hungary, Indonesia, Kenya, South Korea, and Turkey. My English questionnaire was translated into the survey countries’ primary languages by native speakers.[2]
In collaboration with Qualtrics (acting as both sample aggregator and survey platform) and its sample suppliers (Dynata, Cint, Lucid, and Toluna), I fielded the experiments between May and October 2021. My respondents were recruited from opt-in panels in the target countries based on quotas for gender, age, region, and education. This quota-based approach resulted in highly diverse samples of 3,100 respondents or more in each country. The aggregate cross-country sample included 32,319 respondents, randomized by Qualtrics into control and treatment conditions of approximately equal sizes (n_control = 16,150, n_treatment = 16,169). While the aggregate sample size is relatively large, it should be noted that item non-responses only affect small fractions of responses.[3]
My experiment focused on six newly developed questions, which differed in their presumed knowledge requirements, as this factor could conceivably influence both the frequency of item non-responses and how interactive requests affect response behaviour. Supplementary Material section 5 presents my dependent variable questions, which may be approximately ordered from low to high knowledge requirements as follows: self-reported interest in world politics (which, as a mere subjective statement of interest, requires essentially no knowledge); prioritization of the environment or the economy; expected adherence to cultural norms by fellow citizens including immigrants; opinion on the home country’s global responsibility; views on governmental market intervention versus self-regulation (which are all attitudinal questions but appear to require some prior knowledge for the formation of valid views); and knowledge of the IMF headquarters’ location (to which there is a correct answer, which is arguably not common knowledge). My dependent variable in each case is whether respondents skipped the respective question, either by proceeding without providing an answer or by selecting the ‘I don’t know’ (DK) option that was randomly provided to around half of the respondents in both the control and treatment groups.[4]
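As a concrete illustration of this coding rule – an item counts as a non-response if it was left blank or answered with the explicit DK option – the following minimal sketch constructs the dependent variable. The data frame and column names are hypothetical, not the study’s actual variables.

```python
import numpy as np
import pandas as pd

# Hypothetical raw answers to one question: substantive responses, a blank
# (the respondent proceeded without answering), and the explicit 'I don't know'
# option that was shown to a random half of respondents.
df = pd.DataFrame({
    "market_intervention": ["agree", np.nan, "DK", "disagree", np.nan],
})

# Dependent variable: 1 if the question was effectively skipped, that is,
# left blank or answered with the explicit DK option.
df["nonresponse"] = (
    df["market_intervention"].isna() | (df["market_intervention"] == "DK")
).astype(int)

print(df)
```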
The control group in my experiment received no response requests, while treatment group participants saw such a request whenever they tried to proceed to the next survey page while leaving at least one question on that page unanswered. Each of my dependent variable questions was posed on a separate page and in random order. A standardized interactive response request is offered by Qualtrics as a question-design option on its survey platform (see Image 1 above); I used this prompt as my experimental treatment. Both the control and treatment groups could skip questions; the treatment group merely had to confirm that they wanted to continue without answering.
Results and discussion
The following results are first aggregated across all ten survey countries; heterogeneous treatment effect analyses by country follow below. In order to evaluate the effect of interactive requests, I examined the changes in item non-response rates to the different questions outlined above. As expected, the effects are negative and statistically significant (p<0.01 or lower) for four of the six dependent variables. Among these, the substantive effects range between 0.5 and 0.8 percentage points of all responses. Item non-response rates for the cultural norms and IMF knowledge questions are not significantly affected by the interactive request in the cross-country sample. Figure 1 illustrates these results.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250212120105403-0498:S0007123424000747:S0007123424000747_fig1.png?pub-status=live)
Figure 1. Effect of interactive requests on item non-response rates.
Note: The figure shows changes in item non-response rates due to interactive requests (see Image 1) as a proportion of all responses. For comparison of effect sizes and better legibility, the different control group means have been normalized to zero. The dots illustrate the difference-in-means between the control and treatment groups. The lines indicate 95 per cent confidence intervals. Supplementary Material section 6.1 provides detailed data.
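The quantities in Figure 1 – a difference in non-response rates between the treatment and control groups, with a 95 per cent confidence interval – can be computed from individual-level skip indicators. The sketch below uses simulated data with made-up rates and a standard two-proportion interval; the paper does not spell out its exact variance estimator, so this is an assumed, illustrative computation only.

```python
import numpy as np

# Simulated 0/1 non-response indicators for one question; the rates are made up.
rng = np.random.default_rng(1)
skip_control = rng.binomial(1, 0.027, 16150)    # ~2.7% baseline non-response
skip_treatment = rng.binomial(1, 0.021, 16169)  # ~0.6 points lower under treatment

p_c, p_t = skip_control.mean(), skip_treatment.mean()
n_c, n_t = len(skip_control), len(skip_treatment)

# Difference-in-means in percentage points of all responses, with a
# two-proportion 95 per cent confidence interval.
diff = p_t - p_c
se = np.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
low, high = diff - 1.96 * se, diff + 1.96 * se
print(f"Effect: {100 * diff:.2f} points, 95% CI [{100 * low:.2f}, {100 * high:.2f}]")
```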
While the effects may not seem large at first sight, they are quite substantial when considered as reductions in the percentage of item non-responses (rather than as percentage points among all answers, of which item non-responses are only a small part, with a median rate of 2.7 per cent in the control group – see Supplementary Material section 6.1). Figure 2 below shows that, for the questions with significant effects above, roughly one-fifth (between 19 and 22 per cent) of item non-responses were discouraged by the follow-up request.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250212120105403-0498:S0007123424000747:S0007123424000747_fig2.png?pub-status=live)
Figure 2. Percentages of discouraged item non-responses due to response request.
Note: The figure shows remaining and discouraged item non-responses as percentages of the item non-responses in the control group for each dependent variable. Supplementary Material section 6.1 provides detailed data.
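The relative quantity shown in Figure 2 is simply the absolute reduction divided by the control group’s non-response rate. A brief illustration, using the 2.7 per cent median control rate reported in the text and an assumed reduction of 0.6 percentage points:

```python
# Absolute reduction expressed as a share of control group non-responses.
control_rate = 2.7   # item non-responses as % of all answers (median control rate from the text)
reduction = 0.6      # assumed reduction in percentage points due to the request
discouraged_share = 100 * reduction / control_rate
print(f"{discouraged_share:.0f}% of control group non-responses discouraged")  # ~22%
```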
The effective reduction of item non-responses is especially noteworthy because a random half of respondents were given an explicit ‘I don’t know’ choice (for another experiment). While choosing this option did not trigger an interactive response request, it was counted as having effectively skipped the question, which potentially biased my estimates of item non-responses – in both the control and treatment groups – upward (Krosnick et al. 2002) and thus may have limited the response request’s apparent effectiveness.
Similar to de Leeuw and colleagues (2016), I find that, when concentrating on respondents who were not given an explicit DK choice, the rate of item non-responses decreases significantly for almost all dependent variable questions, albeit only at p<0.1 for the IMF knowledge variable. The remaining exception is the cultural norms question (see Figure 3 and Supplementary Material section 6.2), which may be due to the higher sensitivity of that question. The substantive effects in terms of reductions of item non-responses (as a percentage of total responses) range between 0.6 and 0.8 percentage points. Focusing only on the item non-responses themselves, Figure 4 shows that between 10 and 47 per cent are discouraged as a result of the request. Note that these differences-in-means are all substantively larger than in the analyses above that included the response conditions with explicit DK options.
Having established that interactive prompts effectively increase the quantity of answers, let us now turn to their effects on response quality: Are prompted responses better than, worse than, or as good as non-prompted responses? While many methods have been proposed to evaluate the validity and reliability of responses relating to interests and attitudes (for example, Campbell and Fiske 1959; Fink and Litwin 1995; Marsden and Wright 2010), this is hardly possible for single items such as those in my survey, since only participants themselves can ascertain whether their responses reflect the underlying truth. However, in the case of knowledge questions, we can investigate the extent to which interactive requests affect response quality. An insignificant difference in correct responses between the control and treatment groups would indicate that response requests have neither a negative nor a positive effect on data quality. If response requests lead to more random guesses, we would expect the proportion of correct responses among substantive responses to decrease, indicating increased noise and lower data quality. Conversely, if interactive prompts lead more knowledgeable participants to complete the question, we would expect the proportion of correct responses among all substantive answers to increase.
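To make the two quality metrics explicit, the sketch below computes the share of correct answers among all responses and among substantive responses only, for a hypothetical coding of the IMF knowledge question (1 = correct, 0 = incorrect, missing = skipped or DK). The coding scheme is assumed for illustration, not taken from the study’s data.

```python
import numpy as np

# Hypothetical coding of the IMF knowledge question:
# 1 = correct, 0 = incorrect substantive answer, NaN = skipped or DK.
answers = np.array([1, 0, np.nan, 1, 0, 0, np.nan, 1], dtype=float)
substantive = ~np.isnan(answers)

# Correct answers as a share of all responses (including non-substantive ones)...
correct_among_all = np.nansum(answers) / len(answers)
# ...and as a share of substantive responses only. More random guessing under
# treatment would lower this ratio; activating knowledgeable respondents would raise it.
correct_among_substantive = answers[substantive].mean()

print(correct_among_all, correct_among_substantive)
```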
As illustrated in Figure 5, we observe a null effect. The proportion of correct answers does not change significantly – neither among all responses (including non-substantive answers) nor among substantive responses. Therefore, it appears that, while response requests may lead more respondents to attempt knowledge questions (at least when no explicit DK option is offered), their guesses are about as likely to be right as those of respondents who answer the questions without being prompted. In sum, interactive requests can increase the number of responses to knowledge questions without affecting their quality.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250212120105403-0498:S0007123424000747:S0007123424000747_fig5.png?pub-status=live)
Figure 5. Effects on correct answers to the IMF knowledge question.
Note: Across survey countries, the figure shows differences in correct responses to the IMF knowledge question due to the response request (see Image 1). The dots illustrate the difference-in-means between the control and treatment groups. The lines indicate 95 per cent confidence intervals. Supplementary Material section 6.3 provides the underlying data.
Finally, let us explore the heterogeneous treatment effects in the different survey countries. To this end, I construct Bayesian multi-level models with random country effects to benefit from partial pooling, while avoiding issues with frequentist approaches such as minute confidence intervals (Bürkner 2018; Sroka 2020). For the full sample, Table 1 shows that almost all parameters across all variables and countries are negative. However, only political interest, environmentalism, global responsibility, and market intervention have coefficients that are statistically significant at the 95 per cent confidence level in at least some of my ten survey countries – most consistently so in Hungary, Indonesia, and Kenya. Conversely, in Australia and Egypt, not a single parameter is significant for any variable.
Table 1. Summary of treatment effects in countries - Full sample
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250212120105403-0498:S0007123424000747:S0007123424000747_tab1.png?pub-status=live)
Note: The numbers present the treatment parameters, while the asterisks indicate statistically significant differences from zero at the 95 per cent confidence level. Supplementary Material section 7 provides detailed data.
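The paper estimates these multi-level models with brms in R (Bürkner 2018). Purely to illustrate the partial-pooling logic – country-specific treatment effects drawn from a common distribution, which shrinks noisy country estimates toward the overall mean – the sketch below fits an analogous hierarchical logistic regression in Python with PyMC on simulated data. The variable names, priors, and likelihood are assumptions for illustration, not the author’s exact specification.

```python
import numpy as np
import pymc as pm

# Simulated stand-in data: one row per respondent for a single question.
rng = np.random.default_rng(2)
n, n_countries = 5000, 10
treat = rng.integers(0, 2, n)                    # treatment indicator
country = rng.integers(0, n_countries, n)        # country index 0..9
skipped = rng.binomial(1, 0.03 - 0.01 * treat)   # item non-response indicator

with pm.Model() as model:
    # Country-specific baselines (intercepts) on the logit scale.
    alpha = pm.Normal("alpha", 0.0, 2.5, shape=n_countries)
    # Partially pooled treatment effects: country slopes are drawn from a
    # common distribution, shrinking noisy country estimates toward the mean.
    mu_beta = pm.Normal("mu_beta", 0.0, 1.0)
    sigma_beta = pm.HalfNormal("sigma_beta", 1.0)
    beta = pm.Normal("beta", mu_beta, sigma_beta, shape=n_countries)

    logit_p = alpha[country] + beta[country] * treat
    pm.Bernoulli("y", logit_p=logit_p, observed=skipped)

    idata = pm.sample(1000, tune=1000, target_accept=0.9, random_seed=2)

# Posterior means of the country-specific treatment effects.
print(idata.posterior["beta"].mean(dim=("chain", "draw")).values)
```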
Table 2 largely backs the analysis in Table 1 but – as above – generally shows even more pronounced effects for the sample without explicit ‘I don’t know’ options. For instance, while Table 1 shows seventeen statistically significant coefficients at the 95 per cent level, there are twenty-four such parameters in Table 2. Note also that there is now at least one significant effect in each survey country. Moreover, Table 2 shows a significant effect in Indonesia for the IMF knowledge question, while Supplementary Material section 7.6.2 shows that the results in France and Turkey are only marginally insignificant at the 95 per cent level. Consistent with the full-sample analysis in individual countries (Table 1) and the aggregate analysis across countries (Figures 1 and 3), the country coefficients for cultural norms are insignificant. Similarly, in line with my findings above, the treatment effects on correct IMF knowledge responses – among all responses and among substantive responses – are not significant in any surveyed country, indicating that response quality does not change as a result of interactive requests. Overall, these findings reveal the country-level drivers of the treatment effects in the aggregate sample above, including the generally stronger effects of response requests on item non-responses when no explicit ‘I don’t know’ option is offered.
Conclusion
My study contributes to the existing literature in several ways. Adding to the theoretical contributions of Zhang and Conrad (2018), I theorize how interactive follow-up requests may decrease item non-response rates through the mechanisms of reminding, motivating, instructing, monitoring, and/or sanctioning. Expanding significantly on the study by DeRouvray and Couper (2002) – and in contrast to Holland and Christian (2009) and Sun et al. (2023), whose focus differs somewhat from mine – I find that response requests are generally effective at reducing item non-response rates, although their effectiveness depends on questionnaire design, the specific question at hand, and the survey country context. Confirming the results of de Leeuw and colleagues (2016) in other country contexts, I find that response requests tend to be more effective in reducing item non-responses when no explicit ‘I don’t know’ option is offered.
Contributing to this literature, I provide the most comprehensive study yet of the effects of interactive requests on item non-responses. Based on my ten-country survey experiment in 2021, I find that follow-up requests are effective at increasing response rates conditional upon question type, survey design, and country context – without affecting data quality as measured by the proportion of correct responses to knowledge questions. My study’s central implication is thus that researchers should use interactive response requests in their surveys to increase the number of responses to different types of questions, thereby improving data efficiency.
As the use of online survey sample providers for research is on the rise, and given that large population-based survey projects such as the American National Election Studies (2024) and the European Social Survey (2024) are increasingly moving toward web surveys as the primary mode of data collection, my findings have the potential to improve survey data efficiency on a large scale if they hold across different types of samples (Coppock, Leeper, and Mullinix 2018). Nonetheless, due to differences in the composition of self-selection-based commercial online samples and randomly selected population-based samples, one fruitful extension of my research would be to evaluate the extent to which interactive response requests affect data quantity and quality in the context of such large-scale survey projects.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/S0007123424000747
Data availability statement
Replication data for this article can be found in Harvard Dataverse at: https://doi.org/10.7910/DVN/BHLOJA.
Acknowledgments
I thank my research assistants (Ian Neidel and Shawn Thacker), translators (Benedek Paskuj, Demirkan Coker, Grace Baghdadi, Ismail Hmitti, Jenny Lee, Jesse Kimotho, Kevin Misaro, Laura Palacio Londoño, Léo Bureau-Blouin, Leonhard Tannesia, Marwan Jalani, Maxine Setiawan, Miklós Szabó, Pinar Aldan, Ricardo Aguilar, Sapheya Elhadi, and Victory Lee), as well as various pre-testers. I would also like to thank Qualtrics (especially Fergal Connolly and Joanne Dufficy) and its partners for their excellent work and contributions. Last but not least, I am grateful to the British Journal of Political Science Lead Editor, Lucas Leemann, and the anonymous reviewers for their constructive comments.
Financial support
I thank Jonas Tallberg and the Legitimacy in Global Governance (LegGov) project, funded by The Bank of Sweden Tercentenary Foundation (grant number M15-0048:1). Moreover, I am grateful to Bernd Schlipphak and the University of Münster for a WWU Fellowship.
Competing interests
There are none to report.