Research in the fields of business and marketing (Wilson, Hall-Phillips and Djamasbi 2015), higher education (Lewin-Jones and Mason 2014) and linguistics (Baron 1998; Millar 2009), among others, offers a wealth of insights into how people react to interpersonal communication and casework. For example, frequent, personalized emails between retailers and customers improve customer satisfaction and enhance customer loyalty (Huang and Shyu 2009). Student perceptions of professors' promptness and helpfulness over email affect teaching evaluations and the student–professor relationship (Sheer and Fung 2007). Overall, individuals' evaluations of written interpersonal communication can influence more general attitudes and behavior. This communication is especially important in principal–agent relationships in which the lines of communication are limited but necessary to convey interests.
It is therefore surprising that in political science we know very little about perceptions of legislator–constituent communication. Constituent relations is a major component (arguably, the central component) of representative democracy, and many Americans report having contacted an elected official in national surveys. Yet research to date on legislator–constituent communication has focused on constituency service from the top down (see Costa 2017; Dai 2007; Grimmer 2013). That is, we know about politicians' strategic use of constituent communication, how they are biased in their responsiveness and how they view this type of casework. We know little, however, about how individuals in the mass public experience and evaluate service responsiveness.Footnote 1
A substantial body of research has been devoted to understanding inequalities in service responsiveness, typically through the use of audit studies. Many scholars, in addition to examining overall response rates, also measure the quality of legislators' responses to assess whether politicians provide better responses to some constituents than others. Table 1 shows how response quality has been coded in the audit literature.Footnote 2 Clearly, scholars disagree about what constitutes a quality response from a public official.
Table 1. How response quality is coded in the literature
![Table 1. How response quality is coded in the literature](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20221202175031865-0758:S0007123419000553:S0007123419000553_tab1.png?pub-status=live)
Note: bold indicates this criterion was explicitly considered in coding response quality. * indicates that this criterion was not explicitly mentioned, but might have been part of a more subjective coding (for example, some scholars measured the ‘friendly and courteous tone of the response’ (Grohs, Adam and Knill 2016) or coded responses as helpful if they ‘provided actionable information’ (Terechshenko et al. 2019)). See the Appendix for each author's exact description of how they measured response quality.
Different methodological choices lead to different substantive conclusions. Without a clear consensus on what legislative responsiveness should look like, the implications of any given study for representation are limited. Moreover, citizens who interact with their legislators may have different ideas about what constitutes responsiveness, which challenges the ecological validity of some conclusions drawn from audit studies.
I conduct three tests to examine how individuals evaluate communication with elected officials. The findings offer new insights about some common assumptions scholars make about how constituents evaluate service responsiveness. For example, greeting the constituent by name at the start of an email can significantly improve individuals' evaluations. However, providing a direct answer to the constituent's question, instead of only supplying contact information for another office, had a statistically significant and positive impact in only one of the three tests, suggesting that context affects evaluations of how helpful a legislator needs to be in constituency service. Overall, the findings help the field advance towards an ‘industry standard’ measure of legislative responsiveness to constituent communication.
Research design
Studies 1 and 2
First, I fielded two survey experiments with YouGov using nationally representative samples of 1,000 American adults for each survey.Footnote 3 Subjects were first presented with the prompt: ‘Imagine Jake just moved to a new area. He emailed his state legislator asking for information on how to vote. Below is the response he received from his state legislator after X days’; X was a randomly generated integer between 1 and 30.Footnote 4 After the prompt, subjects were presented with the (hypothetical) email response from the legislator.
In addition to the randomized number of days before a response, two treatment variables were randomized in the responses.Footnote 5 First, the response either provided an answer to the question or contact information for another office. The ‘answer’ response lays out the steps the constituent needs to take to register and vote and includes a link to the state legislature website for more information. The ‘contact information’ response provides the phone number and email address for the ‘elections office clerk’, but no information on how to vote. Some scholars consider providing such contact information a helpful response whereas others discount it (see Table 1); varying this aspect allows me to test these assumptions directly. The emails are about the same length to control for the perceived effort the legislator exerted to respond. Second, the emails varied in their tone. Responses were either ‘friendly’ or not. Friendly responses start with a named greeting (‘Dear Jake/Jane’) and end with an invitation to follow up (‘Let me know if you have additional questions.’).
Subjects were then asked to rate the response on its overall quality, friendliness and helpfulness on a scale from 0 to 100. For simplicity, in the results presented below, I focus on evaluations of overall response quality, but present results for all three dependent variables in the Appendix. In Study 1, the mean rating of overall quality is 52.5, the median is 51 and the standard deviation is 25.5. In Study 2, the mean is 51, the median is 51 and the standard deviation is 27.
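To make the design concrete, the sketch below shows how the three randomized components (response delay, answer versus contact information, and friendly versus plain tone) could be assigned and assembled in code. This is an illustrative reconstruction, not the fielded instrument; all variable names and the email text are hypothetical.

```python
# Illustrative sketch of the Study 1/2 randomization (not the fielded
# instrument); variable names and email text are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=42)
n_subjects = 1000  # one YouGov sample

days = rng.integers(1, 31, size=n_subjects)     # X: days until reply, 1-30
answer = rng.integers(0, 2, size=n_subjects)    # 1 = direct answer, 0 = contact info only
friendly = rng.integers(0, 2, size=n_subjects)  # 1 = named greeting + follow-up invite

def build_email(answered, is_friendly):
    """Assemble a hypothetical stimulus email from the randomized components."""
    greeting = "Dear Jake," if is_friendly else "Hello,"
    body = ("Here are the steps you need to take to register and vote: ..."
            if answered
            else "You can reach the elections office clerk at (555) 555-0100 or clerk@example.gov.")
    closing = "\n\nLet me know if you have additional questions." if is_friendly else ""
    return f"{greeting}\n\n{body}{closing}"

stimulus = build_email(answer[0], friendly[0])
prompt = f"Below is the response he received from his state legislator after {days[0]} days."
```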
One feature of this design deserves mention. The respondents evaluate, as a third party, legislative communication to a hypothetical constituent. While this avoids confounding issues that could arise with a first-person perspective, it may not fully capture how citizens react to elite responsiveness in real-life settings. The results presented below may therefore be interpreted as lower-bound estimates of how an individual would evaluate a response that they had personally solicited, given that they would be more invested in an interaction in which they were seeking help for themselves.
Study 3
One benefit of audit studies is the unique data scholars collect on elite responsiveness to constituent emails. However, these data have rarely been used to understand patterns in elite–constituent communication. For the third test, I leverage real state legislator emails from a 2010 audit experiment reported in Butler (2014). The purpose of the audit was to examine whether non-Latino white state legislators in the United States are biased in their responsiveness to blacks and Latinos.
Undergraduate research assistants coded each response for the same treatment variables from Studies 1 and 2: whether the email included a named salutation, an invitation to follow up, an answer to the constituent request, or contact information for another office. In Studies 1 and 2, it is unclear whether any observed differences between conditions are driven by the combination of a greeting and an invitation to follow up or by one element alone. Moreover, it is impossible to determine how a response that both answered the question and provided contact information would be evaluated. For this study, I am able to isolate the independent and combined effects of these components. To control for the length of the email, I also recorded the number of words in each legislator response. Finally, some legislator responses were automated form responses generated by a computer; since these responses are often followed up with personalized emails and do not reflect a direct exchange between legislator and constituent, I include a control variable indicating whether responses were automated or personalized.Footnote 6
Legislator responses were then anonymized so that the name, email address and location of elected officials remained confidential. Subjects from Amazon's Mechanical Turk (MTurk) were recruited to evaluate a random subset of 400 of the emails.Footnote 7 Descriptive statistics of the legislator emails are shown in Table 2. These data can help resolve questions about the ecological validity of response quality measures; that is, how often do legislators actually answer the constituent's question? How often do they greet the constituent by name?
Table 2. Descriptive statistics of legislator emails
![Table 2. Descriptive statistics of legislator emails](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20221202175031865-0758:S0007123419000553:S0007123419000553_tab2.png?pub-status=live)
Respondents first answered a brief demographic and political battery before being asked to evaluate five email exchanges between constituents and state legislators, chosen at random.Footnote 8 They rated each legislator response on a scale from 0 to 100 based on how satisfied they would be with the response if they had received it from their own legislator. Note that, unlike Studies 1 and 2, this design uses a first-person perspective in which respondents are asked to imagine that they were personally involved in the communication exchange with the legislator. I recruited a total of 1,000 subjects so that each legislator response would be rated an average of 12.5 times. From those ratings, I produced an average respondent-satisfaction measure for each response, and I examine the independent effect of each email characteristic on this average. The mean satisfaction rating is 62.3, the median is 65.6 and the standard deviation is 18.4.
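As a concrete illustration of this aggregation step, the sketch below averages individual MTurk ratings into one satisfaction score per legislator email, which serves as the dependent variable in the Study 3 analysis. The column names and toy values are assumptions for illustration only.

```python
# Minimal sketch of the rating-aggregation step; column names and values
# are hypothetical stand-ins for the actual MTurk data.
import pandas as pd

ratings = pd.DataFrame({
    "email_id":     [101, 101, 101, 102, 102],  # which legislator response was rated
    "satisfaction": [70, 55, 62, 90, 85],       # one respondent's 0-100 rating
})

# Average the roughly 12.5 ratings per email into a single measure
email_level = (ratings
               .groupby("email_id", as_index=False)["satisfaction"]
               .mean()
               .rename(columns={"satisfaction": "mean_satisfaction"}))
print(email_level)
```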
Results
Studies 1 and 2
Figure 1 shows the mean ratings of response quality across experimental conditions. Since the ratings of overall quality, friendliness and helpfulness are highly correlated, I focus here on quality ratings but adjust for multiple testing bias. In the Appendix, I present results for all three dependent variables.
![Figure 1. Mean response quality score by experimental condition](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20221202175031865-0758:S0007123419000553:S0007123419000553_fig1.png?pub-status=live)
Figure 1. Mean response quality score by experimental condition
Note: plots show mean evaluations by condition, with vertical lines representing 95 per cent confidence intervals.
In both studies, emails with named greetings and invitations to follow up received higher quality ratings. In Study 1, the mean quality rating for responses that were not overtly friendly was 48, whereas the mean quality rating for friendly responses was 57.2 (+9.2, p < 0.001). In Study 2, the mean quality rating for responses that were not overtly friendly was also 48, whereas the mean quality rating for friendly responses was 54.2 (+6.2, p < 0.001). Adjusting for multiple comparisons using the Holm (1979) correction, these differences remain statistically significant.
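For readers interested in the mechanics, the sketch below reproduces the logic of these comparisons with simulated ratings: a difference-in-means test for each study, followed by the Holm step-down adjustment. The arrays are synthetic stand-ins loosely calibrated to the reported means, not the actual survey responses.

```python
# Sketch of the friendly-vs-plain comparisons with a Holm adjustment;
# the arrays below are simulated, not the actual YouGov ratings.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
friendly_s1, plain_s1 = rng.normal(57.2, 25.5, 500), rng.normal(48.0, 25.5, 500)
friendly_s2, plain_s2 = rng.normal(54.2, 27.0, 500), rng.normal(48.0, 27.0, 500)

p_values = [
    stats.ttest_ind(friendly_s1, plain_s1).pvalue,  # Study 1: friendly vs. not
    stats.ttest_ind(friendly_s2, plain_s2).pvalue,  # Study 2: friendly vs. not
]

# Holm (1979) step-down correction controls the familywise error rate
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print(p_adjusted, reject)
```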
However, the mean quality ratings for responses that contained an answer to the request and for responses that contained contact information were nearly identical. In Study 1, the mean quality rating for both answer and contact responses was 52.5 (0-point difference, p = 0.97). In Study 2, the mean quality rating for answer responses was 50.5 and the mean quality rating for contact responses was 51.4 (−0.9, p = 0.63). Neither difference is statistically or substantively significant. This is notable because several audit studies do not consider responses to be complete unless they provide full answers to the constituent request, and sometimes disregard responses that only contain contact information for another office. Yet here, respondents evaluated these types of responses similarly.
In both studies, the number of days it took for the legislator to respond had a negative and statistically significant effect on perceptions of response quality (see Figure 2). In Study 1, after about 20 days, evaluations of response quality decrease by 10 points. In Study 2, evaluations of response quality decrease immediately with each additional day; within the first 10 days, evaluations of overall response quality drop by over 15 points. One explanation for the different patterns may be the timing of the studies: the first was fielded right before an election, when expectations about response promptness may be more lenient. Future research should examine whether evaluations of service responsiveness are conditioned by electoral context or other factors. Overall, the total drop in perceptions of response quality over the 30-day period is 10–15 points in both studies. While it may not be surprising that longer response times decreased perceived quality, it is nevertheless informative for auditors measuring the timeliness of legislator responses. Some scholars consider responses to be more helpful if they arrive within 24 hours, but these results suggest that it would take at least several days (and sometimes longer) before a legislator incurs any real penalty from constituents.
![Figure 2. Effect of number of days until response on quality rating](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20221202175031865-0758:S0007123419000553:S0007123419000553_fig2.png?pub-status=live)
Figure 2. Effect of number of days until response on quality rating
Note: fitted using locally weighted smoothing (LOESS). Grey shaded areas represent 95 per cent confidence intervals.
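The smoothing used in Figure 2 can be reproduced with standard tools; the sketch below fits a LOESS curve to simulated data with the same structure (a 1–30 day delay and a 0–100 quality rating). The data-generating line and all variable names are purely illustrative.

```python
# Sketch of a LOESS fit in the style of Figure 2, using simulated data.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
days = rng.integers(1, 31, size=1000)                     # randomized delay
quality = 58 - 0.4 * days + rng.normal(0, 25, size=1000)  # toy downward trend

smoothed = lowess(quality, days, frac=0.5)  # returns sorted x and fitted values

plt.scatter(days, quality, alpha=0.1)
plt.plot(smoothed[:, 0], smoothed[:, 1], color="black")
plt.xlabel("Days until response")
plt.ylabel("Overall quality rating (0-100)")
plt.show()
```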
Study 3
Turning to Study 3, Table 3 presents the results from an ordinary least squares regression model estimating the effects of the response characteristics on the mean level of satisfaction with the response (recall that satisfaction was registered on a scale from 0 to 100 and that each response was rated an average of 12.5 times). The characteristic with the largest effect on satisfaction is whether or not the email was an automated form message: respondents were over 15 points less satisfied with automated responses than with personalized ones (p < 0.01).
Table 3. Effect of response characteristics on satisfaction with response
![Table 3. Effect of response characteristics on satisfaction with response](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20221202175031865-0758:S0007123419000553:S0007123419000553_tab3.png?pub-status=live)
Note: coefficients estimated using ordinary least squares. Standard errors are in parentheses. * p < 0.01
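A regression in the spirit of Table 3 could be specified along the following lines; the variable names, toy data and exact functional form are assumptions for illustration, and the published model may code the response characteristics somewhat differently.

```python
# Sketch of an OLS specification in the spirit of Table 3; variable names
# and the toy data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400  # evaluated legislator responses
emails = pd.DataFrame({
    "mean_satisfaction": rng.uniform(20, 95, n),  # MTurk-averaged 0-100 rating
    "named_greeting":    rng.integers(0, 2, n),
    "follow_up_invite":  rng.integers(0, 2, n),
    "answered_question": rng.integers(0, 2, n),
    "contact_info":      rng.integers(0, 2, n),
    "automated":         rng.integers(0, 2, n),
    "word_count":        rng.integers(20, 400, n),
})

model = smf.ols(
    "mean_satisfaction ~ named_greeting + follow_up_invite + answered_question"
    " + contact_info + automated + word_count",
    data=emails,
).fit()
print(model.summary())
```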
To estimate the effect of response tone on satisfaction, I use responses containing neither a named greeting nor an invitation to follow up as the reference category. Compared to these ‘non-friendly’ emails, greeting the constituent by name and inviting them to follow up with future queries increased respondent satisfaction by 9.25 points (p < 0.01), similar to the effects found in Studies 1 and 2. Isolating the effects of each, an invitation to follow up does not independently have a statistically significant effect on satisfaction. Including a named greeting, however, does exert an independent influence on respondent satisfaction (+4.33, p < 0.01), though not quite as large as when combined with an invitation to follow up. This suggests that adding more friendly elements leads constituents to react more favorably to the email.
Contrary to the findings presented in Studies 1 and 2, whether the constituent's question was answered had a large and statistically significant effect on respondents' satisfaction. Controlling for the provision of contact information, respondents were almost 13 points more satisfied with responses that answered the constituent's request than with those that contained neither an answer nor contact information (p < 0.01). Providing contact information to help the constituent find the answer, while not providing the answer itself, did not have a statistically significant effect on respondent satisfaction with the email.
In sum, the findings from this study support some of the findings from Studies 1 and 2 and provide new information about how individuals perceive elite communication. Respondents consistently preferred emails from officials who were friendly, whether that came in the form of a named salutation, an invitation to follow up with more questions, or a personalized rather than automated email. Respondents also judged emails by their length: longer emails resulted in higher satisfaction with the communication. Finally, in Study 3, answering the question did improve respondents' evaluations of the response compared with providing contact information for a relevant office.
Discussion
This study takes a bottom-up approach to understanding legislative service responsiveness. This approach has several benefits. The first is methodological: it helps the field advance towards an ‘industry standard’ measure of legislative responsiveness to constituent communication. An industry standard based on what constituents actually value is more informative than scholar-driven assumptions, especially given that some of the findings conflict with assumptions made in recent studies. For example, Broockman (2013) codes responses as ‘helpful’ if they ‘provided the website, email address, physical address, or telephone number of a person or agency that could help [the constituent]’ (p. 2, SI), yet the results presented here suggest we cannot definitively conclude that individuals favor this type of response. Moreover, the findings clearly point to the importance of greeting constituents by name, but only a small number of prior studies consider whether legislators do so (see Table 1).
There are also practical implications of this research. Legislative offices at all levels and across many countries devote considerable time and resources to constituent relations (Germany and McGowen 2008; Goldschmidt 2011). Understanding citizen evaluations of legislative communication can help officials provide quality representation to constituents. It is also important from a citizen perspective that communication with legislators is of high quality. This research suggests just how easy it might be to satisfy citizens with this type of elite interaction. For instance, simply greeting constituents by name results in substantively more favorable evaluations, a result that was consistent across all three tests. Citizens may be satisfied with legislative interactions if they are greeted by name, but there are important democratic repercussions if their needs are not actually being met. Research on legislator–constituent communication should consider this tension between subjective perceptions of response quality and more objective measures when evaluating elite responsiveness.
Supplementary material
Data replication sets are available in Harvard Dataverse at: https://doi.org/10.7910/DVN/4C1KJT and online appendices at: https://doi.org/10.1017/S0007123419000553.
Acknowledgements
I thank Dan Butler, Ray La Raja, Tatishe Nteta, Bruce Desmarais, and anonymous reviewers for helpful comments. I am also grateful to Dan Butler for sharing data and to Grace Anderson, Christopher Gallucci, Thomas Kennedy, Sophia Paarz, Samantha Reardon and Anthony Rentsch for research assistance.