
Is It Simply Gender? Content, Format, and Time in Political Knowledge Measures

Published online by Cambridge University Press:  27 March 2018

Monica Ferrin, Collegio Carlo Alberto
Marta Fraile, European University Institute & Consejo Superior de Investigaciones Cientificas (IPP)
Gema García-Albacete, Carlos III University of Madrid


Research Article

Copyright © The Women and Politics Research Section of the American Political Science Association 2018

Theories of democracy commonly assume that citizens must have a certain degree of information and factual knowledge to be able to understand the functioning of institutions, the performance of the incumbent government, and the actions of the main political actors. Political knowledge helps people to better assess their interests as individuals and as members of groups (Delli Carpini and Keeter 1996). Moreover, governments have more incentives to be responsive when they can be held accountable, but citizens are able to hold governments accountable for their actions only when they know what governments are actually doing.

However, research on public opinion and political participation has frequently shown that citizens’ average level of information, knowledge, and understanding of politics is relatively poor (Boudreau and Lupia 2011; Converse 1964; Delli Carpini and Keeter 1996). Equally concerning is the fact that political knowledge appears to be unevenly distributed. This conclusion comes from the evidence portrayed in survey-oriented studies that rely on a range of knowledge items that mainly focus on the accretion of electoral and partisan facts.

After decades of debate, however, there is still no generally accepted measure of the public's knowledge about politics. The concept is theoretically complicated and potentially multidimensional (Mondak 2001), which makes it particularly difficult to operationalize.

Scholars have devoted substantial research time to estimating average levels of citizens’ political knowledge as well as its main antecedents, with the United States being the most frequent case under study. Despite valuable insights provided by previous studies, much of the research suffers from limitations related to measurement (Barabas et al. 2014; Boudreau and Lupia 2011; Lupia 2006).

In fact, recent studies have debated the effect that the format of the question (Fortin-Rittberger 2016; Luskin and Bullock 2011; Miller and Orr 2008; Robison 2015; Sturgis, Allum, and Smith 2008), the survey protocol (Prior and Lupia 2008), the use of images versus words (Prior 2014), and the codification procedure of the responses to the questions (Gibson and Caldeira 2009) have on both observed levels of knowledge and its antecedents. What is more, a comprehensive study of the U.S. case covering the period 2007–2010 found that the magnitude of the effect of education, media exposure, and gender on citizens’ knowledge about politics very much depends on the type of questions used to estimate such effects (Barabas et al. 2014).

Among all the antecedents of knowledge, this article focuses on gender. The gender gap in political knowledge identified in previous literature in favor of men (Burns, Schlozman, and Verba 2001; Delli Carpini and Keeter 1996, 2000; Dolan 2011; Fortin-Rittberger 2016; Fraile 2014; Fraile and Gómez 2017; Kenski and Jamieson 2000; Mondak and Anderson 2004; Stolle and Gidengil 2010) constitutes one of the most puzzling sources of knowledge inequalities. We test whether the magnitude of the gender gap in knowledge depends on the characteristics of the questions employed to measure individuals’ political knowledge. By integrating three different strands of research (feminist, media studies, and psychology) that, to date, have rarely been employed together, we identify three characteristics of knowledge survey items that are relevant in explaining the gender gap documented in previous studies: the content, the format, and the temporal dimension.

This study draws on a unique face-to-face survey, including up to 27 political knowledge items, carried out on a nationally representative sample of the Spanish population, and shows that the size of the gender gap in knowledge is conditional on the type of question being asked. The gender gap in favor of men substantively diminishes (or even reverses) in questions that are open ended, that do not rely on knowledge of recent facts, and that go beyond the traditional arenas of electoral and partisan politics. Analyzing the effect of the three item characteristics simultaneously allows us to contribute to the ongoing debate about how knowledge should be measured.

A COMPREHENSIVE LOOK AT THE GENDER GAP IN KNOWLEDGE

The existence of relevant gender differences in levels of political knowledge is well documented in previous research. Numerous studies show that women tend to provide fewer correct answers than men to standard political knowledge questions (Burns, Schlozman, and Verba 2001; Delli Carpini and Keeter 1996, 2000; Fortin-Rittberger 2016; Fraile 2014; Fraile and Gómez 2017; Kenski and Jamieson 2000; Mondak and Anderson 2004; Verba, Burns, and Schlozman 1997).

Despite many attempts in the literature to explain this gap, the question remains unresolved, and the debate is very much alive. Traditional explanations of the gender gap in political knowledge point to social norms (which identify women as being responsible for parenting and other caring activities) as well as to the socioeconomic disadvantages that women have traditionally suffered (Burns, Schlozman, and Verba 2001). However, resources, opportunity, and motivation appear to be insufficient to fully account for the gender differences in knowledge (Delli Carpini and Keeter 1996).

Recent contributions have provided alternative explanations for the gender gap in knowledge that are partly derived from traditional factors, particularly the way measures of political knowledge have been constructed, which have captured knowledge only in men's areas of interest. For example, some studies show that men and women appear to have different views of and interests in politics, contingent on their diverse life experiences (Campbell and Winters 2008; Coffé 2013; Fitzgerald 2013; Verba, Burns, and Schlozman 1997; Wolak and McDevitt 2011). As a consequence, men and women acquire different knowledge about the political world. These differences may, in turn, affect the types of political knowledge that women and men possess. Ignoring these gender differences when measuring political knowledge might be one of the main reasons why previous studies have found such a large gender gap in knowledge in favor of men and why they have failed to account for it. In fact, a feminist approach argues that the majority of questions used in most surveys to date have been biased in favor of men's interests (Dolan 2011; Stolle and Gidengil 2010).

The size of the gender gap in favor of men turns out to be greater in “conventional” knowledge questions (such as identifying a prominent political figure or naming the second party in congress), which implicitly treat politics as if it were synonymous with the traditional arenas of electoral and partisan politics, a sphere that is often perceived as a men's game. If knowledge, on the other hand, is measured on diverse political areas and issues, such as local politics, civic rights, and social policies, then the differences between men and women tend to be reduced (Barabas et al. 2014; Delli Carpini and Keeter 1996, 2000; Dolan 2011; Kenski and Jamieson 2000; Shaker 2012; Stolle and Gidengil 2010).

These alternative, specific policy areas are considered to be more directly relevant to the life experiences of women than of men. In comparison to men, women face substantially more pressure to specialize in the private sphere and to focus on the needs of the family. As a consequence, women are more likely to develop an interest in social welfare and community-oriented topics, as these are closer to their daily activities (Campbell and Winters 2008; Stolle and Gidengil 2010). Moreover, despite the existence of a substantive gender gap in general political interest (Kittilson and Schwindt-Bayer 2012), some studies show that women are, on average, more interested in local and domestic political issues than men (Coffé 2013). Following this feminist argument, we expect the size of the gender gap in knowledge to depend on the topic addressed by the specific questions. More specifically, we propose the following:

H1:

The size of the gender gap in favor of men is greater for questions inquiring about the traditional arenas of electoral and partisan politics.

In the search for explanations of the gender gap in knowledge, little attention has been paid to the potential interaction between gender and the temporal dimension of the survey items. Knowledge items can refer to structural rules that rarely change over time, or to past political events. In these cases, the media might not directly help to make citizens knowledgeable about these topics. Knowledge items included in conventional surveys can also refer to current news. These include, for example, questions relating to specific policies or political measures that have recently been implemented by the incumbent government. Mass media are the primary way of learning about these topics (Barabas et al. 2014, 843), and therefore knowledge about recent facts might be closely related to their media coverage. But do the media reach all citizens equally? Media studies show, in fact, that women are less likely to be exposed to news across all media sources: television, radio, and press (Aalberg, Blekesaune, and Elvestad 2013; Benesch 2012; Poindexter, Meraz, and Schmitz Weiss 2008; Shehata and Stromback 2011). This same result has also been found in the Spanish case (Fraile 2011).Footnote 1

Media studies explaining the causes of the gender gap in news consumption are, however, scarce. To the best of our knowledge, no single study addresses this question directly. Instead, what the literature suggests is that, because women have less time available, news consumption implies a higher cost for them than for men (Benesch 2012). Another speculation concerns women's preference for collecting specific information related to their daily wants and problems, rather than more abstract political content (Poindexter et al. 2008).

A last explanation regards media content and media production. Previous scholars have demonstrated that the watching, reading, listening to, and making of news is a predominantly masculine activity and that the content of such news is very much male-biased (Curran et al. 2014; Ross and Carter 2011).Footnote 2 As a consequence, women might have less interest in “male-oriented news” than their male counterparts. From this argument we might expect the gender gap to be of greater magnitude in questions relating to recent facts covered by the media, since women tend to be included as subjects of the news to a lesser extent than men and tend to be less exposed to the news than men.

H2:

The gender gap will be smaller in questions that do not relate to current events.

Another body of literature has instead suggested that at least part of the gender gap in knowledge might be the product of the format of the survey items used to measure citizens’ knowledge about politics. Studies from psychology have shown that women tend to rate themselves more negatively than men in scientific ability and in their performance on academic tests, even when their average performance is indistinguishable from that of men (Ehrlinger and Dunning 2003). This lack of confidence also has a documented effect when responding to survey questions. In fact, there is persuasive evidence showing that, given women's lack of confidence in their abilities, they tend to be generally more risk-averse (Bonte 2015) and are less willing to guess in response to survey items (Kenski and Jamieson 2000; Lizotte and Sidman 2009; Mondak and Anderson 2004). Consequently, men's level of knowledge is systematically overestimated by survey items that employ a format that maximizes the possibility of guessing. While previous studies have abundantly analyzed the role of the “Do not know” protocol of survey knowledge items in explaining part of the gender gap (Lizotte and Sidman 2009; Miller and Orr 2008; Mondak and Anderson 2004; Sturgis, Allum, and Smith 2008), surprisingly little attention has been paid to the potential interaction between gender and the format (open vs. closed) of these items (with the sole exceptions of Ferrin, Fraile, and García-Albacete 2017; and Fortin-Rittberger 2016).

Knowledge items included in conventional survey questionnaires are normally presented in two kinds of format: open and closed. These differ in their ability to stimulate correct responses and to incentivize guessing. While open-ended questions produce conservative estimates of political knowledge, since some knowledgeable respondents will not provide a correct answer unless they are 100% certain (Luskin and Bullock 2011; Mondak 2001), closed-ended questions stimulate correct answers, as respondents can attempt to answer even if they have only partial knowledge of the question. Given the different propensity of women and men to guess, and given that closed-ended items are more likely to incite guessing than open-ended items (Luskin and Bullock 2011), we propose our third hypothesis:

H3:

The size of the gender gap is greater when the survey questions used to measure knowledge are in the closed-ended format.

DATA AND RESEARCH DESIGN

We have designed a survey questionnaire with the aim of assessing citizens’ levels of political knowledge in Spain, a mature democracy where what people know about politics has been studied only recently. According to previous studies, Spaniards present medium to low levels of political knowledge compared to their European counterparts (Fraile 2014). In measuring knowledge, we have taken into account the relevant debates about the appropriate conceptualization, operationalization, and measurement of political knowledge. We introduced up to 33 different items intended to measure knowledge, 27 of them specifically measuring political knowledge. The large number of questions provides a unique dataset in terms of variation in the content, format, and temporal referents of political knowledge items. By going beyond the American and Canadian contexts, the study of the previously unexplored Spanish case scrutinizes previous findings and tests the generalizability of the results.

A face-to-face survey was carried out December 13–30, 2012, on a nationally representative sample of the Spanish population (n = 2962) by the Spanish Centro de Investigaciones Sociologicas.Footnote 3 The survey was carried out in a nonelectoral period (the most recent elections were held more than one year before, on November 20, 2011). We included eight items in the questionnaire reproducing classical factual questions used in previous studies. More specifically, we included questions about relevant political actors (both men and women, reflecting the increasingly high-profile roles of women in key political offices in the Spanish political system; see, for instance, Verge 2012) from the incumbent government and from various opposition parties (for the exact wording of the questions, see the items included in P16, P17, and P18 in the supplementary material).

In order to expand the number of political domains covered by our questionnaire, we designed a number of items aimed at measuring knowledge about the functioning of democratic institutions and specific policies. Although institutions and policies are regarded as among the most important areas citizens should know about (Delli Carpini and Keeter 1996, chapter 2), they are often excluded from standard surveys. Regarding the functioning of democratic institutions, we included questions about the main task of the Spanish Parliament, the content of the Spanish Constitution, and the main characteristics of democracy and dictatorship (see the nine items included in P8, P9, P10, P11, P13, and P14 in the supplementary material).Footnote 4 Concerning specific policy questions, we included seven items about education, healthcare, and other social services (see the items included in P25 to P28 in the supplementary material). Respondents are asked, for example, which level of government is responsible for specific policies such as collecting waste or managing public education or health centers.

Another political domain considered relevant is knowledge about economic institutions and processes (Delli Carpini and Keeter 1996, chapter 2). We included four items in the questionnaire intended to measure respondents’ knowledge of the economy. More specifically, these items ask about unemployment, inflation, the Euribor,Footnote 5 and the market economy (see the four items included in P20 to P23 in the supplementary material).

Finally, we also included five items measuring cultural issues not strictly related to politics, as a strategy to dissuade respondents from the idea that they were being examined. Results of the pilot study (carried out in November 2012) indicated that asking about different dimensions of politics and culture (and using different formats such as images, words, and numbers) made the interview more dynamic and engaging. Results for these cultural items are provided in Table 1 for descriptive purposes. The analyses presented below do not include these cultural items, since they do not measure political knowledge.Footnote 6

Table 1. The size of the gender gap across knowledge items (percentages)

Source: Our elaboration of the data CIS2973. Questions in italics refer to cultural knowledge (P29) and to other knowledge domains (P27). For this reason, successive analyses exclude these six items. *p < 0.05.

Knowledge items vary not only in their content (classical topics, functioning of democratic institutions, specific policies, and economic issues) but also in their format (closed-ended and open-ended items) and their temporal dimension (current versus noncurrent). For all the knowledge questions, the Do Not Know (DK) protocol was neutral (that is, interviewers were instructed to record spontaneous DK answers, but the DK category was not shown on the cards given to the respondents). The supplementary material includes the original wording of each question and its classification in each of the content, format, and temporal categories. Finally, in the design of the questionnaire we used images and numbers when posing the questions to the respondents, seeking to take into account the fact that citizens’ political information can be stored in a visual or even numeric format as opposed to a textual format (see, for instance, Prior 2014).

Table 1 presents all the questions. Columns 2 and 3 in Table 1 show the percentage of correct responses provided by men and women for all items included in the survey. Column 4 provides the size of the gender gap (that is, the percentage of correct answers for men, minus the percentage of correct answers for women). The topics in Table 1 are ordered according to the size of the gender gap on that particular item. Table 1 also classifies the items according to the three main characteristics of interest here: the format, the content, and the temporal dimension of the question (columns 6–8, respectively).
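The gap measure in column 4 is simple to reproduce. The following sketch (with made-up toy responses and hypothetical item codes, not the CIS2973 data) illustrates the computation and the ordering used in Table 1:

```python
# Illustrative sketch (hypothetical data): the per-item gender gap as
# defined in the text -- % correct among men minus % correct among women.

def pct_correct(responses):
    """Share of correct (coded 1) answers, as a percentage."""
    return 100.0 * sum(responses) / len(responses)

# Toy responses per item, keyed by illustrative question codes.
answers = {
    "P1703": {"men": [1, 1, 1, 0], "women": [1, 0, 0, 0]},
    "P9":    {"men": [1, 1, 0, 1], "women": [1, 1, 1, 0]},
}

gaps = {item: pct_correct(r["men"]) - pct_correct(r["women"])
        for item, r in answers.items()}

# Items can then be ordered by gap size, as in Table 1.
for item, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(item, round(gap, 1))
```

A positive gap favors men; a negative gap (as for some policy items in Table 1) favors women.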

Table 1 shows that, on average, men provide a significantly higher percentage of correct answers than women on 24 of the 33 items included in the survey. For the remaining nine items, on the other hand, the gender gap disappears and sometimes even reverses, with women providing similar or even more correct answers than men. This is the case, for example, for the question about the age at which free public education starts, or about where citizens need to go to obtain the health card.Footnote 7

The gender gap in knowledge appears evident, although its magnitude varies across items. In some, men provide as much as 16% more correct answers than women, as is the case for the questions that ask about the logo of the UGT (one of the largest trade unions in Spain; see P1703 in Table 1) and the party of Cayo Lara (coordinator and spokesperson of the left-wing party United Left at the time of the interview; see P1802 in Table 1). The gap is smaller, however, in questions relating to the functioning of democracy and its institutions and to specific policies. The difference in the percentage of correct answers between men and women is only around 4%, for example, when respondents are asked about the main function of the Spanish Parliament or who has the right to vote (see, respectively, P9 and P10 in Table 1).

Table 1 offers a first hint that the magnitude of the gender gap is related to the content of the knowledge items. Men seem to perform particularly well on questions about the names of political actors and the parties they belong to, and on items probing familiarity with economic issues. On the other hand, levels of knowledge tend to become more balanced between men and women once we consider other political aspects, for instance the functioning and competences of democratic institutions. The average magnitude of the gender gap for the four questions about the economy is 10.4% and is very similar (10.1%) for the eight conventional items of knowledge about political actors and parties. In contrast, the average magnitude of the gender gap in favor of men for the nine questions about the functioning of democratic institutions decreases considerably (4.2%), and the gap reverses in favor of women for the seven policy-specific questions (−2%).

It is less clear from Table 1, however, whether the format and temporal dimension of the items are also influencing the magnitude of the gender gap in knowledge. The average magnitude of the gender gap for the seven open-ended knowledge questions appears smaller (2.1%) than the average magnitude for the 26 closed-ended knowledge questions (5.2%). Finally, the differences in the average magnitude of the gender gap for the 22 questions on current issues (4.2%) and for the 11 “not current” questions (5.2%) seem negligible.

All this preliminary descriptive evidence suggests that the magnitude of the gender gap in knowledge depends more on the content of the questions than on their format or temporal reference. We need, however, to rigorously examine the patterns found in Table 1. We have reshaped the data into long format and conducted multilevel analysis.Footnote 8 This empirical strategy allows us to group knowledge items within each individual respondent and thus to circumvent the assumption of independent residuals. Individuals therefore become the level-two unit of analysis, grouping each of the 27 knowledge questions (level one).Footnote 9
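The wide-to-long reshape described above can be sketched as follows (variable names and toy values are our own illustration, not the actual dataset): one row per respondent becomes one row per respondent-item pair, so that items (level one) nest within respondents (level two).

```python
# Minimal sketch of the wide-to-long reshape (hypothetical data).
wide = [
    {"id": 1, "female": 1, "P9": 1, "P1703": 0},
    {"id": 2, "female": 0, "P9": 1, "P1703": 1},
]
items = ["P9", "P1703"]

# Each respondent contributes one level-one row per knowledge item,
# carrying along the level-two (individual) covariates.
long_rows = [
    {"id": row["id"], "female": row["female"], "item": item, "correct": row[item]}
    for row in wide
    for item in items
]

# 2 respondents x 2 items -> 4 level-one observations.
print(len(long_rows))
```

In the actual analysis each of the 2,962 respondents contributes 27 such rows, and item-level characteristics (format, content, temporality) are then merged onto the `item` key.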

Question characteristics are measured by reproducing columns 6–8 of Table 1 and constitute the main independent variables here. The first variable classifies the questions according to their format, taking the value 0 for closed-ended and 1 for open-ended questions. The second variable differentiates the items according to their substantive content. As previously discussed, this includes (i) classical knowledge topics (those relating to political parties and actors); (ii) economic issues (relating to the economy); (iii) institutions and democracy (relating to the functioning of democracy and its institutions); and (iv) specific policies.Footnote 10

The third variable distinguishes the items according to their temporal dimension, taking the value 1 for questions relating to current events or facts covered by the media and the value 0 for questions unrelated to issues being discussed in the media at the time of the interview, which normally refer to the past or to specific rules of the political system. More precisely, we coded as 1 all questions concerned with issues being discussed in the media at the time of the interview (December 2012). To this end, we performed a content analysis of two of the main national newspapers, El Pais and El Mundo, during the survey fieldwork period, December 13–30, 2012. We also analyzed the content of the public broadcasts included on the website of TVE1, the most popular Spanish public television channel.Footnote 11 The supplementary material provides the original wording of all the questions as well as the way each item is classified on each of the three question-level variables.

Our dependent variable classifies responses as correct (value 1) versus the remaining options (both incorrect and DK answers: value 0). Accordingly, we have used random-intercept multilevel mixed-effects logistic regression.Footnote 12 This estimation allows us to test the extent to which the effect of the three specific features of the knowledge items (format, content, and temporality)Footnote 13 on the probability of providing a correct answer is conditional on the gender of the respondent, controlling for the standard antecedents of knowledge according to previous literature: education, age, political motivations, and exposure to different news media (see Table 2).Footnote 14
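In schematic form (our notation, a simplified sketch rather than the exact specification estimated in Table 2), the random-intercept model can be written as:

```latex
% Simplified sketch of the random-intercept mixed-effects logit.
% i indexes knowledge items (level one), j respondents (level two).
\Pr(y_{ij} = 1) = \operatorname{logit}^{-1}\!\Big(
    \beta_0 + u_j
    + \beta_1\,\mathrm{trait}_i
    + \beta_2\,\mathrm{female}_j
    + \beta_3\,(\mathrm{trait}_i \times \mathrm{female}_j)
    + \mathbf{x}_j'\boldsymbol{\gamma}
\Big), \qquad u_j \sim N(0, \sigma_u^2)
```

where trait_i stands for an item-level characteristic (format, content, or temporality), x_j collects the individual-level controls (education, age, political motivations, and media exposure), u_j is the respondent-specific random intercept, and the interaction coefficient beta_3 captures whether the gender gap varies with the item characteristic.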

Table 2. The likelihood of getting a correct answer as a function of the type of questions and gender: Random intercept multilevel mixed-effects logit estimations

Source: Our elaboration of the data CIS2973.

Notes: Robust standard errors in parentheses; ***p < 0.01, **p < 0.05, *p < 0.1; aReference category: content = classic.

FINDINGS

Table 2 presents the results of a random intercept multilevel mixed-effects logistic regression model. It shows results for the estimation of each of the independent variables at the question level plus its respective interaction term with gender (including all constitutive terms), controlling for the standard antecedents of knowledge at the individual level. Table A1 in the supplementary material provides descriptive statistics of all the variables used in the estimations.

Regarding question format, Table 2 corroborates previous research showing that the closed-ended format is less demanding and that the magnitude of the gender gap is reduced when the knowledge questions are formulated as open-ended rather than closed-ended (Luskin and Bullock 2011). However, what is the substantive magnitude of this finding? Figures 1 to 3 display the predicted probabilities of providing a correct answer for men and women across question characteristics. Figure 1 shows that the probability of women providing a correct answer to closed-ended items is 5% lower than that of men (that is, 64.1% for women versus 69.1% for men). On the other hand, when open-ended items are used, the difference vanishes: the probability that women provide a correct answer to open-ended items is 76.2%, versus 75.5% for men. This finding suggests that, keeping the DK protocol constant (as we did with our 27 items), the higher propensity of men to guess in comparison to women amplifies gender differences when closed-ended questions are used to measure political knowledge.Footnote 15

Figure 1. Marginal effects of the format of the questions on the probabilities of providing a correct answer. Predictions are calculated using the margins command in Stata and on the basis of the estimates provided in Table 2.
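To make the link between the logit estimates and the predicted probabilities concrete, the following sketch shows how coefficients with a gender-by-format interaction translate into the probability gaps of the kind plotted in Figure 1. The coefficient values are invented for illustration; only the mechanics mirror the estimation:

```python
import math

# Hedged sketch: converting logit coefficients (with a female x open-ended
# interaction) into predicted probabilities. All coefficient values below
# are hypothetical, not the estimates reported in Table 2.

def invlogit(x):
    """Inverse logit: maps log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

b0, b_open, b_female, b_open_female = 0.8, 0.35, -0.22, 0.25  # hypothetical

def p_correct(open_ended, female):
    eta = (b0 + b_open * open_ended + b_female * female
           + b_open_female * open_ended * female)
    return invlogit(eta)

gap_closed = p_correct(0, 0) - p_correct(0, 1)  # men minus women, closed items
gap_open = p_correct(1, 0) - p_correct(1, 1)    # men minus women, open items

# With these toy values, a positive interaction shrinks the gap
# for open-ended items, the pattern described in the text.
print(gap_closed > gap_open)
```

In the article itself, the analogous quantities are computed from the Table 2 estimates with Stata's margins command, averaging over the other covariates.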

As for the content of the questions, Figure 2 shows that women are 7.5% less likely than men to provide a correct answer to "classic knowledge" questions (59.1% for women versus 66.6% for men). The gender gap is equally large with regard to questions relating to the economy: the probability that men give a correct answer to this type of question is 8.8% higher than that of women (50.1% for women versus 58.9% for men). Still, the gender gap decreases significantly when measures of knowledge are widened to areas outside the traditional items. This is the case for the two types of items considered here: those related to the functioning of democracy and its institutions and those related to specific policies. The propensity of women to provide a correct answer to items measuring knowledge about the functioning of democracy is only 3.3% below that of men (71.9% for women versus 75.2% for men). Moreover, gender differences even reverse for knowledge of specific policies, where women's probability of providing a correct answer is 3.4% higher than that of men (80.5% for women versus 77.1% for men). This evidence confirms that the magnitude of the gender gap in knowledge is larger when respondents are asked about electoral politics and economic matters, whereas it decreases substantially or even disappears when we expand the definition of political knowledge to additional areas that are regarded as equally important components of what citizens should know about politics (Delli Carpini and Keeter 1996). While previous studies have demonstrated that the differences between men and women tend to shrink when respondents are asked about local politics, civic rights, social policies, and women candidates (Barabas et al. 2014; Delli Carpini and Keeter 1996, 2000; Dolan 2011; Kenski and Jamieson 2000; Shaker 2012; Stolle and Gidengil 2010), we also show here that the magnitude of the gender gap in favor of men substantially decreases for questions about the functioning of democracy and its institutions.
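The gaps shown in Figure 2 are simply differences between men's and women's predicted probabilities of a correct answer (computed in Stata with the margins command). As an illustration, the point estimates reported in the text can be reproduced with a short sketch; the probabilities below are those stated in the paragraph above, not new estimates:

```python
# Gender gaps by question content: difference (in percentage points)
# between men's and women's predicted probabilities of a correct answer.
# The probabilities are those reported in the text (from Stata's margins).
predicted = {
    "classic knowledge": {"men": 66.6, "women": 59.1},
    "economy":           {"men": 58.9, "women": 50.1},
    "democracy":         {"men": 75.2, "women": 71.9},
    "specific policies": {"men": 77.1, "women": 80.5},
}

# A positive gap favors men; note the sign reverses for specific policies.
gaps = {topic: round(p["men"] - p["women"], 1) for topic, p in predicted.items()}
```

This makes explicit that the "marginal effect" of gender discussed here is a contrast of predicted probabilities, not a raw regression coefficient.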

Figure 2. Marginal effects of the content of the questions on the probabilities of providing a correct answer. Predictions are calculated using the margins command in Stata and on the basis of the estimates provided in Table 2.

Regarding the temporal dimension of the knowledge questions, Figure 3 shows that the probability that women provide a correct answer to a question relating to current affairs is about 5.5% lower than for men (63.0% for women and 68.5% for men). However, gender differences vanish for knowledge that is not related to current news (72.8% for women versus 73.9% for men; see Figure 3).

Figure 3. Marginal effects of the temporal dimension of the questions on the probabilities of providing a correct answer. Predictions are calculated using the margins command in Stata and on the basis of the estimates provided in Table 2.

All the estimations summarized in Table 2 include respondents’ exposure to media among the standard antecedents of knowledge identified in previous literature. However, we are aware that this individual-level variable could capture part of the gender differences in the capacity to correctly answer current and noncurrent political knowledge questions, since there are relevant gender differences in media exposure. To rule out this possibility, we replicated the estimation excluding the three variables measuring respondents’ declared media exposure. The results fully confirm the findings of Figure 3. Without respondents’ media exposure in the estimation, the probability that women provide a correct answer to a question relating to current affairs is about 6.3% lower than for men (62.7% for women and 69.0% for men), while the gender gap shrinks significantly for questions that do not refer to current news (72.6% for women and 74.4% for men). Although previously overlooked in the literature, this finding suggests an important explanation for the gender gap in knowledge: women’s lower propensity, compared with men, to be exposed to political news in the mass media. Consequently, gender differences widen when questions relating to current affairs are used to measure knowledge. We discuss the implications of all these findings for the study of both political knowledge and its antecedents in more detail in the next section.

CONCLUSION AND DISCUSSION

Political knowledge is considered a key resource for the exercise of citizenship (Delli Carpini and Keeter 1996). The more knowledgeable people are, the better they understand the impact of public policies on their own interests, and the more likely they are to vote to appropriately punish and reward governments. Previous evidence has also shown that knowledgeable citizens tend to be more progressive, more tolerant, and less approving of the president (Althaus 2003; Delli Carpini and Keeter 1996). What is more, political knowledge also matters for the proper functioning of democracy because it is necessary for an active citizenry. In particular, knowledge is considered a crucial resource linked to citizens’ ability to effectively participate and engage in all types of political activities, both electoral and nonelectoral (Delli Carpini and Keeter 1996). For these reasons, inequalities in observed levels of political knowledge have concerned scholars for decades. This article contributes to the ongoing debate about the appropriate measurement of political knowledge, on the one hand, and to the understanding of the origins of repeatedly observed gender gaps, on the other.

Our findings show that the size of the gender gap in knowledge depends both on the content of the topics covered by the survey items and on their format, confirming previous studies (Barabas et al. 2014; Delli Carpini and Keeter 1996; Ferrin, Fraile, and García-Albacete 2017; Fortin-Rittberger 2016; Stolle and Gidengil 2010). However, we also show that, although previously overlooked in the literature, the size of the gender gap is additionally contingent on the temporal dimension of the knowledge questions: the gap is reduced for items that are not in the current news.

These findings have two key implications for the study of political knowledge: the first relates to the debate about its measurement, while the second sharpens our understanding of its antecedents. First, this study suggests that there are varieties of political knowledge, each revealing something different about what citizens know and understand about politics. Limiting the measurement of knowledge to electoral and partisan politics appears to provide an incomplete picture of what people know (or do not know) about politics. Our findings also suggest that not only the content but also the format and the temporal dimension of the questions should vary in survey questionnaires that aim to assess citizens’ levels of political knowledge.

However, are these different dimensions of knowledge equally important in predicting citizens’ propensity to participate in politics? Can all these types of knowledge be considered equally significant resources for political engagement? Evaluating whether each knowledge domain predicts political participation equally well would require further analysis that goes beyond the scope of this article; however, preliminary tests indicate that all four knowledge domains identified here are equally relevant predictors of both electoral and nonelectoral political participation. Thus, including diverse measures of political knowledge in surveys will not imply losing our ability to understand political behavior; on the contrary, it will most likely improve our capacity to explain citizens’ propensity to engage in politics.

A second implication of our findings is that not only the size of the gender gap but perhaps also the extent of other gaps previously observed in political knowledge is conditional on the kind of knowledge being measured. Our hypotheses were derived from the combination of three strands of research that shed light on the potential effect of the type of question on the estimated magnitude of the gender gap. The same might be done to understand other gaps in political knowledge, such as those that depend on citizens’ cognitive abilities, their personality, or their motivation to be exposed to the media. For instance, a recent study has shown that the magnitude of the informative effect of various forms of media use varies according to the type of knowledge measured (Eveland and Schmitt 2015). Nonetheless, much remains to be learned about other sources of knowledge inequalities. This constitutes a challenging research agenda that awaits public opinion researchers.

The two implications highlighted here lead to the same conclusion: the debate on how to measure political knowledge is very much alive. Our work suggests that future research might attempt to follow innovative paths in the development of measures of political knowledge. This implies considering more issue-specific knowledge items than the traditional battery of electoral and partisan questions by including different substantive content, diverse question formats, and items that refer both to past and to current political processes. The debate remains open on how to obtain a measure of political knowledge that is replicable, includes indicators relevant to the current political context, and can be compared over time and across groups of citizens.

SUPPLEMENTARY MATERIAL

To view supplementary material for this article, please visit https://doi.org/10.1017/S1743923X1700023X

Footnotes

We acknowledge the financial support of the Spanish Ministry of Science and Innovation (grant numbers CSO2012-32009 and CSO2016-75090-R) and of the Centro de Investigaciones Sociologicas (CIS).

1. Descriptive evidence from our sample confirms previous studies regarding the gender gap in exposure to political news. While there are only marginal differences regarding television, the percentage of women in our survey who report not reading political news in newspapers at all is 11% higher than that of men (41% of women and 30% of men). The gap among those who report reading newspapers on a daily basis is even larger (22% of women and 36% of men). With regard to our measure of breadth of media exposure (the number of TV channels respondents use to inform themselves), there is a statistically significant difference between the average of 2.94 channels watched by men and the 2.75 watched by women.

2. Data collected by the Global Media Monitoring Project show that Spain is no exception in this regard. According to its European and Spanish regional reports for 2010 and 2015, the presence of women in Spanish media was slightly above the European average, whether as subjects of news stories (always below 30%) or as reporters (around 40%). Women’s presence in political news, as in other countries, is even lower than in social, health, or crime stories. Details are available at www.whomakesthenews.org (accessed September 3, 2017).

3. The sample is based on a stratified random design. More technical details about the sample design can be found at www.cis.es.

4. Similar knowledge items were used, for example, in the International Civic and Citizenship Education Study (http://www.iea.nl/iccs_2009.html) but also in some U.S. surveys, such as those of the Roper Center or Delli Carpini and Keeter’s 1989 survey on political knowledge (Delli Carpini and Keeter 1996, 67).

5. Euribor is short for Euro Interbank Offered Rate. The Euribor rates are based on the interest rates at which a panel of European banks borrows funds from one another. The Euribor rates provide the basis for the price or interest rate of all kinds of financial products, such as interest rate swaps, interest rate futures, saving accounts, etc.

6. We replicated the estimation with all these issues included as an additional category of the variable content, and the findings are consistent (see Figure A1 and Table A1 in the supplementary material).

7. Note that Spain has a universal public health system. All citizens are entitled to a public health card, which they need to access public services. Health system responsibilities are decentralized; thus this question refers to the level of government that is responsible for this policy area (the complete wording is available in the supplementary material, P26).

8. This means that for each person interviewed there are 27 observations, so the total number of observations in the original data set is multiplied by 27 (from 2,962 to 79,974 observations). Recall, however, that the numbers of observations at the individual and question levels reported in Table 2 are 2,771 and 74,817, respectively, due to missing observations on the individual-level variables (education, political interest, and number of TV channels used; see Table A1 in the supplementary material).
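The wide-to-long expansion described in this footnote can be sketched as follows; each respondent contributes one row per knowledge item, and the variable names are illustrative, not those of the actual survey file:

```python
# Sketch of the expansion in footnote 8: stacking the 27 knowledge items
# so that each respondent contributes 27 rows to the estimation data set.
# 2,962 respondents x 27 items = 79,974 stacked observations.
n_respondents = 2962
n_items = 27

# "respondent" and "item" are illustrative identifiers only.
long_rows = [
    {"respondent": r, "item": q}
    for r in range(n_respondents)
    for q in range(n_items)
]
```

In practice, the same reshaping is what Stata's long-format setup for xtmelogit requires; rows sharing a respondent identifier are then linked through the random intercept.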

9. Additional reasons lie behind the decision to use multilevel estimation rather than conducting separate individual-level analyses for different indexes of political knowledge (each including all items corresponding to one category of question type: content, format, and temporality). First, the number of items underlying each dependent variable would change across the individual-level estimations, which creates a notable problem for comparing the magnitude of the gender coefficient across equations. Second, the multilevel strategy allows a simultaneous comparison between the different categories of the three question-level variables. Third, multilevel modeling allows us to take into account the fact that the same person answers many questions (27) while simultaneously controlling for the standard antecedents of political knowledge at the individual level.

10. We have not included the five cultural items or the item about knowledge of the doctor’s name, as we have no way of checking whether the name provided by respondents was correct. We replicated the same analysis with all 33 items, and the findings are consistent (see Figure A1 and Table A1 in the supplementary material).

11. While designing the questionnaire, we performed a daily content analysis of El Pais, El Mundo, and TVE1 during the last week of October and the first week of November 2012. This content analysis allowed us to identify the topics most regularly mentioned in the media. We also used the results of the CIS survey closest to December 2012 (CIS 2960, October 2012) to identify the most popular and relevant topics according to respondents. Our final selection of the current topics included in the survey questions was based on these two criteria: (1) topics that were the most popular according to previous surveys and (2) topics that were widely covered by the media.

12. More specifically, we used the Stata command xtmelogit. An alternative cross-classified random-effects model could not be estimated because we lack statistical efficiency. (Our independent variables at the question level are all categorical, especially content, which contains up to four categories, while the number of observations at the first level is small: 27 knowledge items per respondent.) Margins for the figures estimating predicted probabilities are calculated using fixed effects.
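The logic of computing predicted probabilities "using fixed effects" can be made concrete. In a random-intercept logit, the linear predictor is the fixed part plus a respondent-specific intercept; margins based on fixed effects set that intercept to zero. The sketch below uses invented coefficients purely for illustration, not estimates from Table 2:

```python
import math

# Illustration of footnote 12: in a random-intercept logit the linear
# predictor is eta = Xb + u_j, where u_j is the respondent-specific
# intercept. Predicted probabilities "using fixed effects" set u_j = 0.
# The coefficients below are invented for the sketch.

def predicted_probability(beta0, beta_female, female, u_j=0.0):
    """Inverse-logit of the linear predictor (fixed part plus optional u_j)."""
    eta = beta0 + beta_female * female + u_j
    return 1.0 / (1.0 + math.exp(-eta))

p_men = predicted_probability(beta0=0.7, beta_female=-0.3, female=0)
p_women = predicted_probability(beta0=0.7, beta_female=-0.3, female=1)
gap = p_men - p_women  # the gender "marginal effect" in probability terms
```

The contrast p_men minus p_women is, in simplified form, what Stata's margins reports for the gender variable when random effects are held at zero.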

13. A final important feature at the question level is the difficulty of the question. Our survey is the first to date in Spain to include so many knowledge items; therefore, only a few of these questions have previously been put to a Spanish sample. Consequently, we could not find an exogenous and objective way to code ex ante the level of difficulty of each item, as is standard in the measurement of other areas of knowledge (e.g., education). Nonetheless, we replicated the estimations including the average percentage of correct answers to each question in the whole sample at the aggregate level (a measure that we recognize is completely endogenous to our dependent variable). The findings are consistent, which makes us more confident about their robustness.

14. Political motivation is included by means of respondents’ declared interest in politics (ranging from 0, no interest at all, to 3, very much interested). Media exposure is controlled for by including the frequency with which respondents state they watch news on television (from 0, never, to 4, every day) and the frequency with which they read newspapers (from 0, never, to 4, every day). Previous tests included additional individual-level independent variables that could contribute to explaining respondents’ knowledge and/or affect the magnitude of the gender gap, such as (1) the size of the family, (2) the number of small children (younger than 10), (3) the amount of daily (or weekly) leisure time available to the interviewee, (4) the type of job (part-time or full-time) of the interviewee, and (5) the sex of the interviewer. Since the results shown here were robust, we decided to present the simplest estimations, summarized in Table 2.

15. We recognize that the distinction between open-ended and closed-ended questions constitutes an oversimplification. Most of the closed-ended questions in the questionnaire offered four response options, but for substantive reasons some required a different number of answer categories (for example, we were obliged to use only three response options for items P2501 to P2504). Previous literature shows that the tendency to guess is more pronounced when the number of response options is very small (that is, two options: correct versus incorrect), whereas it tends to decrease as the number of alternative responses becomes larger (Rodriguez 2005). As a robustness test, we replicated the same estimation with an alternative specification of the format variable that includes four categories (open; closed, 6 categories; closed, 4 categories; closed, 2/3 categories). The results corroborate the findings presented above and confirm that the smaller the number of response options, the larger the gender gap (results are available from the authors). However, since the number of response categories was decided on substantive grounds, we present the findings of the most parsimonious test (the contrast between open- and closed-ended formats).

REFERENCES

Aalberg, Toril, Blekesaune, Arild, and Elvestad, Eiri. 2013. “Media Choice and Informed Democracy: Toward Increasing News Consumption Gaps in Europe?” The International Journal of Press/Politics 18 (3): 281–303. doi: 10.1177/1940161213485990.
Althaus, Scott L. 2003. Collective Preferences in Democratic Politics: Opinion Surveys and the Will of the People. Cambridge: Cambridge University Press.
Barabas, Jason, Jerit, Jennifer, Pollock, William, and Rainey, Carlisle. 2014. “The Question(s) of Political Knowledge.” American Political Science Review 108 (4): 840–55. doi: 10.1017/S0003055414000392.
Benesch, Christine. 2012. “An Empirical Analysis of the Gender Gap in News Consumption.” Journal of Media Economics 25 (3): 147–67. doi: 10.1080/08997764.2012.700976.
Bonte, Werner. 2015. “Gender Difference in Competitive Preferences: New Cross-Country Empirical Evidence.” Applied Economics Letters 22 (1): 71–75. doi: 10.1080/13504851.2014.927560.
Boudreau, Cheryl, and Lupia, Arthur. 2011. “Political Knowledge.” In Cambridge Handbook of Experimental Political Science, ed. Druckman, James N., Green, Donald P., Kuklinski, James H., and Lupia, Arthur. New York: Cambridge University Press, 171–83.
Burns, Nancy, Schlozman, Kay Lehman, and Verba, Sidney. 2001. The Private Roots of Public Action. Cambridge: Harvard University Press.
Campbell, Rosie, and Winters, Kristi. 2008. “Understanding Men's and Women's Political Interest: Evidence from a Study of Gendered Political Attitudes.” Journal of Elections, Public Opinion, and Parties 18 (1): 53–74. doi: 10.1080/17457280701858623.
Coffé, Hilde. 2013. “Women Stay Local, Men Go National and Global? Gender Differences in Political Interest.” Sex Roles 69 (5–6): 323–38. doi: 10.1007/s11199-013-0308-x.
Converse, Philip E. 1964. “The Nature of Belief Systems in Mass Publics.” In Ideology and Discontent, ed. Apter, David E. New York: Free Press, 206–61.
Curran, James, Coen, Sharon, Soroka, Stuart, Aalberg, Toril, Hayashi, Kaori, Hichy, Zira, Iyengar, Shanto, Jones, Paul, Mazzoleni, Gianpietro, Papathanassopoulos, Stylianos, Rhee, June Woong, Rojas, Hernando, Rowe, David, and Tiffen, Rod. 2014. “Reconsidering ‘Virtuous Circle’ and ‘Media Malaise’ Theories of the Media: An 11-Nation Study.” Journalism 15 (7): 815–33. doi: 10.1177/1464884913520198.
Delli Carpini, Michael X., and Keeter, Scott. 1996. What Americans Know about Politics and Why It Matters. New Haven: Yale University Press.
Delli Carpini, Michael X., and Keeter, Scott. 2000. “Gender and Political Knowledge.” In Gender and American Politics: Women, Men, and the Political Process, ed. Rinehart, Sue Tolleson and Josephson, Jyl. Armonk, NY: M.E. Sharpe, 21–52.
Dolan, Kathleen. 2011. “Do Women and Men Know Different Things? Measuring Gender Differences in Political Knowledge.” The Journal of Politics 73 (1): 97–107. doi: 10.1017/S0022381610000897.
Ehrlinger, Joyce, and Dunning, David A. 2003. “How Chronic Self-Views Influence (and Potentially Mislead) Estimates of Performance.” Journal of Personality and Social Psychology 84 (4): 5–17. doi: 10.1037/a0017452.
Eveland, William P., Jr., and Schmitt, Josephine B. 2015. “Communication Content and Knowledge Content Matters: Integrating Manipulation and Observation in Studying News and Discussion Learning Effects.” Journal of Communication 65 (1): 170–91. doi: 10.1111/jcom.12138.
Ferrin, Monica, Fraile, Marta, and Garcia-Albacete, Gema. 2017. “The Gender Gap in Political Knowledge: Is It All About Guessing? An Experimental Approach.” International Journal of Public Opinion Research 29 (1): 111–32. doi: 10.1093/ijpor/edv042.
Fitzgerald, Jennifer. 2013. “What Does ‘Political’ Mean to You?” Political Behavior 35 (3): 453–79. doi: 10.1007/s11109-012-9212-2.
Fortin-Rittberger, Jessica. 2016. “Cross-National Gender Gaps in Political Knowledge: How Much Is Due to Context?” Political Research Quarterly 69 (3): 391–402. doi: 10.1177/1065912916642867.
Fraile, Marta. 2011. “Widening or Reducing the Knowledge Gap? Testing the Media Effects on Political Knowledge in Spain (2004–2006).” International Journal of Press/Politics 16 (2): 163–84.
Fraile, Marta. 2014. “Do Women Know Less about Politics than Men? The Gender Gap in Political Knowledge in Europe.” Social Politics 21 (2): 261–89. doi: 10.1093/sp/jxu006.
Fraile, Marta, and Gomez, Raul. 2017. “Why Does Alejandro Know More about Politics than Catalina? Explaining the Latin American Gender Gap in Political Knowledge.” British Journal of Political Science 47 (1): 91–112. doi: 10.1017/S0007123414000532.
Gibson, James J., and Caldeira, Gregory A. 2009. “Knowing the Supreme Court? A Reconsideration of Public Ignorance of the High Court.” Journal of Politics 71 (2): 429–41. doi: 10.1017/S0022381609090379.
Kenski, Kate, and Jamieson, Kathleen Hall. 2000. “The Gap in Political Knowledge: Are Women Less Knowledgeable Than Men About Politics?” In Everything You Think You Know About Politics … And Why You're Wrong, ed. Jamieson, Kathleen Hall. New York: Basic Books, 83–89.
Kittilson, Miki Caul, and Schwindt-Bayer, Leslie A. 2012. The Gendered Effects of Electoral Institutions: Political Engagement and Participation. Oxford: Oxford University Press.
Lizotte, Mary-Kate, and Sidman, Andrew H. 2009. “Explaining the Gender Gap in Political Knowledge.” Politics & Gender 5 (2): 127–52. doi: 10.1017/S1743923X09000130.
Lupia, Arthur. 2006. “How Elitism Undermines the Study of Voter Competence.” Critical Review 18 (1–3): 217–32. doi: 10.1080/08913810608443658.
Luskin, Robert, and Bullock, John G. 2011. “‘Don't Know’ Means ‘Don't Know’: DK Responses and the Public's Level of Political Knowledge.” The Journal of Politics 73 (2): 547–57. doi: 10.1017/S0022381611000132.
Miller, Melissa K., and Orr, Shannon K. 2008. “Experimenting with a ‘Third Way’ in Political Knowledge Estimation.” Public Opinion Quarterly 72 (4): 768–80. doi: 10.1093/poq/nfn057.
Mondak, Jeffrey J. 2001. “Developing Valid Knowledge Scales.” American Journal of Political Science 45 (1): 224–38. doi: 10.2307/2669369.
Mondak, Jeffrey J., and Anderson, Mary R. 2004. “The Knowledge Gap: A Reexamination of Gender-Based Differences in Political Knowledge.” Journal of Politics 66 (2): 492–512. doi: 10.1111/j.1468-2508.2004.00161.x.
Poindexter, Paula, Meraz, Sharon, and Weiss, Amy Schmitz, eds. 2008. Women, Men, and News: Divided and Disconnected in the News Media Landscape. New York: Routledge.
Prior, Markus. 2014. “Visual Political Knowledge: A Different Road to Competence?” Journal of Politics 76 (1): 41–57. doi: 10.1017/S0022381613001096.
Prior, Markus, and Lupia, Arthur. 2008. “Money, Time, and Political Knowledge: Distinguishing Quick Recall and Political Learning Skills.” American Journal of Political Science 52 (1): 169–83. doi: 10.1111/j.1540-5907.2007.00306.x.
Robinson, Joshua. 2015. “Who Knows? Question Format and Political Knowledge.” International Journal of Public Opinion Research 27 (1): 1–21. doi: 10.1093/ijpor/edu019.
Rodriguez, Michael C. 2005. “Three Options Are Optimal for Multiple-Choice Items: A Meta-Analysis of 80 Years of Research.” Educational Measurement: Issues and Practice 24 (2): 3–13.
Ross, Karen, and Carter, Cynthia. 2011. “Women and News: A Long and Winding Road.” Media, Culture & Society 33 (8): 1148–65. doi: 10.1177/0163443711418272.
Shaker, Lee. 2012. “Local Political Knowledge and Assessment of Citizen Competence.” Public Opinion Quarterly 76 (3): 525–37. doi: 10.1093/poq/nfs018.
Shehata, Adam, and Stromback, Jesper. 2011. “A Matter of Context: A Comparative Study of Media Environments and News Consumption Gaps in Europe.” Political Communication 28 (1): 110–34. doi: 10.1080/10584609.2010.543006.
Stolle, Dietlind, and Gidengil, Elisabeth E. 2010. “What Do Women Really Know? A Gendered Analysis of Varieties of Political Knowledge.” Perspectives on Politics 8 (1): 93–109. doi: 10.1017/S1537592709992684.
Sturgis, Patrick, Allum, Nick, and Smith, Patten. 2008. “An Experiment on the Measurement of Political Knowledge in Surveys.” Public Opinion Quarterly 72 (1): 90–102. doi: 10.1093/poq/nfm032.
Verba, Sidney, Burns, Nancy, and Schlozman, Kay Lehman. 1997. “Knowing and Caring about Politics: Gender and Political Engagement.” Journal of Politics 59 (4): 1051–72. doi: 10.2307/2998592.
Verge, Tania. 2012. “Institutionalizing Gender Equality in Spain: From Party Quotas to Electoral Gender Quotas.” West European Politics 35 (2): 395–414. doi: 10.1080/01402382.2011.648014.
Wolak, Jennifer, and McDevitt, Michael. 2011. “The Roots of the Gender Gap in Political Knowledge in Adolescence.” Political Behavior 33 (3): 505–33. doi: 10.1007/s11109-010-9142-9.
Table 1. The size of the gender gap across knowledge items (percentages)

Table 2. The likelihood of getting a correct answer as a function of the type of questions and gender: Random intercept multilevel mixed-effects logit estimations
