Theories of democracy commonly assume that citizens must have a certain degree of information and factual knowledge to be able to understand the functioning of institutions, the performance of the incumbent government, and the actions of the main political actors. Political knowledge helps people to better assess their interests as individuals and as members of groups (Delli Carpini and Keeter 1996). Moreover, governments have more incentives to be responsive when they can be held accountable, but citizens can hold governments accountable for their actions only when they know what governments are actually doing.
However, research on public opinion and political participation has frequently shown that citizens’ average level of information, knowledge, and understanding of politics is relatively poor (Boudreau and Lupia 2011; Converse 1964; Delli Carpini and Keeter 1996). Equally concerning is the fact that political knowledge appears to be unevenly distributed. This conclusion rests on evidence from survey-based studies that rely on a range of knowledge items focused mainly on the recall of electoral and partisan facts.
After decades of debate, however, there is still no generally accepted measure of the public’s knowledge about politics. The concept is theoretically complicated and potentially multidimensional (Mondak 2001), which makes it particularly difficult to operationalize.
Scholars have devoted substantial research effort to estimating average levels of citizens’ political knowledge as well as its main antecedents, with the United States being the most frequently studied case. Despite the valuable insights provided by previous studies, much of this research suffers from limitations related to measurement (Barabas et al. 2014; Boudreau and Lupia 2011; Lupia 2006).
In fact, recent studies have debated the effects that the format of the question (Fortin-Rittberger 2016; Luskin and Bullock 2011; Miller and Orr 2008; Robison 2015; Sturgis, Allum, and Smith 2008), the survey protocol (Prior and Lupia 2008), the use of images versus words (Prior 2014), and the procedure for coding responses (Gibson and Caldeira 2009) have on both observed levels of knowledge and its antecedents. What is more, a comprehensive study of the U.S. case covering the period 2007–2010 found that the magnitude of the effect of education, media exposure, and gender on citizens’ knowledge about politics depends very much on the type of questions used to estimate such effects (Barabas et al. 2014).
Among all the antecedents of knowledge, this article focuses on gender. The gender gap in political knowledge in favor of men identified in previous literature (Burns, Schlozman, and Verba 2001; Delli Carpini and Keeter 1996, 2000; Dolan 2011; Fortin-Rittberger 2016; Fraile 2014; Fraile and Gómez 2017; Kenski and Jamieson 2000; Mondak and Anderson 2004; Stolle and Gidengil 2010) constitutes one of the most puzzling sources of knowledge inequalities. We test whether the magnitude of the gender gap in knowledge depends on the characteristics of the questions employed to measure individuals’ political knowledge. By integrating three strands of research (feminist, media studies, and psychology) that have rarely been employed together to date, we identify three characteristics of knowledge survey items that are relevant in explaining the gender gap documented in previous studies: their content, their format, and their temporal dimension.
This study draws on a unique face-to-face survey carried out on a nationally representative sample of the Spanish population, including up to 27 political knowledge items, and shows that the size of the gender gap in knowledge is conditional on the type of question being asked. The gender gap in favor of men diminishes substantially (or even reverses) for questions that are open ended, that do not rely on knowledge of recent facts, and that go beyond the traditional arenas of electoral and partisan politics. Analyzing the effects of these three item characteristics simultaneously allows us to contribute to the ongoing debate about how knowledge should be measured.
A COMPREHENSIVE LOOK AT THE GENDER GAP IN KNOWLEDGE
The existence of relevant gender differences in levels of political knowledge is well documented in previous research. Numerous studies show that women tend to provide fewer correct answers than men to standard political knowledge questions (Burns, Schlozman, and Verba 2001; Delli Carpini and Keeter 1996, 2000; Fortin-Rittberger 2016; Fraile 2014; Fraile and Gómez 2017; Kenski and Jamieson 2000; Mondak and Anderson 2004; Verba, Burns, and Schlozman 1997).
Despite many attempts in the literature to explain this gap, the question remains unresolved, and the debate is very much alive. Traditional explanations of the gender gap in political knowledge point to social norms (which assign women responsibility for parenting and other caring activities) as well as to the socioeconomic disadvantages that women have traditionally suffered (Burns, Schlozman, and Verba 2001). However, resources, opportunity, and motivation appear to be insufficient to fully account for the gender differences in knowledge (Delli Carpini and Keeter 1996).
Recent contributions have provided alternative explanations for the gender gap in knowledge that derive partly from these traditional factors: in particular, the way measures of political knowledge have been constructed, which has captured knowledge only in men’s areas of interest. For example, some studies show that men and women appear to have different views of and interests in politics, contingent on their different life experiences (Campbell and Winters 2008; Coffé 2013; Fitzgerald 2013; Verba, Burns, and Schlozman 1997; Wolak and McDevitt 2011). As a consequence, men and women acquire different knowledge about the political world. These differences may, in turn, affect the types of political knowledge that women and men possess. Ignoring these gender differences when measuring political knowledge might be one of the main reasons why previous studies have found such a large gender gap in favor of men and why they have failed to account for it. In fact, a feminist approach argues that the majority of questions used in most surveys to date have been biased in favor of men’s interests (Dolan 2011; Stolle and Gidengil 2010).
The size of the gender gap in favor of men turns out to be greater in “conventional” knowledge questions (such as identifying a prominent political figure or naming the second party in congress), which implicitly treat politics as if it were synonymous with the traditional arenas of electoral and partisan politics, a sphere that is often perceived as a men’s game. If, on the other hand, knowledge is measured across diverse political areas and issues, such as local politics, civic rights, and social policies, then the differences between men and women tend to be reduced (Barabas et al. 2014; Delli Carpini and Keeter 1996, 2000; Dolan 2011; Kenski and Jamieson 2000; Shaker 2012; Stolle and Gidengil 2010).
These alternative policy areas are considered to be more directly relevant to the life experiences of women than to those of men. Compared with men, women face substantially more pressure to specialize in the private sphere and to focus on the needs of the family. As a consequence, women are more likely to develop an interest in social welfare and community-oriented topics, as these are closer to their daily activities (Campbell and Winters 2008; Stolle and Gidengil 2010). Moreover, despite the existence of a substantive gender gap in general political interest (Kittilson and Schwindt-Bayer 2012), some studies show that women are, on average, more interested in local and domestic political issues than men are (Coffé 2013). Following this feminist argument, we expect the size of the gender gap in knowledge to depend on the topic addressed by the specific questions. More specifically, we propose the following:
H1:
The size of the gender gap in favor of men is greater for questions inquiring about the traditional arenas of electoral and partisan politics.
In the search for explanations of the gender gap in knowledge, little attention has been paid to the potential interaction between gender and the temporal dimension of the survey items. Knowledge items can refer to structural rules that rarely change over time, or to past political events. In these cases, the media might not directly help make citizens knowledgeable about these topics. Knowledge items included in conventional surveys can also refer to current news: for example, questions about specific policies or political measures that have recently been implemented by the incumbent government. The mass media are the primary way of learning about such topics (Barabas et al. 2014, 843), and knowledge about recent facts might therefore be closely related to their media coverage. But do the media reach all citizens equally? Media studies show, in fact, that women are less likely to be exposed to news across all media sources: television, radio, and the press (Aalberg, Blekesaune, and Elvestad 2013; Benesch 2012; Poindexter, Meraz, and Schmitz Weiss 2008; Shehata and Strömbäck 2011). The same result has also been found in the Spanish case (Fraile 2011).Footnote 1
Media studies explaining the causes of this gender gap in news consumption are, however, scarce. To the best of our knowledge, no single study addresses the question directly. Instead, the literature suggests that because women have less time available, news consumption implies a higher cost for them than for men (Benesch 2012). Another conjecture concerns women’s preference for collecting specific information related to their daily wants and problems rather than more abstract political content (Poindexter, Meraz, and Schmitz Weiss 2008).
A final explanation concerns media content and media production. Previous scholars have demonstrated that news watching, reading, listening, and production are predominantly masculine activities and that the content of such news is very much male biased (Curran et al. 2014; Ross and Carter 2011).Footnote 2 As a consequence, women might have less interest in “male-oriented news” than their male counterparts. From this argument, we might expect the gender gap to be greater for questions relating to recent facts covered by the media, since women are included as subjects of the news to a lesser extent than men and tend to be less exposed to the news than men.
H2:
The gender gap will be smaller in questions that do not relate to current events.
Another body of literature has instead suggested that at least part of the gender gap in knowledge might be the product of the format of the survey items used to measure citizens’ knowledge about politics. Studies in psychology have shown that women tend to rate themselves more negatively than men in scientific ability and in their performance on academic tests, even when their average performance is indistinguishable from that of men (Ehrlinger and Dunning 2003). This lack of confidence also has a documented effect when responding to survey questions. In fact, there is persuasive evidence showing that, given their lack of confidence in their abilities, women tend to be generally more risk averse (Bonte 2015) and are less willing to guess in response to survey items (Kenski and Jamieson 2000; Lizotte and Sidman 2009; Mondak and Anderson 2004). Consequently, men’s level of knowledge is systematically overestimated by survey items that employ a format that maximizes the possibility of guessing. While previous studies have extensively analyzed the role of the “do not know” protocol of survey knowledge items in explaining part of the gender gap (Lizotte and Sidman 2009; Miller and Orr 2008; Mondak and Anderson 2004; Sturgis, Allum, and Smith 2008), surprisingly little attention has been paid to the potential interaction between gender and the format (open vs. closed) of these items (with the sole exceptions of Ferrin, Fraile, and García-Albacete 2017 and Fortin-Rittberger 2016).
Knowledge items included in conventional survey questionnaires are normally presented in one of two formats: open ended or closed ended. These differ in their ability to elicit correct responses and to incentivize guessing. Open-ended questions produce conservative estimates of political knowledge, since some knowledgeable respondents will not provide an answer unless they are 100% certain (Luskin and Bullock 2011; Mondak 2001), whereas closed-ended questions stimulate correct answers, as respondents can attempt an answer even if they have only partial knowledge of the question. Given the different propensities of women and men to guess, and given that closed-ended items are more likely to incite guessing than open-ended items (Luskin and Bullock 2011), we propose our third hypothesis:
H3:
The size of the gender gap is greater when the survey questions used to measure knowledge are in the closed-ended format.
DATA AND RESEARCH DESIGN
We designed a survey questionnaire with the aim of assessing citizens’ levels of political knowledge in Spain, a mature democracy where what people know about politics has been studied only recently. According to previous studies, Spaniards display medium to low levels of political knowledge compared with their European counterparts (Fraile 2014). In measuring knowledge, we took into account the relevant debates about the appropriate conceptualization, operationalization, and measurement of political knowledge. We introduced up to 33 different items intended to measure knowledge, 27 of them specifically measuring political knowledge. The large number of questions provides a unique dataset in terms of variation in the content, format, and temporal referents of political knowledge items. By going beyond the American and Canadian contexts, the study of the previously unexplored Spanish case scrutinizes previous findings and tests the generalizability of the results.
A face-to-face survey was carried out December 13–30, 2012, on a nationally representative sample of the Spanish population (n = 2962) by the Spanish Centro de Investigaciones Sociológicas.Footnote 3 The survey was conducted in a nonelectoral period (the most recent elections had been held more than a year earlier, on November 20, 2011). We included eight items in the questionnaire reproducing classic factual questions used in previous studies. More specifically, we included questions about relevant political actors (both men and women, reflecting the increasingly high-profile roles of women in key political offices in the Spanish political system; see, for instance, Verge 2012) from the incumbent government and from various opposition parties (for the exact wording of the questions, see the items included in P16, P17, and P18 in the supplementary material).
To expand the number of political domains covered by our questionnaire, we designed a number of items aimed at measuring knowledge about the functioning of democratic institutions and about specific policies. Although institutions and policies are regarded as the most important areas citizens should know about (Delli Carpini and Keeter 1996, chapter 2), they are often excluded from standard surveys. Regarding the functioning of democratic institutions, we included questions about the main task of the Spanish Parliament, the content of the Spanish Constitution, and the main characteristics of democracy and dictatorship (see the nine items included in P8, P9, P10, P11, P13, and P14 in the supplementary material).Footnote 4 Concerning specific policies, we included seven items about education, health care, and other social services (see the items included in P25 to P28 in the supplementary material). Respondents are asked, for example, which level of government is responsible for specific policies such as collecting waste or managing public education or health centers.
Another relevant political domain is knowledge about economic institutions and processes (Delli Carpini and Keeter 1996, chapter 2). We included four items in the questionnaire intended to measure respondents’ knowledge of the economy. More specifically, these items ask about unemployment, inflation, the Euribor,Footnote 5 and the market economy (see the four items included in P20 to P23 in the supplementary material).
Finally, we also included five items measuring cultural issues not strictly related to politics, as a strategy to dissuade respondents from the idea that they were being examined. Results of the pilot study (carried out in November 2012) indicated that asking about different dimensions of politics and culture (and using different formats such as images, words, and numbers) added dynamism and kept respondents engaged during the interview. The results for these cultural items are reported in Table 1 for descriptive purposes. The analyses presented below do not include them, since they do not measure political knowledge.Footnote 6
Source: Authors’ elaboration of data from CIS study 2973. Questions in italics refer to cultural knowledge (P29) and to other knowledge domains (P27); for this reason, subsequent analyses exclude these six items. *p < 0.05.
Knowledge items vary not only in their content (classic topics, functioning of democratic institutions, specific policies, and economic issues) but also in their format (closed-ended and open-ended items) and their temporal dimension (current versus noncurrent). For all the knowledge questions, the “do not know” (DK) protocol was neutral: interviewers were instructed to record spontaneous DK answers, but the DK category was not shown on the cards given to respondents. The supplementary material includes the original wording of each question and its classification in each of the content, format, and temporal categories. Finally, in designing the questionnaire we used images and numbers when posing the questions to respondents, to take into account the fact that citizens’ political information can be stored in a visual or even numeric format rather than a textual one (see, for instance, Prior 2014).
Table 1 presents all the questions. Columns 2 and 3 in Table 1 show the percentage of correct responses provided by men and women for all items included in the survey. Column 4 provides the size of the gender gap (that is, the percentage of correct answers for men, minus the percentage of correct answers for women). The topics in Table 1 are ordered according to the size of the gender gap on that particular item. Table 1 also classifies the items according to the three main characteristics of interest here: the format, the content, and the temporal dimension of the question (columns 6–8, respectively).
Table 1 shows that, on average, men provide a significantly higher percentage of correct answers than women on 24 of the 33 items included in the survey. For the remaining nine items, on the other hand, the gender gap disappears and sometimes even reverses, with women giving a similar or even higher share of correct answers than men. This is the case, for example, for the question about the age at which free public education starts, or where citizens need to go to obtain a health card.Footnote 7
The gender gap in knowledge is thus evident, although its magnitude varies across items. On some, men provide as many as 16 percentage points more correct answers than women, as is the case for the question about the logo of the UGT (one of the largest trade unions in Spain; see P1703 in Table 1) and the party of Cayo Lara (coordinator and spokesperson of the left-wing party United Left at the time of the interview; see P1802 in Table 1). The gap is smaller, however, for questions relating to the functioning of democracy and its institutions and to specific policies. The difference in the percentage of correct answers between men and women is only around 4 percentage points, for example, when respondents are asked about the main function of the Spanish Parliament or about who has the right to vote (see, respectively, P9 and P10 in Table 1).
Table 1 offers a first hint that the magnitude of the gender gap is related to the content of the knowledge items. Men seem to perform particularly well on questions about the names of political actors and the parties they belong to, as well as on items probing familiarity with economic issues. Levels of knowledge, on the other hand, become more balanced between men and women once we consider other political aspects, such as the functioning and competences of democratic institutions. The average magnitude of the gender gap for the four questions about the economy is 10.4 percentage points and is very similar (10.1 points) for the eight conventional knowledge items about political actors and parties. In contrast, the average gender gap in favor of men for the nine questions about the functioning of democratic institutions is considerably smaller (4.2 points), and the gap reverses in favor of women for the seven policy-specific questions (−2 points).
It is less clear from Table 1, however, whether the format and temporal dimension of the items also influence the magnitude of the gender gap in knowledge. The average gap for the seven open-ended knowledge questions appears smaller (2.1 points) than that for the 26 closed-ended knowledge questions (5.2 points). Finally, the difference between the average gap for the 22 questions on current issues (4.2 points) and that for the 11 noncurrent questions (5.2 points) seems negligible.
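To illustrate how the descriptive quantities in Table 1 can be computed, the sketch below derives the percentage of correct answers by gender for each item and the gender gap (men minus women). It is a minimal example in Python/pandas; the file name and column names (female, a respondent identifier, the individual-level controls, and the p-prefixed 0/1 item columns) are assumptions for illustration, not the actual layout of the CIS 2973 release.

```python
import pandas as pd

# Hypothetical wide file: one row per respondent, one 0/1 column per
# knowledge item (e.g., p9, p10, p1703, ...), plus 'female' (1 = woman),
# a respondent identifier, and the individual-level controls.
wide = pd.read_csv("cis2973_wide.csv")
item_cols = [c for c in wide.columns if c.startswith("p")]

# Percentage of correct answers by gender for every item
# (columns 2-3 of Table 1) ...
pct = wide.groupby("female")[item_cols].mean().T * 100
pct.columns = ["men", "women"]          # female == 0 -> men, 1 -> women

# ... and the gender gap in percentage points (column 4 of Table 1).
pct["gap"] = pct["men"] - pct["women"]
print(pct.sort_values("gap", ascending=False).round(1))
```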
All this preliminary descriptive evidence suggests that the magnitude of the gender gap in knowledge depends more on the content of the questions than on their format or temporal reference. We need, however, to examine the patterns found in Table 1 more rigorously. We have reshaped the data into long format and conducted multilevel analysis, as sketched below.Footnote 8 This empirical strategy allows us to group knowledge items within each individual respondent and thus to circumvent the assumption of independent residuals. Individuals therefore become the level-two unit of analysis, grouping each of the 27 knowledge questions (level one).Footnote 9
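A minimal sketch of this reshaping step, continuing from the hypothetical data frame above: each respondent’s item responses are stacked into one row per respondent-item pair, and the hand-coded item characteristics (columns 6–8 of Table 1) are merged in. The control variable names and the item_characteristics.csv lookup file are illustrative assumptions.

```python
# Stack the item columns so each row is one respondent x item observation,
# with items (level one) nested within respondents (level two).
long_df = wide.melt(
    id_vars=["resp_id", "female", "education", "age", "interest",
             "tv_news", "radio_news", "press_news"],
    value_vars=item_cols,
    var_name="item",
    value_name="correct",
)

# Merge in the question-level variables (format, content, temporal dimension)
# from a hand-coded lookup table reproducing columns 6-8 of Table 1.
items = pd.read_csv("item_characteristics.csv")  # item, open_format, content, current
long_df = long_df.merge(items, on="item", how="left")
```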
The questions’ characteristics are measured by reproducing columns 6–8 of Table 1 and constitute the main independent variables here. The first variable classifies the questions according to their format, taking the value 0 for closed-ended and 1 for open-ended questions. The second variable differentiates the items according to their substantive content. As previously discussed, this includes (i) classic knowledge topics (those relating to political parties and actors); (ii) economic issues (relating to the economy); (iii) institutions and democracy (relating to the functioning of democracy and its institutions); and (iv) specific policies.Footnote 10
The third variable distinguishes the items according to their temporal dimension, taking the value 1 for questions relating to current events or facts covered by the media and the value 0 for questions unrelated to issues being discussed in the media at the time of the interview (these normally refer to the past or to specific rules of the political system). More precisely, we coded as 1 all questions concerning issues being discussed in the media at the time of the interview (December 2012). To this end, we performed a content analysis of two of the main national newspapers, El País and El Mundo, during the survey fieldwork, December 13–30, 2012. We also analyzed the content of the public broadcasts available on the website of TVE1, the most popular Spanish public television channel.Footnote 11 The supplementary material provides the original wording of all the questions as well as the way each item has been classified on each of the three question-level variables.
Our dependent variable classifies responses as correct (value 1) versus all remaining options (both incorrect and DK answers: value 0). Accordingly, we have used random-intercept multilevel mixed-effects logistic regression.Footnote 12 This estimation allows us to test the extent to which the effect of the three features of the knowledge items (format, content, and temporality)Footnote 13 on the probability of providing a correct answer is conditioned by the gender of the respondent, controlling for the standard antecedents of knowledge identified in previous literature: education, age, political motivation, and exposure to different news media (see Table 2).Footnote 14
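As a rough sketch of how such a model can be estimated from the long data above: statsmodels does not offer a frequentist mixed-effects logit, so the example uses its variational Bayes binomial mixed GLM as a stand-in for the random-intercept logistic regression reported in Table 2. The single pooled specification with all three gender interactions, and every variable name in the formula, are illustrative assumptions rather than the paper’s exact estimation.

```python
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Random intercept per respondent; question characteristics interacted with
# gender; individual-level controls for the standard antecedents of knowledge.
formula = (
    "correct ~ female*open_format + female*C(content) + female*current"
    " + education + age + interest + tv_news + radio_news + press_news"
)
model = BinomialBayesMixedGLM.from_formula(
    formula,
    vc_formulas={"respondent": "0 + C(resp_id)"},  # random intercepts
    data=long_df,
)
result = model.fit_vb()  # variational Bayes approximation
print(result.summary())
```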
Source: Authors’ elaboration of data from CIS study 2973.
Notes: Robust standard errors in parentheses; ***p < 0.01, **p < 0.05, *p < 0.1. a Reference category: content = classic.
FINDINGS
Table 2 presents the results of a random-intercept multilevel mixed-effects logistic regression model. It shows the estimates for each of the question-level independent variables plus their respective interaction terms with gender (including all constitutive terms), controlling for the standard antecedents of knowledge at the individual level. Table A1 in the supplementary material provides descriptive statistics for all the variables used in the estimations.
Regarding question format, Table 2 corroborates previous research showing that the closed-ended format is less demanding and that the magnitude of the gender gap shrinks when knowledge questions are formulated as open ended rather than closed ended (Luskin and Bullock 2011). But what is the substantive magnitude of this finding? Figures 1 to 3 display the predicted probabilities of providing a correct answer for men and women across question characteristics. Figure 1 shows that women’s probability of providing a correct answer to closed-ended items is 5 percentage points lower than men’s (64.1% for women versus 69.1% for men). When open-ended items are used, on the other hand, the difference vanishes: the probability that women provide a correct answer to open-ended items is 76.2%, versus 75.5% for men. This finding suggests that, keeping the DK protocol constant (as we did with our 27 items), men’s higher propensity to guess amplifies gender differences when closed-ended questions are used to measure political knowledge.Footnote 15
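The predicted probabilities reported here and in Figures 1 to 3 follow from applying the inverse-logit transformation to the model’s linear predictor for each gender-by-characteristic combination. A minimal illustration with made-up coefficients (the actual estimates are those reported in Table 2):

```python
import numpy as np

def inv_logit(x):
    """Convert a log-odds value into a probability."""
    return 1.0 / (1.0 + np.exp(-x))

# Purely illustrative coefficients: intercept, female, open_format,
# and the female x open_format interaction (not the Table 2 estimates).
b0, b_fem, b_open, b_fem_open = 0.80, -0.20, 0.35, 0.25

for female in (0, 1):
    for open_fmt in (0, 1):
        eta = b0 + b_fem * female + b_open * open_fmt + b_fem_open * female * open_fmt
        label = ("women" if female else "men", "open" if open_fmt else "closed")
        print(f"{label[0]:>5}, {label[1]:>6}: p = {inv_logit(eta):.3f}")
```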
As for the content of the questions, Figure 2 shows that women are 7.5 percentage points less likely than men to provide a correct answer to “classic” knowledge questions (59.1% for women and 66.6% for men). The gender gap is equally large for questions relating to the economy: the probability that men give a correct answer to this type of question is 8.8 points higher than that of women (50.1% for women and 58.9% for men). Yet the gender gap decreases significantly when measures of knowledge are widened to areas beyond the traditional items. This is the case for the two other types of items considered here: those related to the functioning of democracy and its institutions and those related to specific policies. The propensity of women to provide a correct answer to items measuring knowledge about the functioning of democracy is only 3.3 points below that of men (71.9% for women and 75.2% for men). Moreover, gender differences even reverse for knowledge of specific policies, where women’s probability of providing a correct answer is 3.4 points higher than men’s (80.5% for women and 77.1% for men). This evidence confirms that the magnitude of the gender gap in knowledge is larger when respondents are asked about electoral politics and economic matters, whereas it decreases substantially or even disappears when we expand the definition of political knowledge to additional areas that are regarded as equally important parts of what citizens should know about politics (Delli Carpini and Keeter 1996). While previous studies have demonstrated that the differences between men and women tend to be reduced when respondents are asked about local politics, civic rights, social policies, and women candidates (Barabas et al. 2014; Delli Carpini and Keeter 1996, 2000; Dolan 2011; Kenski and Jamieson 2000; Shaker 2012; Stolle and Gidengil 2010), we also show here that the magnitude of the gender gap in favor of men decreases substantially for questions about familiarity with the functioning of democracy and its institutions.
Regarding the temporal dimension of the knowledge questions, Figure 3 shows that the probability that women provide a correct answer to a question relating to current affairs is about 5.5 percentage points lower than that for men (63.0% for women and 68.5% for men). However, gender differences vanish for questions that do not relate to current news (72.8% for women versus 73.9% for men; see Figure 3).
All the estimations summarized in Table 2 include respondents’ exposure to the media as part of the standard antecedents of knowledge identified in previous literature. We are aware, however, that this individual-level variable could capture part of the gender differences in the ability to answer current and noncurrent political knowledge questions correctly (since there are relevant gender differences in media exposure). To rule out this possibility, we replicated the estimation excluding the three variables measuring respondents’ declared media exposure. The results fully confirm the findings of Figure 3. Excluding respondents’ media exposure from the estimation, the probability that women provide a correct answer to a question relating to current affairs is about 6.3 percentage points lower than that for men (62.7% for women and 69.0% for men), while the gender gap shrinks significantly for questions that do not refer to current news (72.6% for women and 74.4% for men). Although previously overlooked in the literature, this finding points to an important explanation for the gender gap in knowledge: women’s lower propensity, compared with men, to be exposed to political news in the mass media. Consequently, gender differences are amplified when questions relating to current affairs are used to measure knowledge. We discuss the implications of these findings for the study of both political knowledge and its antecedents in more detail in the next section.
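In terms of the sketch introduced above, this robustness check amounts to dropping the three media-exposure controls from the specification and re-estimating the same model; a minimal, hypothetical continuation of that code:

```python
# Same specification without the three self-reported media-exposure controls.
formula_no_media = (
    "correct ~ female*open_format + female*C(content) + female*current"
    " + education + age + interest"
)
result_no_media = BinomialBayesMixedGLM.from_formula(
    formula_no_media,
    vc_formulas={"respondent": "0 + C(resp_id)"},
    data=long_df,
).fit_vb()
```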
CONCLUSION AND DISCUSSION
Political knowledge is considered a key resource for the exercise of citizenship (Delli Carpini and Keeter 1996). The more knowledgeable people are, the better they understand the impact of public policies on their own interests, and the more likely they are to vote so as to appropriately punish and reward governments. Previous evidence has also shown that knowledgeable citizens tend to be more progressive, more tolerant, and less approving of the president (Althaus 2003; Delli Carpini and Keeter 1996). What is more, political knowledge also matters for the proper functioning of democracy because it is necessary for an active citizenry. In particular, knowledge is considered a crucial resource linked to citizens’ ability to participate effectively and be engaged in all types of political activities, both electoral and nonelectoral (Delli Carpini and Keeter 1996). For these reasons, inequalities in observed levels of political knowledge have concerned scholars for decades. This article contributes to the ongoing debate about the appropriate measurement of political knowledge, on the one hand, and to the understanding of the origins of repeatedly observed gender gaps, on the other.
Our findings show that the size of the gender gap in knowledge depends both on the content of the topics covered by the survey items and on their format, confirming previous studies (Barabas et al. 2014; Delli Carpini and Keeter 1996; Ferrin, Fraile, and García-Albacete 2017; Fortin-Rittberger 2016; Stolle and Gidengil 2010). We also show, however, that the size of the gender gap is additionally contingent on a factor previously overlooked in the literature, the temporal dimension of the knowledge questions: the gap shrinks for items that are not in the current news.
These findings have two key implications for the study of political knowledge: the first relates to the debate about its measurement, while the second sharpens our understanding of its antecedents. First, this study suggests that there seem to be varieties of knowledge, each revealing something different about what citizens know and understand about politics and its different dimensions. Limiting the measurement of knowledge to electoral and partisan politics appears to provide an incomplete picture of what people know (or do not know) about politics. Our findings also suggest that not only the content, but also the format and the temporal dimension of the questions should vary in survey questionnaires aiming to assess citizens’ levels of political knowledge.
However, are these different dimensions of knowledge equally important for predicting citizens’ propensity to participate in politics? Can all these types of knowledge be equally considered significant resources for political engagement? Evaluating whether each knowledge domain predicts political participation to the same extent would require further analysis beyond the scope of this article; however, preliminary tests indicate that all four knowledge domains identified here are equally relevant predictors of both electoral and nonelectoral political participation. Thus, including diverse measures of political knowledge in surveys will not imply losing our ability to understand political behavior; on the contrary, it will most likely improve our capacity to explain citizens’ propensity to engage in politics.
A second implication of our findings is that not just the size of the gender gap but perhaps also the extent of other gaps previously observed in political knowledge is conditional on the kind of knowledge being measured. Our hypotheses were derived from the combination of three strands of research that shed light on the potential effect of question type on estimates of the magnitude of the gender gap. The same could be done to understand other gaps in political knowledge, such as those that depend on citizens’ cognitive abilities, their personality, or their motivation to be exposed to the media. For instance, a recent study has shown that the magnitude of the informative effect of various forms of media use varies according to the type of knowledge measured (Eveland and Schmitt 2015). Nonetheless, much remains to be learned about other sources of knowledge inequalities. This constitutes a challenging research agenda awaiting public opinion researchers.
The two implications highlighted here lead to the same conclusion: the debate on how to measure political knowledge is very much alive. Our work suggests that future research should attempt to follow innovative paths in developing measures of political knowledge. This implies considering more issue-specific knowledge items than the traditional battery of electoral and partisan questions, by including different substantive content, diverse question formats, and items that refer both to the past and to current political processes. The debate remains open on how to obtain a measure of political knowledge that is replicable, includes indicators relevant to the current political context, and can be compared over time and across groups of citizens.
SUPPLEMENTARY MATERIAL
To view supplementary material for this article, please visit https://doi.org/10.1017/S1743923X1700023X