
Measuring Public Support for European Integration across Time and Countries: The ‘European Mood’ Indicator

Published online by Cambridge University Press:  08 May 2017



Against the background of the growing contestation of European integration in recent decades, special attention has been paid to describing and explaining public opinion toward the European Union (EU). In the context of increasingly constraining dissensus,Footnote 1 public support – and more broadly, public acceptance of the EU – have become crucial to party strategies, European policy making and, possibly, to the course of European integration. Studying public opinion and its interaction with partisan strategies and policy making requires an adequate instrument for measuring citizens’ preferences. Comparability over time is necessary in order to provide a dynamic account of the relationship between public opinion, party politics and policy making. Yet studies of public opinion all face a common problem: consistent data series on public opinion regarding policy issues are scarce, as most items are not available in a consistent form over a sufficient time span to detect changes in public opinion. This problem worsens when performing an international comparison, given the difficulty of finding comparable data over time and across countries.

This article extends the issue evolution perspective developed in US public opinion studiesFootnote 2 to the European research agenda. We lack a consistent longitudinal and cross-national instrument for capturing public support for the EU, which could be included in multivariate analyses. As no comparative panel data is available on public opinion toward European integration, variations can only be compared at the macro level. However, with the exception of the Eurobarometer (EB) surveys, regular surveys covering at least several EU member states remain rare. This scarcity is coupled with numerous interruptions in data series, due to survey-specific or rotating modules of items, or to changes in question wording. As a result, most long-term aggregate studies of EU support analyze the evolution of responses to a single indicator: asking respondents whether EU membership is a good or a bad thing.Footnote 3 An attitudinal scale combining several indicators enables more consistent measurement and broader coverage.

We explain how the public mood approach, designed by James Stimson, makes it possible to overcome the obstacle of heterogeneous data and track public preferences toward European integration across all member states from 1973 to 2014. By computing a bi-annual European mood for each country, we provide an instrument that can be used in future comparative research on European politics. It enhances scholars’ independence from the availability of individual survey items and reduces uncertainty about the quality of findings linked to the use of single indicators.

THE ADVANTAGES OF ATTITUDINAL SCALES OVER SINGLE INDICATORS

As mentioned above, consistent data series on public opinion regarding policy issues are scarce, even more so when performing an international comparison. This data problem may seem less dramatic for EU issues than for other issues, given the regular inclusion of items on EU support in comparative surveys. Nonetheless, several problems remain: with the exception of the EBs, regular surveys covering at least several EU member states remain relatively rare. For example, the European Social Survey (ESS), the International Social Survey Programme (ISSP), the European Values Study (EVS) and the European Election Studies (EES) have considerable time intervals between waves. This scarcity is coupled with numerous interruptions in data series, due to survey-specific or rotating modules of items, or to changes in question wording.

As a result, only one of the EB items on general support for European integration has been consistently available at least annually since 1973: ‘Generally speaking, do you think that (country’s) membership of the European Community (common market) is a good thing, a bad thing, or neither good nor bad?’, referred to as the ‘good thing’ question for the remainder of this article. Some items have been included only two or three times, others more frequently but not systematically (see the overview in the online appendix). Still others cannot easily be compared over time given alterations in the question wording or in the response categories. The heterogeneous character of this dataset complicates the identification of trends in EU support.

In this context, most long-term aggregate studies of EU support analyze the evolution of responses to the ‘good thing’ item. ‘Addressing the measurement issue by choosing the “best” single indicator’Footnote 4 is a common practice in social science but is nevertheless potentially problematic, not only because this item is relatively crude – it offers only three response options – but also because the EB survey designers recently decided to leave this item out of the questionnaire for several consecutive waves.Footnote 5 While political scientists can be interested in public opinion toward a specific policy measure or event (for instance, ongoing reform projects), they are mostly interested in public opinion toward latent concepts (such as globalization, government popularity, environmental protection or the welfare state), and those concepts rarely have direct empirical counterparts. Therefore, as Kellstedt, McAvoy and Stimson point out:

To use factor analytical language, an indicator will share some common variance with the hypothetical (and latent) concept, but will also contain some variance specific to the indicator. If we look at a single indicator, it is impossible to tell how much variance in that indicator is shared with the concept and how much of it is unique to the indicator.Footnote 6

As a consequence, even when a promising indicator exists over time, attitudinal scales are always preferable. Conclusions drawn from a single series are likely to be subject to several types of bias and measurement error. The framing, ordering and open versus closed format of questions, as well as the range of response options offered, can considerably affect respondents’ answers.Footnote 7 The focus on only one item increases the probability of bias, since this item may be ordered differently across surveys and countries. An identical question wording might even produce different results due to variations in the framing and ordering of the questionnaire, or to variations in the sampling methods of the survey institutes. This could explain why, based on the same question wording (‘Generally speaking, do you think that Britain’s membership of the European Union is a good thing, a bad thing, or is it neither good nor bad?’), the British Household Panel Survey (BHPS) and the EB find different levels of Euroscepticism (in 2006, the proportion of respondents answering ‘a bad thing’ was 25 per cent and 31 per cent, respectively). Distortions of this type not only affect single surveys, but are potentially even larger when comparing several surveys over time.

COMPARING DIFFERENT INDICATORS OF EU SUPPORT

In order to examine how this type of distortion may bias the measurement of EU support, we compared, for each member country, the trend in EU support as measured by five of the most complete series. Support for European integration is measured as the ratio of the percentage of favorable responses to the cumulative percentage of favorable and critical responses. The patterns are relatively similar across countries. This is well illustrated by the example of France (Figure 1), in which all series are highly correlated, yet there are substantial differences in the level of EU support detected by each question.
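For illustration, this support ratio can be computed as in the following minimal Python sketch; the function name and the percentages are hypothetical and are not actual Eurobarometer figures.

def support_ratio(pct_favorable: float, pct_critical: float) -> float:
    """Share of favorable answers among all non-neutral (favorable + critical) answers."""
    return pct_favorable / (pct_favorable + pct_critical)

# Hypothetical example: 52% 'a good thing', 13% 'a bad thing';
# neutral and 'don't know' answers are excluded from the denominator.
print(support_ratio(52.0, 13.0))  # 0.8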

Fig. 1 Evolution of EU support in France, as measured by five indicators

In all founding member states, there is a gap between strong support for the principle of European integration – as measured by asking whether membership is a good thing, whether respondents are favorable toward efforts to unify Europe, and whether they have a positive or negative image of Europe – and weaker utilitarian support – as measured by items on the national and personal benefits from integration.Footnote 8 By contrast, we do not observe such a gap in Greece, Portugal or Ireland (where citizens are highly supportive on both dimensions), or in Denmark (where lower support is observed across all indicators). In Spain, the initial gap between strong support for the principle of integration and a relatively negative assessment of the benefits narrows progressively over time; the initial gap likely reflects the country’s then-recent accession, when citizens had not yet experienced benefits from membership. Focusing on only one indicator may thus lead to overestimating the level of EU support – when concentrating, for instance, on the ‘good thing’ indicator, on which there is little variance – or to underestimating it when focusing on utilitarian support. Divergences across the different indicators would be less problematic if we could evaluate the quality of these indicators, as this would allow us to simply choose the best ones. However, separate analyses of single indicators cannot inform us about their respective quality.Footnote 9

With respect to the dynamic, the five series reveal more or less consistent trends. This may indicate the existence of a general climate of opinion toward European integration, the variations of which are reflected in each of the series. Stimson seeks to measure this kind of opinion climate with his mood measure, which captures the common variance of data series.

Indeed, Stimson’s measure of ‘public mood’ was developed in the United States precisely in order to deal with heterogeneous and interrupted data series, enabling the measurement of citizens’ orientations toward liberalism and conservatism over time.Footnote 10 Based on the dyad-ratios algorithm, this methodology was designed to approximate a principal component solution for series with interruptions by first imputing the missing values for each series at each point in time, based on its correlation with all series available at that point.Footnote 11 The extracted dimension is what Stimson calls policy mood.Footnote 12 It has become a standard measure of public sentiment on domestic problems, and has been incorporated into numerous macro-level analyses, notably explanatory models of public opinion,Footnote 13 party identification,Footnote 14 election outcomesFootnote 15 and assessments of democratic responsiveness.Footnote 16
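To convey the intuition, the following Python sketch illustrates the general logic described above – alternating between imputing missing values and re-extracting a first principal component from interrupted series. It is a simplified illustration under our own assumptions, not Stimson’s actual dyad-ratios implementation, and all names are hypothetical.

import numpy as np

def latent_mood(X, n_iter=50):
    """Illustrative sketch only - not Stimson's dyad-ratios implementation.
    X: (time points x items) array of support ratios, with np.nan for gaps.
    Returns a standardized latent series approximating the common dimension."""
    missing = np.isnan(X)
    # start by filling gaps with each item's own mean
    Xf = np.where(missing, np.nanmean(X, axis=0), X)
    for _ in range(n_iter):
        # standardize the items and extract the first principal component
        mu, sd = Xf.mean(axis=0), Xf.std(axis=0)
        Z = (Xf - mu) / sd
        U, S, Vt = np.linalg.svd(Z, full_matrices=False)
        scores, loadings = U[:, 0] * S[0], Vt[0]
        # re-impute the missing cells from the current latent estimate
        fitted = np.outer(scores, loadings) * sd + mu
        Xf = np.where(missing, fitted, X)
    return (scores - scores.mean()) / scores.std()

Stimson’s algorithm works instead with ratios of each item observed at pairs of time points and weights items by their communality with the latent dimension; the sketch only conveys the idea that the common variance of interrupted series can be recovered iteratively.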

This global measure has recently been complemented by the construction of policy-specific moods on topics such as race,Footnote 17 healthcare, gay rights and macroeconomic policy.Footnote 18 The technique is now applied in the UKFootnote 19 and France, where mood measures have been constructed regarding nuclear energy policyFootnote 20 and socio-economic and cultural issues.Footnote 21 Some scholars have begun to use this instrument to implement comparisons, mainly across American states,Footnote 22 but also across social groupsFootnote 23 and party electorates.Footnote 24 Only one application covers multiple countries – the US, Canada and the UK.Footnote 25 It seems to us that the mood is a challenging instrument for capturing public opinion moves over time and across countries, and could therefore allow us to analyze the extent to which political parties are sensitive to such moves and adjust their position accordingly.

CONSTRUCTING A LONGITUDINAL AND COMPARATIVE INDICATOR OF ‘EUROPEAN MOOD’

Following Stimson’s guidelines, we have selected all EB items that were asked at least four times between 1973 and 2014 and that enable researchers to identify favorable and critical attitudes toward the European Community (then the EU) or European integration (see Appendix 1). The retained indicators relate to different facets of European integration. Some of them capture a generalized form of support or identification, while others deal with more specific aspects. Utilitarian forms of support can further be distinguished between the evaluation of personal and collective (country-level) benefits gained from EU membership. Although some of the questions concern citizens’ evaluation of the status quo, others relate to their aspirations for the future of the EU (more or less unification, optimistic or pessimistic feeling about the future of the EU, etc.). Finally, several batteries of indicators seek to capture how far respondents wish specific policy domains to be decided by the country or by the EU (or by both), and whether they think the EU plays a positive role in a number of specific policy issues.

For each EU-27 country, each of the seventy-one items and each of the data points between 1973 and 2014 (between 323 and 1,100 data points, depending on the year of entry into the EU), we computed a ratio of EU support by dividing the percentage of favorable responses by the cumulative percentage of favorable and critical responses.Footnote 26 Using Stimson’s dyad-ratios algorithm, we then used these estimates to compute a bi-annual, country-specific measure of ‘European mood’; the values and bootstrapped standard errors are displayed in Appendix 2 and illustrated in Figure 2. This indicator, constructed individually for each of the EU-27 countries, allows us to capture a diffuse and latent attitude toward the EU and to observe long-term trends in aggregate support over the period 1973–2014.
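For illustration only, the data preparation described above could be sketched as follows in Python/pandas; the column names (country, date, item, pct_favorable, pct_critical) and the function are hypothetical and do not come from the replication files.

import pandas as pd

def biannual_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Return a (country, half-year) x item table of support ratios, ready to be
    fed to a mood-extraction routine (e.g. the latent_mood() sketch above).
    Assumes 'date' is a datetime column; all column names are illustrative."""
    df = df.copy()
    df["ratio"] = df["pct_favorable"] / (df["pct_favorable"] + df["pct_critical"])
    # label each observation with its half-year (e.g. '1995-H1')
    df["half_year"] = (df["date"].dt.year.astype(str)
                       + df["date"].dt.month.le(6).map({True: "-H1", False: "-H2"}))
    # average when an item was fielded more than once in the same half-year
    return (df.groupby(["country", "half_year", "item"])["ratio"]
              .mean()
              .unstack("item"))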

Fig. 2 Average bi-annual ‘European mood’, 1973–2014

As Figure 2 shows, the ‘European mood’ fluctuates considerably over time.Footnote 27 In particular, it highlights a spectacular trend of opposition to European integration during the ‘post-Maastricht blues’,Footnote 28 but also reveals that this trend stopped in 1997 in several countries: in some the mood stabilized (Italy, France, Ireland, the Netherlands, the UK, Sweden), while in others it even reversed (Luxembourg, Belgium, Germany, Denmark, Spain and Greece), leading to a noticeable peak in EU support in 2002, with levels of support close to those of the early 1990s. The emergence of a constraining dissensus is thus neither a linear process nor a systematic one: 2003 marked a new reversal in almost all countries, possibly linked to the Eastern enlargement and debates over the constitution project. There was a subsequent upward trend from 2007 in France, Germany, Portugal, the Netherlands, Finland and Austria, following the successful negotiation of the Lisbon Treaty. In the context of the economic crisis beginning in 2008, the ‘European mood’ then plummeted everywhere. More recently, there has been a general upward trend in the mood since 2012. In sum, a descriptive examination of the ‘European mood’ suggests that it is event driven, with considerable reactions to the ratification of major treaties and to the European economic and debt crisis. When the mood moves in reaction to a political or economic event, it tends to swing back to its original level after a certain period. These largely parallel fluctuations also suggest that national particularities are becoming less relevant, as EU support seems to converge across countries, as anticipated by Eichenberg and Dalton.Footnote 29

ASSESSING THE RELIABILITY OF THE ‘EUROPEAN MOOD’ AND OF SINGLE INDICATORS FOR MEASURING GENERALIZED SUPPORT FOR THE EU

Dimensionality

The loadings of the different items on the mood (Appendix 3) allowed us to verify that all of them contribute to the underlying dimension of EU support. These loadings are generally high, in particular for the ‘good thing’ indicator and the questions on the image of the EU, its future and the benefits gained from membership. Indeed, the first dimension estimated always captures a considerable share of the variance.Footnote 30 Nonetheless, there are also items for which the correlation with the mood is high in most, but not all, countries. This is mainly the case for the assessment of EU influence on different policy domains. In some countries a generally favorable climate toward the EU is not incompatible with a desire to keep certain competences at the national level, particularly concerning sovereign policies and macro-economic domains (inflation, unemployment, police, security, defense) and, to a lesser degree, social policy measures (workers’ rights, gender equality, health). A closer look at these differences offers an interesting possibility for future research on attitudes toward the EU: comparing, across countries, the policy domains in which support for EU influence is not linked to general support for European integration could help identify country-specific patterns in citizens’ attitudes toward European integration.
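For illustration, these diagnostics could be approximated as in the following sketch, which computes item loadings as correlations between each item series and the mood, and a rough share of variance for the leading components. It builds on the hypothetical latent_mood() sketch above and uses a crude imputation, so it illustrates the check rather than reproducing the figures reported from the dyad-ratios estimation.

import numpy as np

def dimensionality_summary(X, mood):
    """Approximation for illustration; not the estimation used in the article.
    X: (time x items) ratios with np.nan gaps; mood: the estimated latent series."""
    # loading of each item = correlation with the mood over its observed time points
    loadings = np.array([
        np.corrcoef(mood[~np.isnan(col)], col[~np.isnan(col)])[0, 1]
        for col in X.T
    ])
    # crude variance decomposition: standardize items and zero-fill the gaps
    Z = (X - np.nanmean(X, axis=0)) / np.nanstd(X, axis=0)
    eigvals = np.linalg.svd(np.nan_to_num(Z), compute_uv=False) ** 2
    explained = eigvals / eigvals.sum()
    return loadings, explained[:2]  # shares of the first two components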

How Reliable is the European Mood?

The strong contribution of the ‘good thing’ indicator to the European mood may raise concerns about instances in which this central item is not asked, as has sometimes been the case in recent years. In order to address this potential robustness problem, we estimated a second version of the mood that does not incorporate the ‘good thing’ item. This second estimation is highly correlated with the initial one (with a coefficient of 0.8 or more in all countries except Austria (0.6) and Spain (0.2)). In those two countries, the lower correlation is due to distortions caused by the question on ‘benefits’ during the initial years of membership, when many citizens supported integration but did not yet perceive benefits, given the country’s recent accession. These findings suggest that the European mood is not primarily driven by the ‘good thing’ question, and that it remains valid even when this question is not asked.
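This robustness check can be expressed compactly as follows, again building on the hypothetical latent_mood() sketch rather than on the actual estimation code; the item label ‘good_thing’ is illustrative.

import numpy as np

def robustness_check(X, item_names):
    """Correlate the full mood with a mood re-estimated without the 'good thing' item.
    'good_thing' is a hypothetical item label, not the Eurobarometer variable name."""
    full = latent_mood(X)
    keep = [i for i, name in enumerate(item_names) if name != "good_thing"]
    reduced = latent_mood(X[:, keep])
    return float(np.corrcoef(full, reduced)[0, 1])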

Here, we advocate using attitudinal scales rather than single indicators. This raises the question of how many indicators are needed to guarantee the quality of the mood measure. In other words, is it appropriate to focus on the EB surveys, or would it be preferable to also include indicators from national surveys of the individual countries in the measurement of the mood? An exclusive focus on EB (or other European comparative survey) data presents considerable advantages in terms of comparability, but the reliability of this source needs to be assessed. In order to do so, we used a list of items from different national surveys from Germany, Sweden and the UK (excluding the EB survey; see Appendices 4, 5 and 6 for full detail) to compute a second ‘European mood’ for these countries over the longest possible period.Footnote 31 As illustrated by Figure 3, the ‘European moods’ calculated using EB and national survey data are highly correlated (R²=0.69 in Germany, 0.85 in the UK and 0.92 in Sweden). This finding suggests that it is not necessary to supplement the EB data underlying our ‘European mood’ measure with data points from national surveys, as doing so does not substantially affect the dynamic of the mood. As the indicators from the EB and national surveys were not identical, the high correlation between both ‘moods’ also confirms, in these three countries at least, the existence of an underlying dimension of generalized EU support.

Fig. 3 Comparison of both mood measures and the ratio of the ‘good thing’ indicator (%support) in the UK, Germany and Sweden

How Reliable is the ‘Good Thing’ Indicator?

The identification of latent dimensions in attitudes can also help evaluate the capacity of single indicators to capture the underlying concept – in our case, to assess how well the ‘good thing’ item measures generalized EU support. Figure 3 displays the ratio of the percentage of favorable responses (‘good thing’) divided by the cumulative percentage of favorable and critical responses (‘good thing’ + ‘bad thing’) to this question in EB surveys in Germany, Sweden and the United Kingdom. On the one hand, the comparison is good news for the existing literature, as this very widespread measure appears to be highly correlated with both ‘European moods’.Footnote 32 On the other hand, there is a non-negligible difference between the level measured by the European moods and that measured by the ‘good thing’ indicator: as in many other member states, the latter captures a much higher level of support than the two mood measures. This is because respondents are more willing to support the general principle of European integration than more specific aspects, in particular the delegation of policy-making competences to the supranational level. In other words, these findings confirm that the ‘good thing’ indicator leads to overestimating generalized EU support, while the mood seems to better reflect its actual level.

As there is no reason why this bias should vary over time, using this indicator in multivariate analysis (in order, for instance, to analyze how public opinion toward European integration affects political parties’ positions) should not be problematic.Footnote 33 However, the level of support matters as well, notably when assessing the legitimacy of the EU. The comparatively lower level captured by our measure seems more intuitive, as the ‘constraining dissensus’ and the ‘democratic deficit’ of the EU are hotly debated in academic and political discourse. In the British case, our measure is therefore better suited to explaining the behavior of British politicians in the 1990s and 2000s, which was characterized by the consolidation of a strong Eurosceptic wing in the Conservative Party.

CONCLUSION

This article has underlined the potential of the public mood approach for studying public opinion and its relationship to party politics and public policy from a comparative perspective. We have constructed a bi-annual ‘European mood’ measure for each EU-27 member state over the period 1973–2014, which we test using data from the UK, Germany and Sweden. This indicator has the advantage of being comparable across EU countries, yet is very similar to what we would have measured using indicators from national surveys, and can therefore be considered valid. This new measure is now available for future research on public support for European integration.

As shown in this research note, this way of capturing public moods can also serve as a benchmark to assess the performance of single indicators of EU support. In this respect, the ‘good thing’ indicator reliably measures the dynamic of generalized EU support, but tends to overestimate its actual level. Given the recent interruptions in this series, due to the omission of this question in several waves of the EB, the ‘European mood’ allows scholars to be more independent from this (and other individual) items and to perform time-series analyses over the longest possible time span.

The ‘European mood’ approach opens up several possible avenues for further research. In the case of EU issues, given the persistent weight of intergovernmental dynamics shaped by domestic interest aggregation in a two-level game,Footnote 34 mass support may affect member states’ European policy and, ultimately, the dynamics of integration. European integration provides an intriguing case for studying how the climate of public opinion, as manifested in polls, media discourses and/or public mobilizations, may affect political elites’ behaviorFootnote 35 and policy decisionsFootnote 36 – a question that has been highly debated and at the core of a very dynamic research agenda in recent years.Footnote 37 The level of support in each domestic constituency is likely to shape member states’ EU policy and their position in negotiations, as well as the outcomes of EU elections and referendums.Footnote 38 The public mood with respect to policy issues may be relevant in shaping effective policies and also in party competition. Does the European mood influence party strategies? In the extensive literature on issue competition, the influence of public opinion on party strategies is a decisive factor. Yet this influence is empirically hard to capture due to the scarcity of consistent data series on the climate of public opinion regarding single policy issues.

Future research might also usefully explore the determinants of aggregate public opinion. How can one explain the ups and downs of EU support? Is public support responding dynamically to decisions on European policy following a thermostatic logic?Footnote 39 Is it reacting to utilitarian factors including the direct benefits received from integration,Footnote 40 or to the macro-economic performance at the national and EU levels?Footnote 41 Can other macro-level factors, such as government popularity, collective memory, news coverage of European affairs or financial benefits from European integration shed light on fluctuations in EU support?Footnote 42 Do these factors exert a similar influence across countries? These research questions illustrate the research agenda that could be facilitated by the construction of measures of public mood regarding single policy issues.

Finally, the mood methodology is promising for exploring the specifics of EU support in more depth. The empirical work introduced in this article demonstrates that a single dimension tends to capture most of the variance of indicators of generalized and specific EU support, but we have also noticed intriguing exceptions – for instance, the specificity of items related to the Europeanization of social protection in Scandinavian countries or monetary policy in Germany. Authors are sometimes interested in these more specific dimensions of EU support and face even stronger problems of data interruption when trying to measure them over time. Constructing policy moods related to targeted dimensions of EU support, for instance, for the common currency or fiscal integration, offers an additional promising avenue for their research.

Footnotes

*

Centre Emile Durkheim, Sciences Po Bordeaux (email: i.guinaudeau@sciencespobordeaux.fr); Department of Comparative Politics, Universität Konstanz (email: tinette.schnatterer@uni-konstanz.de). This project benefited from the financial support of the French Research Agency (Partipol project, ANR-13-JSH1-0002-01). We are especially grateful to James Stimson, Vincent Tiberj and Christian Breunig for their active support and important input at several stages of this research project. We would also like to thank the SOM Institute, particularly Sören Holmberg, for giving us access to the Swedish data, and John Bartle for sharing the GB Policy Preferences Database (Department of Government, University of Essex, June 2015). This article also owes a lot to helpful comments by Christine Arnold, Michael Becher, Céline Belot, Shaun Bevan, Amanda Kolker, Andy Smith, Peter Van Aelst and the anonymous reviewers. Previous versions of the article were presented at the 2013 ECPR General Conference, the 2013 conference of the Comparative Agendas Project and the Comparative Political Economy workshop at the University of Konstanz. Tinette Schnatterer thanks the Margarete von Wrangell Program for work completed on this article under their habilitation program. Data replication sets are available at http://dataverse.harvard.edu/dataverse/BJPolS and online appendices are available at https://doi.org/10.1017/S0007123416000776

1 Hooghe and Marks 2009.

2 Stimson 1999; Stimson, MacKuen, and Erikson 1995.

3 See, among others, Anderson and Kaltenthaler 1996; Arnold, Sapir, and de Vries 2012; Eichenberg and Dalton 2007.

4 Kellstedt, McAvoy, and Stimson 1993.

5 In member countries, the ‘good thing’ item has not been asked systematically since 2011. For instance, it was left out of the questionnaire between June 2013 and November 2014.

6 Kellstedt, McAvoy, and Stimson 1993, 114.

8 See also Lubbers and Scheepers 2005.

9 Kellstedt, McAvoy, and Stimson 1993, 115.

11 For information on the dyad-ratios algorithm, see Appendix 2. See also Stimson 1999, chapter 3; Stimson, Thiébaut, and Tiberj 2012.

15 Stimson, MacKuen, and Erikson 1995.

16 Link 1995; Stimson, MacKuen, and Erikson 1995.

19 Bartle, Dellepiane-Avellaneda, and Stimson 2010.

20 Brouard and Guinaudeau 2014.

21 Stimson, Thiébaut, and Tiberj 2012.

22 Carsey and Harden 2010; Erikson, Wright, and McIver 2007.

24 Ura and Ellis 2012.

25 Jennings and Wlezien 2012.

26 Following Stimson’s initial approach, missing values and respondents refusing to position themselves or giving a ‘neutral’ response have not been taken into account when calculating this index. In light of recent publications on citizens’ indifference (notably Van Ingelgom 2014) and ambivalence (de Vries and Steenbergen 2013) toward the EU, we are aware that excluding intermediary answers may be more problematic for us than in studies of US citizens’ attitudes toward liberalism and conservatism. However, the balance between Europhile and Eurosceptic opinions is relevant in itself to understanding the outcome of referendums, the success of Eurosceptic parties and member states’ willingness to negotiate, or not, over sensitive issues at the EU level. The mood is, precisely, designed to account for the balance between two adversarial positions. That said, our mood is strongly correlated with another version of the mood that we estimated based on the percentage of favorable responses out of all respondents, which better accounts for intermediate respondents. This correlation is greater than 0.7 in all but three member states, and greater than 0.9 in eighteen cases. Full details about this second version of the mood are available upon request.

27 Fluctuations over time are significant in each member state when taking into account the confidence margins defined by the standard errors.

28 See, among many others, Eichenberg and Dalton 2007; Niedermayer 1995. The only exceptions are Ireland, where the European mood continues to increase, and Italy, where it only slightly decreases.

29 Eichenberg and Dalton 2007, 136–8.

30 The first dimension captures 43.2 per cent of the variance on average. Except in Luxembourg (26.5 per cent), Spain (28.4 per cent) and Finland (25.3 per cent), it always captures more than 30 per cent (and up to 59.9 per cent in Romania), while a second dimension would capture less than 8 per cent. See Appendix 4 for full details.

31 For the UK, this includes items from the following survey institutes: British Election Panel Study (BEPS, one item), British Election Study (BES, two items), British Household Panel Survey (BHPS, four items), British Social Attitudes (BSA, five items), European Social Survey (ESS, one item), GALLUP (one item), Independent Communications and Marketing (ICM, two items), MORI (four items), MORI/HO (one item). Source: GB Policy Preferences Database, Department of Government, University of Essex, June 2015. The Swedish data come from the Society Opinion Media (SOM) Institute longitudinal survey (University of Gothenburg 2015). For Germany, we used survey data from the GESIS Politbarometer, conducted by the German Institute for Election Research, from surveys of the Allensbach Institute, as well as representative surveys from the Bundesverband deutscher Banken.

32 The correlation of the ‘good thing’ indicator with the mood based on national data reaches 0.57 in Germany, 0.81 in the UK and 0.86 in Sweden. The correlation with the mood based on EB data is 0.70 in Germany, 0.84 in Sweden and 0.90 in the UK.

33 Arnold, Sapir, and de Vries 2012.

36 Stimson 1999; Stimson, MacKuen, and Erikson 1995.

37 Bartels 2008; Gilens 2012; Hobolt and Klemmensen 2005; Soroka and Wlezien 2010; Wlezien and Soroka 2012.

40 Anderson and Reichert 1995; Carrubba 2001.

41 Anderson and Kaltenthaler 1996; Duch and Taylor 1997.

References

Anderson, Christopher J. 1998. When in Doubt, Use Proxies: Attitudes Toward Domestic Politics and Support for European Integration. Comparative Political Studies 31 (5):569–601.
Anderson, Christopher J., and Kaltenthaler, Karl C. 1996. The Dynamics of Public Opinion Toward European Integration, 1973–1993. European Journal of International Relations 2 (2):175–199.
Anderson, Christopher, and Reichert, M. Shawn. 1995. Economic Benefits and Support for Membership in the EU: A Cross-National Analysis. Journal of Public Policy 15:231–249.
Arnold, Christine, Sapir, Eliyahu V., and de Vries, Catherine. 2012. Parties’ Positions on European Integration: Issue Congruence, Ideology or Context? West European Politics 35 (6):1341–1362.
Atkinson, Mary Layton, Baumgartner, Frank R., Coggins, Elizabeth K., and Stimson, James A. 2011. Developing Policy-Specific Conceptions of Mood: The United States. Paper presented at the Annual Meeting of the Comparative Agendas Project, Catania, Italy, 23 June.
Bartels, Larry M. 2008. Unequal Democracy: The Political Economy of the New Gilded Age. Princeton, NJ: Princeton University Press.
Bartle, John, Dellepiane-Avellaneda, Sebastian, and Stimson, James A. 2010. The Moving Centre: Preferences for Government Activity in Britain, 1950–2005. British Journal of Political Science 41 (2):259–285.
Bishop, George, Oldendick, Robert W., and Tuchfarber, Alfred J. 1982. Political Information Processing: Question Order and Context Effects. Political Behavior 4 (2):177–200.
Brouard, Sylvain, and Guinaudeau, Isabelle. 2014. Policy Beyond Politics? Public Opinion, Party Politics and the French Pro-Nuclear Energy Policy. Journal of Public Policy 34 (1):137–170.
Bruter, Michael. 2003. Winning Hearts and Minds for Europe: The Impact of News and Symbols on Civic and Cultural European Identity. Comparative Political Studies 36 (10):1148–1179.
Carrubba, Clifford J. 2001. The Electoral Connection in European Union Politics. The Journal of Politics 63 (1):141–158.
Carsey, Thomas M., and Harden, Jeffrey J. 2010. New Measures of Partisanship, Ideology, and Policy Mood in the American States. State Politics and Policy Quarterly 10 (2):136–156.
De Vries, Catherine, and Steenbergen, Marco R. 2013. Variable Opinions: The Predictability of Support for Unification in European Mass Publics. Journal of Political Marketing 12:121–141.
Diez-Medrano, Juan. 2003. Framing Europe: Attitudes to European Integration in Germany, Spain, and the United Kingdom. Princeton, NJ: Princeton University Press.
Duch, Raymond, and Taylor, Michaell. 1997. Economics and the Vulnerability of the Pan-European Institutions. Political Behavior 19 (1):65–80.
Eichenberg, Richard C., and Dalton, Russell J. 2007. Post-Maastricht Blues: The Transformation of Citizen Support for European Integration, 1973–2004. Acta Politica 42:128–152.
Ellis, Christopher. 2010. Why the New Deal Still Matters: Public Preferences, Elite Context, and American Mass Party Change, 1974–2006. Journal of Elections, Public Opinion, and Parties 20 (1):103–132.
Ellis, Christopher, and Faricy, Christopher. 2011. Social Policy and Public Opinion: How the Ideological Direction of Spending Influences Public Mood. The Journal of Politics 73 (4):1095–1110.
Enns, Peter K., and Kellstedt, Paul. 2008. Policy Mood and Political Sophistication: Why Everybody Moves Mood. British Journal of Political Science 38 (3):433–454.
Erikson, Robert S., Wright, Gerald C., and McIver, John P. 2007. Measuring the Public’s Ideological Preferences in the 50 States: Responses Versus Roll Call Data. State Politics and Policy Quarterly 7 (2):141–151.
Gilens, Martin. 2012. Affluence and Influence: Economic Inequality and Political Power in America. Princeton, NJ: Princeton University Press and the Russell Sage Foundation.
Hobolt, Sara, and Klemmensen, Robert. 2005. Responsive Government? Public Opinion and Government Policy Preferences in Britain and Denmark. Political Studies 53 (2):379–402.
Hooghe, Liesbet, and Marks, Gary. 2009. A Postfunctionalist Theory of European Integration: From Permissive Consensus to Constraining Dissensus. British Journal of Political Science 39 (1):1–23.
Jennings, Will, and Wlezien, Christopher. 2012. Measuring Public Preferences for Policy: On the Limits of ‘Most Important Problem’. Paper presented at the Annual Meeting of the Elections, Public Opinion and Parties Group of the Political Studies Association, Oxford, 10–12 September.
Kellstedt, Paul. 2000. Media Framing and the Dynamics of Racial Policy Preferences. American Journal of Political Science 44 (2):239–255.
Kellstedt, Paul, McAvoy, Gregory E., and Stimson, James A. 1993. Dynamic Analysis with Latent Constructs. Political Analysis 5 (1):113–150.
Link, Michael W. 1995. Tracking Public Mood in the Supreme Court: Cross-Time Analyses of Criminal Procedure and Civil Rights Cases. Political Research Quarterly 48 (1):61–78.
Lubbers, Marcel, and Scheepers, Peer. 2005. Political Versus Instrumental Euro-Scepticism: Mapping Scepticism in European Countries and Regions. European Union Politics 6:223–242.
Manza, Jeff, Cook, Fay Lomax, and Page, Benjamin. 2002. Navigating Public Opinion: Polls, Policy, and the Future of American Democracy. Oxford: Oxford University Press.
Niedermayer, Oskar. 1995. Trends and Contrasts. In Public Opinion and Internationalized Governance, edited by Oskar Niedermayer and Richard Sinnott, 53–72. Oxford: Oxford University Press.
Putnam, Robert D. 1988. Diplomacy and Domestic Politics: The Logic of Two-Level Games. International Organization 42 (3):427–460.
Risse, Thomas. 2005. Neofunctionalism, European Identity, and the Puzzles of European Integration. Journal of European Public Policy 12 (2):291–309.
Soroka, Stuart N., and Wlezien, Christopher. 2010. Degrees of Democracy: Politics, Public Opinion, and Policy. Cambridge: Cambridge University Press.
Stevenson, Randolph T. 2001. The Economy and Policy Mood: A Fundamental Dynamic of Democratic Politics? American Journal of Political Science 45 (3):620–633.
Stimson, James A. 1999. Public Opinion in America: Moods, Cycles, and Swings, 2nd edition. Boulder, CO: Westview Press.
Stimson, James A. 2004. Tides of Consent: How Public Opinion Shapes American Politics. Cambridge: Cambridge University Press.
Stimson, James A., Thiébaut, Cyrille, and Tiberj, Vincent. 2011. Le mood, un nouvel instrument au service de l’analyse dynamique des opinions: Application aux évolutions de la xénophobie en France (1990–2009) [The mood, a new instrument for the dynamic analysis of opinions: An application to the evolution of xenophobia in France (1990–2009)]. Revue française de science politique 60:901–926.
Stimson, James A., Thiébaut, Cyrille, and Tiberj, Vincent. 2012. The Evolution of Policy Attitudes in France. European Union Politics 13 (2):293–315.
Stimson, James A., MacKuen, Michael B., and Erikson, Robert S. 1995. Dynamic Representation. American Political Science Review 89 (3):543–565.
Stockemer, Daniel. 2011. Citizens’ Support for the European Union and Participation in European Parliament Elections. European Union Politics 13 (1):26–46.
Tourangeau, Roger, Rasinski, Kenneth A., Bradburn, Norman, and d’Andrade, Ray. 1989. Belief Accessibility and Context Effects in Attitude Measurement. Journal of Experimental Social Psychology 25 (5):401–421.
University of Gothenburg. 2015. The SOM Institute Cumulative Dataset 1986–2013. Version 1.0. Swedish National Data Service. Available at http://dx.doi.org/10.5878/002635, accessed 26 January 2017.
Ura, Joseph Daniel, and Ellis, Christopher. 2012. Partisan Moods: Polarization and the Dynamics of Mass Party Preferences. The Journal of Politics 74 (1):277–291.
Van Ingelgom, Virginie. 2014. Integrating Indifference: A Comparative, Qualitative and Quantitative Approach to the Legitimacy of European Integration. Essex: ECPR Press.
Wlezien, Christopher. 1995. The Public as Thermostat: Dynamics of Preferences for Spending. American Journal of Political Science 39:981–1000.
Wlezien, Christopher, and Soroka, Stuart N. 2012. Political Institutions and the Opinion-Policy Link. West European Politics 35 (6):1407–1432.
