This symposium consists of six articles that examine the 2018 state legislative elections. The first article by Adam S. Myers sets the stage by highlighting the unprecedented level of contestation in state legislative elections, especially by Democratic candidates.
The second article by Bernard L. Fraga, Eric Gonzalez Juenke, and Paru Shah continues the theme of motivated elites within the Democratic Party. The authors demonstrate that the election of more women and candidates of color in 2018 was driven by such candidates running in greater numbers than ever before rather than by an uptick in voter demand for such candidates.
The third article by Kristin Kanthak, Eric Loepp, and Benjamin Melusky further explores how electoral arrangements either stymie or enhance the election of women in state legislatures.
Unlike the other articles, which examine all 44 states that had partisan state legislative elections in 2018, the fourth article by Ana Bracic, Mackenzie Israel-Trummel, Sarina Rhinehart, and Allyson F. Shortle takes a closer look at a single state: Oklahoma. The authors examine voting behavior in Oklahoma City in the context of a highly publicized teachers’ strike that brought attitudes about gender to the forefront of voters’ minds.
The fifth article by Donald P. Haider-Markel, Barry Tadlock, Patrick R. Miller, Daniel C. Lewis, and Jami K. Taylor examines whether LGBTQ candidates experienced an electoral penalty and concludes that they did not. By modeling the change in Democratic support between 2018 and the preceding election, the authors address the selection problem that LGBTQ candidates tend to run in areas where voters are less likely to penalize them.
The sixth article by Jordan Butcher and Jeffrey Milyo assesses the impact of several campaign finance laws on how often incumbents are reelected.
To enable them to participate in the symposium, teams were provided with the updated State Legislative Election Returns (SLERs) database (see Klarner 2018), which I collected through the 2018 elections. The new edition of SLERs will be publicly available sometime after the 2020 election. Teams also were provided with lists of state legislative candidates as filing deadlines were reached, along with a preliminary coding of the candidates’ gender based on first names. These lists allowed the authors to collect information on the race, ethnicity, gender, and sexual orientation of state legislative candidates.
The research-process errors alluded to below should not be taken as criticism of the symposium participants, because I believe similar problems are widespread in political science. I often become aware of errors in others’ work because of my knowledge of common pitfalls and data sources and because I frequently help scholars by providing them with data. These problems are a natural consequence of scholars facing substantial service and teaching requirements, imperatives to publish, and less financial support than colleagues in other fields receive. As a result, I refrain from mentioning any particular team, which has the awkward consequence of implicating teams that did not make errors.
Several teams made substantial errors in the implementation of their analyses, necessitating that the analyses be redone. I directly handled the data for only two teams and therefore did not have access to all of the variables used in the models. Nevertheless, replicating teams’ samples with SLERs and comparing the reported with the replicated Ns revealed several substantial discrepancies, which the authors confirmed were indeed errors.
Analyzing state legislative elections in particular presents unexpected challenges because of institutional differences among states. Failing to account for irregular redistricting was a source of error for several teams; again, it is an error that I often observe outside of this symposium.
PREREGISTRATION
All articles in this symposium (except Myers’s, which was originally conceived as a descriptive piece) preregistered their research designs before the November 2018 elections.
Preregistration limits the ability of authors to alter their models and samples to obtain statistically significant results, which are widely perceived as a necessary condition for publication. Specifying models, hypothesis tests, and research plans, and then publicly registering this information before the dependent phenomena occur, is one way to limit “p-hacking” (Monogan 2015).
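The arithmetic behind this concern is simple: if an analyst tries many specifications on data in which there is no true effect, the chance that at least one specification clears the conventional 0.05 threshold grows quickly. The following simulation is purely illustrative (it is not drawn from any symposium article, and it treats each alternative specification as an independent test, which is an approximation):

```python
import random

random.seed(0)

def significant(n=100):
    """Run one hypothesis test on pure noise.

    Draws n observations with a true effect of zero and returns True
    if a two-sided z-test on the sample mean is "significant" at the
    0.05 level (|z| > 1.96).
    """
    xs = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    z = mean / (1 / n ** 0.5)  # standard error of the mean is 1/sqrt(n)
    return abs(z) > 1.96

trials = 2000
specs_per_study = 20  # number of alternative specifications tried

# Probability that at least one of 20 null specifications "works"
hits = sum(
    any(significant() for _ in range(specs_per_study))
    for _ in range(trials)
)
print("False-positive rate with one specification: ~0.05 by construction")
print(f"False-positive rate with {specs_per_study} specifications: "
      f"{hits / trials:.2f}")
```

Analytically, the false-positive rate with 20 independent specifications is 1 − 0.95²⁰ ≈ 0.64 rather than 0.05. Preregistration removes the analyst’s freedom to search across such specifications after the outcomes are known.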
Preregistering a research plan does not preclude making changes to a research design requested by an editor; however, the risk that an editor will desire statistically significant results still exists. The Haider-Markel et al. article originally was planned to omit a lagged counterpart to the independent variable of interest, namely the LGBTQ status of candidates, because we believed that the scope of the extra data collection was too great. However, one reviewer’s comments made it clear that the extra data should be collected. Although the coefficient on the LGBTQ variable was positive and statistically significant in the first iteration of the manuscript, it lost statistical significance once the lagged independent variable was included, although only in combination with other changes.
Other articles in the symposium also were markedly altered based on feedback from anonymous reviewers—again demonstrating that preregistration does not preclude making changes to a research design requested by editors and reviewers.
Preregistration also is important for topics about which people feel passionate, such as issues pertaining to groups that are historically underrepresented in our political system. The findings of Fraga et al. and Haider-Markel et al., therefore, are that much more compelling.
Preregistration is especially fruitful for evaluating election reform. When authors are advocates of a particular reform, they have an incentive to “tweak their models” to obtain results that support their side. Even if scholars are unmoved by such biases, the perception that they are not may undermine credence in their results, delaying the emergence of consensus on the desirability of a particular reform.
The Butcher and Milyo article exhibits “preregistration lite” in the sense that their research design was preregistered after most of their sample (i.e., 1986–2018) was observed. The extent to which withholding one biennium of a sample from authors can prevent “model tweaking” is probably small, but it is another type of preregistration to consider.