
Editor Fatigue: Can Political Science Journals Increase Review Invitation-Acceptance Rates?

Published online by Cambridge University Press: 06 August 2021

Antonio Franceschet, University of Calgary, Canada
Jack Lucas, University of Calgary, Canada
Brenda O’Neill, Carleton University, Canada
Elizabeth Pando, University of Calgary, Canada
Melanee Thomas, University of Calgary, Canada

Abstract

In many political science journals, fewer than half of the invitations sent to potential reviewers are accepted. These low acceptance rates increase workloads for editors and lengthen the review process for authors. This article reports analyses of reviewer invitation acceptance at the Canadian Journal of Political Science between 2017 and 2020. We first describe predictors of invitation acceptance using a coded dataset of almost 1,500 invitations. We find that reviewers who are personally familiar to editors, who are located in the same country as the journal, and who are more junior were more likely to accept invitations. We then report the results of an experiment that tested the effect of three letters on invitation acceptance. We find that a short personal note from the editor to accompany the auto-generated system message may increase reviewer acceptance rates, but highlighting the journal’s prestige or reviewer recognition does not. We conclude by discussing the practical implications of our findings for editorial-team design and the editorial process.

Type
Article
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of the American Political Science Association

Editing a journal in one’s own field is an exciting professional opportunity. Yet, an inevitable—if sometimes demoralizing—problem for editors is finding reviewers who are willing to accept the task of peer reviewing submissions. With invitation-acceptance rates generally in the 40% to 60% range (Djupe 2015), journal editors must send, on average, two invitations for every reviewer they need. This forces editors to spend considerably more time identifying possible reviewers and delays the review process for authors.

Can anything be done to increase reviewer acceptance rates? Surprisingly, despite the data available to editors in modern editorial-management software, we have few answers to this question. A better understanding of who accepts invitations and the circumstances in which they are more likely to accept could substantially improve the publishing experience for editors, reviewers, and authors alike. We have much to gain from turning our skills as political scientists to the topic of academic journal publishing.

This article reports findings from observational and experimental analyses of reviewer invitation-acceptance patterns at the Canadian Journal of Political Science (CJPS) between 2017 and 2020.Footnote 1 We first describe general predictors of invitation acceptance using a coded dataset of almost 1,500 unique invitations. We find that reviewers who are personally familiar to editors, who are located in Canada (and thus more aware of the CJPS), and who are more junior in their career were more likely to accept invitations. We then report the results of an experiment that we undertook in 2019–2020, which tested the effect of three different invitation letters on the probability of invitation acceptance. We find that a short personal note from the editor to accompany the auto-generated system message can increase reviewer acceptance rates, particularly among reviewers who are less familiar with the journal. Two other treatments—highlighting the journal’s reputation and emphasizing the recognition that reviewers would receive for their review—do not appear to have affected acceptance rates.

Our findings contribute to a small but valuable literature on the academic review process and have practical implications for political science (and other) journals. We conclude by describing the relevance of our findings for editorial-team design and the reviewer-invitation process, and we offer suggestions for how editors at other political science journals might build on these findings.

BACKGROUND

Why do academic researchers regularly decline review invitations? Perhaps the most commonly cited explanation is reviewer fatigue: with increasing pressure to publish and a rapidly increasing universe of political science journals, academics are inundated with invitations. Research on this issue, however, has generally found that the challenge has more to do with overall workload than with review requests themselves (Breuning et al. 2015; Domínguez-Berjón et al. 2018; Fox, Albert, and Vines 2017; Willis 2015). In a survey of American Political Science Association (APSA) members in 2013, for example, Djupe (2015) found that only 10% of respondents were doing more than one review each month.

Digitization of the invitation process also may contribute to the problem of low acceptance rates because standardized email invitations are declined or ignored more easily than a manuscript sent by mail (Djupe 2015, 348). Research suggests that reviewers find it easier to decline an invitation when it feels impersonal. Direct communication with reviewers, rather than automated invitations, appears to positively affect acceptance rates (Breuning et al. 2015; Chetty, Saez, and Sándor 2014).

The “invisible” character of manuscript reviewing also has been identified by scholars as a problem (Lu 2013), although findings in this area are mixed (Zaharie and Seeber 2018). Some scholars have proposed systems for publicly recognizing reviewer contributions (Cantor and Gero 2015; Djupe 2015; Ling 2011; Petchey et al. 2014), whereas others have noted the importance of acknowledgment by the journal (Tite and Schroter 2006). Some online services, such as Publons (publons.com), have begun to implement these ideas.

In addition to these general explanations for low invitation-acceptance rates—workload demands, digitization and automation, and lack of recognition—several contextual factors may explain variation in acceptance rates. One factor is the relationship between editor and invited reviewer. Research from the field of chemistry found that invitations sent to individuals who were known personally by the editor were more likely to be accepted (Mrowinski et al. 2016); these findings have been replicated in other disciplines (Zaharie and Osoian 2016).

Academic seniority also plays a role in invitation acceptance, with junior scholars more likely to accept an invitation than senior scholars. Among APSA members, assistant professors reported being the most likely to have reviewed for a journal in the past year (95%), followed by associate (89%) and full professors (87%) (Djupe 2015, 347). These differences may be a function of promotion incentives (García, Rodriguez-Sánchez, and Fdez-Valdivia 2013) as well as a desire among junior scholars to feel that they are part of their scientific community (Zaharie and Seeber 2018, 72). Senior scholars also may receive more invitations and thus be more likely to decline a portion of them (Djupe 2015, 348). Furthermore, senior scholars are likely to have more extensive service responsibilities, limiting their available time for manuscript reviews.

The character of the journal making the invitation also is relevant to acceptance rates. A reviewer’s probability of acceptance, according to Djupe (2015, 349), is partly a function of the “prestige of the journals making the ‘ask.’” Acceptance rates also tend to be higher for topics or methods that are more characteristic of a journal’s standard output (Breuning et al. 2015). In general, reviewers are more likely to accept invitations from higher-prestige journals and from those that focus on the reviewer’s own area of expertise.

Studies have examined other possibilities: experimental work has found, for example, that shorter deadlines did not decrease acceptance rates and that a cash incentive increased them (Chetty, Saez, and Sándor 2014; Zaharie and Seeber 2018). These studies, although valuable, have not been undertaken in political science and often involve incentives that are unavailable to political science editors. In our experiment, we instead focused on costless treatments that involve positive frames rather than threats of social sanction; this ensured that, if effective, they could be implemented easily.

Considered together, then, research across a range of disciplines suggests that some reviewer-invitation procedures may be more likely than others to result in an accepted invitation. Unfortunately, however, few studies have systematically investigated these possibilities.

DATA AND METHODS

The CJPS publishes articles in all political science subfields. Manuscripts are assessed with double-blind review, which is overseen by independent English- and French-language editorial teams. CJPS has a strong reputation in Canada but is less well known in other countries. In this respect, despite its generalist content, CJPS is probably most comparable to a strong subfield journal: that is, a good reputation within a disciplinary subcommunity and a less-established reputation outside of that community.

We relied on two datasets in our analysis. Our observational dataset contains the 1,475 initial invitations sent by the 2017–2020 CJPS English-language editors.Footnote 2 Our experimental dataset contains the 392 initial invitations sent by the English-language CJPS team between July 2019 and March 2020—the period in which our experiment was active.Footnote 3 For both analyses, we defined variables and empirical expectations in a pre-analysis plan published on July 2, 2019. This plan is available in the online supplementary materials, and we note departures from the plan as we proceed.

Observational Data: Variables and Analysis

In our observational data analysis, we focused on four predictors, each drawn from the findings outlined earlier and from our own editorial experience. Although these variables certainly are not the only factors that may affect a reviewer’s probability of acceptance, we focused on them because they are prominent in the literature and—at least in theory—are manipulable by editors. For instance, whereas an author’s writing style or the quality of an abstract may affect invitation-acceptance rates, these factors are beyond the control of journal editors. The following four factors, in contrast, are related directly to editors’ decision making.

The first predictor is an editor’s personal familiarity with the invited reviewer. This variable was hand-coded by the editors for each unique reviewer using a dichotomous operationalization: Would I feel comfortable saying “hello” to this person at a conference without introducing myself? We expected this variable to be positively associated with invitation acceptance.

Our second predictor is the reviewer’s location inside or outside of Canada. We constructed this variable using registration data in the CJPS editorial system, which we then manually verified for each reviewer. Because the CJPS is better known in Canada, we expected that invited reviewers in Canada would be more likely to accept our invitations than those outside of the country.

Our third predictor combines familiarity and location in a variable that we refer to as a potential reviewer’s “outsider” status relative to the CJPS. These individuals are both unfamiliar to the editor sending the invitation (using the previous dichotomous coding) and located outside of Canada. We emphasize that these individuals are by no means “outsiders” to the wider political science discipline. In many cases, an outsider to CJPS is a prominent “insider” in a different disciplinary subcommunity.

Our fourth predictor is the reviewer’s career stage, operationalized as junior, mid-career, or senior.Footnote 4 We hand-coded this variable by checking the website for each of the 1,066 unique reviewers in the dataset. Given past findings in other disciplines, we expected invitation-acceptance rates to decline as seniority increases.

Experimental Data: Design and Analysis

Our experiment tested whether positively framed and easily implemented changes to the CJPS reviewer invitation could increase acceptance rates. We developed three “treatment” invitations, each of which we compared to a control group that received the standard invitation letter. In each case, we outlined the treatment as well as any expected heterogeneity in treatment effects. The nonexperimental variables—familiarity, location, outsider status, and career stage—were coded identically to the variables in the observational analysis.

Our first treatment reduced the impersonal character of the invitation process by making a human connection between editor and invitee. In this treatment, editors sent the standard invitation from the CJPS editorial system and then followed up with a short message from their own institutional email account. This short follow-up message was moderately informal in tone; it simply mentioned that the editor had sent an invitation and hoped the reviewer would be able to complete the review. We kept the message short and devoid of additional content to avoid contaminating the treatment with additional information. We expected that this treatment would increase acceptance rates and that the effect might vary among familiar and nonfamiliar invitees.

Our second treatment highlighted the CJPS reputation. This invitation letter was identical to the one sent to the control group except for a new second paragraph, which described the journal as “the flagship journal of the Canadian political science community” and also noted that it was published by Cambridge University Press. We expected that this treatment would have an especially pronounced effect among non-Canadian reviewers, who are less likely to be aware of the journal’s reputation.

Our third treatment addressed the recognition that reviewers would receive for their effort. In this letter, we supplemented the control-group invitation with a paragraph noting that the CJPS would recognize reviewers’ contributions by listing their name in the final issue of the year’s CJPS volume and by providing an official recognition letter “that we hope will be useful for merit, promotion, and other recognition at your institution.”

Each treatment letter targeted a particular weakness of the review process: the impersonal, automated quality of the invitation process in the age of electronic editorial management; the desire to spend time completing reviews for journals with higher prestige; and the lack of recognition that reviewers receive for their work. We assigned reviewers to one of the treatments or the control group using a randomly ordered list for each editor. Balance tests (see the online supplementary materials) indicated some imbalance in the prestige treatment: senior and non-Canadian invitees were more likely to receive the prestige treatment than the control-group invitation. We address this imbalance by reporting both bivariate results and results with covariate controls in the following analysis.Footnote 5
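
For editorial teams who want to reproduce this design, the assignment step is simple to script. The following Python sketch illustrates list-based random assignment to the four conditions; the function and variable names are illustrative only, not the CJPS team’s actual code.

```python
import random

def assign_conditions(invitee_ids,
                      conditions=("control", "personal", "prestige", "recognition"),
                      seed=2019):
    """Assign each invitee to an experimental arm from a shuffled, balanced list.

    A minimal sketch: names and the seed are ours, not the CJPS workflow.
    """
    rng = random.Random(seed)
    # Repeat the conditions enough times to cover every invitee, then shuffle
    # so that assignment is random but the arms stay roughly balanced.
    schedule = list(conditions) * (len(invitee_ids) // len(conditions) + 1)
    rng.shuffle(schedule)
    return dict(zip(invitee_ids, schedule))

# Example: a randomly ordered assignment list for one editor's invitations.
print(assign_conditions([f"reviewer_{i:03d}" for i in range(8)]))
```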

Outcome Variable and Model

In both the observational and experimental analyses, our outcome variable was invitation acceptance, a dichotomous measure. In the main text, we used an ordinary least squares model (i.e., a linear probability model), which is easily interpreted and has advantages when identifying average treatment effects with dichotomous outcome variables. We replicate our findings with logistic regression models in the online supplementary materials.
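
To make this modeling choice concrete, the sketch below fits both a linear probability model and its logistic counterpart to simulated invitation data. The data, variable names, and effect sizes are hypothetical; only the model structure mirrors our analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Hypothetical invitation-level data: 0/1 predictors and a 0/1 acceptance
# outcome whose probability rises with familiarity and Canadian location.
familiar = rng.integers(0, 2, n)
canadian = rng.integers(0, 2, n)
df = pd.DataFrame({
    "accepted": rng.binomial(1, 0.35 + 0.20 * familiar + 0.10 * canadian),
    "familiar": familiar,
    "canadian": canadian,
})

# Linear probability model: OLS on the 0/1 outcome, so coefficients read
# directly as changes in the probability of acceptance.
lpm = smf.ols("accepted ~ familiar + canadian", data=df).fit()

# Logistic replication, as in the online supplementary materials.
logit = smf.logit("accepted ~ familiar + canadian", data=df).fit(disp=False)

print(lpm.params)    # familiar ~ 0.20, canadian ~ 0.10
print(logit.params)  # same signs, on the log-odds scale
```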

RESULTS

Figure 1 summarizes acceptance rates for each of the four predictor variables in our observational dataset. The findings support each of our main expectations: familiar invitees are more likely to accept invitations than unfamiliar invitees, Canadian invitees are more likely to accept than non-Canadians, and acceptance declines as seniority increases.Footnote 6

Figure 1 Invitation-Acceptance Rates by Location, Seniority, CJPS Outsider, and Familiarity

These differences are large enough to meaningfully affect the review process. If a journal were to secure three reviewers for each of 100 manuscripts every year at an 81% invitation-acceptance rate—that is, the rate we find among familiar, Canadian, junior-scholar invitees—the editors would need to send about 370 invitations. However, if the acceptance rate were 27%—the rate among unfamiliar, non-Canadian, senior-scholar invitees—editors would need to send more than 1,100 invitations. This is tremendously time consuming and adds substantially to decision times for manuscripts.
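
The arithmetic is easy to reproduce. The snippet below assumes three reviews per manuscript, which the figure of roughly 370 implies; the helper function is illustrative only.

```python
import math

def invitations_needed(manuscripts, reviews_per_ms, acceptance_rate):
    """Expected number of invitations required to secure all needed reviews."""
    return math.ceil(manuscripts * reviews_per_ms / acceptance_rate)

print(invitations_needed(100, 3, 0.81))  # 371, i.e., roughly the 370 above
print(invitations_needed(100, 3, 0.27))  # 1112, i.e., more than 1,100
```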

Reviewer Invitation Experiment

This section discusses our experimental results, beginning with an overall summary of acceptance rates across the control and treatment groups shown in figure 2. The first plot, on the left in the figure, summarizes the full sample results. These results suggest a modest increase in acceptance rates for the “personal connection” and “recognition” treatments and a small decrease for the “prestige” treatment; however, none of these differences is statistically significant.Footnote 7 The second plot, on the right, reports results for the subsample of invited reviewers who are outsiders with respect to the CJPS—using the same data and definition as in the observational analysis. The figure shows that there is more substantial evidence for a difference between control and treatment groups.Footnote 8

Figure 2 Invitation-Acceptance Rates by Group

To better understand the possible variation in treatment effects among theoretically relevant subgroups, figure 3 summarizes the predicted probability of invitation acceptance for each part of the experiment, differentiating between familiar and nonfamiliar reviewers (the left plot), Canadian and non-Canadian reviewers (the center plot), and CJPS insider versus outsider reviewers (the right plot). These coefficients are drawn from models in which we interacted the subgroup of interest with the treatment to test for heterogeneity of effects.Footnote 9 The differences among the blue coefficients in each plot are minimal: among familiar, Canadian, and insider reviewers, our experimental treatments had no statistically significant effects. For the red coefficients, in contrast, the predicted probability of acceptance is consistently at its lowest in the control group. In the case of the experimental treatments for non-Canadians and outsiders, these differences are statistically significant.
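
Concretely, models of this kind include a treatment-by-subgroup interaction term. The sketch below shows the structure on simulated data; the effect sizes and variable names are hypothetical and are not drawn from the CJPS dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
# Hypothetical experiment: four arms, an outsider indicator, and a
# personal-connection effect that operates only among outsiders.
condition = rng.choice(["control", "personal", "prestige", "recognition"], n)
outsider = rng.integers(0, 2, n)
p_accept = 0.55 - 0.25 * outsider + 0.20 * outsider * (condition == "personal")
df = pd.DataFrame({"accepted": rng.binomial(1, p_accept),
                   "condition": condition,
                   "outsider": outsider})

# The interaction lets each treatment effect differ between insiders and
# outsiders; "control" is the reference category (alphabetically first).
m = smf.ols("accepted ~ C(condition) * outsider", data=df).fit()
print(m.params.filter(like=":outsider"))  # extra treatment effects among outsiders
```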

Figure 3 Probability of Invitation Acceptance by Invitee Type and Experimental Group

These results suggest that our treatments had more pronounced effects among some groups than others and were particularly effective among those who were both unfamiliar to the CJPS team and located outside of Canada (i.e., outsiders). Among non-Canadians and outsiders, the personal-connection treatment appears to be particularly effective, increasing response rates to resemble those among the Canadian and insider groups. These findings are in keeping with the literature and our own experience; nevertheless, we must interpret the results with caution. Given the number of invitees in the outsider subsample (i.e., 139) and the number of treatments in the experiment, this subsample analysis is likely to be statistically underpowered. Moreover, the acceptance rate among control-group outsiders is unusually low (i.e., 18%). We must be aware of the risk that effects in weakly powered analyses will be statistically significant only if they are implausibly large (Gelman and Carlin 2014).

Given this concern, we tested the sensitivity of this finding to the assumption that the acceptance rate in the control group was not the observed rate—the surprisingly low 18%—but rather 34%, the acceptance rate among outsiders in the observational dataset. In 1,000 simulations, we found that the personal-connection treatment was the only one of the three to be statistically significant in more than half of the simulations. This test increases our confidence that the personal connection has a positive effect on acceptance rates among outsiders, whereas the effect of the other two treatments—recognition and prestige—is less certain relative to more typical baseline acceptance rates.Footnote 10
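
For editors who wish to run a similar sensitivity check, the sketch below implements one version of the simulation. The group sizes and treatment-group rate are placeholders rather than our exact figures, and the two-proportion test (a chi-square on the 2x2 table) is one reasonable choice among several.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def share_significant(n_control, n_treat, p_control, p_treat,
                      sims=1000, alpha=0.05):
    """Fraction of simulations in which a two-proportion test rejects at alpha."""
    hits = 0
    for _ in range(sims):
        # Redraw both groups; the control rate is set to a more typical
        # baseline rather than the unusually low observed 18%.
        c = rng.binomial(n_control, p_control)
        t = rng.binomial(n_treat, p_treat)
        table = np.array([[c, n_control - c], [t, n_treat - t]])
        _, pval, _, _ = stats.chi2_contingency(table, correction=False)
        hits += pval < alpha
    return hits / sims

# Placeholder group sizes and treatment rate; 0.34 is the outsiders'
# acceptance rate in the observational dataset.
print(share_significant(n_control=35, n_treat=35, p_control=0.34, p_treat=0.60))
```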

DISCUSSION AND CONCLUSION

The results of this study dovetail with research from other disciplines on the factors that influence acceptance rates for journal-review requests. In particular, the career stage of invited reviewers matters. The more junior the scholars, the more likely they are to accept a review invitation. Our study also reinforces the significance of would-be reviewers’ personal familiarity with the inviting editor. Scholars who are located inside a particular disciplinary subcommunity—in our case, Canada—also are more likely to accept review invitations.

These results suggest that editors can take proactive steps to improve invitation-acceptance rates. Reducing the number of invitations sent, the time spent searching for appropriate reviewers, and the time spent on the review process for each manuscript are all desired outcomes—and not only for editors: authors also benefit from quicker turnaround times.

More specific practical implications of this study relate to the review-invitation process and editorial-team design. Our observational and experimental analyses both suggest that reviewers’ personal familiarity with editors matters greatly. Investing a few moments to reach out personally to prospective reviewers, especially those who may not be familiar with either the journal or the editor, pays dividends. Additionally, given the importance of familiarity for review acceptance, there is a potential benefit in having an editorial team rather than a lone editor and for having a diverse and well-networked set of team members. Particularly for an omnibus journal such as the CJPS, having different editors on the team representing the disciplinary subfields enhances the positive effects of personal familiarity across the wide variety of submitted manuscripts. Because our experimental treatments appear to increase acceptance rates among some subgroups but have no negative effects on acceptance rates, we encourage editorial teams at other political science journals to consider adopting and adapting these treatments.

Our primary hope is that our findings will encourage other political science editors to pursue similar analyses at their own journals. Observational data are readily available in editorial software and quickly provide valuable insights. Experimental data can be even more valuable, particularly if larger journals—or a group of journals—were to test the findings reported in this article in a large, well-powered experiment. By testing editorial practices with the same social science tools that political scientists use in their own research, political science journal editors may have the capacity to substantially improve the publishing experience for future editors, reviewers, and authors.

Supplementary Materials

To view supplementary material for this article, please visit http://dx.doi.org/10.1017/S1049096521000858.

Footnotes

1. The full title is Canadian Journal of Political Science/Revue canadienne de science politique. For simplicity, we use CJPS.

2. We excluded revise-and-resubmit invitations and a rapid-review COVID-19 series, which had an unusual invitation process.

3. The experiment’s scheduled end was June 2020, but COVID-19 necessitated changes to our invitation process; thus, we closed the study in March 2020.

4. Eleven individuals with nonacademic affiliations were coded as “other” and excluded from the analysis.

5. Our experiment was approved by the University of Calgary Research Ethics Board. Invited reviewers received a debriefing letter on July 14, 2020, with an opportunity for data withdrawal. One individual requested data withdrawal.

6. We show in the online supplementary materials that each predictor has a statistically significant relationship with invitation acceptance.

7. See the full table in the online supplementary materials.

8. Expectations for non-Canadians and for nonfamiliar invitees are in our pre-analysis plan, but a combination of the two is not. However, we believe that this test is in keeping with our stated intentions and has practical value for editors.

9. See the full table in the online supplementary materials.

10. See the online supplementary materials.

REFERENCES

Breuning, Marijke, Backstrom, Jeremy, Brannon, Jeremy, Gross, Benjamin Isaak, and Widmeier, Michael. 2015. “Reviewer Fatigue? Why Scholars Decline to Review Their Peers’ Work.” PS: Political Science & Politics 48 (4): 595–600. https://doi.org/10.1017/S1049096515000827.
Cantor, Mauricio, and Gero, Shane. 2015. “The Missing Metric: Quantifying Contributions of Reviewers.” Royal Society Open Science 2:1–7. https://doi.org/10.1098/rsos.140540.
Chetty, Raj, Saez, Emmanuel, and Sándor, László. 2014. “What Policies Increase Prosocial Behavior? An Experiment with Referees at the Journal of Public Economics.” Journal of Economic Perspectives 28 (3): 169–88. https://doi.org/10.1257/jep.28.3.169.
Djupe, Paul A. 2015. “Peer Reviewing in Political Science: New Survey Results.” PS: Political Science & Politics 48 (2): 346–52.
Domínguez-Berjón, María Felícitas, Godoy, Pere, Ruano-Ravina, Alberto, et al. 2018. “Acceptance or Decline of Requests to Review Manuscripts: A Gender-Based Approach from a Public Health Journal.” Accountability in Research 25 (2): 94–108. https://doi.org/10.1080/08989621.2018.1435280.
Fox, Charles W., Albert, Arianne Y. K., and Vines, Timothy H. 2017. “Recruitment of Reviewers Is Becoming Harder at Some Journals: A Test of the Influence of Reviewer Fatigue at Six Journals in Ecology and Evolution.” Research Integrity and Peer Review 2:1–6. https://doi.org/10.1186/s41073-017-0027-x.
García, José A., Rodriguez-Sánchez, Rosa, and Fdez-Valdivia, Joaquín. 2013. “The Principal-Agent Problem in Peer Review.” Journal of the American Society for Information Science and Technology 64 (July): 1852–63. https://doi.org/10.1002/asi.
Gelman, Andrew, and Carlin, John. 2014. “Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors.” Perspectives on Psychological Science 9 (6): 641–51.
Ling, Fay. 2011. “Improving Peer Review: Increasing Reviewer Participation.” Learned Publishing 24:231–33. https://doi.org/10.1087/20110311.
Lu, Yanping. 2013. “Experienced Journal Reviewers’ Perceptions of and Engagement with the Task of Reviewing: An Australian Perspective.” Higher Education Research and Development 32 (6): 946–59. https://doi.org/10.1080/07294360.2013.806441.
Mrowinski, Maciej J., Fronczak, Agata, Fronczak, Piotr, Nedic, Olgica, and Ausloos, Marcel. 2016. “Review Time in Peer Review: Quantitative Analysis and Modeling of Editorial Workflows.” Scientometrics 107:271–86. https://doi.org/10.1007/s11192-016-1871-z.
Petchey, Owen L., Fox, Jeremy W., and Haddon, Lindsay. 2014. “Imbalance in Individual Researcher’s Peer Review Activities Quantified for Four British Ecological Society Journals, 2003–2010.” PLoS ONE 9 (3): e92896. https://doi.org/10.1371/journal.pone.0092896.
Tite, Leanne, and Schroter, Sara. 2006. “Why Do Peer Reviewers Decline to Review? A Survey.” Journal of Epidemiology & Community Health 61:9–12. https://doi.org/10.1136/jech.2006.049817.
Willis, Michael. 2015. “Why Do Peer Reviewers Decline to Review Manuscripts? A Study of Reviewer Invitation Responses.” Learned Publishing 29:5–7. https://doi.org/10.1002/leap.1006.
Zaharie, Monica Aniela, and Osoian, Codruta Luminita. 2016. “Peer Review Motivation Frames: A Qualitative Approach.” European Management Journal 34:69–79. https://doi.org/10.1016/j.emj.2015.12.004.
Zaharie, Monica Aniela, and Seeber, Marco. 2018. “Are Non-Monetary Rewards Effective in Attracting Peer Reviewers? A Natural Experiment.” Scientometrics 117 (3): 1587–609. https://doi.org/10.1007/s11192-018-2912-6.