
Replication, Research Transparency, and Journal Publications: Individualism, Community Models, and the Future of Replication Studies

Published online by Cambridge University Press:  29 December 2013

John Ishiyama*
Affiliation:
University of North Texas

Symposium: Openness in Political Science

Copyright © American Political Science Association 2014

Recently, the importance of research transparency via replication studies has been widely discussed in most of the social sciences, political science included. Indeed, as Gherghina and Katsanidou (2013) and Freese (2007) note, the discussion has been prompted in part by the tremendous changes in publishing in the past decade or so. With the enormous expansion in data availability and instant publication made possible by the Internet, there are now many opportunities to verify the findings presented in the discipline's major journals. "Replication, replication" has become the mantra not only for political science but also for economics, psychology, and quantitative sociology. These developments opened a debate on how best to "guard the high standards of research practice and allow for the maximum use of current knowledge for the further development of science" (Gherghina and Katsanidou 2013, 1; for similar sentiments see King 1995).

Some scholars who advocate greater research transparency via replication studies have provided guidelines for what should be included in publicly available replication files. For instance, Gary King (2003) has proposed a checklist of what should be included when making data available for replication. These items include the original data, the specialized software that was used, syntax files, extracts of existing data files, and comprehensive documentation to explain how to reproduce the exact output presented in the published work. Further, several journals in political science and international relations have followed these guidelines and sought to make data available for replication studies.

There are certainly many advocates of promoting replication in political science; in this essay, however, I focus on two questions that the move toward research transparency and replication raises for journal publication in political science. First, what are the implications for journals of the shift from an "individual model" of responsibility for the provision of replication data to a more "social policy" or "community model"? Second, and perhaps more important, where should studies that replicate existing works be published?

THE DEBATE OVER INDIVIDUAL AND COMMUNITY RESPONSIBILITY

Regarding the first question of responsibility, generally speaking, a distinction is made between the "individualistic" model and the "social" or "community" model of collective responsibility (Freese 2007; King 2006). On the one hand, the individualistic model holds that the primary responsibility for making data available for replication purposes lies with the individual author.[1] On the other hand, the social or community policy makes the provision of replication data part of the publication process (and a requirement for publication enforced by journals). In this case the journal, as a representative of the scholarly community, is responsible for making sure that data for replication purposes are provided to that community.

Of the two approaches, the literature on replication clearly favors the adoption of the community model (or social policy) over the individualistic model. Indeed, the community model has some important advantages. As Freese (2007) notes, if a policy is enforced by the journals, readers can fully expect that the data have already been provided for replication. In contrast, in the individualistic model, readers must trust that the author will provide data for replication on request. Further, the community model of replication data provision guarantees that such data will be preserved over time in a reproducible format; the individualistic model relies on the individual scholar's ability to preserve such data, which may or may not happen. In other words, the "social policy seeks to decouple the content of articles from the contingencies of authors' futures" (Freese 2007, 156).

Further, as Freese (2007, 156) argues, a certain egalitarianism is promoted by the community model, as it minimizes the "degree to which status and social networks affect access to materials necessary to verify, learn from, and build off of others' work" (see also King 2006). The individualistic model, in contrast, leaves open the possibility of the selective release of data. In other words, replication data might be more readily provided to notable faculty or faculty from certain elite institutions than to junior faculty, graduate students, or faculty members from less prestigious institutions.

Although the adoption of a community model for replication access generally benefits scholars in political science, policies that increase access to replicable data also directly benefit the journals. Certainly, as Freese (2007, 156) notes, such an approach "also increases the extent to which articles that command scarce journal space are instructive to other researchers by allowing interested others to see more details of how exemplary work was done." In addition, however, there is a very practical benefit to promoting the widespread proliferation of replication studies: the prospect that work will be replicated promotes greater scholarly honesty in research. The pressure to produce "positive results" provides all sorts of incentives for "cooking" or "massaging" the results and, in the worst case, for falsification of findings. Knowing that their work will be replicated (and, perhaps even more important, that this replication will be available for public scrutiny) holds the original authors accountable for their work (which, I would imagine, is true for qualitative work as well), thus acting as a deterrent to such irresponsible behavior. The adoption of such a standard certainly will not solve all issues regarding academic honesty or prevent the search for "positive results" (the registration of research designs prior to the conduct of a project would also disincentivize such behaviors), but it would be a big step in the right direction.

Note, however, that the move toward the adoption of replication standards and data transparency has not been without its critics (Gherghina and Katsanidou 2013). For instance, James Gibson (1995) has argued strongly against the introduction of journal-enforced replication standards as implied by the community model, suggesting that such a move would lead to a focus on minor methodological "trivia" as opposed to theory, and to a minimization of the value of the analysis of large secondary data sets in favor of small original ones (Gibson 1995, 475).

Another concern is that journal-enforced replication standards may lead to poorly conducted replication studies being submitted to the journals. As Funder (2013) contends in an editorial, "Does 'Failure to Replicate' Mean Failed Science?", although some egregious cases in psychology may cause alarm, such fraud is actually very rare and "focusing on them too tightly can be misleading." There are many reasons why replications fail: the replication study may not follow the exact methods used by the original research project; the replicator may lack the necessary skills to replicate the original study; or the original finding may simply have been a "lucky accident." Further, scholars work very hard to work through the "chaos" of social and political reality and often are quite eager to make their results public. Although at times they may be too eager to report results, scholars' reputations and careers are on the line. A potential concern arising from an emphasis on replication studies is that such an emphasis, if endorsed by the discipline's major journals, will incentivize "witch hunts" and an effort to "slay giants" as a career pursuit. Perhaps this can be allayed by careful review of all replication studies, but this is beyond the capacity (and currently the willingness) of most journals.

Although the literature has focused largely on advocating the provision of replication data, and its obvious benefits for the scholarly community, much less empirical analysis has been done on the current state of the discipline and on how political science compares with other fields in the social sciences.

WHAT IS THE CURRENT STATE OF THE FIELD AMONG JOURNALS IN THE SOCIAL SCIENCES?

Little empirical work has examined the state of the discipline regarding how journals deal with the provision of replication data. A recent and very important exception, however, is an article in European Political Science by Sergiu Gherghina and Alexia Katsanidou (2013). Surveying the websites of 120 political science and international relations journals, and then following up with a survey of editors, the authors found that only 19 journals had any policies regarding replication (though, importantly, nearly all of the high-impact journals and general journals had such policies in place).

Although this may seem a remarkably low proportion of journals, the lack of emphasis on the provision of replication data is not limited to political science. Quantitative sociologists have also long lamented the lack of availability of replication data in the leading sociology journals (see Freese 2007 for a valuable overview of the situation in sociology). In economics, largely as the result of a series of studies that reported dismal rates of author cooperation and of reproducible results (Dewald, Thursby, and Anderson 1986; McCullough, McGeary, and Harrison 2006; McCullough and Vinod 2003), the official journals of the American Economic Association that publish original empirical research now have an extensive policy regarding the availability of data and materials for replication.

However, perhaps the greatest effort to address the issue of replication has occurred in psychology, and in many ways psychology is taking the lead in promoting data access and replication studies in the social sciences (Funder 2013). Although a long tradition of experimental replication exists in the field, there have nonetheless been remarkably low levels of cooperation in data sharing. A study of 141 articles in American Psychological Association journals (whose stated policy, like that of many political science and sociology journals, puts the responsibility for data availability on the authors) found only 27% compliance with repeated requests for data for verification purposes (Wicherts et al. 2006). More recently, concern over the falsification of results is growing, as is the call for the provision of data for reproducibility purposes (the recent case of the discredited Dutch social psychologist Diederik Stapel has highlighted these concerns).[2] So great has this concern become in psychology that a group of psychologists has launched the "Reproducibility Project" as part of the "Open Science Framework," which aims to replicate results that appeared in 2008 in three leading psychology journals (Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition) (see http://chronicle.com/blogs/percolator/is-psychology-about-to-come-undone/29045).

Thus, providing reproducible data for replication is a challenge facing not only political science but most of the social sciences.

INDIVIDUAL VERSUS COMMUNITY RESPONSIBILITY IN POLITICAL SCIENCE JOURNALS

What of the issue of individual versus community responsibility for the provision of replication data? A closer look at the journals in political science and international relations reveals a mixed picture in terms of who is responsible for providing access to data for replication. As Gherghina and Katsanidou (2013) point out, most of the journals in political science and international relations do not have a policy regarding replication. Of the 19 that do, most emphasize individual responsibility for the provision of data for replication purposes. Some very important exceptions exist, particularly the recent changes adopted by the American Journal of Political Science and the policies of several leading international relations journals.

Thus far I have discussed the distinction made in the current literature between individual and community-based models of responsibility for providing replication data only in terms of whether the journals or the individual authors provide access to the data. Perhaps it would be more useful to frame the choices in terms of provision of data (who is responsible for holding the replication files and making them available on request) and enforcement of provision (who makes sure that the data are actually accessible).

Table 1 illustrates three basic models of replication files management based on these two dimensions. First is what I label the Journal Responsibility Model (JRM), a form of community provision, in which the journal requires that data be provided to the journal prior to publication of an article (and stored either by the journal or at a community site such as Dataverse); the journal then makes the data available on request to scholars who seek to replicate the findings of the study. The journal naturally enforces provision of the data. A second model, the Journal Certification Model (JCM), is also a form of community provision, but differs from the Journal Responsibility Model in that the individual author(s) are responsible for holding the data and making it available (perhaps on the scholar's website), while the journal enforces provision by requiring some form of certification that the data are accessible prior to publication of the article (in a variation of this model, the journal "requires" public provision but does not enforce this requirement). In the third model, the Trust Model, the author(s) are responsible for provision of the data and the journal trusts that the author(s) will provide the data on request.

Table 1 Three Models of Journal Replication Data Management

Model                                 Provision (who holds the files)                Enforcement (who ensures access)
Journal Responsibility Model (JRM)    Journal (or community site such as Dataverse)  Journal
Journal Certification Model (JCM)     Author(s)                                      Journal (certification prior to publication)
Trust Model                           Author(s)                                      None (journal trusts the author)

Generally, the norm among journals has been to emphasize the individual's responsibility for providing replication data when requested, with the journals trusting that this is done (the trust model). For instance, the American Political Science Review emphasizes this when its instructions to authors state that if

your manuscript contains quantitative evidence and analysis, you should describe your procedures in sufficient detail to permit reviewers to understand and evaluate what has been done and—in the event the article is accepted for publication—to permit other scholars to replicate your results and to carry out similar analyses on other data sets…. In addition, authors of quantitative or experimental articles are expected to address the issue of data availability. You must normally indicate both where (online) you will deposit the information that is necessary to reproduce the numerical results and when that information will be posted (such as "on publication" or "by [definite date]"). You should be prepared, when posting, to provide not only the data used in the analysis but also the syntax files, specialized software, and any other information necessary to reproduce the numerical results in the manuscript.

Similar language regarding the provision of data for replication purposes appears in the Journal of Politics:

Authors of quantitative papers published in the JOP must address the issue of data availability in the first footnote of their paper. Authors are expected to indicate both where (online) they will deposit the information necessary to reproduce their numerical results and when that information will be posted. Authors should include not only the data used in the analysis but also the syntax files, specialized software, and any other information necessary to reproduce the numerical results in the manuscript. A statement explaining why the data or other critical materials used in the manuscript cannot be shared, or justifying their embargo for a limited period beyond publication may fulfill this requirement. However, we strongly encourage our authors to comply with the spirit of this policy and embrace the scientific norms of professional accountability and openness.

Although the guidelines include the checklist offered by King (2003), in both cases it is clearly the author's responsibility to provide data, not the journal's. There are no specific measures to ensure that the data are actually provided, other than the expressed expectation that authors do so. Neither journal currently provides a site for making replication files available for its published pieces.

In contrast, the American Journal of Political Science has recently moved in the direction of community provision of replication files, in terms of both submission of data and certification that such data will be accessible if not submitted prior to publication of the article. The journal requires that on acceptance for publication the “manuscript will not be published unless the first footnote explicitly states where the data used in the study can be obtained for purposes of replication and any sources that funded the research.” Further, and perhaps most important, the journal provides a site for storage of all replication files at the “AJPS Data Archive on Dataverse.”

Several major international relations journals, particularly those associated with the International Studies Association (ISA), have generally followed the Journal Responsibility Model and the Journal Certification Model in that they require the provision of replication files as a condition for publication, and these files are posted publicly by the journals. This was a direct result of a symposium on "Replication in International Studies Research" organized by one of the association's journals, International Studies Perspectives, in 2003. The symposium was derived from a set of papers that had been presented at the 2002 International Studies Association meeting in New Orleans. As a result of these efforts, four leading international relations journals adopted a single common replication policy (James 2003; Gleditsch et al. 2003a; Gleditsch et al. 2003b): International Studies Quarterly, Journal of Peace Research, Journal of Conflict Resolution, and International Interactions.

One of these journals was the flagship journal of the ISA, International Studies Quarterly (ISQ), whose submission guidelines clearly state the requirement that authors make "their data … fully accessible. If the data in question are not already publicly archived, authors will be required to certify that the data are readily available to others. Requests for copies of the data must be addressed to the author or authors, and not the offices of ISQ." Thus, there is no requirement that data be deposited with ISQ as long as the author can document that they are publicly archived elsewhere. If not, the data are archived with the journal and made public on the ISA's website at http://www.isanet.org/Publications/ISQ/ReplicationData.aspx.

As these examples illustrate, there is considerable variation in the implementation of replication policies by journals in political science (and, to a lesser extent, international relations). Many journals do not have any policies to speak of (as clearly indicated by the work of Gherghina and Katsanidou 2013), and even among those that do, only a few have embraced the community-based model of requiring submission of replication files prior to publication of an article. Why have journals been slow to adopt a community-based standard?

One possible reason for the hesitancy is the lack of space to store replication files. This is probably more true for specialized journals that do not have the resources of the major general journals, which are supported by subsidies from major academic presses. However, insufficient storage space may become less of a problem with the availability of storage sites such as Dataverse or sites made available by professional associations (such as the ISA).

A more vexing problem is what to do with nonquantitative pieces that appear in the journals. Indeed, in many journals, including the major ones, the emphasis on qualitative and/or normative work is increasing, and such work does not lend itself as easily to storage and access (and does not necessarily follow the protocol provided by King 2003). The major journals are imprecise about data provision and enforcement and mostly leave the provision of qualitative data entirely up to the authors. Thus the APSR states:

… authors of qualitative, observational, or textual articles, or of articles that combine such methods with quantitative analysis, should indicate their sources fully and clearly enough to permit ready verification by other scholars—including precise page references to any published material cited and clear specification (e.g., file number) of any archival sources. Wherever possible, use of interactive citations is encouraged. Where field or observational research is involved, anonymity of participants will always be respected; but the texts of interviews, group discussions, observers' notes, etc., should be made available on the same basis (and subject to the same exceptions) as with quantitative data.

(see http://www.apsanet.org/content_43805.cfm)[3]

However, as indicated in several pieces in this PS symposium (particularly the contributions by Elman and Kapiszewski, and by Moravcsik), new standards and new ideas for the provision of qualitative data for research transparency purposes are being developed. Thus, the major journals soon should be in a position to enact some of these recommended standards.

PUBLICATION VENUES FOR REPLICATION STUDIES?

Perhaps a more important issue, at least from the perspective of the journals (and one that has not received nearly as much attention in the literature), is where replication studies should be published. If the prospect of public replication of published work is to deter scholarly dishonesty or misrepresentation of results, identifying a venue for the publication of such work should be a central part of any discussion of the adoption of replication policies in political science. Simply providing access to data is not enough: an outlet for the publication of such material provides an incentive for scholars to engage in what is often a time-consuming activity with few obvious rewards.

The editors of the APSR have been discussing this issue for some time. In many ways this was prompted by several recent exchanges we had with a scholar who had obtained the replication data from the authors of a manuscript that had appeared in an earlier issue of the Review (in 2010, prior to the University of North Texas team taking the reins of the journal). After obtaining the replication data from the authors of the original piece (with the editors' help), the scholar attempted to replicate the results but was unable to do so. The scholar notified us and asked where such a replication study could be published. Our policy at the APSR (which was also the policy of all of our predecessors, and is the policy of most major journals in the social sciences as well) is not to publish works that are only replication studies, because they do not represent the kind of original work we publish in the Review.

There are very good reasons for the APSR's policy, and we strongly believe in continuing it. We do believe, however, that a very good point was made. A venue for the publication of replication studies is necessary, especially if the discipline aspires to raise the degree of scientific rigor in the field. However, as editors of the APSR we are also reluctant to publish such studies in the Review, because this would open up a "cheap" way for authors to have their work published in the APSR, and every Tom, Dick, and Harriet (pardon the expression) could potentially seek to replicate some study just to get published in the Review. Most other major journals in the field, we believe, do not publish pieces that are solely replication studies (certainly this is true of the APSR, AJPS, and JOP, as well as the major international relations journals).

Certainly, occasional "Forums" have been published in the Review, and in other journals as well. These potentially allow for the incorporation of replication studies in a rebuttal and a rejoinder; however, such instances are too rare to address the general issue. No APSA venue currently provides for the publication of replication studies of pieces that appear in APSA journals (some replications of APSR articles appear in journals outside of APSA, but not in an APSA publication). If we are serious about promoting research transparency and scholarly integrity via access to replication files, we must also, as a community, provide a venue for this material to be made public (and published). Given the challenges associated with publishing replication attempts, researchers currently have little incentive to conduct such studies.

What are some ways to provide such publication venues? One model is offered by psychology. Replication studies have rarely appeared in psychology journals, so the Association for Psychological Science (APS) has created a special section in one of the society's journals dedicated to the publication of replication reports. The new Registered Replication Reports article type in Perspectives on Psychological Science seeks to provide an outlet for work that replicates research in psychology. The journal argues that:

  • psychological science should emphasize findings that are robust, replicable, and generalizable;

  • direct replications are necessary to estimate the true size of an effect;

  • well-designed replication studies should be published regardless of the size of the effect or statistical significance of the result; and

  • traditional psychology journals do not have the space or inclination to publish such reports (see http://www.psychologicalscience.org/index.php/news/releases/initiative-on-research-replication.html).

Note that Perspectives on Psychological Science (although a highly ranked journal) is not a general research journal. Rather, its purpose and function are similar to those of PS and Perspectives on Politics in political science, and of International Studies Perspectives and the International Studies Review in international relations. As such, it publishes "reports and articles, including broad integrative reviews, overviews of research programs, meta-analyses, theoretical statements, book reviews, and articles on topics such as the philosophy of science, opinion pieces about major issues in the field, autobiographical reflections of senior members of the field, and even occasional humorous essays and sketches." To follow this model would require a special section of PS reserved for replication studies.

A second, minimalist alternative would be to provide an electronic, blog-like venue for the publication of replication studies, something like the "Monkey Cage," a very popular blog/newsletter read by thousands of political scientists (and policy makers) throughout the world. Certainly this would make replication findings more public, would require considerably less space in an existing journal (and fewer resources than a new journal), and could be seen as a deterrent to scholarly dishonesty. However, would a blog post carry the same prestige come tenure and promotion time as an article in a peer-reviewed publication? This strategy would not provide as strong an incentive for scholars to conduct replication studies, and without such studies the deterrent effect of replication would be minimized.

A third model would be for the major journals in the discipline to offer "publication" of replication studies, but to print those studies in an online supplement directly linked to the articles that appear in the journals. Journals could highlight those articles that have been replicated multiple times, providing an important service to readers and a greater reward for better work.

A fourth model is to create an entirely new publication, such as a new APSA journal (or perhaps part of a proposed new publication). Currently the association, largely as the result of the efforts of APSA immediate past president Jane Mansbridge, has begun to assess the current array of journals and to plan for additional journals as necessary for the discipline. Such a journal, if launched by the association, could have the publication of replication reports as one of its core missions, in addition to other functions.

In short, the APSA should consider potential venues for the publication of replication studies (or perhaps "forums" or debates) of pieces that appear in APSA journals. It is not exactly clear how this should be done: whether it could be done online, whether it would require an editorial team, what the relationship would be with the existing APSA journals, and how it would relate to Cambridge University Press. But if we are to move forward as a discipline, we must have some venue available for publishing (or at least making public) such studies.

CONCLUDING REMARKS

This article argues that the move toward the adoption of replication policies by the major journals in political science raises two issues. One, who should be responsible for the provision of replication materials to the scholarly community? And, two, where should these replication studies be published?

First, the move toward a community or social policy model (either in the form of the Journal Responsibility Model or the enforced Journal Certification Model) is preferable to the individualistic policies adopted by most journals, but it raises issues of space and storage (particularly for the JRM), as well as the question of what to do with the qualitative and normative work and other forms of research published in many general political science journals. Providing replication materials only for quantitative studies would not only be incomplete but would also send the signal that only quantitative studies should be externally validated, and that other work, implicitly less important, need not be. Clearly, this is not the message that the major journals should communicate to the scholarly community. The other contributions in this symposium highlight how the journals might more effectively begin to deal with issues of data access and research transparency for qualitative work.

Second, the APSA should publish an outlet for replication studies of pieces that appear in APSA journals (although not necessarily exclusively articles that appear in APSA journals). This might involve one of the four alternative approaches discussed earlier, or perhaps another approach. Whatever the case, this is something that should be part of the discussion of replication and research transparency and that has not, in my view, been adequately addressed.

Footnotes

[1] This is not to suggest that individuals who are responsible for providing replication data are not responding to group norms emanating from a scholarly community. It means that the primary responsibility for providing data lies with the author, not the journal.

[2] The Stapel case is not the only recent controversy in psychology that has increased the call for more replication studies. For a discussion of other cases see Roediger (2012).

[3] As for the AJPS, the guidelines do not speak directly to the issue of qualitative data at all, although they speak of "supporting information" and require that such material be "made ready for permanent posting" ("manuscripts without data or SI are exempt"). See http://www.ajps.org/manu_guides.html.

REFERENCES

Dewald, William G., Thursby, Jerry G., and Anderson, Richard G. 1986. "Replication in Empirical Economics: The Journal of Money, Credit, and Banking Project." American Economic Review 76: 587–603.
Freese, Jeremy. 2007. "Replication Standards for Quantitative Social Science: Why Not Sociology?" Sociological Methods and Research 36: 153–72.
Funder, David. 2013. "Does 'Failure to Replicate' Mean Failed Science?" Live Science. http://www.livescience.com/32041-revisiting-science-studies.html (accessed September 30, 2013).
Gherghina, Sergiu, and Katsanidou, Alexia. 2013. "Data Availability in Political Science Journals." European Political Science. Advance online publication, March 1. doi:10.1057/eps.2013.8.
Gibson, James L. 1995. "Cautious Reflections on a Data Archiving Policy for Political Science." PS: Political Science and Politics 28: 473–76.
Gleditsch, Nils Petter, James, Patrick, Ray, James L., and Russett, Bruce. 2003a. "Editors' Joint Statement: Minimum Replication Standards for International Relations Journals." International Studies Perspectives 4: 105.
Gleditsch, Nils Petter, Metelits, C., and Strand, H. 2003b. "Posting Your Data: Will You Be Scooped or Will You Be Famous?" International Studies Perspectives 4: 89–97.
James, Patrick. 2003. "Replication Policies and Practices in International Studies Quarterly." International Studies Perspectives 4: 85–88.
King, Gary. 1995. "Replication, Replication." PS: Political Science and Politics 28: 443–99.
King, Gary. 2003. "The Future of the Replication Movement." International Studies Perspectives 4: 100–05.
King, Gary. 2006. "Publication, Publication." PS: Political Science and Politics 39: 119–25.
McCullough, B. D., McGeary, Kerry Anne, and Harrison, Teresa D. 2006. "Lessons from the JMCB Archive." Journal of Money, Credit, and Banking 38: 1093–107.
McCullough, B. D., and Vinod, H. D. 2003. "Verifying the Solution from a Nonlinear Solver." American Economic Review 93: 873–92.
Roediger, Henry. 2012. "Psychology's Woes and a Partial Cure: The Value of Replication." APS Observer 25 (2). http://www.psychologicalscience.org/index.php/publications/observer/2012/february-12/psychologys-woes-and-a-partial-cure-the-value-of-replication.html (accessed September 30, 2013).
Wicherts, Jelte M., Borsboom, Denny, Kats, Judith, and Molenaar, Dylan. 2006. "The Poor Availability of Psychological Research Data for Reanalysis." American Psychologist 61: 726–28.