Open Access and Peer Review: The Changing Landscape
The relationship between open access publishing and peer review has become one of the most consequential debates in modern science. For researchers, funders, institutions, and readers, understanding how these two forces interact determines not just where science gets published but how far its findings can be trusted. This page explains what open access actually means for peer review quality, where the genuine tensions lie, and how to evaluate whether a given publication meets rigorous standards.
What Open Access Actually Means
Open access (OA) refers to research that is freely available online without price barriers such as subscriptions or pay-per-view fees, and, under most definitions, with few licensing restrictions on reuse. The term covers several distinct models that are frequently conflated.
Gold open access means the final published article is immediately freely available, typically because the author or their institution pays an article processing charge (APC). Green open access means the author deposits a version of the manuscript — usually a preprint or accepted manuscript — in an open repository, while the journal itself may still require a subscription. Diamond open access refers to journals that charge neither readers nor authors, typically supported by institutions, societies, or grants.
The Budapest Open Access Initiative (2002), widely considered the founding document of the open access movement, defined OA as "free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles." That framing defined access, not quality: the initiative assumed the literature in question would remain peer reviewed, but it made no review standard part of the definition, an ambiguity that has persisted in the decades since.
Open access is a distribution model, not a quality standard. That distinction matters enormously when evaluating whether a paper published in an OA journal has undergone rigorous review.
How Open Access Funding Models Affect Peer Review
The APC model introduces a structural tension that has been widely documented. When a journal's revenue depends on accepting articles, the incentive to reject manuscripts — and therefore reject income — creates at minimum an appearance of conflict. Critics argue this pressure can distort editorial decision-making even at journals with ostensibly rigorous review processes.
This concern is not theoretical. The rise of predatory publishing — journals that collect APCs while providing little or no genuine peer review — represents the most visible failure of the APC model when applied without institutional accountability. Jeffrey Beall, a librarian at the University of Colorado Denver, maintained a widely referenced list of potentially predatory publishers and journals before it was taken offline in 2017. The criteria he developed, which assessed editorial transparency, board legitimacy, and review practices, remain influential in how institutions evaluate questionable journals.
The Directory of Open Access Journals (DOAJ), maintained by Infrastructure Services for Open Access (IS4OA), operates a vetting process intended to identify OA journals with legitimate peer review practices. Inclusion in the DOAJ requires journals to document their peer review type, editorial policies, and author charges transparently. The DOAJ is not a guarantee of quality, but absence from it is a meaningful signal worth investigating.
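Because the DOAJ exposes its index through a public search API, inclusion can be checked programmatically. The sketch below is a minimal Python example using the requests library; the endpoint path and response fields follow DOAJ's documented search API, but the API is versioned and evolves, so verify the details against the current documentation at doaj.org/api before relying on them.

```python
"""Check whether a journal's ISSN appears in the DOAJ.

A minimal sketch assuming DOAJ's public search API
(https://doaj.org/api/search/journals/{query}); consult
https://doaj.org/api for the current version and response schema.
"""
import requests

def in_doaj(issn: str) -> bool:
    # Query DOAJ's journal search endpoint by ISSN.
    url = f"https://doaj.org/api/search/journals/issn:{issn}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    # A nonzero hit count means the journal passed DOAJ's vetting at
    # inclusion time; absence is a signal worth investigating, not
    # proof of a predatory outlet.
    return data.get("total", 0) > 0

if __name__ == "__main__":
    print(in_doaj("2046-1402"))  # example ISSN; substitute the journal you are checking
```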
For a closer look at the structural differences between how journals handle manuscripts before and after the review process, see the discussion on peer review vs. editorial review.
Preprints and the Acceleration of Science Communication
Open access has accelerated the adoption of preprint servers — platforms where researchers post manuscripts before formal peer review. arXiv, founded in 1991 and now operated by Cornell University, hosts preprints in physics, mathematics, and related fields. bioRxiv and medRxiv, operated by Cold Spring Harbor Laboratory, have become the dominant preprint servers for life sciences and health research respectively.
Preprints allow findings to circulate rapidly, which demonstrated real value during the COVID-19 pandemic, when critical research on transmission, treatment, and vaccines moved through the scientific community weeks or months faster than formal publication would have permitted. The same dynamic also allowed flawed, unreviewed research to reach policymakers and the public before expert scrutiny had occurred.
The scientific community has not reached consensus on how to handle preprints. The International Committee of Medical Journal Editors (ICMJE), which sets authorship and reporting standards for many major medical journals, has clarified that posting preprints does not constitute prior publication that would disqualify a manuscript from submission. However, ICMJE recommends that authors note preprint status clearly and update preprints when significant errors are identified.
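For life-science preprints, one concrete question is whether a peer-reviewed version has since appeared. bioRxiv exposes this through its public API; the sketch below assumes the details endpoint and its "published" field as described at api.biorxiv.org, and the example DOI is a placeholder, so check the current API documentation before depending on these field names.

```python
"""Check whether a bioRxiv preprint has a journal-published version.

A minimal sketch assuming the bioRxiv details API
(https://api.biorxiv.org/details/biorxiv/{doi}); verify field names
against https://api.biorxiv.org before relying on them.
"""
import requests

def published_version(preprint_doi: str) -> str | None:
    url = f"https://api.biorxiv.org/details/biorxiv/{preprint_doi}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    records = resp.json().get("collection", [])
    if not records:
        return None  # DOI not found on bioRxiv
    # The API reports the journal DOI in "published" once the preprint
    # has appeared in a peer-reviewed venue, and "NA" otherwise.
    journal_doi = records[-1].get("published", "NA")
    return None if journal_doi in ("NA", "") else journal_doi

if __name__ == "__main__":
    # Placeholder DOI; substitute a real bioRxiv DOI.
    print(published_version("10.1101/2020.01.01.000000"))
```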
Understanding how different types of peer review — including post-publication review — apply to preprints helps clarify what level of scrutiny any given manuscript has actually received.
Funder Mandates and Institutional Policy
Open access is no longer purely voluntary in many research contexts. Major funding bodies now require that research they fund be made openly available within defined timeframes.
The National Institutes of Health (NIH) updated its public access policy in 2024, following the 2022 White House Office of Science and Technology Policy memorandum on public access. Effective July 1, 2025, all peer-reviewed publications arising from NIH funding must be made freely available in PubMed Central immediately upon publication — eliminating the previous 12-month embargo. The policy applies to any research where NIH funding contributed to the work.
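Deposit itself is straightforward to verify. NCBI's PMC ID Converter API maps a DOI to a PMCID when the article exists in PubMed Central; the sketch below uses that documented service, with a hypothetical tool name and contact email standing in for the identification NCBI asks callers to supply.

```python
"""Check whether a paper has been deposited in PubMed Central.

A minimal sketch using NCBI's PMC ID Converter API
(https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/).
"""
import requests

IDCONV = "https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/"

def pmcid_for(doi: str) -> str | None:
    params = {
        "ids": doi,
        "format": "json",
        "tool": "oa-check",          # hypothetical tool name
        "email": "you@example.org",  # replace with your address
    }
    resp = requests.get(IDCONV, params=params, timeout=10)
    resp.raise_for_status()
    record = resp.json()["records"][0]
    # A PMCID means the article is in PubMed Central; its absence may
    # mean no deposit, or simply that PMC does not know this DOI.
    return record.get("pmcid")

if __name__ == "__main__":
    # Placeholder DOI; substitute the article you want to check.
    print(pmcid_for("10.1234/example-doi"))
```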
In Europe, Plan S — a coordinated initiative launched in 2018 by cOAlition S, a group of national research funders including UK Research and Innovation (UKRI) and the European Research Council (ERC) — requires that research outputs from funded projects be made openly available in compliant venues immediately upon publication. Plan S does not require any particular peer review model, but it has accelerated institutional negotiations with major publishers over APC pricing and journal agreements.
These mandates have not resolved the quality question. A paper published in a compliant open access journal under funder mandate has been through whatever peer review process that journal employs — which may be rigorous, minimal, or something in between. Compliance with a mandate signals distribution policy, not review quality.
The history of peer review provides useful context for how journal economics have always intersected with review practices, long before the open access era.
Evaluating Open Access Journals: What to Look For
Determining whether an open access journal conducts genuine peer review requires looking past surface claims. Several concrete indicators are worth examining.
First, check whether the journal is indexed in established databases. MEDLINE (maintained by the National Library of Medicine), Scopus (Elsevier), and Web of Science (Clarivate) apply selection criteria that include peer review practices as an eligibility requirement. Indexing is not infallible, but it represents external vetting.
Second, examine the journal's stated review process. The Committee on Publication Ethics (COPE), a UK-registered charity that publishes peer review guidelines followed by many legitimate journals, maintains publicly available standards for what a credible review process should include. COPE membership is voluntary but signals a journal's commitment to accountable editorial practices.
Third, assess editorial board composition. Named editors with verifiable institutional affiliations and publication records are a basic marker of legitimacy. Ghost boards — lists of prominent researchers who have no actual involvement with the journal — have been documented at numerous predatory outlets.
Fourth, examine turnaround times. Legitimate journals vary considerably in speed, but review-to-decision times of 24 to 48 hours for a complex scientific manuscript are implausible unless the journal is simply not conducting review. A rough way to spot-check a journal's published intervals is sketched below.
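Many publishers deposit "received" and "accepted" dates for each article into PubMed, which makes a rough turnaround check possible with NCBI's E-utilities. The sketch below parses those history dates from the efetch XML; deposit practices vary by publisher, so a missing interval means unknown rather than suspicious, and any conclusion should rest on patterns across many of a journal's papers rather than a single article.

```python
"""Estimate a paper's submission-to-acceptance interval from PubMed.

A minimal sketch using NCBI E-utilities (efetch). Publishers deposit
"received" and "accepted" history dates inconsistently, so a missing
value means unknown, not suspicious.
"""
import datetime
import xml.etree.ElementTree as ET

import requests

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def _history_date(root: ET.Element, status: str) -> datetime.date | None:
    # PubMed XML records editorial milestones under
    # PubmedData/History/PubMedPubDate[@PubStatus=...].
    node = root.find(f".//History/PubMedPubDate[@PubStatus='{status}']")
    if node is None:
        return None
    year = int(node.findtext("Year", "1"))
    month = int(node.findtext("Month", "1"))
    day = int(node.findtext("Day", "1"))
    return datetime.date(year, month, day)

def review_interval_days(pmid: str) -> int | None:
    params = {"db": "pubmed", "id": pmid, "retmode": "xml"}
    resp = requests.get(EFETCH, params=params, timeout=10)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    received = _history_date(root, "received")
    accepted = _history_date(root, "accepted")
    if received is None or accepted is None:
        return None  # the publisher did not deposit both dates
    return (accepted - received).days

if __name__ == "__main__":
    # Placeholder PMID; substitute real ones and look for patterns
    # across many papers, not a single value.
    print(review_interval_days("12345678"))
```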
For a structured set of questions to ask when evaluating any publication's review process, the peer review frequently asked questions page offers direct guidance.
What This Means for Readers and Researchers
Open access has made more science readable by more people than at any previous point in history. That access carries with it the responsibility to read critically. A freely available paper is not a validated paper. A paywalled paper is not automatically rigorous.
The peer review system — regardless of the business model surrounding it — remains the primary mechanism by which scientific claims are subjected to expert scrutiny before being treated as established knowledge. Understanding how peer review works and what it can and cannot catch is essential context for anyone using published science to make decisions.
Funder mandates, indexing databases, COPE standards, DOAJ inclusion, and editorial board transparency each provide partial signals about whether a given publication can be trusted. No single indicator is sufficient. Informed evaluation requires looking at several of them together.
References
- Budapest Open Access Initiative (2002), budapestopenaccessinitiative.org
- Committee on Publication Ethics (COPE), publicationethics.org
- National Institutes of Health Public Access Policy, publicaccess.nih.gov
- Plan S / cOAlition S, coalition-s.org
- Directory of Open Access Journals (DOAJ), doaj.org
- International Committee of Medical Journal Editors (ICMJE), icmje.org