Types of Peer Review: Single-Blind, Double-Blind, Open, and Post-Publication
Peer review is not a monolithic process. The term describes a family of evaluation methods, each with distinct rules about who knows what, when they know it, and how feedback is recorded and attributed. The choice of review type shapes incentive structures, bias exposure, and the integrity of the scientific record in ways that matter far beyond editorial preference. Understanding these distinctions helps researchers navigate submission decisions, helps readers assess published work more accurately, and helps institutions design review processes that match their stated values.
The four principal models — single-blind, double-blind, open, and post-publication peer review — are not equally suited to every context. Each represents a specific trade-off between accountability, candor, bias mitigation, and practical feasibility.
Single-Blind Peer Review
In single-blind peer review, reviewers know the identities of the authors, but authors do not know who reviewed their work. This is the oldest and still the most widely practiced model across scientific disciplines. Major publishers including Elsevier, Springer Nature, and Wiley employ it as their default in many journals.
The asymmetry is intentional in principle: knowing the source of a manuscript theoretically helps reviewers contextualize the work; a methodology paper from a lab with a track record in that technique may warrant different calibration than one from a group with no prior output in the area. Critics, however, argue the asymmetry functions in reverse. Research published in Nature and elsewhere has documented consistent biases in single-blind review favoring authors at prestigious institutions, male authors, and researchers in high-income countries. A controlled experiment by Tomkins, Zhang, and Heavlin (2017) on submissions to the WSDM conference found that single-blind reviewers were significantly more likely than their double-blind counterparts to bid on and to recommend acceptance of papers from famous authors and top-ranked institutions.
The model persists largely for practical reasons. Anonymizing authors in fields with small, specialized communities is often futile — methodology, citation patterns, and prior work make authorship identifiable regardless of formal anonymization. In those environments, the formal apparatus of double-blinding may provide false assurance.
Double-Blind Peer Review
Double-blind peer review anonymizes both parties: authors do not know the identities of reviewers, and reviewers do not know the identities of authors. The goal is to evaluate the work on its merits alone, insulated from reputational heuristics.
The Committee on Publication Ethics (COPE), which sets widely adopted ethical standards for academic publishing, identifies double-blind review as a meaningful structural protection against certain categories of bias. The model is standard practice across much of the social sciences and humanities, where author identity correlates strongly with reviewer expectation, and is increasingly offered as an option by major science journals.
Practical challenges are substantial. Effective anonymization requires authors to scrub self-citations, acknowledgments, funding disclosures, and institutional affiliations from submitted manuscripts — a burden that can obscure legitimate context and that experienced reviewers frequently circumvent regardless. A 2014 study in PLOS ONE found that reviewers correctly identified at least one author in approximately 30% of supposedly double-blind submissions in their field.
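To see what enforcement at the submission stage involves, consider the screening pass a journal might run before a manuscript reaches reviewers. The Python sketch below is a minimal illustration, not any journal's actual tooling; the patterns, the screen_manuscript helper, and the sample text are all assumptions chosen to show the categories of leak described above.

```python
import re

# Illustrative patterns for identity leaks in "anonymized" manuscripts;
# real screening tools use richer heuristics than these regexes.
LEAK_PATTERNS = {
    "first-person self-citation": re.compile(
        r"\b(?:our|my)\s+(?:previous|prior|earlier)\s+work\b", re.I),
    "acknowledgments section": re.compile(
        r"^\s*acknowledg(?:e)?ments?\b", re.I | re.M),
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "grant number": re.compile(
        r"\bgrant\s+(?:no\.?\s*|number\s*)?[A-Z0-9][A-Z0-9/-]*\d[A-Z0-9/-]*\b",
        re.I),
    "funding disclosure": re.compile(r"\b(?:funded|supported)\s+by\b", re.I),
}

def screen_manuscript(text: str) -> list[tuple[str, str]]:
    """Return (issue, excerpt) pairs for each potential identity leak."""
    findings = []
    for label, pattern in LEAK_PATTERNS.items():
        for match in pattern.finditer(text):
            start = max(match.start() - 20, 0)
            excerpt = text[start:match.end() + 20].replace("\n", " ")
            findings.append((label, excerpt.strip()))
    return findings

if __name__ == "__main__":
    sample = ("As shown in our previous work (Smith et al., 2021), ...\n"
              "Acknowledgments\n"
              "This study was supported by grant no. R01-AB123456.")
    for issue, excerpt in screen_manuscript(sample):
        print(f"{issue}: ...{excerpt}...")
```

Even a crude pass like this catches the most common leaks (first-person self-citation, acknowledgment sections, grant numbers), which is part of what rigorous enforcement at the submission stage looks like in practice.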
The model works best in larger disciplines where the pool of active researchers exceeds the point of easy recognition, and where journals enforce anonymization protocols rigorously at the submission stage.
Open Peer Review
Open peer review replaces anonymity with transparency. Definitions vary, but the core feature is that reviewer identities are disclosed — either to the authors during review, to the public after publication, or both. Some implementations also publish the full text of reviewer reports alongside the accepted manuscript, creating a permanent, citable record of the deliberative process.
The BMJ adopted signed peer review in 1999 and remains one of the most prominent examples of open review in medicine. eLife, PeerJ, and F1000Research publish reviewer reports as a matter of policy. The rationale is direct: accountability improves conduct. Reviewers who sign their names are less likely to produce dismissive, careless, or retaliatory reviews.
Evidence on this is mixed. Signed reviews tend to be more constructive in tone. They also tend to be less critical in substance — a result consistent with social pressure dynamics that make open criticism uncomfortable when names are attached. Researchers in early career stages or at less prestigious institutions may be particularly reluctant to submit critical signed reviews of work from senior figures who may influence hiring or funding decisions.
STM, the International Association of Scientific, Technical and Medical Publishers, has tracked rapid adoption of open review elements among member journals, noting that hybrid approaches that publish reports without necessarily naming reviewers are increasingly common as journals attempt to capture transparency benefits while managing reviewer reluctance.
For a broader overview of how review processes are structured from submission through decision, see How It Works and Key Dimensions and Scopes of Peer Review.
Post-Publication Peer Review
Post-publication peer review relocates evaluation from the gatekeeping stage to the ongoing public record. Rather than determining whether work is published, it responds to work that is already public. Platforms such as PubPeer host structured commentary on published papers with varying degrees of editorial moderation, F1000Research builds open post-publication review directly into its publishing model, and Retraction Watch documents the corrections and retractions that such scrutiny can produce.
The model gained significant visibility during the COVID-19 pandemic, when preprints on medRxiv and bioRxiv accumulated formal post-publication commentary faster than traditional journal review could respond to rapidly evolving evidence. The National Institutes of Health, which funds the bulk of American biomedical research and whose peer review of grant applications is mandated at 42 U.S.C. § 289a, explicitly addressed preprint citation in grant applications during this period, reflecting institutional recognition that the boundary between pre- and post-publication review had shifted.
Post-publication review is not a replacement for pre-publication gatekeeping in most contexts — it functions better as a correction mechanism than a primary filter. High-profile retractions, including those catalogued by Retraction Watch (which has documented over 45,000 retracted papers since its founding in 2010), frequently involve post-publication identification of errors or misconduct that pre-publication review failed to catch.
The model raises questions about who participates, what weight commentary carries, and how journals integrate external criticism into formal records. These questions are active areas of policy development at organizations including COPE and the World Association of Medical Editors (WAME).
Choosing the Right Model: What Researchers and Institutions Should Know
No single model eliminates bias or guarantees quality. Each makes specific trade-offs that are more or less acceptable depending on field norms, journal scope, author demographics, and institutional priorities. A discipline with 200 active specialists has different structural needs than one with 20,000.
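The size point can be made concrete with a back-of-envelope calculation. The sketch below is purely illustrative; the assumption that roughly 30 colleagues could recognize an author's work from methods and citations alone, and the panel of three reviewers, are invented parameters rather than empirical estimates.

```python
from math import comb

def p_recognized(community: int, insiders: int, panel: int = 3) -> float:
    """Chance that at least one of `panel` randomly assigned reviewers is
    among the `insiders` able to identify the author, in a field with
    `community` active researchers: 1 - C(N - n, k) / C(N, k)."""
    return 1 - comb(community - insiders, panel) / comb(community, panel)

# Assumed for illustration: 30 researchers close enough to the author's
# methods and citation patterns to recognize the work on sight.
for community in (200, 20_000):
    print(f"N = {community:>6}: P(recognized) = {p_recognized(community, 30):.1%}")
```

Under those assumptions, nearly two in five review panels in the 200-person field include someone positioned to identify the author, against well under one percent in the 20,000-person field, which is why formal double-blinding delivers much less in small communities.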
Researchers submitting manuscripts benefit from reviewing a journal's stated review policy before submission — not all journals that claim double-blind review enforce anonymization protocols effectively. Institutions designing internal review processes for grant applications or technical reports should evaluate which biases they are most concerned about and which model addresses those specifically.
Regulatory guidance on peer review for federally funded research is addressed in part through NIH peer review policy documentation and the requirements of the Public Health Service Act. The National Science Foundation similarly requires peer review of all competitive research proposals under its merit review framework.
For answers to common questions about the review process, see the Peer Review Frequently Asked Questions page. If you are navigating a specific review situation and need guidance on next steps, How to Get Help for Peer Review covers available resources and what to expect from them. Questions not addressed here can be directed through the contact page.
References
- Tomkins, A., Zhang, M., & Heavlin, W. D. (2017). Reviewer bias in single- versus double-blind peer review. PNAS, 114(48), 12708–12713.
- Committee on Publication Ethics (COPE): publicationethics.org
- World Association of Medical Editors (WAME): wame.org
- NIH Peer Review Policy: grants.nih.gov/policy/peer