Peer Review: Frequently Asked Questions

Peer review sits at the center of how science decides what counts as knowledge — and yet it remains one of the most misunderstood processes in academic and professional life. These questions address the mechanics, the misconceptions, the standards, and the practical realities of peer review across scientific, medical, and academic contexts.

What are the most common misconceptions?

The biggest one: that peer review certifies a study is correct. It doesn't. Peer review is a quality filter, not a truth guarantee. Reviewers check methodology, logic, and presentation — they rarely have access to raw data, and they almost never attempt to replicate the experiment. The Reproducibility Project, coordinated by the Center for Open Science, found that only about 36 of 100 psychology studies replicated successfully when independently repeated — and all 100 had passed peer review.

A close second: the assumption that anonymous review is neutral. Double-blind review (where neither author nor reviewer knows the other's identity) reduces some bias, but studies published in journals including JAMA and Nature have documented that reviewer bias by institution prestige and geographic origin persists even under blinded conditions.

Third, peer review is often imagined as fast. In practice, the median time from submission to first decision at major biomedical journals runs between 30 and 90 days, with revision cycles frequently extending the full process past 12 months.

Where can authoritative references be found?

The home page at peerreviewauthority.com consolidates foundational resources, but primary references exist across several named institutions. The Committee on Publication Ethics (COPE) publishes guidelines and flowcharts covering editorial decisions, conflicts of interest, and reviewer conduct. The International Committee of Medical Journal Editors (ICMJE) maintains recommendations that over 5,000 journals have formally adopted. For federally funded work, the National Institutes of Health (NIH) maintains public-access and preprint policies, and the Office of Research Integrity (ORI), part of the U.S. Department of Health and Human Services, provides policy documentation on research integrity.

How do requirements vary by jurisdiction or context?

Peer review standards are not uniform — they shift dramatically depending on funding source, publication venue, and regulatory context. Work funded by NIH or NSF operates under federal guidelines that mandate transparent conflict-of-interest disclosure and, increasingly, open data requirements. The European Research Council imposes its own framework, and journals published under Plan S mandates must comply with open-access standards that affect how peer review is conducted and documented.

In clinical research, the distinction matters enormously. Manuscripts submitted to journals indexed in MEDLINE face ICMJE editorial standards. Regulatory submissions to the FDA use an internal scientific review process that differs structurally from journal peer review — the agency's review is not anonymous, and it carries legal weight that no journal review does.

What triggers a formal review or action?

Three conditions most reliably initiate a formal peer review action: submission of a manuscript to a journal, application for competitive grant funding, and post-publication concern raised about data integrity. The third category — post-publication review — has grown significantly with the rise of platforms like PubPeer, where researchers flag image manipulation, statistical anomalies, or duplicate data.

Institutional review is a separate track. When a university receives a formal allegation of research misconduct under 42 CFR Part 93 (the federal regulation governing misconduct in PHS-funded research), it must complete an inquiry within 60 days (42 CFR §93.307) and, if an investigation is warranted, complete that investigation within 120 days of its start (42 CFR §93.311).
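These regulatory windows can be made concrete with simple date arithmetic. The sketch below is illustrative only: the 60- and 120-day figures follow the timelines described above, but the function name, the calendar-day arithmetic, and the assumption that the investigation clock starts the day the inquiry closes are all simplifications for demonstration, not a legal interpretation of 42 CFR Part 93.

```python
from datetime import date, timedelta

# Nominal windows from the timelines described in the text.
INQUIRY_WINDOW_DAYS = 60
INVESTIGATION_WINDOW_DAYS = 120

def misconduct_deadlines(allegation_received: date) -> dict:
    """Compute nominal deadlines for an inquiry and, if warranted,
    the subsequent investigation (worst-case, calendar days)."""
    inquiry_due = allegation_received + timedelta(days=INQUIRY_WINDOW_DAYS)
    # Assumption: the investigation begins the day the inquiry closes;
    # in practice its 120-day clock runs from its own start date.
    investigation_due = inquiry_due + timedelta(days=INVESTIGATION_WINDOW_DAYS)
    return {"inquiry_due": inquiry_due, "investigation_due": investigation_due}

deadlines = misconduct_deadlines(date(2024, 1, 15))
```

An allegation received on 15 January 2024 would, under these assumptions, put the inquiry deadline 60 calendar days out and the investigation deadline 120 days after that.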

How do qualified professionals approach this?

Experienced reviewers treat a manuscript review as a structured audit, not a read-through. A disciplined approach typically follows this sequence:

  1. Conflict check — Identify any competing interests with the authors, institutions, or funders before reading.
  2. Abstract and conclusion scan — Assess whether claims match findings before examining methodology.
  3. Methods scrutiny — Evaluate sample size justification, statistical tests, and controls.
  4. Results verification — Check that figures, tables, and in-text numbers are internally consistent.
  5. Literature audit — Identify whether relevant prior work is cited or conspicuously omitted.
  6. Constructive written summary — Separate major concerns (methodology flaws) from minor ones (citation gaps, figure quality).
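The sequence above can be sketched as a small data structure that enforces the ordering and keeps major and minor concerns separate. Everything here is an assumption for illustration: the `ReviewAudit` class, its fields, and the recommendation heuristic are invented for this sketch and do not correspond to any journal's actual workflow tooling.

```python
from dataclasses import dataclass, field

# Step names mirror the six-step audit described in the text.
STEPS = [
    "conflict check",
    "abstract and conclusion scan",
    "methods scrutiny",
    "results verification",
    "literature audit",
    "constructive written summary",
]

@dataclass
class ReviewAudit:
    completed: list = field(default_factory=list)
    major_concerns: list = field(default_factory=list)  # e.g. methodology flaws
    minor_concerns: list = field(default_factory=list)  # e.g. citation gaps

    def complete(self, step: str, concerns=(), major: bool = False) -> None:
        # Enforce the sequence: each step may only be logged in list order.
        expected = STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected step {expected!r}, got {step!r}")
        self.completed.append(step)
        (self.major_concerns if major else self.minor_concerns).extend(concerns)

    def recommendation(self) -> str:
        # Invented heuristic echoing the text: major flaws suggest
        # "major revision" when the core question remains valid.
        if self.major_concerns:
            return "major revision"
        return "minor revision" if self.minor_concerns else "accept"
```

Separating major from minor concerns at logging time makes the final written summary (step 6) a matter of formatting rather than recollection, which is the point of treating the review as an audit.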

The distinction between major revision and rejection is a judgment call that separates experienced reviewers from novices — the former is used when the core question is valid but execution is flawed; the latter when the fundamental premise or design is unsalvageable.

What should someone know before engaging?

First-time reviewers are often surprised to learn that most journals offer no formal training — acceptance of a review invitation is essentially an implicit claim of competence. Elsevier, Wiley, and the American Chemical Society each publish reviewer guidelines, but these are instructional documents, not prerequisites. Publons (now part of Web of Science) operates a structured Peer Review Training program that logs completed coursework and reviews on a verifiable profile.

Authors engaging with the process should understand that reviewer comments, however bluntly phrased, carry no institutional authority. A reviewer recommendation to reject is advisory — editorial decision-making power rests solely with the editor.

What does this actually cover?

Peer review covers the evaluation of scientific and scholarly work, not its production. It is a process of structured critical assessment applied to manuscripts, grant proposals, conference abstracts, and institutional research programs. It does not cover fabrication detection with certainty, statistical re-analysis of raw data in most cases, or legal adjudication of scientific disputes. For a detailed breakdown of what the process includes, the "how it works" guide covers the mechanics step by step.

What are the most common issues encountered?

Reviewer fatigue is the system's most documented structural problem. A 2023 analysis published in PLOS ONE estimated that the global peer review system requires approximately 15,000 reviewer-years of effort annually, while the supply of willing, qualified reviewers grows more slowly than submission volume. The result is longer wait times and declining acceptance rates at high-volume journals.

Beyond capacity, the four most recurring process failures are:

References