How It Works

Peer review is the mechanism by which scientific and scholarly communities decide what counts as reliable knowledge — and what gets published, funded, or discarded. The process routes a manuscript through independent expert evaluation before it reaches readers, acting as a distributed quality filter across tens of thousands of journals and grant agencies. Understanding the sequence, the roles, and the common points of failure helps authors, editors, and institutions navigate it with less friction.

What practitioners track

The metrics that matter in peer review are not always the ones that get the most attention. Turnaround time is the operational heartbeat: most journals target a decision within 60 to 90 days of initial submission, though fields with smaller reviewer pools — ecology, computational linguistics, rare-disease medicine — routinely run longer. The Nature family of journals publishes anonymized turnaround data showing median first-decision times ranging from roughly 5 weeks to over 20 weeks depending on the journal (Nature Portfolio Editorial Policies).
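
The metric itself is simple to compute. A minimal Python sketch, using made-up submission and decision dates rather than any journal's real data:

    from datetime import date
    from statistics import median

    # (submitted, first_decision) date pairs; values invented for the example
    decisions = [
        (date(2024, 1, 5), date(2024, 2, 20)),
        (date(2024, 1, 12), date(2024, 4, 1)),
        (date(2024, 2, 3), date(2024, 3, 10)),
    ]

    # Days elapsed between submission and first editorial decision
    days_to_first_decision = [(decided - submitted).days
                              for submitted, decided in decisions]
    print(median(days_to_first_decision))  # -> 46 days, roughly 6.5 weeks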

Acceptance rates get tracked obsessively, for good reason. High-prestige journals in biomedicine and physics reject 85 to 95 percent of submissions at some stage, often before external review even begins. That pre-review triage — sometimes called a "desk rejection" — is itself a form of editorial quality control, not a bug.
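
To make the funnel concrete, here is a back-of-the-envelope sketch. The 90 percent overall rejection rate falls inside the range above; the split between desk rejection and post-review rejection is an assumption invented purely for illustration:

    # Illustrative rejection funnel at a high-prestige journal.
    submissions = 1000
    desk_rejected = int(submissions * 0.55)      # assumed pre-review triage share
    reviewed = submissions - desk_rejected       # 450 sent to external review
    accepted = int(submissions * 0.10)           # 90% rejected overall
    rejected_after_review = reviewed - accepted  # 350 rejected post-review
    print(desk_rejected, reviewed, rejected_after_review, accepted)
    # -> 550 450 350 100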

What practitioners also track, less visibly: reviewer response rates. The average invitation-to-acceptance rate for reviewers has declined over the past decade, meaning editors at journals indexed in MEDLINE or Web of Science are often contacting 6 to 10 potential reviewers to secure 2 completed reports (Publons Global State of Peer Review 2018).
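
Those invitation numbers follow from simple arithmetic. Treating each invitation as an independent trial, the expected number of invitations is the target report count divided by the per-invitation yield. The rates below are illustrative assumptions, not Publons figures:

    def expected_invitations(reports_needed: int,
                             accept_rate: float,
                             completion_rate: float) -> float:
        """Expected invitations, treating each invite as an independent
        trial that yields a completed report with probability
        accept_rate * completion_rate."""
        return reports_needed / (accept_rate * completion_rate)

    # Assumed rates: 30% of invitees accept, 90% of acceptors deliver.
    # Two completed reports then cost about 7.4 invitations, inside
    # the 6-to-10 range editors report.
    print(expected_invitations(2, accept_rate=0.30, completion_rate=0.90))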

The basic mechanism

A submitted manuscript enters a journal's editorial management system — ScholarOne, Editorial Manager, and Open Journal Systems are the three most widely deployed platforms — and receives an internal tracking number. An editor reads the abstract, scans the methods, and makes a rapid judgment: does this fit the journal's scope? Does it clear a basic threshold of methodological coherence?
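
A minimal sketch of the intake record and the two triage questions, with field names that are illustrative rather than any platform's actual schema:

    from dataclasses import dataclass

    @dataclass
    class Submission:
        tracking_id: str        # internal tracking number assigned at intake
        in_scope: bool          # does it fit the journal's scope?
        methods_coherent: bool  # does it clear the methodological threshold?

    def desk_decision(s: Submission) -> str:
        """Send to external review only if both triage checks pass."""
        return "to_review" if s.in_scope and s.methods_coherent else "desk_reject"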

If it passes desk review, the editor identifies 2 to 4 domain experts and sends formal invitation letters. This is where the process becomes genuinely probabilistic. A reviewer who accepts commits, in principle, to returning a structured report within 3 to 4 weeks. That report will assess originality, methodological rigor, clarity of presentation, appropriate citation of prior work, and validity of conclusions. The editor then synthesizes those reports into one of four canonical decisions:

  1. Accept as submitted — rare, under 5 percent of reviewed manuscripts at competitive journals.
  2. Minor revisions — the manuscript requires small corrections; authors typically respond within 4 to 6 weeks.
  3. Major revisions — substantial rework needed; the revised version usually returns to at least one original reviewer.
  4. Reject — the manuscript is declined, with reviewer feedback provided to the authors.

The contrast between minor and major revisions is more than semantic. A minor revision rarely goes back to external reviewers — the handling editor checks the response. A major revision often restarts the review clock, sometimes adding 6 to 12 weeks to total time-in-system.
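
The decision set and its rough effect on time-in-system can be sketched directly. The week ranges mirror the figures quoted above and should be read as illustrative defaults, not any journal's policy:

    from enum import Enum

    class Decision(Enum):
        ACCEPT = "accept as submitted"
        MINOR_REVISIONS = "minor revisions"
        MAJOR_REVISIONS = "major revisions"
        REJECT = "reject"

    # Rough additional weeks each decision adds before a final outcome:
    # minor revisions are checked by the handling editor; major
    # revisions restart the external review clock.
    ADDED_WEEKS = {
        Decision.ACCEPT: (0, 0),
        Decision.MINOR_REVISIONS: (4, 6),   # author response window
        Decision.MAJOR_REVISIONS: (6, 12),  # re-review round
        Decision.REJECT: (0, 0),
    }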

Sequence and flow

The choreography of a standard double-blind review round follows a defined arc, sketched as a small state machine after the list:

  1. Submission — author uploads manuscript, figures, and cover letter through the journal's portal.
  2. Editorial triage — handling editor assesses scope fit and minimum quality threshold (typically 24–72 hours at active journals).
  3. Reviewer identification — editor searches author-suggested reviewers, competing-interests disclosures, and prior citation networks to find qualified, unconflicted experts.
  4. Review period — reviewers submit structured reports, usually on a form that separates confidential comments to editors from comments returned to authors.
  5. Editorial synthesis — editor weighs reports, resolves disagreements, and drafts the decision letter.
  6. Author response — if revisions are requested, authors submit a revised manuscript and a point-by-point response document.
  7. Second-round review — for major revisions, the revised manuscript returns to some or all original reviewers.
  8. Final decision — accept, reject, or (occasionally) a second round of major revisions.
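
A state-machine sketch of that arc, with stage names paraphrasing steps 1 through 8, a desk-reject exit added at triage, and a loop from second-round review back to synthesis to model the occasional extra round of major revisions:

    # Permitted transitions between review stages; a simplification of
    # the eight steps above, not any editorial system's actual model.
    TRANSITIONS = {
        "submission":              ["editorial_triage"],
        "editorial_triage":        ["reviewer_identification", "desk_reject"],
        "reviewer_identification": ["review_period"],
        "review_period":           ["editorial_synthesis"],
        "editorial_synthesis":     ["author_response", "final_decision"],
        "author_response":         ["second_round_review", "final_decision"],
        "second_round_review":     ["editorial_synthesis"],
    }

    def is_valid_step(current: str, nxt: str) -> bool:
        """True if a manuscript may move from `current` to `nxt`."""
        return nxt in TRANSITIONS.get(current, [])

    assert is_valid_step("editorial_triage", "desk_reject")
    assert not is_valid_step("submission", "review_period")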

Post-acceptance, the manuscript enters production — copyediting, typesetting, proofing — before online-first publication. That final stage adds another 4 to 16 weeks depending on journal infrastructure and backlog.

The peer review overview at peerreviewauthority.com traces how this sequence differs across preprint servers, open-review platforms, and traditional closed-review journals; the variation is larger than most authors expect when they submit their first manuscript.

Roles and responsibilities

Three distinct roles carry the process:

Authors prepare the manuscript, disclose conflicts of interest, and respond to reviewer critiques. The response letter — a point-by-point accounting of every change made — is one of the most consequential documents in the process and one of the least-discussed.

Reviewers provide expert evaluation on a volunteer basis. Globally, the median reviewer completes approximately 8 reports per year, according to the Publons dataset, while publishing an average of 5 papers in the same period — a rough parity that many argue is the load-bearing assumption of the entire system.

Editors hold the most consequential authority: they make the final decision, they absorb conflicting reviewer opinions, and they exercise judgment that no algorithm has successfully replaced. At smaller journals, a single editor-in-chief carries all of these functions alone. At large commercial publishers like Elsevier or Springer Nature, professional staff editors with PhD training handle high-volume fields, while academic associate editors manage specialized subfields.

A fourth role — statistical reviewers — is now standard at clinical trial journals and epidemiology publications, where a specialist independently assesses power calculations, randomization procedures, and reported confidence intervals. The BMJ and JAMA formalized this requirement for randomized controlled trials (ICMJE Recommendations for Randomized Trials). The separation of content review from statistical review is one of the cleaner innovations the field has produced — a division of labor that reduces the cognitive load on generalist reviewers while raising the floor on quantitative rigor.