Peer Review for Grant Applications: NSF, NIH, and Federal Funding
Federal grant funding in the sciences depends almost entirely on peer review. Whether a researcher is submitting a proposal to the National Institutes of Health, the National Science Foundation, or another federal agency, the application will be evaluated by a panel of qualified scientific peers before any funding decision is made. Understanding how that process works — who reviews, what criteria are applied, and how decisions are reached — is essential for anyone navigating federal funding for the first time or trying to improve their track record after prior rejections.
How Federal Grant Peer Review Differs from Journal Peer Review
Peer review for grant applications shares structural similarities with peer review in scientific journals, but the purpose, timeline, and criteria are meaningfully different. Journal peer review asks: Is this work sound, significant, and worthy of publication? Grant peer review asks: Should public funds be committed to support this proposed work?
That distinction matters. A grant reviewer is evaluating a plan, not completed research. The assessment is forward-looking and probabilistic — reviewers are judging the likelihood that a proposed study will succeed, produce useful results, and justify its budget. This introduces a different kind of expertise requirement. Reviewers must understand not only the science but also the feasibility of proposed methods, the track record of the investigator, and the appropriateness of the resource environment.
Both NIH and NSF have formalized this through established review criteria, and those criteria are publicly documented.
NIH Peer Review: The Center for Scientific Review and Study Sections
The National Institutes of Health conducts grant peer review through the Center for Scientific Review (CSR), which manages the majority of NIH applications. The CSR assigns applications to Scientific Review Groups (SRGs), commonly called study sections, organized by scientific domain. Each study section typically consists of 15–25 scientists with relevant expertise, selected and managed by a Scientific Review Officer (SRO).
NIH uses five scored review criteria, each rated on a scale of 1 (exceptional) to 9 (poor):
- **Significance** — Does the study address an important problem?
- **Investigators** — Are the principal investigators and collaborators qualified?
- **Innovation** — Does the application introduce novel concepts or methods?
- **Approach** — Is the research strategy rigorous and achievable?
- **Environment** — Does the institutional setting support the work?
These criteria are outlined in the NIH Grants Policy Statement, which is updated regularly and publicly available through the NIH Office of Extramural Research (grants.nih.gov). Each application receives an Overall Impact Score, and those scoring in approximately the top half receive a percentile ranking within their study section's recent review history. Program officers at individual NIH institutes then use those scores — along with programmatic priorities — to make final funding recommendations.
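As a rough illustration of how the scored criteria feed into the Overall Impact Score, the sketch below assumes NIH's published convention of averaging eligible reviewers' 1–9 scores and multiplying by 10, which yields a final score from 10 (best) to 90 (worst). The function name and validation are illustrative, not part of any NIH system.

```python
def overall_impact_score(reviewer_scores):
    """Approximate NIH Overall Impact Score: the mean of eligible
    reviewers' 1-9 preliminary scores, multiplied by 10 and rounded,
    giving a final score from 10 (best) to 90 (worst).
    Illustrative sketch only, not an official NIH implementation."""
    if not reviewer_scores:
        raise ValueError("at least one reviewer score is required")
    for s in reviewer_scores:
        if not 1 <= s <= 9:
            raise ValueError(f"scores must be between 1 and 9, got {s}")
    mean = sum(reviewer_scores) / len(reviewer_scores)
    return round(mean * 10)

# Example: five reviewers scoring mostly in the "excellent" range
print(overall_impact_score([2, 2, 3, 2, 3]))  # 24
```

Percentiles are then computed separately by ranking an application's score against the study section's recent review history, which is why two applications with identical impact scores can receive different percentiles.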
Applicants should be aware that NIH review is two-stage. Applications first go through the SRG for scientific review, then to the relevant institute's advisory council for secondary review on program relevance and funding priority.
NSF Peer Review: Merit Review and the Two-Criteria Framework
The National Science Foundation evaluates proposals through what it calls the NSF Merit Review Process. Its governing Merit Review Criteria were codified in the America COMPETES Reauthorization Act of 2010 (Public Law 111-358) and are detailed in the NSF Proposal and Award Policies and Procedures Guide (PAPPG), which is updated for each fiscal year.
NSF's review framework rests on two primary criteria:
- **Intellectual Merit** — The potential to advance knowledge and understanding within and across fields
- **Broader Impacts** — The potential to benefit society and contribute to the achievement of specific desired societal outcomes
Both criteria carry equal formal weight, though applicants frequently underinvest in articulating broader impacts. NSF program officers solicit reviews from external ad hoc reviewers (typically three to five per proposal) and may also convene panel reviews for programs with high submission volumes. Unlike NIH, NSF does not use a comparable numeric scoring system; reviewers instead rate proposals as Excellent, Very Good, Good, Fair, or Poor.
Program officers at NSF retain substantial discretionary authority in funding decisions and may fund proposals that reviewers rated somewhat lower if they align with programmatic gaps or portfolio needs — and vice versa. This makes the written reviewer feedback, which is provided to applicants in review summaries, a critical resource for resubmission strategy.
Other Federal Funding Bodies and Their Review Processes
Beyond NIH and NSF, several other federal agencies conduct formal peer review of grant applications with their own frameworks:
The Department of Defense (DoD), through mechanisms like the Congressionally Directed Medical Research Programs (CDMRP), uses a two-tier review process that includes both scientific peer review and a programmatic review by a consumer advocate panel — a structure designed to ensure alignment with patient-centered priorities.
The Department of Energy (DOE) Office of Science manages peer review through its program offices and the Basic Energy Sciences Advisory Committee, applying review criteria that emphasize scientific quality and alignment with DOE mission priorities.
The Patient-Centered Outcomes Research Institute (PCORI), while not a federal agency, operates under a congressional mandate (Patient Protection and Affordable Care Act, Section 6301) and uses a structured merit review process with explicit attention to patient and stakeholder engagement criteria.
Each agency publishes its review criteria in publicly accessible program announcements and funding opportunity notices, typically hosted on grants.gov or the agency's own extramural research pages.
What Applicants Should Understand About Reviewer Selection and Conflict of Interest
Reviewer selection is not random. Federal agencies actively work to match reviewer expertise to application content, and they maintain policies designed to prevent conflicts of interest. NIH's conflict of interest policy for peer reviewers prohibits individuals from reviewing applications from their own institution, close collaborators, direct competitors, and anyone with whom they have a financial relationship. These policies are codified in the NIH Grants Policy Statement and in federal regulations under 42 CFR Part 52h (for NIH specifically).
NSF's conflict of interest standards for reviewers are described in the PAPPG and align with broader federal standards under the Office of Government Ethics guidelines. Reviewers who identify a conflict are expected to recuse themselves before any discussion.
Understanding this structure matters for applicants because it informs how to think about reviewer feedback. A reviewer may lack familiarity with a niche methodology even if they are broadly expert in the field — which is why proposal writing must anticipate and address technical details that specialists might take for granted. For deeper context on how reviewer roles and responsibilities vary across contexts, see how peer review works and the discussion of peer review ethics.
How to Seek Guidance and Evaluate Sources of Help
Navigating federal grant peer review is a legitimate area where professional guidance matters. Researchers seeking help should distinguish between several types of resources:
Institutional resources — Most research universities and medical centers maintain offices of sponsored programs or research development offices staffed by grant professionals with direct knowledge of agency processes. These offices often provide internal mock review panels, proposal editing, and budget development support.
Program officers — NIH and NSF program officers are explicitly available to discuss proposals before submission. Contacting them to confirm fit and discuss scope is not only permitted but encouraged. Their contact information appears in every program announcement.
Professional organizations — The National Organization of Research Development Professionals (NORDP) represents research development professionals and maintains a community of practice around grant strategy and institutional support. The Grant Professionals Association (GPA) offers certification (Grant Professional Certified, or GPC) and ethical standards for grant writing professionals.
When evaluating external consultants or grant writing services, verify whether they have domain expertise in the specific agency and program type, ask whether they have experience with successful funded proposals in your research area, and clarify their role — some consultants write, some strategize, and some provide review. For questions about the broader peer review ecosystem, the peer review FAQ and how to get help pages on this site address common points of confusion.
Federal grant peer review is rigorous by design. It is also learnable. Applicants who understand the criteria, the reviewer perspective, and the institutional resources available to them are substantially better positioned to compete effectively — and to interpret feedback constructively when outcomes fall short.