peer-review
Structured manuscript and grant review with checklist-based evaluation covering methodology, statistical validity, and reporting-standards compliance (CONSORT/STROBE), plus constructive feedback phrased for formal peer-review writing.
Write formal peer reviews with structured checklist evaluation
Trigger phrases
Phrases that activate this skill when typed to Claude Code:
- “peer review this manuscript”
- “review this paper”
- “write a formal review”
- “evaluate this manuscript”
- “critique this study”
What it does
peer-review is a Claude Code skill from K-Dense AI’s scientific-agent-skills repo. It turns Claude into a structured manuscript reviewer that works through a checklist covering major scientific concerns — hypothesis clarity, study design, statistical validity, reporting standards compliance, and writing quality — then produces a formal review document with major and minor comments formatted for journal submission.
A session produces a review letter: a summary paragraph, numbered major concerns, numbered minor concerns, and a recommendation (accept/minor revision/major revision/reject). The checklist ensures nothing obvious is missed.
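For a concrete picture of that output, here is a minimal sketch of the letter’s structure in Python; the class and field names are illustrative assumptions, not a schema defined by the skill.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewLetter:
    """Illustrative shape of the generated review; names are assumptions, not the skill's schema."""
    summary: str                                              # one-paragraph overview of the manuscript
    major_concerns: list[str] = field(default_factory=list)   # numbered, must-address issues
    minor_concerns: list[str] = field(default_factory=list)   # numbered, smaller fixes
    recommendation: str = "major revision"                    # accept / minor revision / major revision / reject
```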
When to use it
Reach for it when:
- You’ve been assigned a peer review and want a structured first pass before adding your expert judgment
- You’re revising your own manuscript and want to anticipate what reviewers are likely to flag
- You’re mentoring junior researchers and want to show what a complete review looks like
When not to reach for it:
- Evaluating the quality of evidence for a clinical decision: use scientific-critical-thinking instead
- Scoring a set of manuscripts with a numerical rubric: use scholar-evaluation instead
Install
Copy the SKILL.md from K-Dense AI’s peer-review folder into .claude/skills/peer-review/ in your project.
Trigger phrases: “peer review this manuscript”, “review this paper”, “write a formal review”.
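A minimal sketch of that copy step, assuming you have cloned the scientific-agent-skills repo next to your project (all paths are illustrative):

```python
from pathlib import Path
import shutil

# Illustrative paths: adjust to wherever you cloned the repo and to your project root.
source = Path("scientific-agent-skills/peer-review/SKILL.md")
destination = Path(".claude/skills/peer-review/SKILL.md")

destination.parent.mkdir(parents=True, exist_ok=True)  # create .claude/skills/peer-review/ if needed
shutil.copy(source, destination)
print(f"Installed skill to {destination}")
```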
What a session looks like
A typical session has three phases:
- Manuscript intake. Paste the manuscript text or provide the file path. Claude identifies the study type (RCT, observational, systematic review) and selects the appropriate reporting checklist; see the sketch after this list.
- Checklist evaluation. Claude works through each checklist item systematically, flagging concerns with supporting quotes from the manuscript.
- Review letter draft. Findings are structured into a formal review document with summary, major concerns, minor concerns, and a recommendation. You edit and add domain expertise before submitting.
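As a sketch of the intake step’s checklist selection, the mapping below shows how study type relates to reporting guideline; the dictionary and helper function are illustrative assumptions, not code the skill ships with.

```python
# Illustrative only: the skill makes this choice in prose, not via code.
REPORTING_CHECKLISTS = {
    "rct": "CONSORT",               # randomized controlled trials
    "observational": "STROBE",      # cohort, case-control, cross-sectional studies
    "systematic_review": "PRISMA",  # systematic reviews and meta-analyses
}

def select_checklist(study_type: str) -> str:
    """Return the reporting guideline for a study type (hypothetical helper)."""
    return REPORTING_CHECKLISTS.get(study_type.lower(), "no specific guideline")

print(select_checklist("rct"))  # CONSORT
```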
Receipts
Where it works well:
- Methods and statistics sections — Claude catches missing details (sample size justification, blinding procedures, multiple comparison corrections) reliably
- Formatting and reporting guideline compliance — CONSORT flow diagrams, STROBE checklists, PRISMA flow diagrams
Where it backfires:
- Novelty assessment — Claude cannot judge whether a finding advances a field without domain knowledge you supply
- Detecting subtle data fabrication or image manipulation — this is not a research integrity tool
Pattern that works: use the skill output as a structured scaffold, then layer your own expert commentary on top rather than treating the generated review as final.
Source and attribution
Originally authored by K-Dense Inc. The canonical SKILL.md lives in the peer-review folder of their public scientific-agent-skills repository.
License: MIT. Install, adapt, and redistribute with attribution preserved.
This page documents the skill from a practitioner’s perspective. For the formal spec and any updates, defer to the source repo.