# code-reviewer

> A Claude Code agent persona that reviews code for correctness, security, maintainability, and performance — with explicit severity markers (🔴 blocker, 🟡 suggestion, 💭 nit) and a refusal to bikeshed style.

**Use case**: Get a code review that prioritizes by impact and explains the why, not just the what

**Canonical URL**: https://agentcookbooks.com/skills/code-reviewer/

**Topics**: claude-code, skills, code-review, code-quality

**Trigger phrases**: "review this PR", "review this diff", "spawn a code reviewer"

**Source**: [Michael Sitarzewski](https://github.com/msitarzewski/agency-agents/blob/main/engineering/engineering-code-reviewer.md)

**License**: MIT

---

## What it does

`code-reviewer` is the review persona in the agency-agents collection. It evaluates a diff or PR across five axes — correctness, security, maintainability, performance, testing — and writes feedback with explicit severity tags: 🔴 blocker (must fix), 🟡 suggestion (should fix), 💭 nit (nice to have). Style preferences with no linter rule behind them are filtered out as bikeshedding; the persona will not produce a 40-comment review in which 35 comments are nits.

Six rules govern the output: be specific (cite file and line), explain why (not just what), suggest, don't demand, prioritize, praise good code where it exists, and ship one complete review rather than drip-feeding rounds. The aim is review-as-mentorship, not review-as-gatekeeping.
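One way to picture a rule-compliant comment (the file path, line number, and suggested fix below are invented for illustration):

```markdown
🔴 `api/users.py:87`: the `username` value is interpolated straight into
the SQL string, so a crafted username can run arbitrary SQL. Switching
to a parameterized query closes the hole:

    cursor.execute("SELECT * FROM users WHERE name = %s", (username,))
```

It cites a file and line, explains the why, and suggests rather than demands: three of the six rules in one comment.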

## When to use it

- Reviewing a PR where you want a thorough pass before requesting human review
- Self-review before pushing — catch the obvious blockers first
- Reviewing security-sensitive diffs (auth, input handling, data access) where you want explicit vuln-class tagging
- Onboarding to a new codebase — the persona's "explain why" rule produces feedback that teaches

When *not* to reach for it:

- Tiny diffs (rename, typo fix) — the structure is overkill
- Hot paths where you only care about one axis (e.g., perf review only) — a more focused persona will give cleaner output
- Style enforcement — that's what linters are for, and the persona explicitly refuses to be one

## Install

From [msitarzewski/agency-agents](https://github.com/msitarzewski/agency-agents) at `engineering/engineering-code-reviewer.md`. Copy it to `~/.claude/agents/` or use the repo's installer; the file is standalone.

## What a session looks like

1. **Summary.** The persona opens with its overall impression, key concerns, and what's good. No drip-feeding — the user sees the shape of the review before the line-level comments.
2. **Blockers (🔴).** Security holes (SQL injection, XSS, auth bypass), data-loss risks, race conditions, broken API contracts, missing error handling on critical paths. These get specific line cites and a *why* paragraph.
3. **Suggestions (🟡).** Missing input validation, unclear naming, missing tests for important behavior, N+1 queries, code duplication that should be extracted.
4. **Nits (💭).** Style inconsistencies the linter doesn't catch, minor naming, doc gaps, alternative approaches worth considering.
5. **Praise.** Clever solutions, clean patterns, good test coverage. The rule is explicit: positive feedback is part of the review, not a footnote.
6. **Next steps.** Which blockers are required for merge, which suggestions can land in a follow-up, what to investigate further.

The discipline that makes it work: severity tags. Without them, every comment reads as equally weighted, and the developer can't tell what's blocking merge vs. what's a future-improvement note.
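The tag-then-prioritize discipline can be sketched as a small sorting pass. This is an illustrative model, not the persona's actual mechanism: the record fields and the rendering format are assumptions, and the real persona emits prose, not structured records.

```python
# Hypothetical model of severity-tagged review comments: blockers sort
# first, and every comment carries a "why" per the explain-why rule.
from dataclasses import dataclass

SEVERITY_ORDER = {"blocker": 0, "suggestion": 1, "nit": 2}
SEVERITY_TAG = {"blocker": "🔴", "suggestion": "🟡", "nit": "💭"}

@dataclass
class Comment:
    severity: str   # "blocker" | "suggestion" | "nit"
    location: str   # e.g. "file.py:42" (the be-specific rule)
    what: str       # the observation
    why: str        # the reasoning (required by the explain-why rule)

def render_review(comments):
    """Render comments with severity tags, blockers first."""
    ordered = sorted(comments, key=lambda c: SEVERITY_ORDER[c.severity])
    return "\n".join(
        f"{SEVERITY_TAG[c.severity]} {c.location}: {c.what} ({c.why})"
        for c in ordered
    )
```

With the tags in place, a reader can scan for 🔴 lines first and defer 💭 lines without reading every comment twice.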

## Receipts

_TODO — to be filled in from a real review session. Once the persona has been pointed at a non-trivial PR, this section will capture: how the 🔴/🟡/💭 distribution shook out, whether the persona caught issues a human reviewer missed (or vice versa), and how its "explain why" rule held up under pressure on a complex diff._

## Source and attribution

From [Michael Sitarzewski's agency-agents repository](https://github.com/msitarzewski/agency-agents/blob/main/engineering/engineering-code-reviewer.md).

License: MIT.

Quote from the persona body, verbatim: *"Reviews code like a mentor, not a gatekeeper. Every comment teaches something."* The severity markers and the "explain why" rule are the operational expression of that stance.