# humanizer

> A Claude Code skill that removes AI writing patterns from text — em dash overuse, rule of three, promotional language, sycophantic openers, vague attributions — and rewrites it to sound natural and human.

**Use case**: Strip AI tells from any text in three passes

**Canonical URL**: https://agentcookbooks.com/skills/humanizer/

**Topics**: claude-code, skills, humanizing

**Trigger phrases**: "humanize this", "remove AI patterns", "make this sound human", "de-AI this", "this sounds like ChatGPT"

**Source**: [blader](https://github.com/blader/humanizer)

**License**: MIT

---

## What it does

`humanizer` is a Claude Code skill that takes text drafted through an LLM and systematically removes the patterns that make it read as machine-generated. It targets 29 named anti-patterns drawn from Wikipedia's "Signs of AI writing" guide — everything from inflated significance language ("a testament to," "pivotal moment," "evolving landscape") to structural tells like rule-of-three overuse, em dash clustering, and the inline-header vertical list.

The process runs in three passes:

1. **First-pass rewrite.** Claude scans the input against all 29 patterns and rewrites problematic sections — replacing copula avoidance ("serves as a") with plain "is," cutting promotional adjectives ("nestled," "vibrant," "groundbreaking"), removing sycophantic openers ("Great question! Certainly!"), and stripping vague attributions ("industry experts argue") in favor of specific sources or no attribution at all.

2. **Self-audit.** After the draft rewrite, the skill asks itself: "What makes the below so obviously AI-generated?" It answers in brief bullets — the remaining rhythm tells, the over-tidy paragraph structure, the places where word choice still reads as algorithmic. This is the step most one-pass editors skip.

3. **Final revision.** Claude revises the draft against its own audit notes and presents the cleaned version with a summary of what changed.
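To make the first pass concrete, here is a toy sketch of what a lexical scan for a few of the tells named above might look like. This is an illustration only, not the skill's actual implementation — the real skill works by prompting Claude, and the pattern names and regexes here cover just five of the 29 patterns:

```python
import re

# A handful of the lexical tells, expressed as regexes (illustrative only).
PATTERNS = {
    "copula avoidance": r"\bserves as a\b",
    "inflated significance": r"\b(a testament to|pivotal moment|evolving landscape)\b",
    "promotional adjective": r"\b(nestled|vibrant|groundbreaking)\b",
    "sycophantic opener": r"^(great question|certainly)\b",
    "vague attribution": r"\bindustry experts (argue|say|claim)\b",
}

def scan(text: str) -> list[str]:
    """Return the names of any patterns that match `text`."""
    return [
        name
        for name, pattern in PATTERNS.items()
        if re.search(pattern, text, flags=re.IGNORECASE | re.MULTILINE)
    ]

print(scan("Great question! The plaza serves as a vibrant gathering spot."))
# → ['copula avoidance', 'promotional adjective', 'sycophantic opener']
```

A regex pass like this is roughly what "find-and-replace on AI vocabulary" amounts to; the skill's value is in the rewrite and self-audit steps that a keyword scan can't do.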

The skill also supports an optional **voice calibration mode**. If you supply a sample of your own writing — a paragraph, a previous post, a message — the skill reads it before rewriting, noting your sentence-length patterns, how you handle transitions, whether you use em dashes or parentheticals, and how casual or formal your word choices run. The rewrite then matches your patterns rather than defaulting to a generic "natural" voice.

## When to use it

Reach for it when:

- A blog post or wiki page went through Claude and needs to sound like a person wrote it
- You've drafted marketing copy in Claude and it came back with "showcasing," "fostering," or three-item lists where one item would have done
- An internal document is circulating and someone flagged it as "very obviously AI"
- You need a LinkedIn post or email newsletter intro that doesn't announce itself as generated

When not to reach for it:

- Raw creative fiction where non-naturalistic voice is intentional — the 29 patterns are prose patterns, not literary ones
- Short code comments or technical documentation where the AI-pattern concerns simply don't apply
- Text that's already human-written; running the skill on good prose adds noise, not signal
- Very short texts (under 50 words) — the overhead of three passes doesn't pay off at that length
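The length cutoff in the last bullet makes a cheap pre-check if you're wiring the skill into a pipeline. A minimal sketch — the 50-word threshold is the guideline suggested here, not anything the skill itself enforces:

```python
def worth_humanizing(text: str, min_words: int = 50) -> bool:
    """Cheap pre-check: skip the three-pass flow on very short texts."""
    return len(text.split()) >= min_words

worth_humanizing("Thanks, see you Tuesday.")  # → False
```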

## Install

Copy `SKILL.md` from [blader's repository](https://github.com/blader/humanizer) into `.claude/skills/humanizer/` in your project.

```
.claude/
  skills/
    humanizer/
      SKILL.md
```
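If you'd rather script the copy than do it by hand, something like the following works. The raw-file URL is inferred from the repository layout and may need adjusting if the file moves:

```shell
# Create the skill directory and fetch SKILL.md into it.
# The raw URL below is an assumption about the repo layout; verify before use.
mkdir -p .claude/skills/humanizer
curl -fsSL https://raw.githubusercontent.com/blader/humanizer/main/SKILL.md \
  -o .claude/skills/humanizer/SKILL.md \
  || echo "Download failed; copy SKILL.md from the repo manually."
```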

No additional config required. The skill activates on: "humanize this," "make this sound human," "de-AI this," "remove AI patterns," or "this sounds like ChatGPT."

To use voice calibration: include a writing sample in the same prompt. "Humanize this text. Here's a sample of my writing for voice matching: [sample]" or point it at a file: "Use my writing style from [file path] as a reference."

## What a session looks like

A typical session on a 400-word blog intro:

1. You paste the text or point at a file and trigger the skill.
2. Claude produces a **draft rewrite** — the text with all 29 patterns addressed. This is faster than it sounds; most AI text hits a predictable subset of patterns (inflated significance, em dashes, rule of three, sycophantic opener).
3. Claude then runs its own audit pass: "What still reads as AI-generated?" It usually catches 2–3 residual tells — often rhythm (too-even sentence lengths) and word-choice clusters (multiple "delve" synonyms in adjacent paragraphs).
4. Claude revises and presents the **final version**, with a brief summary of changes.

If you supplied a writing sample, the rewrite in step 2 targets your voice: your sentence length patterns, your transition style, your punctuation habits. The audit in step 3 checks the match.

The whole process on a short piece takes one back-and-forth. The self-audit is what separates this from a simple find-and-replace on AI vocabulary words.

## Receipts

Honest assessment of where this skill earns its keep and where it doesn't:

**Where it works well:**
- LinkedIn posts that went through Claude — these tend to hit the rule-of-three, em dash, and promotional-adjective patterns almost every time; a single pass cleans them up substantially
- Blog section intros and "overview" paragraphs, which are where AI text is most pattern-heavy
- Marketing copy with multiple "showcasing," "fostering," or "tapestry" instances — the vocabulary audit catches these fast
- Any text where you can hear "the AI voice" on the first read; that's a signal it's hitting multiple patterns, and the skill handles that well

**Where it backfires:**
- Texts that are already lightly AI-patterned — the skill can over-edit, removing things that were fine
- Very short pieces; the three-pass structure adds overhead that a single light read wouldn't
- Texts where the AI patterns were intentional (e.g., a satirical piece mimicking AI-speak)

**Pattern that works:** run it after you've drafted with Claude, not as a final step on already-human writing. The signal-to-noise ratio is highest when the input is fresh from an LLM. Running it on a text that's been through two rounds of human editing usually produces unnecessary churn.

## Source and attribution

Originally written by [blader](https://github.com/blader). The canonical SKILL.md lives in the [humanizer repository](https://github.com/blader/humanizer).

The skill's 29-pattern framework is based on [Wikipedia's "Signs of AI writing"](https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing) guide, maintained by WikiProject AI Cleanup. The patterns documented there come from observations across thousands of instances of AI-generated text on Wikipedia — the guide is a living document, and the skill tracks it as it evolves.

License: MIT. Install, adapt, and redistribute with attribution preserved.

This page documents the skill from a practitioner's perspective. For the formal spec and any updates, defer to the source repo.