/copy-editing vs /humanizer: only one made edits

Two skills, one blog post, one experiment to find the seam between them. Both sit on the wiki under the same upstream source. Both are MIT-licensed. Both are good. They overlap in obvious ways and split apart on the cases where it matters — and the split is the part worth knowing. The setup: a 65-line technical post, /copy-editing first, then /humanizer on the same post with the explicit instruction to surface anything the prior pass missed. The receipt below: 4 edits from copy-editing (title, description, opening line, failure-mode header), 0 from humanizer (nine of ten AI patterns clean, one borderline-but-acceptable). The negative result is the lesson — on technical short-form the two skills overlap heavily, the conversion-shaped sweeps don’t earn their keep, and running both back-to-back is wasted activations. The decision rule for when each one wins lives at the bottom of the post.

The setup

The target was a short technical post — the first cookbook recipe on this site, 65 lines, deliberately tight. Real configs, two failure modes, light brand voice. The kind of post where a careless edit pass would over-correct and strip out the personality that makes it readable.

I ran /copy-editing first and applied its suggestions. Then I ran /humanizer on the now-edited post, with the explicit instruction to surface anything the prior pass had missed.

What /copy-editing caught — 4 edits

The skill loads a Seven Sweeps Framework: Clarity, Voice/Tone, So What, Prove It, Specificity, Heightened Emotion, Zero Risk. For a technical post the conversion-shaped sweeps (So What / Prove It / Heightened Emotion / Zero Risk) don’t earn their keep. The useful subset is Clarity, Voice/Tone, Specificity, and the word-level swap table near the bottom (utilize → use, leverage → use, things → specific noun, etc.).
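The word-level swap table can be pictured as a simple substitution pass. This is an illustrative sketch only — the skill's actual implementation isn't shown in the post, and any entry beyond the examples it names (utilize, leverage, things) is an assumption:

```python
import re

# Toy version of the Clarity sweep's swap table. The "things -> specific noun"
# entry is deliberately absent: it needs a human to pick the noun.
SWAPS = {
    "utilize": "use",
    "utilizing": "using",
    "leverage": "use",
}

def apply_swaps(text: str) -> str:
    """Replace weak verbs with their plainer equivalents, case-insensitively."""
    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: SWAPS[m.group(0).lower()], text)
```

A pass like this handles the mechanical swaps; the judgment calls (tone, specificity) stay manual.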

The four edits that landed:

  1. Title trimmed from 67 chars to 53. Parenthetical pile-up removed. Primary keyword “Claude Code” front-loaded.
  2. Description tightened from 165 chars to 142 — within the SEO target band. A mixed metaphor unified (“lands”/“ships” → “ships”).
  3. Opening sentence cut a cliché — “before I got it right” → “before it worked.” Concrete and shorter.
  4. Failure-mode header rewritten from a mechanism description (“hook ran on Edits that didn’t change the path”) to a failure description (“Silent success looked like the hook wasn’t running”). Same content underneath; the new header tells the reader what to remember, not what technically happened.

Roughly 5–8% prose reduction without removing any factual content. Receipts kept their numbers; mistakes kept their bite.

What /humanizer found on the second pass — nothing

The skill ships a catalog of ten AI writing patterns:

  1. Significance inflation (revolutionary, transformative) → clean
  2. Vague attributions (experts say, research shows) → clean — uses “I” throughout
  3. Synonym cycling → clean — “hook” stays “hook”
  4. Rule of three → borderline — opening enumeration “one hook, real config, the two ways it broke” is a content list, not stylistic puff
  5. Copula avoidance (serves as, represents) → clean
  6. Hedging filler (it’s worth noting) → clean
  7. Promotional buzzwords (leverage, holistic) → clean — purely technical vocab
  8. Sycophantic openers (Great question!) → clean — opens with substance
  9. Generic conclusions (in today’s evolving…) → clean — concrete numbers
  10. Negative parallelism (not just X but Y) → clean
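A pattern audit like this one reduces, mechanically, to counting phrase hits per category. The sketch below is illustrative only — the phrase lists are assumptions chosen to mirror the examples in the catalog, not the skill's actual detection logic:

```python
import re

# Toy regex audit inspired by the ten-pattern catalog above. Phrase lists
# are illustrative assumptions; a real pass would also need human judgment
# for borderline cases like content-bearing rules of three.
PATTERNS = {
    "significance inflation": r"\b(revolutionary|transformative|game-changing)\b",
    "vague attribution": r"\b(experts say|research shows|studies suggest)\b",
    "hedging filler": r"\bit'?s worth noting\b",
    "promotional buzzwords": r"\b(leverage|holistic|synergy)\b",
    "sycophantic opener": r"^great question",
    "generic conclusion": r"\bin today'?s (rapidly )?evolving\b",
    "negative parallelism": r"\bnot just \w+,? but\b",
}

def audit(text: str) -> dict[str, int]:
    """Count hits per pattern; a clean post returns all zeros."""
    lowered = text.lower()
    return {name: len(re.findall(rx, lowered, flags=re.MULTILINE))
            for name, rx in PATTERNS.items()}
```

On a clean technical post every count comes back zero, which is the negative result this experiment produced.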

Nine clean, one borderline-but-acceptable. The skill’s “soul” checklist (opinions, varied rhythm, specifics, messiness, contractions) also passes on all five dimensions.

The skill was ready to make edits. There were no edits to make.

What the negative result actually says

The honest receipt: /copy-editing already caught everything /humanizer would have caught on this kind of post. The Seven Sweeps Framework’s word-level checks flag largely the same words as the humanizer’s pattern catalog. On technical short-form the two overlap heavily, and running them back-to-back wastes skill activations.

Where the two skills genuinely differ is upstream of the prose:

  • /humanizer wins on raw AI drafts. When the source opens with “In today’s rapidly evolving AI landscape” and never recovers, the humanizer’s pattern-detection structure is purpose-built. The Seven Sweeps would also fix it, but slower — you’d be running all seven passes when one focused pattern audit would do.
  • /copy-editing wins on existing copy that needs structural improvements. “Prove It” (every claim has evidence), “Specificity” (numbers and timeframes), and “Zero Risk” (objections handled near CTAs) are conversion-shaped sweeps the humanizer doesn’t do.
  • For bilingual content, /humanizer is unique. The skill ships with Finnish/English examples per pattern — the only skill on the wiki with cross-language AI-pattern awareness.

The decision rule

  • Raw AI draft, marketing copy with buzzwords → /humanizer
  • Already-edited prose that needs structural / conversion improvements → /copy-editing
  • Mixed-language EN/FI content → /humanizer
  • Technical short-form → either, not both

The lesson isn’t that /humanizer is weak — it’s that its strengths target content this post never contained. Two-skill stacks need to cover non-overlapping ground or they cost activations without earning them.

What’s next

Two more cross-skill comparisons in the queue. Some will fail like this one (one skill is enough) and some won’t (two skills cover non-overlapping ground). The point of the editorial bar isn’t to ship every result; it’s to ship the ones with non-obvious findings. A skill that costs activations without producing edits is a non-obvious finding.

Subscribe to RSS for the rest.