seo-content

Content quality and E-E-A-T analysis with AI citation readiness assessment, covering experience signals, expertise indicators, authoritativeness markers, and trustworthiness factors aligned with Google's 2025 quality rater guidelines.

Score content for E-E-A-T and AI citation readiness

Source AgriciDaniel
License MIT
First documented

Trigger phrases

Phrases that activate this skill when typed in Claude Code:

  • content quality
  • E-E-A-T
  • content analysis
  • readability check
  • thin content
  • content audit

What it does

seo-content is a Claude Code skill from AgriciDaniel’s claude-seo repo. It evaluates content against Google’s September 2025 Quality Rater Guidelines, scoring on four E-E-A-T dimensions: Experience (firsthand signals — original research, before/after results, process documentation), Expertise (author credentials, technical depth, accurate sourcing), Authoritativeness (external citations, brand mentions, industry recognition), and Trustworthiness (contact info, privacy policy, date stamps, HTTPS).
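The four dimensions and their signals can be pictured as a checklist rubric. The sketch below is illustrative only — the signal names come from the description above, but the rubric structure, equal weighting, and function name are assumptions, not the skill's actual implementation:

```python
# Hypothetical rubric mirroring the four E-E-A-T dimensions described above.
# Signal names are taken from the writeup; the scoring scheme is illustrative.
EEAT_RUBRIC = {
    "experience": {"original research", "before/after results", "process documentation"},
    "expertise": {"author credentials", "technical depth", "accurate sourcing"},
    "authoritativeness": {"external citations", "brand mentions", "industry recognition"},
    "trustworthiness": {"contact info", "privacy policy", "date stamps", "https"},
}

def eeat_score(found_signals: set) -> dict:
    """Fraction of each dimension's signals detected on the page (0.0-1.0)."""
    return {
        dim: len(found_signals & signals) / len(signals)
        for dim, signals in EEAT_RUBRIC.items()
    }
```

A page exposing only HTTPS and a privacy policy would score 0.5 on trustworthiness and 0.0 on experience — consistent with the skill's emphasis that experience signals are the hardest to earn.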

Beyond E-E-A-T it scores AI Citation Readiness for Generative Engine Optimization — assessing whether content is structured to be cited by Google AI Overviews, ChatGPT, and Perplexity. Key signals include passage-level citability (optimal 134–167 word self-contained answer blocks), question-based headings, tables for comparative data, and entity clarity. The skill explicitly notes the March 2024 merger of the Helpful Content System into Google’s core ranking algorithm.
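The passage-level citability check is simple to approximate yourself. This is a minimal sketch, not the skill's code: it splits content into blank-line-separated blocks and flags those inside the 134–167 word range cited above. The function name and block-splitting heuristic are assumptions:

```python
import re

# Optimal self-contained answer-block length, per the skill's guidance.
CITABLE_MIN, CITABLE_MAX = 134, 167

def citable_blocks(text: str) -> list:
    """Return (word_count, in_citable_range) for each blank-line-separated block."""
    blocks = [b.strip() for b in re.split(r"\n\s*\n", text) if b.strip()]
    out = []
    for block in blocks:
        n = len(block.split())
        out.append((n, CITABLE_MIN <= n <= CITABLE_MAX))
    return out
```

Running this over a draft quickly shows whether any passage is sized to stand alone as a citable answer, or whether everything is either a one-liner or a wall of text.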

When to use it

Reach for it when:

  • A page ranks technically well but doesn’t convert — E-E-A-T gaps are often the cause
  • You want to know if your content is structured to appear in AI Overviews or ChatGPT web search citations
  • You are auditing a site acquisition and need a content quality baseline before doing anything else

When not to reach for it:

  • The URL is behind authentication or a paywall; the skill can only analyze visible content and will report the limitation honestly
  • You need keyword volume or difficulty data — use seo-dataforseo or seo-google for that

Install

Copy the seo-content SKILL.md into .claude/skills/seo-content/.


Invoke with /seo content <url> for a full analysis. DataForSEO MCP integration is optional — if available, it adds real keyword volume and intent data to the assessment.

What a session looks like

A typical session has three phases:

  1. E-E-A-T audit. The skill scores each dimension against QRG criteria. Experience is the hardest to fake and the highest signal — original case studies, named authors with disclosed credentials, and firsthand data points all score strongly here.
  2. Content metrics. Word count is checked against page-type minimums (blog posts 1,500+, service pages 800+), but with an explicit note that word count is not a ranking factor — topical coverage completeness is. Readability, keyword density (1–3%), and multimedia presence are also evaluated.
  3. AI Citation Readiness report. Content is scored on citability signals: self-contained answer blocks, direct definitions, specific statistics with attribution, and structured data presence. A platform-by-platform breakdown is given for Google AI Overviews, ChatGPT, and Perplexity.
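The phase-2 checks are mechanical enough to sketch. The following is a hedged approximation, not the skill's implementation — the thresholds (1,500-word blog minimum, 1–3% density band) come from the description above, while the function name and naive tokenization are assumptions:

```python
def content_metrics(text: str, keyword: str, min_words: int = 1500) -> dict:
    """Approximate the skill's phase-2 checks: word count vs. page-type
    minimum, and keyword density against the 1-3% band."""
    words = text.lower().split()
    count = len(words)
    kw_hits = sum(1 for w in words if w.strip(".,;:!?") == keyword.lower())
    density = (kw_hits / count * 100) if count else 0.0
    return {
        "word_count": count,
        "meets_minimum": count >= min_words,  # remember: coverage, not length, ranks
        "keyword_density_pct": round(density, 2),
        "density_in_band": 1.0 <= density <= 3.0,
    }
```

For a service page you would pass `min_words=800`. The numbers are inputs to judgment, not pass/fail gates — as the skill itself notes, word count is not a ranking factor.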

Receipts

Works well: The AI Citation Readiness section catches structural patterns traditional SEO audits ignore — buried conclusions, vague general statements, and missing author attribution are exactly the signals that prevent content from being cited in AI search, and they get surfaced here.

Backfires: Flesch Reading Ease scores are reported but explicitly flagged as not a Google ranking factor (the skill quotes Mueller confirming this). Some users expect a readability score to mean more than it does; the skill adds appropriate caveats rather than letting the number drive decisions.

Pattern that works: Fix E-E-A-T gaps before optimizing for AI citation — the same signals that make content trustworthy to human quality raters also make it citable by AI systems. They are not separate workstreams.

Source and attribution

Originally written by AgriciDaniel. The canonical SKILL.md and supporting files live in the seo-content folder of the claude-seo repository.

License: MIT. Install, adapt, and redistribute with attribution preserved.

This page documents the skill from a practitioner’s perspective. For the formal spec and updates, defer to the source repo.