# seo-technical

> Technical SEO audit across nine categories — crawlability, indexability, security, URL structure, mobile optimization, Core Web Vitals, structured data, JavaScript rendering, and IndexNow protocol — with AI crawler management guidance.

**Use case**: Audit nine technical SEO categories with a single command

**Canonical URL**: https://agentcookbooks.com/skills/seo-technical/

**Topics**: claude-code, skills, marketing, seo

**Trigger phrases**: "technical SEO", "crawl issues", "robots.txt", "Core Web Vitals", "site speed"

**Source**: [AgriciDaniel](https://github.com/AgriciDaniel/claude-seo/tree/main/skills/seo-technical)

**License**: MIT

---

## What it does

`seo-technical` is a Claude Code skill from AgriciDaniel's [claude-seo repo](https://github.com/AgriciDaniel/claude-seo). It audits nine technical SEO categories and produces a scored report with findings prioritized Critical > High > Medium > Low.

The nine categories:

- **Crawlability**: robots.txt, sitemap, noindex tags, crawl depth, JS rendering, crawl budget
- **Indexability**: canonical tags, duplicate content, thin content, pagination, hreflang, index bloat
- **Security**: HTTPS, SSL, security headers (CSP, HSTS, X-Frame-Options, Referrer-Policy)
- **URL Structure**: clean URLs, redirect chain limits, 301 vs 302 usage, trailing slash consistency
- **Mobile Optimization**: responsive design, 48x48px minimum touch targets, 16px base font (mobile-first indexing has been 100% complete since July 5, 2024)
- **Core Web Vitals**: LCP target 2.5s, INP target 200ms, CLS target 0.1 (INP replaced FID on March 12, 2024; never reference FID)
- **Structured Data**: presence and basic type checks (see `seo-schema` for full analysis)
- **JavaScript Rendering**: CSR vs SSR detection, December 2025 canonical conflict guidance
- **IndexNow**: Bing, Yandex, and Naver support check
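The Core Web Vitals targets above correspond to Google's published good / needs-improvement / poor bands. A minimal classifier sketch (the thresholds are Google's documented ones; the function and dict names are illustrative, not from the skill):

```python
# Google's published Core Web Vitals thresholds: (good-limit, poor-limit).
# A value <= good-limit is "good"; <= poor-limit is "needs improvement";
# anything above is "poor".
THRESHOLDS = {
    "lcp": (2.5, 4.0),   # Largest Contentful Paint, seconds
    "inp": (200, 500),   # Interaction to Next Paint, milliseconds
    "cls": (0.1, 0.25),  # Cumulative Layout Shift, unitless
}

def classify(metric: str, value: float) -> str:
    """Map a measured CWV value onto Google's three assessment bands."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"
```

The same bands apply whether the number comes from lab estimates or CrUX field data; only the trustworthiness of the input differs.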

Optional integrations: DataForSEO for real on-page analysis via Lighthouse and `domain_analytics_technologies`; Google API credentials for real CrUX field data and GSC URL inspection.
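For the Google side, field data comes from the public Chrome UX Report API. A hedged sketch of a page-level query (the endpoint and metric names are from the public CrUX API; the helper function names are ours, not the skill's):

```python
import json
import urllib.request

# Public CrUX API endpoint; requires a Google API key.
CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_crux_payload(url: str) -> dict:
    """Request body for page-level 75th-percentile field data."""
    return {
        "url": url,
        "metrics": [
            "largest_contentful_paint",
            "interaction_to_next_paint",
            "cumulative_layout_shift",
        ],
    }

def query_crux(url: str, api_key: str) -> dict:
    """POST the query and return the decoded JSON record."""
    req = urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=json.dumps(build_crux_payload(url)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Low-traffic pages may return no record at all, which is itself a useful signal that lab estimates are the only data available.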

## When to use it

Reach for it when:

- You need a structured nine-category technical audit rather than ad-hoc checks
- A deployment just shipped and you want a post-deploy technical health check
- You are assessing a site for acquisition or a new client onboarding

When *not* to reach for it:

- You need real CrUX field data for CWV — the skill provides lab-based estimates from HTML signals; for 75th-percentile real-user data use `seo-google crux <url>`
- You need a full schema audit — the skill checks for schema presence and basic type validation, but full schema generation and deprecation analysis belongs in `seo-schema`

## Install

Copy the [`seo-technical` SKILL.md](https://github.com/AgriciDaniel/claude-seo/tree/main/skills/seo-technical) into `.claude/skills/seo-technical/` along with the `scripts/` directory.

Trigger phrases: "technical SEO", "crawl issues", "robots.txt", "Core Web Vitals", "site speed", "security headers".

Invoke with `/seo technical <url>`. DataForSEO MCP and Google API credentials are optional but significantly improve data quality when available.

## What a session looks like

A typical session has three phases:

1. **Technical data collection.** The page is fetched and HTTP response headers are captured. robots.txt is checked from the root domain. The sitemap is located and validated. HTML is parsed for canonical tags, meta robots, security headers, viewport meta, schema blocks, and image dimensions. JavaScript framework detection identifies CSR vs SSR patterns.
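The HTML parsing in phase 1 can be approximated with the stdlib parser. A sketch that pulls two of the listed signals, the canonical URL and meta-robots directives (class and function names are illustrative, not the skill's internals):

```python
from html.parser import HTMLParser

class SignalParser(HTMLParser):
    """Collect the canonical link href and meta-robots content from raw HTML."""
    def __init__(self):
        super().__init__()
        self.canonical = None
        self.meta_robots = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)  # attrs arrives as (name, value) pairs; values may be None
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonical = a.get("href")
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            self.meta_robots = a.get("content")

def extract_signals(html: str) -> dict:
    parser = SignalParser()
    parser.feed(html)
    return {"canonical": parser.canonical, "meta_robots": parser.meta_robots}
```

Running the same extraction on raw and JS-rendered HTML is what makes the phase-2 conflict checks possible.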
2. **Nine-category scoring.** Each category is scored independently. The December 2025 JavaScript SEO guidance is applied — if canonical tags differ between raw HTML and JS-rendered output, or if `noindex` is present in raw HTML but removed by JS, these are flagged as specific findings with the expected Google behavior described.
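The canonical-conflict check in step 2 reduces to comparing the raw-HTML and rendered-DOM values. A minimal sketch (the finding shape is illustrative; Google's guidance is not to inject or change canonicals with JavaScript, since conflicting signals make canonicalization unpredictable):

```python
def canonical_conflict(raw, rendered):
    """Flag a mismatch between the raw-HTML canonical and the JS-rendered one.

    Either value may be None (tag absent in that version of the page).
    Returns None when the two agree, otherwise a finding dict.
    """
    if raw == rendered:
        return None
    return {
        "severity": "High",
        "finding": "canonical mismatch between raw and rendered HTML",
        "raw": raw,
        "rendered": rendered,
    }
```

The same shape works for the `noindex`-removed-by-JS case: compare the two meta-robots values instead of the two canonicals.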
3. **Category scorecard and action plan.** A table shows pass/warn/fail per category. Critical issues (blocking indexing or causing penalties) come first. AI crawler management gets specific robots.txt directive recommendations based on whether the site's AI visibility strategy is allow-all, selective-allow, or training-only-block.
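A training-only-block strategy, for example, blocks known training crawlers while leaving search and user-initiated fetchers alone. An illustrative robots.txt fragment (the crawler list is not exhaustive, and `Google-Extended` is a control token rather than a distinct crawler):

```txt
# Block AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# User-initiated and search fetchers remain allowed
User-agent: ChatGPT-User
Allow: /

User-agent: *
Allow: /
```

An allow-all strategy would drop the `Disallow` blocks entirely; a selective-allow strategy scopes them to specific paths rather than `/`.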

## Receipts

**Works well:** The AI crawler section is consistently relevant right now — most sites have not deliberately configured robots.txt for the nine known AI crawlers, and the distinction between training crawlers (GPTBot, Google-Extended) and search crawlers (ChatGPT-User, Googlebot) is not obvious. This section makes the decision explicit.

**Backfires:** The CWV section flags potential issues from HTML signals (missing image dimensions, render-blocking scripts) but cannot measure real user experience. For sites with significant traffic, the CrUX field data via `seo-google` is always more authoritative than inferred signals from HTML analysis alone.
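The kind of lab-only HTML signal involved is easy to illustrate: images without explicit dimensions are a common CLS source that static analysis can catch, even though it cannot confirm real-user impact. A sketch (the function name is ours; regex tag-matching is a deliberate shortcut for a one-pass check, not robust HTML parsing):

```python
import re

def imgs_missing_dimensions(html: str) -> list[str]:
    """Return src values of <img> tags lacking an explicit width or height.

    Such images can shift layout when they load, a CLS risk detectable
    from HTML alone, without any field measurement.
    """
    flagged = []
    for tag in re.findall(r"<img\b[^>]*>", html, flags=re.IGNORECASE):
        lowered = tag.lower()
        if "width=" not in lowered or "height=" not in lowered:
            match = re.search(r'src="([^"]*)"', tag)
            flagged.append(match.group(1) if match else "(no src)")
    return flagged
```

A clean result here does not mean CLS is good in the field; it only means this particular cause is absent, which is exactly why CrUX data should override it when available.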

**Pattern that works:** Run the technical audit first, fix Critical and High items, then run content and schema audits. Technical issues — especially indexation problems — can hide the impact of content and schema improvements. Fix the foundation before optimizing on top of it.

## Source and attribution

Originally written by [AgriciDaniel](https://github.com/AgriciDaniel). The canonical SKILL.md and supporting files live in the [`seo-technical` folder](https://github.com/AgriciDaniel/claude-seo/tree/main/skills/seo-technical) of the [claude-seo repository](https://github.com/AgriciDaniel/claude-seo).

License: MIT. Install, adapt, and redistribute with attribution preserved.

This page documents the skill from a practitioner's perspective. For the formal spec and updates, defer to the source repo.