accessibility-auditor
A Claude Code agent persona that audits interfaces against WCAG 2.2 AA — runs automated scans, then forces manual screen-reader and keyboard testing because automation only catches roughly 30% of real issues.
Get a real accessibility audit, not a green Lighthouse score that hides barriers
Trigger phrases
Phrases that activate this skill when typed to Claude Code:
- audit accessibility
- WCAG audit
- screen-reader test this
What it does
accessibility-auditor is the WCAG-audit persona in the agency-agents collection. It runs an automated baseline (axe-core, Lighthouse), then forces manual assistive-technology testing — keyboard-only navigation, screen reader walkthroughs (VoiceOver, NVDA, JAWS), 200% and 400% zoom, reduced motion, high contrast — because the persona’s stated rule is that automation catches roughly 30% of real issues. The remaining 70% require sitting with the product and a screen reader.
Every finding cites a specific WCAG 2.2 success criterion by number and name, classifies severity (Critical / Serious / Moderate / Minor), and ships with a concrete code-level fix. The persona refuses what it calls “compliance theater” — a green Lighthouse score on a product that’s unusable with a screen reader is treated as a louder failure, not a quieter one.
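The repo doesn't quote an exact report layout, but given the fields above, a single finding might look like this (the specific criterion, selectors, and file details are illustrative, not taken from a real audit):

```text
[SERIOUS] 2.4.7 Focus Visible (Level AA)
Impact:   Keyboard users cannot tell which control is active in the header nav.
Evidence: keyboard-pass transcript, step 4 (no visible focus ring on nav links)
Current:  a global "outline: none" reset suppresses all focus indicators
Fix:      remove the blanket reset; style :focus-visible on interactive elements
Verify:   Tab through the header; every link shows a visible focus indicator
```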
When to use it
- Pre-launch audit for a public-facing product where ADA, EAA, or Section 508 exposure matters
- Auditing a custom component (modal, date picker, carousel, tabs) before it lands in the design system
- Re-audit after fixes to confirm remediation actually works in real assistive tech
- Reviewing a Lighthouse “100” page to find the actual barriers it didn’t catch
When not to reach for it:
- Internal-only tools where compliance isn’t a requirement and the user base is fully sighted (still nice to have, but the depth is overkill)
- Pages that aren’t built yet — design-system-level review with accessibility baked in earlier is cheaper than retrofitting
- Pure marketing copy review — the persona is structural, not editorial
Install
From msitarzewski/agency-agents at testing/testing-accessibility-auditor.md. Copy to `~/.claude/agents/` or use the repo’s installer. The persona expects an environment where you can run `npx @axe-core/cli` and `npx lighthouse` against a local or staging URL — manual screen-reader steps require an actual operator, not just the agent.
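A sketch of what the automated baseline looks like in practice. The staging URL and report filenames are placeholders; the flags are the ones the persona uses. Since a real run needs a live URL, the invocations are shown as comments and the executable part summarizes a small sample report in the axe-core JSON shape (an array of per-page results, each with a `violations` list):

```shell
# Baseline scans (placeholder URL; run against your own local/staging server):
#
#   npx @axe-core/cli https://staging.example.com \
#     --tags wcag2a,wcag2aa,wcag22aa --save axe-report.json
#   npx lighthouse https://staging.example.com \
#     --only-categories=accessibility --output=json --output-path=lh-report.json
#
# Sample axe-style report standing in for a real run:
cat > axe-report.json <<'EOF'
[{"url": "https://staging.example.com/",
  "violations": [
    {"id": "color-contrast", "impact": "serious",  "nodes": [{}, {}, {}]},
    {"id": "image-alt",      "impact": "critical", "nodes": [{}]},
    {"id": "region",         "impact": "moderate", "nodes": [{}]}
  ]}]
EOF
# Summarize violations per page, grouped by impact:
python3 - <<'EOF'
import json
for page in json.load(open("axe-report.json")):
    print(page["url"])
    for v in sorted(page["violations"], key=lambda v: v["impact"]):
        print(f'  {v["impact"]:>8}: {v["id"]} ({len(v["nodes"])} nodes)')
EOF
```

Whatever this prints is the floor, not the audit: the persona's whole point is that the manual passes below find the issues this scan cannot.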
What a session looks like
- Automated baseline. Run axe-core against every page with `--tags wcag2a,wcag2aa,wcag22aa`. Run Lighthouse with `--only-categories=accessibility`. Capture color-contrast failures, missing alt text, broken ARIA — the low-hanging fruit.
- Manual keyboard pass. Tab through every interactive element. Check for focus traps, missing focus indicators, illogical tab order, missing skip links. Every flow must complete keyboard-only.
- Screen-reader pass. Walk through critical user journeys with VoiceOver (Safari/macOS) or NVDA (Firefox/Windows). Check heading hierarchy, landmark regions, form-field labeling, live-region announcements, focus management on modals.
- Visual stress tests. Zoom to 200% and 400% — content overlap, horizontal scroll, hidden controls. Enable `prefers-reduced-motion` and verify animations respect it. Enable forced-colors mode and check legibility.
- Custom-component deep dive. Every custom widget (tabs, accordion, menu, carousel, date picker) is “guilty until proven innocent” — audit each against WAI-ARIA Authoring Practices.
- Report. Per-issue: WCAG criterion (number + name), severity, user impact, evidence (screenshot or transcript), current state, recommended fix, verification steps. Prioritize by user impact, not compliance level.
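For the re-audit case (confirming remediation after fixes), a quick automated sanity check is to diff the violation ids from a pre-fix and a post-fix axe run. The filenames and sample data here are hypothetical stand-ins for two real `--save` outputs:

```shell
# Stand-in pre-fix and post-fix axe reports (real ones come from two scans):
cat > before.json <<'EOF'
[{"violations": [{"id": "color-contrast"}, {"id": "image-alt"}, {"id": "region"}]}]
EOF
cat > after.json <<'EOF'
[{"violations": [{"id": "region"}]}]
EOF
# Diff the sets of violation ids: resolved, still open, and newly introduced.
python3 - <<'EOF'
import json
ids = lambda f: {v["id"] for p in json.load(open(f)) for v in p["violations"]}
before, after = ids("before.json"), ids("after.json")
print("resolved: ", sorted(before - after))
print("remaining:", sorted(after & before))
print("new:      ", sorted(after - before))
EOF
```

An empty "remaining" list only clears the automated 30%; the re-audit still has to repeat the keyboard and screen-reader passes to count as verification.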
The discipline that makes it work: step 3. Skipping the screen-reader pass and relying on automation is the failure mode the persona was built against.
Receipts
TODO — to be filled in from a real audit session. Once the persona has been pointed at a live page or component, this section will capture: how the automated-vs-manual issue split actually broke down (is the 70/30 claim real?), which custom components surfaced the most barriers, and the WCAG criteria that came up most often.
Source and attribution
From Michael Sitarzewski’s agency-agents repository.
License: MIT.
Quote from the persona body, verbatim: “If it’s not tested with a screen reader, it’s not accessible.” The whole audit flow is built to make that rule operational rather than aspirational.