From accessibility-agents
Drive a power-user accessibility audit of a single web route — runtime + static in parallel, cited by WCAG SC, consolidated into a CSV report, and followed by a batched fix-and-verify loop. Use this whenever the user asks to audit, scan, review, or fix accessibility for a specific URL, route, page, component, screen, or view of a web app — including phrasings like "check a11y on /checkout", "audit my settings page", "WCAG audit the dashboard", "find accessibility bugs on this route", "fix the a11y issues on /foo", or any request that mentions a URL + source file together. Prefer this skill over firing individual agents ad-hoc, because it enforces parallel specialist dispatch, WCAG SC citations, structured CSV output, pre-fix snapshots, batched remediation, and post-fix verification — the coordination pieces Claude otherwise skips.
```sh
npx claudepluginhub gautam-bansal-toddle/a11y-plugin --plugin accessibility-agents
```
This skill uses the workspace's default tool permissions.
This skill orchestrates the `accessibility-agents` plugin (80 specialist agents + 24 MCP tools under the `a11y-agent-team` server) to produce a complete, WCAG-cited, file-grounded audit of one web route and drive a controlled fix loop against it.
It exists because the plugin's pieces are individually powerful but the model, left to its own devices, tends to serialize specialists, skip runtime verification, produce unstructured findings, and fix issues one-at-a-time instead of by WCAG criterion. Those four mistakes cost 3–5× more turns and produce inconsistent fixes. This skill codifies the coordination so the plugin actually operates at full capacity.
Before proceeding, confirm you have:
- The live URL of the route (e.g., http://localhost:3000/platform/settings/templates). The dev server must be running; the runtime pass depends on it. If the URL is not reachable, stop and tell the user.
- The source file(s) behind the route (e.g., src/routes/.../Templates.js or src/pages/settings/templates.tsx). This pins the static pass: specialists read these exact files rather than guessing.
- CWD at the repo root (check with pwd). The UserPromptSubmit hook detects web projects by grepping package.json in CWD; if you are outside the repo, delegation won't fire automatically.
- Auth state ($A11Y_AUTH_STATE_FILE), or its absence for public routes. Phase 0 covers capturing and wiring this if needed.

If any of these is missing or ambiguous, ask once, then proceed.
Most real app routes sit behind a login. Before Phase 1, do a two-second probe against the URL. If the page returns a login form, redirects to /login, or the interactive element count comes back as zero from run_playwright_keyboard_scan, the runtime pass is hitting a login wall — every runtime finding will be noise about the login page, not the intended route.
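The three login-wall signals above can be folded into one check. A minimal sketch, assuming hypothetical probe fields (`finalUrl`, `hasLoginForm`, `interactiveCount` are illustrative names, not part of the plugin's API):

```javascript
// Hypothetical helper: decide whether the runtime probe hit a login wall.
// The three inputs mirror the signals described above.
function isLoginWall({ finalUrl, hasLoginForm, interactiveCount }) {
  const path = new URL(finalUrl).pathname;
  return (
    path.startsWith("/login") || // probe was redirected to the login route
    hasLoginForm ||              // page renders a login form instead of the route
    interactiveCount === 0       // keyboard scan found nothing to tab to
  );
}
```

If any signal fires, go through the auth-state capture below before running the full audit.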
Resolve this by pointing the MCP tools at a saved authenticated session via the A11Y_AUTH_STATE_FILE environment variable. The variable takes a path to a Playwright storageState JSON (cookies + localStorage). When set, every runtime tool launches a Chromium context with that state loaded, so scans run as the logged-in user.
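For reference, a Playwright storageState file is plain JSON with two top-level keys, `cookies` and `origins`. The values below are placeholders, not real credentials:

```json
{
  "cookies": [
    {
      "name": "session",
      "value": "<opaque-token>",
      "domain": "localhost",
      "path": "/",
      "httpOnly": true,
      "secure": false,
      "sameSite": "Lax",
      "expires": -1
    }
  ],
  "origins": [
    {
      "origin": "http://localhost:3000",
      "localStorage": [{ "name": "auth_token", "value": "<opaque-token>" }]
    }
  ]
}
```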
To capture the state file, use the bundled helper:
```sh
node ${CLAUDE_PLUGIN_ROOT}/skills/route-audit/scripts/capture-auth-state.mjs \
  http://localhost:3000/login \
  --out $(pwd)/.a11y-auth/state.json
```
This opens a visible browser, waits for the user to log in manually, then writes the state file. Tell the user to:
```sh
export A11Y_AUTH_STATE_FILE=$(pwd)/.a11y-auth/state.json
cd <repo-root>
claude --plugin-dir /Users/gautambansal/Coding/a11y-plugin
```
Add .a11y-auth/ to .gitignore — the state file is effectively a credential.
If the route is public, skip all of this and go straight to Phase 1.
Output a one-paragraph plan first: which specialists will run, which MCP tools will run, and why each is relevant to the route. Wait for user "go" only if the user has been explicit about wanting a preview; otherwise proceed directly — most users want execution, not permission.
Then dispatch in a single assistant message with multiple Agent tool calls so the runs execute concurrently. Serial dispatch wastes ~60% of wall time on this workflow.
Runtime pass (a11y-agent-team tools, against the live URL). Always run these five on the first audit of a route:
| Tool | What it catches |
|---|---|
run_axe_scan | axe-core WCAG 2.1 AA violations in the current DOM |
run_playwright_a11y_tree | What a screen reader actually exposes (names/roles/states) |
run_playwright_keyboard_scan | Tab order, focus traps, reachability of interactive elements |
run_playwright_contrast_scan | Runtime text contrast (catches things computed CSS hides from static) |
run_playwright_viewport_scan | Reflow/overflow at 320px, 768px, 1024px, 1440px and 200% zoom |
Run run_axe_scan in multiple UI states whenever the route has interactive affordances (drawers, modals, forms, wizard steps, validation). At minimum: initial load. Additionally, if applicable:
Capture each state's axe output to reports/a11y/snapshots/axe-<state-slug>-before.json so post-fix diffs are possible.
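The snapshot naming can be sketched as a small helper. The slugging rule here is an assumption for illustration; the skill only requires that each UI state maps to a stable `<state-slug>`:

```javascript
// Hypothetical: derive the snapshot path for one UI state's axe output.
// phase is "before" (pre-fix baseline) or "after" (post-fix verification).
function axeSnapshotPath(state, phase) {
  const slug = state
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics to single dashes
    .replace(/^-|-$/g, "");      // trim leading/trailing dashes
  return `reports/a11y/snapshots/axe-${slug}-${phase}.json`;
}
```

The stable slug is what makes before/after diffs line up per state.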
Static pass (accessibility-agents:accessibility-lead). Dispatch the lead with an explicit specialist list. The lead has a bias toward running only a subset if you don't name them, so name them. Base the list on what the route renders, not what it's named:
Always include: aria-specialist, keyboard-navigator, contrast-master, alt-text-headings, cognitive-accessibility, i18n-accessibility. Add conditionally:

- forms-specialist: any `<input>`, `<select>`, `<textarea>`, or form library usage
- modal-specialist: any dialog, drawer, popover, tooltip, or portal pattern
- live-region-controller: any toast, notification, loading spinner, or dynamic status text
- tables-data-specialist: any `<table>` or grid component
- media-accessibility: any `<video>`, `<audio>`, or `<iframe>` embeds
- data-visualization-accessibility: any chart/graph library (recharts, d3, chart.js)
- web-component-specialist: any custom element (`<my-el>`) or shadow DOM
- mobile-accessibility: if the viewport scan surfaces mobile-specific issues

Before specialists report findings, instruct the lead to call get_accessibility_guidelines (MCP resource) for each component type involved. Every finding must be tagged with the exact Success Criterion (e.g., 2.1.1 Keyboard; 4.1.2 Name, Role, Value; 1.4.3 Contrast (Minimum)). This matters because fix suggestions without WCAG anchors can't be filed as tickets and tend to drift into style preferences.
On second and subsequent audits of files in the same repo, instruct the lead to call check_audit_cache first. If a file's hash is unchanged since the last audit, skip it and reuse prior findings. This is how incremental audits stay fast on monorepos.
Collect findings from Phase 1 and invoke accessibility-agents:web-csv-reporter:
CSV path: reports/a11y/<route-slug>-<YYYY-MM-DD>.csv
Columns (exact order): id, severity, wcag_criterion, source, tool_or_agent, file, line_range, element_selector, issue, fix_suggestion, status
- severity: P0 (AT-blocking WCAG violation), P1 (serious AT degradation), P2 (usability issue), P3 (best practice).
- source: runtime (from an MCP scan tool) or static (from a specialist agent).
- status: all rows start as open.
- Sort order: severity ascending, then wcag_criterion.

Markdown summary: reports/a11y/<route-slug>-<YYYY-MM-DD>.md, one page with:
Present the report summary to the user in chat:
Then ask a scoped question: which severity tier should I fix now — all P0s, P0+P1, or none yet? Do not start fixing until the user answers.
The reason: severity triage is subjective enough that automated judgment creates rework. Giving the user a clear decision point at this gate saves more time than it costs.
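The CSV sort rule above (severity ascending, then wcag_criterion) can be sketched as a comparator. The row shape follows the column list; numeric-aware comparison is used so SC "1.4.3" sorts before "1.4.11":

```javascript
// Sketch: order CSV rows by severity tier (P0 first), then by WCAG SC.
const severityRank = { P0: 0, P1: 1, P2: 2, P3: 3 };

function compareRows(a, b) {
  const bySeverity = severityRank[a.severity] - severityRank[b.severity];
  if (bySeverity !== 0) return bySeverity;
  // numeric collation: "1.4.3" < "1.4.11", which plain string compare gets wrong
  return a.wcag_criterion.localeCompare(b.wcag_criterion, "en", { numeric: true });
}
```

Sorting this way keeps all AT-blocking rows at the top of the report, which is what makes the severity-tier question in the gate easy to answer at a glance.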
Hand the approved severity tier and the CSV path to accessibility-agents:web-issue-fixer. Give these specific instructions:
- Group rows by wcag_criterion before fixing. Repeated ARIA or focus-management issues should be fixed as one coordinated change, not one row at a time, so the fixes share a style.
- After each group, verify with the matching runtime tools: run_axe_scan + run_playwright_a11y_tree, run_playwright_keyboard_scan, run_playwright_contrast_scan, or run_playwright_viewport_scan.
- Mark status=fixed only after verification passes. If a verifier regresses (new violations appear), revert the edits in that group, mark the rows open with a note in issue, and continue to the next group.

The Edit/Write gate from the plugin's hooks will already be unlocked from Phase 1 (the accessibility-lead touched the session marker). If the user started a fresh session between audit and fix, re-invoke the lead first or the fix tool calls will be blocked.
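Batching by criterion is just a group-by over the open rows in the approved tiers. A minimal sketch (the row shape follows the CSV columns from Phase 2):

```javascript
// Sketch: batch open CSV rows of the approved severity tiers by WCAG criterion,
// so each batch becomes one coordinated fix plus one verification run.
function batchByCriterion(rows, tiers /* e.g. ["P0", "P1"] */) {
  const batches = new Map();
  for (const row of rows) {
    if (row.status !== "open" || !tiers.includes(row.severity)) continue;
    if (!batches.has(row.wcag_criterion)) batches.set(row.wcag_criterion, []);
    batches.get(row.wcag_criterion).push(row);
  }
  return batches; // Map<wcag_criterion, rows[]>
}
```

Rows already fixed, or outside the approved tiers, never enter a batch, which is what keeps the loop scoped to the user's answer at the gate.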
After the fix loop completes:
- Re-capture each state's axe output to reports/a11y/snapshots/axe-<state-slug>-after.json.
- Call update_audit_cache to record current file hashes so the next audit is incremental.
- Append one line to reports/a11y/CHANGELOG.md in this shape:
```
YYYY-MM-DD /route/path P0: 4→0 P1: 7→2 P2: 12→8 files: 3 deferred: "P1 modal focus (tracking #123)"
```
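A sketch of emitting that line; the function name and the before/after counts object are illustrative, not part of the plugin:

```javascript
// Hypothetical: format one CHANGELOG.md line in the shape shown above.
// before/after are per-tier counts, e.g. { P0: 4, P1: 7, P2: 12 }.
function changelogLine(date, route, before, after, filesTouched, deferred) {
  const tiers = ["P0", "P1", "P2"]
    .map((t) => `${t}: ${before[t]}\u2192${after[t]}`)
    .join(" ");
  const tail = deferred ? ` deferred: "${deferred}"` : "";
  return `${date} ${route} ${tiers} files: ${filesTouched}${tail}`;
}
```

One line per run keeps the changelog greppable by date or route.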
For a route that was audited earlier today and a small edit was just made: do not dispatch accessibility-lead for read-only verification. That adds a dispatch hop, and the lead has no value to add when you already know what to check.

Other things to avoid:

- Do not skip run_playwright_a11y_tree because "axe already ran". They catch different bugs: axe misses semantic/naming issues that the tree exposes directly.
- Do not emit findings without a wcag_criterion.
- Do not keep scanning through a login wall (redirect to /login, visible login form). Stop, instruct the user to capture auth state via the Phase 0 script, then retry. Scanning the login page itself is not the audit the user asked for.

MCP tools (a11y-agent-team server):

Runtime (live URL required): run_axe_scan, run_playwright_a11y_tree, run_playwright_keyboard_scan, run_playwright_contrast_scan, run_playwright_viewport_scan.
Static HTML / source: check_heading_structure, check_link_text, check_form_labels, check_contrast, check_color_blindness, check_reading_level.
Meta / guidance: get_accessibility_guidelines (WCAG rules by component type), check_audit_cache, update_audit_cache.
Documents (not used by this skill — route audits only): scan_pdf_document, scan_office_document, batch_scan_documents, run_verapdf_scan, convert_pdf_form_to_html, fix_document_metadata, fix_document_headings.
Media / text: validate_caption_file, check_reading_level.
Reporting: generate_accessibility_statement (skip unless the user specifically asks for a public statement).
Agents (accessibility-agents:*). Coordinator: accessibility-lead (always the entry point for the static pass).
Web specialists (use these for routes): aria-specialist, keyboard-navigator, contrast-master, forms-specialist, modal-specialist, alt-text-headings, live-region-controller, cognitive-accessibility, i18n-accessibility, mobile-accessibility, tables-data-specialist, media-accessibility, data-visualization-accessibility, web-component-specialist.
Reporting/remediation: web-csv-reporter, web-issue-fixer, playwright-verifier.
Skip non-web specialists (office, PDF, EPUB, desktop, markdown) — they're for other workflows.
A run of this skill has succeeded when all of the following are true:
- A report exists at reports/a11y/<route-slug>-<date>.csv with every row WCAG-cited.
- Every status=fixed row was verified by a runtime tool after the fix.
- The audit cache (.a11y-cache.json) was updated.