Accessibility testing and auditing skill. Activates when user needs to verify WCAG 2.1 AA/AAA compliance, audit color contrast, keyboard navigation, screen reader compatibility, or fix accessibility issues. Integrates with Axe, Pa11y, and Lighthouse. Combines automated scanning with manual checklist review. Triggers on: /godmode:a11y, "check accessibility", "WCAG audit", "a11y review", or as pre-ship quality gate.
Part of the /godmode:a11y and /godmode:ship workflows. First, determine which pages, components, or flows to audit:
A11Y AUDIT SCOPE:
Target: <page/component/entire app>
WCAG level: <AA (default) | AAA>
Run automated accessibility scanners in sequence:
# Install if needed
npm install --save-dev @axe-core/cli
# Install if needed
npm install --save-dev pa11y pa11y-ci
# Run Lighthouse accessibility category
npx lighthouse <url> --only-categories=accessibility --output=json --output-path=./a11y-report.json
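Lighthouse writes the accessibility score (0–1) under `categories.accessibility.score` in its JSON report. A minimal Python sketch for pulling that score out of the report generated by the command above (helper names are illustrative):

```python
import json

def lighthouse_a11y_score(report: dict) -> float:
    """Extract the accessibility score (0-100) from a Lighthouse JSON report."""
    return report["categories"]["accessibility"]["score"] * 100

def load_score(path: str = "./a11y-report.json") -> float:
    # Path matches the --output-path used in the command above
    with open(path) as f:
        return lighthouse_a11y_score(json.load(f))

# Inlined report fragment for illustration:
sample = {"categories": {"accessibility": {"score": 0.87}}}
print(round(lighthouse_a11y_score(sample), 1))  # 87.0 -> below the 90 gate
```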
AUTOMATED SCAN RESULTS:
Axe findings: <N> violations, <N> incomplete, <N> passes
Pa11y findings: <N> errors, <N> warnings, <N> notices
Systematic check against WCAG 2.1 principles:
PERCEIVABLE — Can all users perceive the content?
1.1 Text Alternatives:
OPERABLE — Can all users operate the interface?
2.1 Keyboard Accessible:
UNDERSTANDABLE — Can all users understand the content?
3.1 Readable:
ROBUST — Does the content work with assistive technologies?
4.1 Compatible:
Analyze every foreground/background color combination:
COLOR CONTRAST ANALYSIS:
| Element | FG | BG | Ratio | Result |
|--|--|--|--|--|
Tools for color contrast:
# Using color-contrast-checker
npx color-contrast-checker --fg "#999" --bg "#fff"
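CLI ratios can be cross-checked by hand: WCAG 2.1 defines contrast as `(L1 + 0.05) / (L2 + 0.05)` over relative luminance, where L1 is the lighter color. A self-contained Python sketch of that formula:

```python
def _channel(c8: int) -> float:
    # sRGB channel -> linear value, per the WCAG 2.1 relative luminance definition
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    h = hex_color.lstrip("#")
    if len(h) == 3:  # expand shorthand like #999
        h = "".join(ch * 2 for ch in h)
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast(fg: str, bg: str) -> float:
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast("#999", "#fff"), 2))  # 2.85 -> fails 4.5:1 for normal text
```

For #999 on #fff this yields about 2.85:1, which fails the 4.5:1 AA threshold for normal text but would also fail the 3:1 threshold for large text.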
Test every interactive flow using keyboard only:
KEYBOARD NAVIGATION AUDIT:
Flow: <user flow being tested>
Test these common keyboard patterns:
KEYBOARD PATTERNS:
| Component | Expected Keyboard Behavior |
|--|--|
Verify content is announced correctly:
SCREEN READER AUDIT:
Screen reader: <VoiceOver (macOS) / NVDA (Windows) / JAWS / TalkBack (Android)>
Browser: <Safari / Chrome / Firefox>
ARIA live region checklist:
- [ ] Toast notifications use aria-live="polite" or role="status"
- [ ] Error alerts use aria-live="assertive" or role="alert"
- [ ] Loading states announced with aria-busy="true"
For each issue found:
### FINDING <N>: <Title>
**Severity:** CRITICAL | HIGH | MEDIUM | LOW
**WCAG criterion:** <number> — <name> (Level <A/AA/AAA>)
Impact: <Who is affected and how — specific disability/assistive technology>
Remediation:
<!-- The accessible fix -->
<fixed code>
Verification: <How to confirm the fix — tool command or manual test>
Severity definitions:
- **CRITICAL**: Complete blocker for assistive technology users (e.g. missing form labels, keyboard traps, no alt text on functional images).
IF Lighthouse a11y score < 90: treat as NEEDS REMEDIATION.
WHEN axe reports > 0 critical violations: block deployment.
IF contrast ratio < 4.5:1 for normal text: flag as HIGH severity.
WHEN keyboard trap detected: flag as CRITICAL, fix before any other issue.
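Taken together, the four rules above amount to a small gate; a hedged Python sketch (function name and message strings are illustrative):

```python
def a11y_gate(lighthouse_score: float, axe_critical: int,
              min_contrast: float, keyboard_trap: bool) -> list:
    """Apply the IF/WHEN rules above; returns the actions they trigger."""
    actions = []
    if keyboard_trap:
        actions.append("CRITICAL: fix keyboard trap before any other issue")
    if axe_critical > 0:
        actions.append("block deployment")
    if min_contrast < 4.5:
        actions.append("HIGH: contrast below 4.5:1 for normal text")
    if lighthouse_score < 90:
        actions.append("NEEDS REMEDIATION")
    return actions

print(a11y_gate(95, 0, 7.0, False))  # [] -> nothing triggered
```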
```bash
# Run full automated a11y scan pipeline
npx @axe-core/cli http://localhost:3000 --exit
npx pa11y-ci --config .pa11yci.json
npx lighthouse http://localhost:3000 --only-categories=accessibility --output=json
```
AUTO-FIXABLE ISSUES:
1. Add missing alt="" to decorative images
2. Associate orphaned labels with inputs via for/id
3. Add aria-label to icon-only buttons
4. Fix heading hierarchy gaps
5. Add lang attribute to <html>
6. Add skip navigation link
7. Wrap form error messages in aria-live region
8. Add role="presentation" to layout tables
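Auto-fix 4 first needs a detector for hierarchy gaps. A rough Python sketch using a regex scan (a production fix should walk a real DOM/AST instead):

```python
import re

def heading_gaps(html: str) -> list:
    """Find places where the heading level jumps by more than one (e.g. h1 -> h3)."""
    levels = [int(m) for m in re.findall(r"<h([1-6])[\s>]", html, re.IGNORECASE)]
    return [(a, b) for a, b in zip(levels, levels[1:]) if b - a > 1]

print(heading_gaps("<h1>Title</h1><h3>Oops</h3><h4>Fine</h4>"))  # [(1, 3)]
```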
For each auto-fix:
FIX <N>: <description>
File: <path>
Before: <original code>
After: <fixed code>
WCAG: <criterion satisfied>
| ACCESSIBILITY AUDIT — <target> |
| WCAG Level: <AA/AAA> |
| Automated Scores: |
| Axe: <N> violations |
| Pa11y: <N> errors |
| MUST FIX before shipping: |
| 1. <CRITICAL/HIGH finding> |
| 2. <CRITICAL/HIGH finding> |
| SHOULD FIX: |
| 3. <MEDIUM finding> |
| 4. <MEDIUM finding> |
Verdicts: PASS | NEEDS REMEDIATION
AUTO-DETECT SEQUENCE:
1. Scan package.json / requirements.txt for framework:
- React/Next.js → check for jsx/tsx files, component patterns
- Vue → check for .vue files
- Angular → check for angular.json
- Vanilla → check for .html files
2. Detect UI library:
- grep for '@mui', 'antd', '@chakra-ui', 'tailwind', 'bootstrap'
3. Detect existing a11y tooling:
- Check devDependencies for axe-core, pa11y, jest-axe, cypress-axe
- Check for .pa11yci.json, .axe config files
4. Detect existing a11y patterns:
- grep for 'aria-', 'role=', 'alt=', 'sr-only', 'visually-hidden'
- grep for 'prefers-reduced-motion' in CSS/SCSS
5. Detect testing infrastructure:
- Storybook? → can use @storybook/addon-a11y
- Jest? → can use jest-axe
- Playwright/Cypress? → can use axe integration
6. Count components and pages in scope automatically
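Step 1 of the sequence can be sketched as a lookup over package.json dependencies (the precedence order shown is an assumption):

```python
def detect_framework(package_json: dict) -> str:
    """Map package.json dependencies to a framework label per step 1 above."""
    deps = {**package_json.get("dependencies", {}),
            **package_json.get("devDependencies", {})}
    if "next" in deps or "react" in deps:
        return "react"
    if "vue" in deps:
        return "vue"
    if "@angular/core" in deps:
        return "angular"
    return "vanilla"

print(detect_framework({"dependencies": {"react": "^18.0.0"}}))  # react
```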
Never ask to continue. Loop autonomously until zero CRITICAL/HIGH violations or budget exhausted.
MECHANICAL CONSTRAINTS — NON-NEGOTIABLE:
1. NEVER skip the manual checklist even if automated tools report 0 violations.
Automated tools catch 30-40% of issues. The checklist catches the rest.
2. NEVER mark PASS if any CRITICAL finding exists — regardless of Lighthouse score.
3. NEVER auto-fix without verifying the fix does not break existing functionality.
4. git commit BEFORE running verify — if verify reveals regression, revert the commit.
5. Every finding MUST include: severity, WCAG criterion, location, evidence, remediation.
6. Log all findings in TSV format for tracking:
SEVERITY\tWCAG\tLOCATION\tTOOL\tDESCRIPTION
7. Color contrast failures are ALWAYS HIGH or CRITICAL — no exceptions.
8. Keyboard traps are ALWAYS CRITICAL — no exceptions.
9. If auto-fix changes > 10 files, split into separate commits per concern.
10. Re-run automated scans AFTER applying fixes to confirm zero regressions.
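Constraint 6's TSV schema is easy to enforce with a tiny formatter; a Python sketch (the tab/newline validation is an added assumption) that keeps log lines parseable:

```python
def log_finding(severity: str, wcag: str, location: str,
                tool: str, description: str) -> str:
    """Format one finding per the SEVERITY\tWCAG\tLOCATION\tTOOL\tDESCRIPTION schema."""
    row = (severity, wcag, location, tool, description)
    for field in row:
        # Embedded tabs/newlines would corrupt the TSV log
        assert "\t" not in field and "\n" not in field
    return "\t".join(row)

line = log_finding("HIGH", "1.4.3", ".label", "axe", "contrast ratio 3.2:1 < 4.5:1")
print(line.split("\t"))
```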
A11Y AUDIT REPORT:
| Metric | Value |
|--|--|
| Pages audited | <N> |
| Components audited | <N> |
| Total violations | <N> |
| Critical | <N> (keyboard traps, no alt) |
| High | <N> (contrast, missing labels) |
| Medium | <N> (aria improvements) |
| Low | <N> (best practice suggestions) |
| Auto-fixed | <N> |
| Manual review | <N> |
| WCAG level | A / AA / AAA |
| Verdict | PASS / NEEDS REMEDIATION |
```
timestamp	skill	page	severity	wcag	element	tool	description	status
2026-03-20T14:00:00Z	a11y	/home	CRITICAL	2.1.2	.modal	manual	keyboard trap in modal	fixed
2026-03-20T14:01:00Z	a11y	/form	HIGH	1.4.3	.label	axe	contrast ratio 3.2:1 < 4.5:1	fixed
```
After EACH accessibility fix:
1. MEASURE: Re-run axe-core and manual check on the fixed component.
2. COMPARE: Did the violation count decrease? Did any new violations appear?
3. DECIDE:
- KEEP if: target violation fixed AND no new violations introduced
- DISCARD if: fix introduced new violations OR broke existing functionality
4. COMMIT kept changes. Revert discarded changes before fixing the next issue.
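The MEASURE/COMPARE/DECIDE loop reduces to one predicate; a minimal Python sketch:

```python
def decide(target_fixed: bool, new_violations: int, functionality_ok: bool) -> str:
    """KEEP only when the target violation is gone, nothing new appeared,
    and existing functionality still works; otherwise DISCARD (revert)."""
    if target_fixed and new_violations == 0 and functionality_ok:
        return "KEEP"
    return "DISCARD"

print(decide(target_fixed=True, new_violations=0, functionality_ok=True))  # KEEP
```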
## Keep/Discard
KEEP if: improvement verified. DISCARD if: regression or no change. Revert discards immediately.
## Stop Conditions
STOP when ANY of these are true: