Analyzes web and mobile applications for WCAG 2.2 AA accessibility compliance. Audits code (HTML, React, React Native, SwiftUI), interprets automated tool output, and processes manual tester findings. Produces structured audit reports that implementation agents can action. Use this skill when conducting accessibility audits, reviewing code for a11y issues, or synthesizing accessibility test results.
/plugin marketplace add srstomp/pokayokay
/plugin install srstomp-pokayokay@srstomp/pokayokay

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled references: references/code-analysis.md, references/common-issues.md, references/tester-findings.md, references/wcag-22-aa.md

Analyze applications for WCAG 2.2 AA compliance and produce actionable audit reports.
┌─────────────────────────────────────────────────────────────┐
│ INPUTS │
├─────────────────┬─────────────────┬─────────────────────────┤
│ Code │ Tool Output │ Tester Findings │
│ (HTML, React, │ (axe, Light- │ (manual testing │
│ RN, SwiftUI) │ house, etc.) │ reports) │
└────────┬────────┴────────┬────────┴────────┬────────────────┘
│ │ │
└─────────────────┼─────────────────┘
▼
┌────────────────────────┐
│ ANALYSIS PROCESS │
│ • Map to WCAG 2.2 AA │
│ • Classify severity │
│ • Identify patterns │
│ • Note remediation │
└───────────┬────────────┘
▼
┌────────────────────────┐
│ AUDIT REPORT │
│ Structured for │
│ implementation agent │
└────────────────────────┘
| Principle | Meaning | Key Questions |
|---|---|---|
| Perceivable | Users can perceive content | Can everyone see/hear/read it? |
| Operable | Users can interact | Can everyone navigate and use controls? |
| Understandable | Users can comprehend | Is content and UI predictable and clear? |
| Robust | Works with assistive tech | Does it work with screen readers, etc.? |
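As a concrete illustration, the hypothetical React component below packs one violation per principle into a few lines. The component and the criterion numbers in the comments are illustrative only, not drawn from any audited codebase.

```tsx
import * as React from "react";

// Hypothetical component showing one violation per POUR principle.
export function BrokenSearch() {
  return (
    <div>
      {/* Perceivable: meaningful image with no text alternative (1.1.1 Non-text Content) */}
      <img src="/logo.png" />

      {/* Operable: click-only "button" that keyboard users cannot reach or activate (2.1.1 Keyboard) */}
      <div onClick={() => console.log("search")}>Search</div>

      {/* Understandable: input with no label, so its purpose is unclear (3.3.2 Labels or Instructions) */}
      <input type="text" />

      {/* Robust: invalid ARIA role that assistive tech cannot interpret (4.1.2 Name, Role, Value) */}
      <span role="push-button">Submit</span>
    </div>
  );
}
```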
Use this scale for all findings:
| Severity | Definition | Example | Priority |
|---|---|---|---|
| Critical | Blocks access entirely | No keyboard navigation, missing alt text on key images | P0 — Fix immediately |
| Serious | Major barrier, workaround difficult | Poor contrast, form errors not announced | P1 — Fix before release |
| Moderate | Barrier exists, workaround possible | Focus order confusing, missing skip links | P2 — Fix soon |
| Minor | Inconvenience, not a barrier | Redundant alt text, minor heading hierarchy issues | P3 — Fix when able |
Can the user complete the task?
├── No → Is there a workaround?
│ ├── No → CRITICAL
│ └── Yes, but difficult → SERIOUS
└── Yes → Is the experience degraded?
├── Significantly → MODERATE
└── Slightly → MINOR
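If severity assignment is partially automated, the tree can be expressed as a small TypeScript helper. This is a minimal sketch; the type and function names are illustrative, not part of any required API.

```ts
type Severity = "critical" | "serious" | "moderate" | "minor";

interface TaskAssessment {
  canCompleteTask: boolean;        // Can the user complete the task at all?
  workaroundExists: boolean;       // Only meaningful when the task is blocked
  significantlyDegraded: boolean;  // Task completes, but the experience suffers badly
}

// Mirrors the decision tree above: blocked tasks are critical or serious,
// completed-but-degraded tasks are moderate or minor.
function classifySeverity(a: TaskAssessment): Severity {
  if (!a.canCompleteTask) {
    return a.workaroundExists ? "serious" : "critical";
  }
  return a.significantlyDegraded ? "moderate" : "minor";
}
```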
Use this structure for all audit reports:
# Accessibility Audit Report
## Summary
| Metric | Count |
|--------|-------|
| Critical | X |
| Serious | X |
| Moderate | X |
| Minor | X |
| **Total Issues** | X |
**Overall Assessment**: [Pass / Fail / Conditional Pass]
**WCAG Version**: 2.2 AA
**Platform**: [Web / iOS / Android / React Native]
**Audit Date**: [Date]
**Auditor**: [Agent/Human]
## Critical Issues
### [Issue ID]: [Brief Title]
- **WCAG Criterion**: [X.X.X Name]
- **Severity**: Critical
- **Location**: [File/Component/Screen]
- **Description**: [What's wrong]
- **Impact**: [Who is affected and how]
- **Code Sample** (if applicable):
[Problematic code]
- **Remediation**: [How to fix]
- **Remediation Code** (if applicable):
[Fixed code]
[Repeat for each critical issue]
## Serious Issues
[Same structure]
## Moderate Issues
[Same structure]
## Minor Issues
[Same structure]
## Passed Criteria
[List WCAG criteria that were checked and passed]
## Out of Scope
[Anything not tested and why]
## Recommendations
[Overall recommendations beyond specific fixes]
## Testing Methodology
- **Automated Tools**: [List tools used]
- **Manual Testing**: [Describe manual checks]
- **Assistive Tech Tested**: [Screen readers, etc.]
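When the report is also consumed programmatically by an implementation agent, the same structure can travel as data. The TypeScript shape below mirrors the template's fields; it is a suggested representation, not a required schema.

```ts
type Severity = "critical" | "serious" | "moderate" | "minor";

interface Finding {
  id: string;                // e.g. "A11Y-001"
  title: string;
  wcagCriterion: string;     // e.g. "1.4.3 Contrast (Minimum)"
  severity: Severity;
  location: string;          // file, component, or screen
  description: string;       // what's wrong
  impact: string;            // who is affected and how
  codeSample?: string;       // problematic code, if applicable
  remediation: string;       // how to fix
  remediationCode?: string;  // fixed code, if applicable
}

interface AuditReport {
  wcagVersion: "2.2 AA";
  platform: "Web" | "iOS" | "Android" | "React Native";
  auditDate: string;
  auditor: string;
  assessment: "Pass" | "Fail" | "Conditional Pass";
  findings: Finding[];
  passedCriteria: string[];
  outOfScope: string[];
  recommendations: string[];
}
```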
| Input | Analysis Approach |
|---|---|
| Code | Static analysis against WCAG criteria |
| Automated tool output | Map findings to WCAG, verify, remove false positives |
| Tester findings | Standardize format, map to WCAG, classify severity |
| Mixed | Synthesize all sources, deduplicate |
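For automated tool output, normalization can be largely mechanical. The sketch below assumes the axe-core JSON result shape (violations with `id`, `impact`, `tags`, and `nodes`); verify the fields against the axe version actually used.

```ts
// Minimal subset of the axe-core violation shape this sketch relies on.
interface AxeNode { html: string; target: string[]; failureSummary?: string; }
interface AxeViolation {
  id: string;
  impact: "minor" | "moderate" | "serious" | "critical" | null;
  help: string;
  tags: string[];   // includes WCAG tags such as "wcag2aa", "wcag143"
  nodes: AxeNode[];
}

interface RawFinding {
  ruleId: string;
  severity: string;
  wcagTags: string[];
  location: string;
  codeSample: string;
  description: string;
}

// One finding per affected node, so each has a concrete location and code sample.
function axeToFindings(violations: AxeViolation[]): RawFinding[] {
  return violations.flatMap((v) =>
    v.nodes.map((node) => ({
      ruleId: v.id,
      severity: v.impact ?? "moderate",  // impact can be null; re-classify manually
      wcagTags: v.tags.filter((t) => /^wcag\d/.test(t)),
      location: node.target.join(" "),   // CSS selector path to the element
      codeSample: node.html,
      description: v.help,
    }))
  );
}
```

Severities reported this way are provisional; re-classify them against the scale above before they go into the report.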
- **Map to WCAG**: Every finding must map to a specific WCAG criterion (for example, a missing `alt` attribute maps to 1.1.1 Non-text Content). If a finding does not map to a WCAG 2.2 AA criterion, record it as a best-practice recommendation rather than a compliance failure.
- **Classify severity**: Use the severity definitions above and apply them consistently across all input sources.
- **Identify patterns**: Look for systemic issues; the same root cause often recurs across components (see the grouping sketch after this list).
- **Note remediation**: Every issue needs actionable remediation that an implementation agent can apply without further investigation.
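Pattern detection can also be partly automated. A minimal sketch, assuming findings carry a `wcagCriterion` field as in the shapes above; the function name and the threshold are illustrative.

```ts
interface FindingLike { wcagCriterion: string; location: string; }

// Group findings by WCAG criterion; a criterion that recurs often usually
// points to a shared root cause (a design token, a shared component, a template).
function findSystemicIssues(
  findings: FindingLike[],
  threshold = 3
): Map<string, FindingLike[]> {
  const byCriterion = new Map<string, FindingLike[]>();
  for (const f of findings) {
    const group = byCriterion.get(f.wcagCriterion) ?? [];
    group.push(f);
    byCriterion.set(f.wcagCriterion, group);
  }
  // Keep only criteria that recur often enough to suggest a systemic issue.
  return new Map(Array.from(byCriterion).filter(([, group]) => group.length >= threshold));
}
```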
Quick checklist for code review. Details in references/code-analysis.md.
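One of the most frequent code-review findings is an icon-only control built from a `div`. A hypothetical before/after in React shows the kind of Code Sample and Remediation Code the report template expects.

```tsx
import * as React from "react";

// Before: no accessible name, no keyboard support. Screen readers announce
// nothing useful and keyboard users cannot activate it (2.1.1, 4.1.2).
export const CloseBefore = () => (
  <div onClick={() => history.back()}>
    <svg aria-hidden="true">{/* x icon */}</svg>
  </div>
);

// After: a native button with an accessible name; the icon stays decorative.
export const CloseAfter = () => (
  <button type="button" aria-label="Close dialog" onClick={() => history.back()}>
    <svg aria-hidden="true" focusable="false">{/* x icon */}</svg>
  </button>
);
```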
| Tool | Strength | Limitation |
|---|---|---|
| axe | Comprehensive, low false positives | Can't test keyboard nav, focus order |
| Lighthouse | Quick overview, integrated | Less detailed than axe |
| WAVE | Visual overlay helpful | Can be noisy |
| Accessibility Inspector (iOS) | Native iOS testing | Manual process |
| Android Accessibility Scanner | Native Android testing | Limited depth |
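Where a React project already has Jest and Testing Library set up, axe can also run inside component tests via jest-axe. The sketch assumes those packages are installed; `LoginForm` is a placeholder for whatever component is under audit.

```tsx
import * as React from "react";
import { render } from "@testing-library/react";
import { axe, toHaveNoViolations } from "jest-axe";
import { LoginForm } from "./LoginForm"; // placeholder component

expect.extend(toHaveNoViolations);

// Automated checks cover roughly what the table above describes; keyboard
// navigation and focus order still require manual testing.
test("LoginForm has no detectable axe violations", async () => {
  const { container } = render(<LoginForm />);
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```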
Manual tester reports may need standardization before they are merged into the report: map each observation to a WCAG criterion, assign severity using the scale above, and capture the location and reproduction steps. See references/tester-findings.md for the interpretation guide.
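A suggested shape for a standardized tester finding, with illustrative field names; adjust it to whatever the interpretation guide specifies.

```ts
// Standardized shape for a manual tester finding before it is merged into the report.
interface TesterFinding {
  reportedBy: string;
  environment: string;     // e.g. "iOS 17, Safari, VoiceOver"
  steps: string[];         // how to reproduce
  observed: string;        // what the tester saw or heard
  expected: string;        // what should have happened
  wcagCriterion?: string;  // filled in during analysis if the tester did not supply it
  severity?: "critical" | "serious" | "moderate" | "minor"; // assigned with the scale above
}
```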
References:
- references/wcag-22-aa.md
- references/code-analysis.md
- references/common-issues.md
- references/tester-findings.md