Adds comprehensive accessibility testing to CI/CD pipelines using axe-core Playwright integration or pa11y-ci. Automatically generates markdown reports for pull requests showing WCAG violations with severity levels, affected elements, and remediation guidance. This skill should be used when implementing accessibility CI checks, adding a11y tests to pipelines, generating accessibility reports, enforcing WCAG compliance, automating accessibility scans, or setting up PR accessibility gates. Trigger terms include a11y ci, accessibility pipeline, wcag ci, axe-core ci, pa11y ci, accessibility reports, a11y automation, accessibility gate, compliance check.
Automates WCAG compliance checks in CI/CD pipelines using axe-core or pa11y-ci. Generates detailed markdown reports for pull requests with violation severity, affected elements, and remediation guidance.
Install from the plugin marketplace:

```
/plugin marketplace add hopeoverture/worldbuilding-app-skills
/plugin install a11y-checker-ci@worldbuilding-app-skills
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled resources:

- assets/a11y-test.spec.ts
- assets/github-actions-a11y.yml
- assets/pa11y-config.json
- scripts/generate_a11y_report.py

Automated accessibility testing in CI/CD pipelines with comprehensive reporting.
To enforce accessibility standards in continuous integration, this skill configures automated WCAG compliance checks using industry-standard tools and generates detailed reports for every pull request.
Use this skill when:

- Implementing accessibility CI checks or adding a11y tests to pipelines
- Generating accessibility reports for pull requests
- Enforcing WCAG compliance or setting up PR accessibility gates
- Automating recurring accessibility scans
@axe-core/playwright: an industry-standard accessibility testing engine with Playwright integration.

Advantages:

- Runs inside existing Playwright tests, so authenticated pages and specific application states can be scanned
- Fine-grained control over WCAG tags, individual rules, and violation thresholds
- Detailed violation output (impact, affected nodes, remediation links) suited to report generation

pa11y-ci: a command-line accessibility testing tool for multiple URLs.

Advantages:

- Configuration-file driven, with no test code required
- Scans a list of URLs in a single run
- Can combine multiple runners (axe and HTML_CodeSniffer)
To select the appropriate tool:

Use @axe-core/playwright when:

- The project already uses Playwright for end-to-end testing
- Pages behind authentication or specific application states need to be scanned
- Per-rule configuration and violation thresholds are required

Use pa11y-ci when:

- A fixed list of URLs should be scanned without writing test code
- No Playwright setup exists and a configuration-only approach is preferred
- Results from multiple runners (axe and HTML_CodeSniffer) should be combined
For @axe-core/playwright:

```bash
npm install -D @axe-core/playwright
```

For pa11y-ci:

```bash
npm install -D pa11y-ci
```
Create test file using assets/a11y-test.spec.ts:
```typescript
import { test, expect } from '@playwright/test'
import AxeBuilder from '@axe-core/playwright'

test.describe('Accessibility Tests', () => {
  test('homepage meets WCAG standards', async ({ page }) => {
    await page.goto('/')

    const accessibilityScanResults = await new AxeBuilder({ page })
      .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
      .analyze()

    expect(accessibilityScanResults.violations).toEqual([])
  })
})
```
Create configuration using assets/pa11y-config.json:
```json
{
  "defaults": {
    "timeout": 30000,
    "chromeLaunchConfig": {
      "executablePath": "/usr/bin/chromium-browser",
      "args": ["--no-sandbox"]
    },
    "standard": "WCAG2AA",
    "runners": ["axe", "htmlcs"],
    "ignore": []
  },
  "urls": [
    "http://localhost:3000",
    "http://localhost:3000/entities",
    "http://localhost:3000/timeline"
  ]
}
```
Create report generator using scripts/generate_a11y_report.py:
```bash
python scripts/generate_a11y_report.py \
  --input test-results/a11y-results.json \
  --output accessibility-report.md \
  --format github
```
The script generates markdown reports with:

- A pass/fail status and total violation count
- A summary of violations grouped by severity (critical, serious, moderate, minor)
- The WCAG criteria and impact level for each violated rule
- The affected elements, with remediation guidance and links to rule documentation
- A progress comparison against the previous run
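The bundled script is the reference implementation; the sketch below only illustrates the core grouping-and-rendering step. It assumes the input file has already been reduced to a flat list of axe-core violation objects (the standard `id`, `impact`, `description`, `helpUrl`, and `nodes` fields); the exact shape of `test-results/a11y-results.json` depends on how the Playwright JSON reporter is configured, so treat this as a starting point rather than the actual script.

```python
# Minimal sketch of a report generator. Assumes the input JSON contains a
# flat "violations" list of axe-core violation objects; the bundled
# scripts/generate_a11y_report.py is more complete.
import argparse
import json
from collections import defaultdict

SEVERITY_ORDER = ["critical", "serious", "moderate", "minor"]


def load_violations(path):
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    # Assumption: the test run collected violations into a flat list.
    return data["violations"] if isinstance(data, dict) else data


def render_markdown(violations):
    by_impact = defaultdict(list)
    for violation in violations:
        by_impact[violation.get("impact", "minor")].append(violation)

    lines = ["# Accessibility Test Report", ""]
    lines.append(f"**Status:** {'Failed' if violations else 'Passed'}")
    lines.append(f"**Total Violations:** {len(violations)}")

    lines.append("\n## Summary by Severity")
    for impact in SEVERITY_ORDER:
        lines.append(f"- {impact.title()}: {len(by_impact[impact])}")

    lines.append("\n## Violations")
    for impact in SEVERITY_ORDER:
        for violation in by_impact[impact]:
            lines.append(f"\n### {violation['description']} ({violation['id']})")
            lines.append(f"**Impact:** {impact}")
            lines.append("**Affected Elements:**")
            for node in violation.get("nodes", []):
                lines.append(f"- `{node.get('html', '')}`")
            if violation.get("helpUrl"):
                lines.append(f"**More Info:** {violation['helpUrl']}")
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True)
    parser.add_argument("--output", required=True)
    parser.add_argument("--format", default="github")  # placeholder; only one format here
    args = parser.parse_args()

    with open(args.output, "w", encoding="utf-8") as f:
        f.write(render_markdown(load_violations(args.input)))
```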
Use template from assets/github-actions-a11y.yml:
```yaml
name: Accessibility Tests

on:
  pull_request:
    branches: [main, master]
  push:
    branches: [main, master]

jobs:
  a11y:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Build application
        run: npm run build

      - name: Start server
        run: npm start &

      - name: Wait for server
        run: npx wait-on http://localhost:3000 -t 60000

      - name: Run accessibility tests
        run: npm run test:a11y

      - name: Generate report
        if: always()
        run: |
          python scripts/generate_a11y_report.py \
            --input test-results/a11y-results.json \
            --output accessibility-report.md \
            --format github

      - name: Comment PR
        if: github.event_name == 'pull_request' && always()
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs')
            const report = fs.readFileSync('accessibility-report.md', 'utf8')
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: report
            })

      - name: Upload report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: accessibility-report
          path: |
            accessibility-report.md
            test-results/

      - name: Fail on violations
        if: failure()
        run: exit 1
```
Use template from assets/gitlab-ci-a11y.yml:
```yaml
accessibility-test:
  stage: test
  image: mcr.microsoft.com/playwright:v1.40.0-focal
  script:
    - npm ci
    - npm run build
    - npm start &
    - npx wait-on http://localhost:3000 -t 60000
    - npm run test:a11y
    - python scripts/generate_a11y_report.py
        --input test-results/a11y-results.json
        --output accessibility-report.md
        --format gitlab
  artifacts:
    when: always
    paths:
      - accessibility-report.md
      - test-results/
    reports:
      junit: test-results/junit.xml
  only:
    - merge_requests
    - main
```
Add to package.json:
```json
{
  "scripts": {
    "test:a11y": "playwright test a11y.spec.ts",
    "test:a11y:ci": "playwright test a11y.spec.ts --reporter=json",
    "pa11y": "pa11y-ci --config .pa11yci.json"
  }
}
```
Example of a generated report:

````markdown
# Accessibility Test Report

**Status:** [ERROR] Failed
**Total Violations:** 12
**Pages Tested:** 5
**WCAG Level:** AA
**Date:** 2025-01-15

## Summary by Severity

- [CRITICAL] Critical: 2
- [SERIOUS] Serious: 5
- [MODERATE] Moderate: 3
- [MINOR] Minor: 2

## Violations

### [CRITICAL] Critical (2)

#### 1. Form elements must have labels (form-field-multiple-labels)

**WCAG Criteria:** 3.3.2 (Level A)
**Impact:** Critical
**Occurrences:** 3 elements

**Description:**
Form fields should have exactly one associated label element.

**Affected Elements:**
- Line 45: `<input type="text" name="entity-name">`
- Line 67: `<input type="email" name="user-email">`
- Line 89: `<select name="entity-type">`

**How to Fix:**
Add a `<label>` element with a `for` attribute matching the input's `id`:

```html
<label for="entity-name">Entity Name</label>
<input id="entity-name" type="text" name="entity-name">
```

**More Info:** https://dequeuniversity.com/rules/axe/4.7/label

---

## Progress

| Metric | Previous | Current | Change |
|--------|----------|---------|--------|
| Total Violations | 15 | 12 | [OK] -3 |
| Critical | 3 | 2 | [OK] -1 |
| Serious | 7 | 5 | [OK] -2 |
| Moderate | 4 | 3 | [OK] -1 |
| Minor | 1 | 2 | [ERROR] +1 |
````
To fail builds on specific violations, configure thresholds:
```typescript
const results = await new AxeBuilder({ page }).analyze()

// Fail on any critical violations
const critical = results.violations.filter(v => v.impact === 'critical')
expect(critical).toHaveLength(0)

// Allow up to 5 moderate violations
const moderate = results.violations.filter(v => v.impact === 'moderate')
expect(moderate.length).toBeLessThanOrEqual(5)
```
Use assets/a11y-thresholds.json:
```json
{
  "thresholds": {
    "critical": 0,
    "serious": 0,
    "moderate": 5,
    "minor": 10
  },
  "allowedViolations": [
    "color-contrast"
  ],
  "ignoreSelectors": [
    "#third-party-widget",
    "[data-testid='external-embed']"
  ]
}
```
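These thresholds can also be enforced outside the test run, for example as a dedicated CI step. The following gate script is a hypothetical helper rather than one of the bundled scripts; it assumes the results file exposes the same flat violations list as the report generator sketch above.

```python
# Hypothetical CI gate: fail the build when violation counts exceed the
# limits in assets/a11y-thresholds.json. Not one of the bundled scripts.
import json
import sys
from collections import Counter


def main(results_path, thresholds_path):
    with open(results_path, encoding="utf-8") as f:
        results = json.load(f)
    with open(thresholds_path, encoding="utf-8") as f:
        config = json.load(f)

    allowed = set(config.get("allowedViolations", []))
    limits = config.get("thresholds", {})

    # Ignore rules explicitly allowed in the thresholds file.
    violations = [v for v in results.get("violations", []) if v.get("id") not in allowed]
    counts = Counter(v.get("impact", "minor") for v in violations)

    failures = [
        f"{impact}: {counts.get(impact, 0)} violations (limit {limit})"
        for impact, limit in limits.items()
        if counts.get(impact, 0) > limit
    ]
    for line in failures:
        print(line)
    sys.exit(1 if failures else 0)


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```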
To disable or configure specific rules:
```typescript
// Skip specific rules entirely
const results = await new AxeBuilder({ page })
  .disableRules(['color-contrast'])
  .analyze()

// Or limit the scan to an explicit set of rule IDs
// (withRules takes an array of rule IDs, not a configuration object;
// per-rule options are passed through the axe run options instead)
const scoped = await new AxeBuilder({ page })
  .withRules(['label', 'image-alt'])
  .analyze()
```
Test different page types:
```typescript
const pages = [
  { url: '/', name: 'Homepage' },
  { url: '/entities', name: 'Entity List' },
  { url: '/timeline', name: 'Timeline View' }
]

for (const { url, name } of pages) {
  test(`${name} accessibility`, async ({ page }) => {
    await page.goto(url)
    const results = await new AxeBuilder({ page }).analyze()
    expect(results.violations).toEqual([])
  })
}
```
Test pages requiring authentication:
```typescript
test.use({ storageState: 'auth.json' })

test('dashboard accessibility', async ({ page }) => {
  await page.goto('/dashboard')
  const results = await new AxeBuilder({ page }).analyze()
  expect(results.violations).toEqual([])
})
```
Create custom report templates in assets/report-templates/:
- github-template.md - GitHub PR comments
- gitlab-template.md - GitLab MR comments
- slack-template.md - Slack notifications
- html-template.html - HTML reports

Configure report distribution:
```bash
python scripts/generate_a11y_report.py \
  --input results.json \
  --output-dir reports/ \
  --formats github gitlab slack html \
  --slack-webhook $SLACK_WEBHOOK \
  --github-token $GITHUB_TOKEN
```
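For the Slack target, distribution ultimately amounts to posting the rendered markdown to an incoming-webhook URL. A minimal sketch, assuming the webhook is provided via the `SLACK_WEBHOOK` environment variable and using only the standard library (the bundled script may handle this differently):

```python
# Hypothetical Slack delivery step: POST the rendered report to an
# incoming-webhook URL taken from the SLACK_WEBHOOK environment variable.
import json
import os
import urllib.request


def post_report_to_slack(report_path):
    webhook_url = os.environ["SLACK_WEBHOOK"]
    with open(report_path, encoding="utf-8") as f:
        report = f.read()

    # Slack rejects very large messages, so truncate the body defensively.
    payload = json.dumps({"text": report[:3000]}).encode("utf-8")
    request = urllib.request.Request(
        webhook_url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        response.read()


if __name__ == "__main__":
    post_report_to_slack("accessibility-report.md")
```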
Store results for trend analysis:
```bash
# Save results with timestamp
python scripts/save_a11y_results.py \
  --input test-results/a11y-results.json \
  --database a11y-history.db

# Generate trend report
python scripts/generate_trend_report.py \
  --database a11y-history.db \
  --days 30 \
  --output a11y-trends.md
```
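As an illustration of the storage step, `save_a11y_results.py` could persist one row of severity counts per run into SQLite roughly as follows; the bundled script defines the actual schema, so treat this as a sketch only:

```python
# Sketch of persisting one run's severity counts to SQLite for trend analysis.
# The bundled scripts/save_a11y_results.py may use a different schema.
import json
import sqlite3
import sys
from collections import Counter
from datetime import datetime, timezone


def save_results(input_path, db_path):
    with open(input_path, encoding="utf-8") as f:
        results = json.load(f)

    counts = Counter(v.get("impact", "minor") for v in results.get("violations", []))

    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS runs (
               timestamp TEXT,
               critical INTEGER,
               serious INTEGER,
               moderate INTEGER,
               minor INTEGER
           )"""
    )
    conn.execute(
        "INSERT INTO runs VALUES (?, ?, ?, ?, ?)",
        (
            datetime.now(timezone.utc).isoformat(),
            counts.get("critical", 0),
            counts.get("serious", 0),
            counts.get("moderate", 0),
            counts.get("minor", 0),
        ),
    )
    conn.commit()
    conn.close()


if __name__ == "__main__":
    save_results(sys.argv[1], sys.argv[2])
```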
Generate metrics for dashboards:
```json
{
  "timestamp": "2025-01-15T10:30:00Z",
  "commit": "abc123",
  "branch": "feature/new-ui",
  "violations": {
    "critical": 2,
    "serious": 5,
    "moderate": 3,
    "minor": 2
  },
  "wcagCompliance": {
    "a": false,
    "aa": false,
    "aaa": false
  },
  "pagesTested": 5,
  "totalElements": 1247,
  "testedElements": 1247
}
```
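One way to produce this document in CI is to derive the counts from the results file and pull commit metadata from the runner's environment. The sketch below assumes GitHub Actions environment variables (`GITHUB_SHA`, `GITHUB_REF_NAME`) and treats a level as compliant only when no violations remain, which is a simplification:

```python
# Hypothetical metrics emitter for dashboards. Assumes GitHub Actions
# environment variables for commit metadata and a flat violations list.
import json
import os
import sys
from collections import Counter
from datetime import datetime, timezone


def build_metrics(results_path, pages_tested):
    with open(results_path, encoding="utf-8") as f:
        results = json.load(f)

    counts = Counter(v.get("impact", "minor") for v in results.get("violations", []))
    total = sum(counts.values())

    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "commit": os.environ.get("GITHUB_SHA", "unknown"),
        "branch": os.environ.get("GITHUB_REF_NAME", "unknown"),
        "violations": {level: counts.get(level, 0) for level in ("critical", "serious", "moderate", "minor")},
        # Simplification: treat each level as compliant only when nothing failed.
        "wcagCompliance": {"a": total == 0, "aa": total == 0, "aaa": total == 0},
        "pagesTested": pages_tested,
    }


if __name__ == "__main__":
    print(json.dumps(build_metrics(sys.argv[1], int(sys.argv[2])), indent=2))
```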
Consult the following resources for detailed information:
- scripts/generate_a11y_report.py - Report generator
- scripts/save_a11y_results.py - Historical data storage
- scripts/generate_trend_report.py - Trend analysis
- assets/a11y-test.spec.ts - Playwright test template
- assets/pa11y-config.json - pa11y-ci configuration
- assets/github-actions-a11y.yml - GitHub Actions workflow
- assets/gitlab-ci-a11y.yml - GitLab CI configuration
- assets/a11y-thresholds.json - Violation thresholds
- references/wcag-criteria.md - WCAG standards reference
- references/common-violations.md - Common issues and fixes