# qa-lead: bootstrap
Bootstrap the quality documentation structure for a project. Creates `docs/quality/`, generates initial templates, and writes the domain `CLAUDE.md`. Idempotent — merges missing sections into existing files without overwriting.
Install:

```shell
npx claudepluginhub hpsgd/turtlestack --plugin qa-lead
```
Bootstrap the quality documentation structure for **$ARGUMENTS**.
```shell
mkdir -p docs/quality
```
For each file below, apply the safe merge pattern:
- If a file does not exist, create it with the content given below.
- If it exists, append only the missing sections, prefixed with this marker:

```markdown
<!-- Merged from qa-lead bootstrap v0.1.0 -->
```

#### File 1: `docs/quality/CLAUDE.md`

Create with this content (~130 lines):
# Quality Domain
This directory contains quality assurance documentation: test strategy, quality gates, and definitions of ready/done.
## What This Domain Covers
- **Test strategy** — overall approach to testing across the project
- **Quality gates** — automated and manual checkpoints before promotion
- **Definitions of Ready/Done** — shared team agreements on work-item lifecycle
- **Acceptance criteria** — BDD-format specifications for features
## Test Pyramid
Follow the test pyramid to balance speed, cost, and confidence:
```
          /  E2E  \        Few — slow, expensive, high confidence
         /----------\
       / Integration \     Some — moderate speed, test boundaries
     /----------------\
    /    Unit Tests    \   Many — fast, cheap, test logic
   /____________________\
```
| Layer | Proportion | Speed | What to Test |
|-------|-----------|-------|--------------|
| Unit | ~70% | < 10ms each | Pure logic, transformations, calculations |
| Integration | ~20% | < 1s each | API boundaries, DB queries, service interactions |
| E2E | ~10% | < 30s each | Critical user journeys only |
### Unit test conventions
- Prefer one assertion per test
- Arrange-Act-Assert (AAA) pattern
- Test behaviour, not implementation
- Name tests: `should [expected behaviour] when [condition]`
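For illustration, a unit test following these conventions might look like this (pytest style; `apply_discount` is a made-up example function, not project code):

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    return round(price * (1 - percent / 100), 2)

def test_should_reduce_price_when_discount_applied():
    # Arrange
    price, discount = 100.0, 20.0
    # Act
    result = apply_discount(price, discount)
    # Assert — one assertion, on behaviour rather than implementation
    assert result == 80.0
```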
### Integration test conventions
- Use real databases where practical (testcontainers or in-memory)
- Mock only external third-party services
- Test API contracts (request/response shapes)
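A contract check on response shape can be as simple as the sketch below (the `/users` payload and its fields are hypothetical; a real test would call the API under test):

```python
# Expected shape of a hypothetical GET /users/{id} response.
EXPECTED_USER_FIELDS = {"id": int, "email": str, "active": bool}

def assert_matches_contract(payload: dict) -> None:
    """Fail if a required field is missing or has the wrong type."""
    for field, ftype in EXPECTED_USER_FIELDS.items():
        assert field in payload, f"missing field: {field}"
        assert isinstance(payload[field], ftype), f"wrong type for: {field}"

def test_get_user_contract():
    # In a real integration test this would come from the API under test.
    response = {"id": 1, "email": "a@example.com", "active": True}
    assert_matches_contract(response)
```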
### E2E test conventions
- Cover the top 5–10 critical user journeys
- Run in CI on every PR (parallelised)
- Use page object pattern for UI tests
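The page object pattern keeps selectors out of the tests themselves; a minimal sketch (the `LoginPage`, its selectors, and the stub driver are all illustrative):

```python
class FakeDriver:
    """Stand-in for a real browser driver, just for illustration."""
    def __init__(self):
        self.fields, self.clicked = {}, []
    def type(self, selector, text):
        self.fields[selector] = text
    def click(self, selector):
        self.clicked.append(selector)

class LoginPage:
    """Page object: tests call intent-level methods, not raw
    selectors, so UI changes are absorbed in one place."""
    USER, PASS, SUBMIT = "#user", "#pass", "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER, user)
        self.driver.type(self.PASS, password)
        self.driver.click(self.SUBMIT)
```

A test then reads as `LoginPage(driver).login("alice", "s3cret")` with no selector knowledge of its own.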
## BDD Conventions
Use Given/When/Then format for acceptance criteria:
```gherkin
Feature: [Feature name]

  Scenario: [Scenario description]
    Given [precondition]
    When [action]
    Then [expected outcome]
```

## Three Amigos Sessions

Before development begins on any user story, hold a Three Amigos session (PO + Dev + QA) to refine it. Timebox to 30 minutes. Output: refined acceptance criteria in Given/When/Then format.

## Definition of Ready

A story is ready for development when it satisfies the checklist in `definition-of-ready.md`.

## Definition of Done

A story is done when it satisfies the checklist in `definition-of-done.md`.
## Tooling

| Tool | Purpose |
|---|---|
| GitHub Actions | CI test gates — runs tests on every PR |
| SonarCloud | Code coverage tracking and quality gate enforcement |

## Related Skills

| Skill | Purpose |
|---|---|
| `/qa-lead:test-strategy` | Create or review a test strategy |
| `/qa-lead:write-acceptance-criteria` | Write BDD acceptance criteria for a feature |
#### File 2: `docs/quality/test-strategy.md`
Create with this content:
```markdown
# Test Strategy — [Project Name]
> Replace [Project Name] with the actual project name.
## 1. Scope
### In scope
<!-- Which parts of the system are covered by this strategy -->
### Out of scope
<!-- What is explicitly NOT tested (e.g., third-party SaaS internals) -->
## 2. Test Levels
| Level | Tools | Scope | Run When |
|-------|-------|-------|----------|
| Unit | | Business logic, utilities | Every commit |
| Integration | | API boundaries, DB queries | Every PR |
| E2E | | Critical user journeys | Every PR |
| Performance | | Response times, throughput | Pre-release |
## 3. Test Environments
| Environment | Purpose | Data |
|-------------|---------|------|
| Local | Developer testing | Seed data |
| CI | Automated gates | Ephemeral |
| Staging | Pre-production validation | Anonymised production-like |
| Production | Smoke tests only | Real data |
## 4. Test Data Strategy
<!-- How test data is created, managed, and cleaned up -->
## 5. Defect Management
| Severity | Response Time | Resolution Target |
|----------|--------------|-------------------|
| Critical (P1) | Immediate | Same day |
| Major (P2) | Within 4h | Within sprint |
| Minor (P3) | Next standup | Backlog |
| Trivial (P4) | Triage | Best effort |
## 6. Risks and Mitigations
| Risk | Impact | Mitigation |
|------|--------|------------|
| Flaky tests erode trust | Medium | Zero-tolerance flaky policy |
| Low coverage areas | High | Coverage tracking per module |
| Slow test suite | Medium | Parallelisation, test pyramid adherence |
```

#### File 3: `docs/quality/definition-of-ready.md`

Create with this content:
# Definition of Ready
A user story is **ready** for development when ALL of the following are true:
## Required
- [ ] User story follows the format: "As a [persona], I want [action] so that [benefit]"
- [ ] Acceptance criteria written in Given/When/Then format
- [ ] Three Amigos session completed (PO + Dev + QA)
- [ ] Dependencies identified and available (APIs, designs, data)
- [ ] Story is estimated (story points)
- [ ] Story fits within a single sprint
## Recommended
- [ ] UX designs or wireframes attached (if UI work)
- [ ] API contract defined (if integration work)
- [ ] Edge cases and error scenarios documented
- [ ] Performance expectations stated (if applicable)
#### File 4: `docs/quality/definition-of-done.md`

Create with this content:
# Definition of Done
A user story is **done** when ALL of the following are true:
## Code
- [ ] Code implemented according to acceptance criteria
- [ ] Code peer-reviewed and approved
- [ ] No TODO/FIXME comments left without a linked issue
## Testing
- [ ] Unit tests written and passing
- [ ] Integration tests written and passing (where applicable)
- [ ] Acceptance criteria verified (automated preferred)
- [ ] No regressions in existing tests
- [ ] Code coverage maintained or improved
## Quality
- [ ] SonarCloud quality gate passes
- [ ] No new critical or blocker issues
- [ ] Linting and formatting checks pass
## Documentation
- [ ] Public API changes documented
- [ ] README updated (if behaviour changes)
- [ ] ADR written (if architectural decision made)
## Deployment
- [ ] Deployed to staging and verified
- [ ] Feature flag configured (if applicable)
- [ ] Monitoring/alerting in place (if new service)
#### File 5: `docs/quality/quality-gates.md`

Create with this content:
# Quality Gates
Quality gates are automated checkpoints that code must pass before promotion.
## Gate Definitions
### Gate 1: PR Merge
**Enforced by:** GitHub Actions CI pipeline
| Check | Tool | Threshold |
|-------|------|-----------|
| Unit tests | CI runner | 100% pass |
| Integration tests | CI runner | 100% pass |
| Code coverage | SonarCloud | >= project threshold |
| Static analysis | SonarCloud | No new critical/blocker |
| Linting | CI runner | Zero violations |
| Peer review | GitHub | >= 1 approval |
### Gate 2: Staging Promotion
**Enforced by:** GitHub Actions deploy pipeline
| Check | Tool | Threshold |
|-------|------|-----------|
| All Gate 1 checks | CI | Pass |
| E2E tests | CI runner | 100% pass |
| Security scan | SonarCloud / CI | No high/critical |
| Performance budget | CI runner | Within budget |
### Gate 3: Production Release
**Enforced by:** Release checklist + CI
| Check | Tool | Threshold |
|-------|------|-----------|
| All Gate 2 checks | CI | Pass |
| Release checklist | Manual | Complete |
| Rollback plan | Documentation | Documented |
| Smoke tests | Post-deploy CI | Pass |
## Overriding a Gate
Gates should not be bypassed. If a gate must be overridden:
1. Document the reason in the PR/release
2. Get approval from tech lead and QA lead
3. Create a follow-up issue to address the underlying problem
4. Time-bound the override (revert within one sprint)
After creating/merging all files, output a summary:
## Quality Bootstrap Complete
### Files created
- `docs/quality/CLAUDE.md` — domain conventions and skill reference
- `docs/quality/test-strategy.md` — test strategy template
- `docs/quality/definition-of-ready.md` — Definition of Ready checklist
- `docs/quality/definition-of-done.md` — Definition of Done checklist
- `docs/quality/quality-gates.md` — quality gate definitions
### Files merged
- (list any existing files where sections were appended)
### Next steps
- Fill in `test-strategy.md` with project-specific tools and scope
- Customise coverage thresholds in `quality-gates.md`
- Use `/qa-lead:test-strategy` to elaborate the test strategy