From magic-powers
Use when designing quality assurance processes — defining quality standards, integrating QA checkpoints into SDLC, creating process documentation, onboarding teams to quality practices, and building a quality-first engineering culture.
`npx claudepluginhub kienbui1995/magic-powers --plugin magic-powers`

This skill uses the workspace's default tool permissions.
- Example use: setting up QA for a new team or project from scratch
# Quality Standard: Code Quality
## Purpose
Ensure all code merged to main meets minimum quality standards.
## Standards
### Code Review (mandatory)
- Every PR requires at least 1 reviewer (2 for security-sensitive code)
- Reviewer checks: logic correctness, security, test coverage, style
- Author must not approve their own PR
- Review completed within 24 hours (48 hours for complex PRs)
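The review rules above can be sketched as a merge-gate check. This is a hypothetical helper, not part of any real review tool: the function and argument names are illustrative, and the approval list is assumed to come from whatever code-review platform the team uses.

```python
# Hypothetical merge gate for the review rules above: at least one
# approval, two for security-sensitive code, and the author's own
# approval never counts.
def review_gate(author: str, approvals: list, security_sensitive: bool = False) -> bool:
    """Return True when a PR has enough distinct non-author approvals."""
    required = 2 if security_sensitive else 1
    distinct_reviewers = {a for a in approvals if a != author}
    return len(distinct_reviewers) >= required
```

For example, `review_gate("alice", ["alice", "bob"])` passes with one valid approval, while the same approvals fail once `security_sensitive=True` raises the requirement to two.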
### Test Coverage
- Unit test branch coverage: >= 80% for business logic
- New code without tests: block merge (enforced by CI)
- Flaky tests: fix within 2 business days or quarantine
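A minimal sketch of the CI coverage check, assuming the test runner emits a Cobertura-style `coverage.xml` whose root element carries a `branch-rate` attribute (as coverage.py's XML report does when branch coverage is enabled). The file name and the 80% threshold mirror the standard above; the function name is an assumption.

```python
import xml.etree.ElementTree as ET

def branch_coverage_ok(report_path: str, threshold: float = 0.80) -> bool:
    """Return True if the report's overall branch-rate meets the threshold."""
    root = ET.parse(report_path).getroot()
    branch_rate = float(root.get("branch-rate", "0"))
    return branch_rate >= threshold
```

In CI, the pipeline step would call this and exit non-zero when it returns False, which is what makes the "block merge" rule enforceable rather than advisory.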
### Static Analysis
- SonarQube quality gate: no new Critical/Blocker issues
- Security scan: no new HIGH/CRITICAL vulnerabilities (Snyk/Dependabot)
- Linting: must pass (no warnings allowed on new code)
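The three static-analysis gates can be aggregated into one pass/fail decision per PR. A sketch under stated assumptions: the metric keys are hypothetical placeholders for whatever the SonarQube, Snyk/Dependabot, and linter integrations actually report.

```python
# Illustrative aggregation of the static-analysis gates above.
def static_analysis_gate(metrics: dict) -> list:
    """Return the list of failed checks; an empty list means the gate passes."""
    failures = []
    if metrics.get("sonar_new_critical_or_blocker", 0) > 0:
        failures.append("SonarQube: new Critical/Blocker issues")
    if metrics.get("new_high_or_critical_vulns", 0) > 0:
        failures.append("Security: new HIGH/CRITICAL vulnerabilities")
    if metrics.get("lint_warnings_on_new_code", 0) > 0:
        failures.append("Lint: warnings on new code")
    return failures
```

Returning the list of failures rather than a bare boolean lets CI post an actionable comment on the PR instead of an opaque red X.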
### Documentation
- Public APIs: OpenAPI spec updated
- Architecture decisions: ADR written for significant changes
- README updated if behavior changes
## Enforcement
- CI pipeline enforces coverage, linting, security scan
- PR template reminds reviewers of standards
- Weekly metrics review: teams consistently below standard receive coaching
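The weekly metrics review hinges on the defect escape rate: the share of all found defects that reached production instead of being caught pre-release. A minimal sketch of the arithmetic:

```python
def escape_rate(production_defects: int, prerelease_defects: int) -> float:
    """Fraction of all found defects that escaped to production (0.0 when none found)."""
    total = production_defects + prerelease_defects
    return production_defects / total if total else 0.0
```

For example, 5 production defects against 45 caught before release gives a 10% escape rate; the guard on `total` avoids dividing by zero in a quiet week.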
## QA Checkpoints Across the SDLC

### Requirements phase
- QA activity: requirements review for testability and ambiguity
- Output: testable requirements, acceptance criteria per story
- Gate: no story moves to development without acceptance criteria

### Design phase
- QA activity: review architecture for testability and observability
- Output: test approach defined (what level, which tools)
- Gate: test approach approved before implementation starts

### Development phase
- QA activity: TDD coaching, PR review participation, daily stand-up
- Output: unit tests alongside code, PR quality review
- Gate: coverage threshold enforced in CI

### Testing phase
- QA activity: test execution, defect reporting, regression
- Output: test report, defect list, go/no-go recommendation
- Gate: all P1/P2 defects resolved before release

### Release phase
- QA activity: release readiness review, sign-off
- Output: release sign-off document
- Gate: QA sign-off required before production deployment

### Post-release
- QA activity: production monitoring, escape rate measurement
- Output: monthly QA metrics report
- Gate: escape rate reviewed monthly
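The phase gates above can be encoded as data, which makes the "no story moves forward" rule mechanically checkable. A sketch assuming gate status arrives as booleans from CI and the tracking tool; the phase keys and descriptions are illustrative shorthand.

```python
from typing import Optional

# The pre-production gates, in SDLC order (dicts preserve insertion order).
SDLC_GATES = {
    "requirements": "acceptance criteria present for every story",
    "design": "test approach approved",
    "development": "coverage threshold met in CI",
    "testing": "all P1/P2 defects resolved",
    "release": "QA sign-off recorded",
}

def first_blocked_phase(status: dict) -> Optional[str]:
    """Return the earliest phase whose gate is unsatisfied, or None if all pass."""
    for phase in SDLC_GATES:
        if not status.get(phase, False):
            return phase
    return None
```

A missing entry is treated as an unsatisfied gate, so an empty status report blocks at "requirements" rather than silently passing.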
## Definition of Done (DoD)
A user story is Done when:
### Development
- [ ] Code reviewed and approved (2 reviewers for security changes)
- [ ] Unit tests written with >= 80% branch coverage for new code
- [ ] Integration tests added for new API endpoints
- [ ] No new SonarQube Critical/Blocker issues
- [ ] No new security vulnerabilities (Snyk)
- [ ] Code merged to main branch
### Testing
- [ ] AC (acceptance criteria) tested and passing
- [ ] Regression suite passes (no new failures)
- [ ] Cross-browser tested if UI changes
- [ ] Performance acceptable (no significant regression)
### Documentation
- [ ] API docs updated (if applicable)
- [ ] User-facing changes in release notes
### Operations
- [ ] Logging added for new operations
- [ ] Alerts configured for new failure modes
- [ ] Feature flag configured (for gradual rollout)
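The checklist above is easy to mirror in tooling so a story cannot be marked Done with items outstanding. A hypothetical machine-readable form; the item keys are illustrative shorthand for the checkboxes, not identifiers from any real tracker.

```python
# One key per DoD checkbox above, in checklist order.
DOD_ITEMS = (
    "code_reviewed", "unit_coverage_80", "integration_tests",
    "no_new_sonar_blockers", "no_new_vulns", "merged_to_main",
    "acceptance_criteria_pass", "regression_green", "cross_browser_ok",
    "performance_ok", "api_docs_updated", "release_notes_updated",
    "logging_added", "alerts_configured", "feature_flag_configured",
)

def missing_dod_items(completed: set) -> list:
    """Return checklist items not yet completed, in checklist order."""
    return [item for item in DOD_ITEMS if item not in completed]
```

An empty return value is the only state in which the story may be closed.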
## Quality Maturity Levels

### Level 1 — Reactive (fire-fighting)
- Defects found in production
- QA as final gatekeeper (bottleneck)
- No test automation

Actions: establish a basic process, introduce TDD, set up CI.

### Level 2 — Defined (process exists)
- Test coverage exists
- QA integrated in sprint
- Some automation

Actions: enforce standards, introduce code review, measure escape rate.

### Level 3 — Managed (metrics-driven)
- Escape rate tracked
- Shift-left in progress (more unit tests, fewer production bugs)
- Developer-owned quality

Actions: shift QA to a coaching role, improve metrics, automate quality gates.

### Level 4 — Optimizing (quality owned by everyone)
- Defects mostly caught in unit tests
- QA focuses on strategy and risk
- Quality metrics visible to all stakeholders

Assess the current level, define an improvement roadmap, and measure progress quarterly.
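The quarterly assessment above can be sketched as a scoring function: each level has observable signals, and a team sits at the highest consecutive level whose signals are all present. The signal names are assumptions, illustrative labels for the bullet points of the maturity model.

```python
# Signals per maturity level; a level counts only if every signal holds.
LEVEL_SIGNALS = {
    1: {"basic_process", "ci_set_up"},
    2: {"test_coverage_measured", "code_review", "qa_in_sprint"},
    3: {"escape_rate_tracked", "developer_owned_quality"},
    4: {"defects_caught_in_unit_tests", "metrics_visible_to_stakeholders"},
}

def assess_maturity(signals: set) -> int:
    """Return 0-4: the highest consecutive level whose signals are all present."""
    level = 0
    for lvl in sorted(LEVEL_SIGNALS):
        if LEVEL_SIGNALS[lvl] <= signals:  # subset test: all of this level's signals hold
            level = lvl
        else:
            break
    return level
```

Requiring consecutive levels reflects the model: a team tracking escape rate but without code review is still at Level 1, not Level 3.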
## Related Skills

- test-strategy — process design informs the overall test strategy
- qa-audit — the process designed here is what gets audited for compliance
- qa-risk-management — risk assessment informs which process controls to prioritize