Runs quality gates for linting, type checking, unit tests with coverage, spec compliance, and smoke checks at local/CI/deploy levels.
Install:

```shell
npx claudepluginhub mountainunicorn/add --plugin add
```
Execute quality gates to verify code meets production standards. This skill runs automated checks and produces a structured pass/fail report.
The Verify skill is the final checkpoint before deployment. It runs a sequence of quality gates determined by context and provides clear go/no-go status for each gate. Output is a structured report with pass/fail status and next steps.
The five-gate system ensures code quality at every stage:
Pre-flight checks:
1. Read `.add/config.json` (default maturity: alpha)
2. Count active rules for the maturity level: scan the `rules/` directory and read each rule's `maturity:` field. Rules tagged `maturity: poc` are active at all levels; rules tagged `maturity: beta` are active at beta and ga only
3. Determine execution level
4. Verify required files exist
5. Check environment
6. Check for session handoff: read `.add/handoff.md` if it exists

Purpose: Ensure code follows style conventions and has no obvious issues.
Steps:
1. Detect linter(s) from config or `package.json`
2. Run linter(s):

   ```shell
   # JavaScript
   npx eslint --max-warnings 0 src/ tests/

   # Python
   python -m flake8 src/ tests/ --max-line-length 100

   # Go
   gofmt -l .
   golangci-lint run ./...
   ```

3. Capture output
4. Handle `--fix` flag
5. Pass/Fail
Report:

```
Gate 1: Lint & Formatting
- Errors: 0
- Warnings: 0
- Status: ✓ PASS

Files checked: 12
Issues fixed (--fix): 0
```
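The capture-output and pass/fail pattern shared by the gates can be sketched as a small wrapper. This is illustrative only: `run_gate` and the demo command are not part of the skill.

```shell
# Run a gate command, capture its output, and report pass/fail
# based on the exit code. `run_gate` is an illustrative name.
run_gate() {
  local name="$1"; shift
  local output status
  output=$("$@" 2>&1); status=$?
  if [ "$status" -eq 0 ]; then
    echo "Gate: $name: ✓ PASS"
  else
    echo "Gate: $name: ✗ FAIL (exit $status)"
    echo "$output"
  fi
  return "$status"
}

# Demo with a trivially passing command; a real run would use e.g.
# run_gate "Lint" npx eslint --max-warnings 0 src/ tests/
run_gate "Lint" true
```

A real gate would pass its full linter or test command to `run_gate` and aggregate the return codes into the overall go/no-go status.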
Purpose: Catch type errors and ensure type safety.
Steps:
1. Detect type checker(s), e.g. `tsc --noEmit`, `flow`, or `mypy src/`
2. Run type checker:

   ```shell
   # TypeScript
   npx tsc --noEmit --strict

   # Python with MyPy
   python -m mypy src/ --strict
   ```

3. Capture output
4. Pass/Fail
Report:

```
Gate 2: Type Checking
- Errors: 0
- Status: ✓ PASS

Type checker: TypeScript (--strict)
Files checked: 12
```
Purpose: Verify all tests pass and code coverage meets threshold.
Steps:
1. Run test suite:

   ```shell
   # JavaScript
   npm test -- --coverage --silent

   # Python
   python -m pytest --cov=src --cov-report=term-missing tests/
   ```

2. Parse test output
3. Parse coverage output
4. Identify coverage gaps (if below threshold)
5. Pass/Fail criteria
Report:

```
Gate 3: Unit Tests & Coverage
- Tests Run: 32
- Tests Passed: 32
- Tests Failed: 0
- Duration: 2.3s
- Line Coverage: 87% (threshold: 80%) ✓
- Branch Coverage: 82% (threshold: 80%) ✓
- Status: ✓ PASS

Coverage gaps (< 80%):
- src/utils.ts: 73%
- src/api.ts: 79%
```
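The coverage-threshold comparison can be sketched as follows. The sample `coverage-summary.json` stands in for the file Jest writes when its `json-summary` coverage reporter is enabled; the paths and 80% threshold are assumptions for illustration.

```shell
# Sketch of the coverage-threshold check against a sample summary
# file. Runs in a temp dir so nothing in the project is touched.
cd "$(mktemp -d)"
mkdir -p coverage
cat > coverage/coverage-summary.json <<'EOF'
{"total": {"lines": {"pct": 87.0}, "branches": {"pct": 82.0}}}
EOF

threshold=80
# Extract the line-coverage percentage from the JSON summary
pct=$(python3 -c "import json; print(json.load(open('coverage/coverage-summary.json'))['total']['lines']['pct'])")

# Compare as floating point via awk, since shell arithmetic is integer-only
if awk -v p="$pct" -v t="$threshold" 'BEGIN { exit !(p >= t) }'; then
  echo "Line coverage ${pct}% (threshold: ${threshold}%) ✓"
else
  echo "Line coverage ${pct}% (threshold: ${threshold}%) ✗"
fi
```

The same comparison would be repeated per metric (line, branch, function) against the configured `minCoverage`.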
Purpose: Verify implementation meets spec and integration points work.
Steps:
1. Read spec file (`specs/{feature}.md`)
2. Verify test coverage
3. Run integration tests (if separate suite):

   ```shell
   npm test -- --testMatch="**/*.integration.test.ts"
   # or
   python -m pytest tests/integration/ -v
   ```

4. Spec compliance check
5. Pass/Fail
Report:

```
Gate 4: Spec Compliance & Integration Tests
- Spec: Feature X v1.0
- Acceptance Criteria: 5 total
  - AC-001: ✓ Tested and passing
  - AC-002: ✓ Tested and passing
  - AC-003: ✓ Tested and passing
  - AC-004: ✓ Tested and passing
  - AC-005: ✓ Tested and passing
- Integration Tests: 8 total
  - 8 passed, 0 failed
- Status: ✓ PASS

Test mapping file: tests/feature-mapping.md
All requirements traced and verified.
```
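Tracing acceptance-criteria IDs into the test suite can be sketched with `grep`. The AC IDs, file names, and sample contents here are illustrative, not the skill's actual format.

```shell
# Sketch of the spec-compliance trace: verify that every acceptance
# criterion ID appears in at least one test file. Sample data only.
cd "$(mktemp -d)"
mkdir -p tests
printf 'AC-001\nAC-002\n' > acceptance-criteria.txt
cat > tests/feature.test.ts <<'EOF'
// AC-001: covered by the tests in this file
// AC-002: covered by the tests in this file
EOF

missing=0
while read -r ac; do
  if grep -rq "$ac" tests/; then
    echo "$ac: ✓ Tested"
  else
    echo "$ac: ✗ No test references this criterion"
    missing=$((missing + 1))
  fi
done < acceptance-criteria.txt
echo "Untraced criteria: $missing"
```

A real run would extract the AC IDs from `specs/{feature}.md` and would still need a human or test-run check that the referencing tests actually assert the criterion, not merely mention it.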
Purpose: Quick health check after deployment to catch obvious breakage.
Steps:
1. Identify smoke test script (`npm run test:smoke` or `./scripts/smoke-tests.sh`)
2. Run smoke tests against the deployed environment:

   ```shell
   ENVIRONMENT=staging npm run test:smoke
   # or
   ./scripts/smoke-tests.sh production
   ```

3. Capture output
4. Pass/Fail
Report:

```
Gate 5: Smoke Tests (Post-Deploy)
- Environment: staging
- Smoke tests run: 6
- Passed: 6
- Failed: 0
- Duration: 15s
- Status: ✓ PASS

Endpoints verified:
✓ GET /api/health
✓ GET /api/version
✓ POST /api/submit (with test data)
✓ GET /api/status
✓ Database connection
✓ Cache layer
```
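An endpoint smoke check can be sketched as below. The `check` helper is illustrative; the demo hits a `file://` URL so the sketch runs offline, whereas a real smoke run would target the deployed environment's `http(s)` endpoints.

```shell
# Minimal endpoint check: fetch a URL, report ✓/✗ by exit status.
# `check` is an illustrative helper name, not part of the skill.
check() {
  local desc="$1" url="$2"
  if curl -fsS --max-time 10 "$url" > /dev/null 2>&1; then
    echo "✓ $desc"
  else
    echo "✗ $desc"
    return 1
  fi
}

# Offline demo: a local file stands in for a live endpoint
stub=$(mktemp); echo ok > "$stub"
check "GET /api/health (stubbed)" "file://$stub"
```

A real smoke suite would call `check` once per endpoint (health, version, status, a POST with test data) and fail the gate if any check returns non-zero.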
After each gate's core checks, run maturity-scaled checks from the quality-gates rule. These checks progressively tighten with project maturity.
Execution Pattern (repeat for each gate):
1. Read the maturity level from `.add/config.json` (default: alpha)
2. Load the checks from `rules/quality-gates.md`, section "Maturity-Scaled Checks"
3. Apply `qualityChecks` overrides from `.add/config.json` if present

Report Section (added to each gate's output):

```
Maturity-Scaled Checks ({maturity level}):
Code Quality: ✓ PASS (complexity max: 12, threshold: 15)
Security: ✓ PASS (no secrets, OWASP clean)
Readability: ⚠ ADVISORY (2 functions missing docstrings)
Performance: ⊘ SKIPPED (not checked at alpha)
Repo Hygiene: ✓ PASS (branch naming ok, .gitignore exists)

Advisory findings (non-blocking):
- src/utils.ts:45 — function missing docstring on export
- src/api.ts:12 — function missing docstring on export
```
Add this section to the overall verification report, after the gate summary table:
```
## Maturity-Scaled Checks Summary

Maturity Level: {level} (from .add/config.json)

| Category | Status | Blocking | Advisory | Details |
|----------|--------|----------|----------|---------|
| Code Quality | ✓ PASS | 0 | 0 | All metrics within thresholds |
| Security | ✓ PASS | 0 | 2 | OWASP spot-check: 2 minor findings |
| Readability | ⚠ WARN | 0 | 3 | 3 exports missing docstrings |
| Performance | ⊘ SKIP | — | — | Not checked at alpha maturity |
| Repo Hygiene | ✓ PASS | 0 | 0 | All hygiene checks pass |

Advisory Findings (non-blocking):
1. [Security] src/api.ts:34 — input not sanitized before template literal
2. [Security] src/auth.ts:89 — password comparison not constant-time
3. [Readability] src/utils.ts:45 — exported function missing docstring
4. [Readability] src/api.ts:12 — exported function missing docstring
5. [Readability] src/form.ts:78 — magic number 86400 (use named constant)
```
Execution levels:
- local: runs Gates 1-2. Typical use: before committing code locally.
- ci: runs Gates 1-3. Typical use: CI/CD pipeline on every push.
- deploy: runs Gates 1-4. Typical use: before deploying to production.
- post-deploy: runs Gate 5 only. Typical use: after deployment to verify health.
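Dispatching a level to its gate range can be sketched as follows. The level names (`local`, `ci`, `deploy`, `post-deploy`) follow this skill's level descriptions; the `LEVEL` variable and gate-number strings are illustrative.

```shell
# Map an execution level to the gates it runs. Defaults to the
# lightest level when LEVEL is unset.
level="${LEVEL:-local}"
case "$level" in
  local)       gates="1 2" ;;
  ci)          gates="1 2 3" ;;
  deploy)      gates="1 2 3 4" ;;
  post-deploy) gates="5" ;;
  *)           echo "Unknown level: $level" >&2; gates="" ;;
esac
echo "Level '$level' runs gates: $gates"
```

A runner would then iterate over `$gates`, invoking each gate in order and stopping (or collecting failures) according to policy.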
Generate a comprehensive verification report:
```
# Quality Gates Verification Report

## Execution Context
- Level: {level}
- Timestamp: {ISO timestamp}
- Feature: {feature-name}
- Branch: {git branch}
- Active Rules: {N} at {maturity} level

## Summary

Overall Status: ✓ ALL GATES PASSED [or ✗ GATES FAILED]

| Gate | Name | Status | Details |
|------|------|--------|---------|
| 1 | Lint & Formatting | ✓ PASS | 0 errors, 0 warnings |
| 2 | Type Checking | ✓ PASS | 0 type errors |
| 3 | Tests & Coverage | ✓ PASS | 32/32 tests, 87% coverage |
| 4 | Spec Compliance | ✓ PASS | 5/5 ACs tested |
| 5 | Smoke Tests | ⊘ SKIPPED | Not applicable at this level |

## Gate 1: Lint & Formatting
- Linter: eslint
- Status: ✓ PASS
- Errors: 0
- Warnings: 0
- Files checked: 12

## Gate 2: Type Checking
- Type checker: TypeScript (strict mode)
- Status: ✓ PASS
- Type errors: 0

## Gate 3: Unit Tests & Coverage
- Status: ✓ PASS
- Tests run: 32
- Passed: 32
- Failed: 0
- Duration: 2.3s
- Coverage:
  - Line: 87% (target: 80%) ✓
  - Branch: 82% (target: 80%) ✓
  - Function: 100% ✓

## Gate 4: Spec Compliance & Integration Tests
- Status: ✓ PASS
- ACs tested: 5/5
- Integration tests: 8 passed, 0 failed
- Spec: Feature X v1.0 (fully compliant)

## Gate 5: Smoke Tests
- Status: ⊘ SKIPPED (not applicable at 'deploy' level)

---

## Recommendations

Ready to proceed: ✓ YES
- All gates passed
- Code meets quality standards
- Safe to merge and deploy

Next steps:
1. [If all gates pass] Run /add:deploy to commit and push
2. [If gates fail] Fix issues and re-run /add:verify

Detailed gate results:
- No critical issues
- No warnings
- Coverage healthy

---

## Configuration Used
- test.framework: jest
- test.minCoverage: 80%
- code.lint: eslint with airbnb config
- ci.gates: [lint, types, tests, spec-compliance]
```
Use TaskCreate and TaskUpdate to report progress through the CLI spinner. Create tasks at the start of each major phase and mark them completed as they finish.
Tasks to create:
| Phase | Subject | activeForm |
|---|---|---|
| Pre-flight | Running pre-flight checks | Running pre-flight checks... |
| Gate 1 | Lint and formatting | Checking lint and formatting... |
| Gate 2 | Type checking | Running type checks... |
| Gate 3 | Tests and coverage | Running tests and checking coverage... |
| Gate 4 | Spec compliance | Verifying spec compliance... |
| Gate 5 | Smoke tests | Running smoke tests... |
| Maturity | Maturity-scaled checks | Running maturity-scaled checks... |
| Report | Generating verification report | Generating verification report... |
Mark each task in_progress when starting and completed when done. This gives the user real-time visibility into skill execution.
Error handling:
- Gate fails
- Tools not installed: run `npm install` or `pip install`
- Coverage below threshold
- Tests timeout: isolate a single test with `npm test -- --testNamePattern="test_name"`
- Environment issues
When `--fix` is provided:
- Gate 1 (Lint): auto-fix formatting issues
- Gate 2 (Types): cannot auto-fix type errors
- Gate 3 (Tests): cannot auto-fix test failures
- Gate 4 (Spec): cannot auto-fix spec mismatches
- Gate 5 (Smoke): cannot auto-fix smoke test failures
Example configuration:

```json
{
  "test": {
    "framework": "jest",
    "minCoverage": 80,
    "convention": "test_*.test.ts",
    "integrationConvention": "*.integration.test.ts"
  },
  "code": {
    "lint": "eslint",
    "types": "tsc --strict",
    "style": "prettier"
  },
  "ci": {
    "gates": ["lint", "types", "unit-tests", "spec-compliance", "integration-tests"],
    "smokeTestScript": "npm run test:smoke"
  }
}
```
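Reading a single value out of the configuration can be sketched as follows. The sample file contents and the `python3` one-liner are illustrative; the skill may parse the config differently.

```shell
# Sketch: read test.minCoverage from .add/config.json.
# Runs in a temp dir with a sample file so nothing real is touched.
cd "$(mktemp -d)"
mkdir -p .add
cat > .add/config.json <<'EOF'
{ "test": { "framework": "jest", "minCoverage": 80 } }
EOF

min_coverage=$(python3 -c "import json; print(json.load(open('.add/config.json'))['test']['minCoverage'])")
echo "Coverage threshold: ${min_coverage}%"
```

The extracted value would then feed the Gate 3 threshold comparison.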
After completing this skill, do BOTH:
1. Append one observation line to `.add/observations.md`:

   ```
   {YYYY-MM-DD HH:MM} | verify | {one-line summary of outcome} | {cost or benefit estimate}
   ```

   If `.add/observations.md` does not exist, create it with a `# Process Observations` header first.

2. Write a structured JSON learning entry per the checkpoint trigger in `rules/learning.md` (section: "After Verification"). Classify scope, write to the appropriate JSON file (`.add/learnings.json` or `~/.claude/add/library.json`), and regenerate the markdown view.
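The observation append can be sketched as below. It runs in a throwaway directory for illustration, and the summary and estimate strings are placeholders.

```shell
# Sketch: append an observation line, creating the file with its
# header first if it does not exist. Placeholder summary/estimate.
cd "$(mktemp -d)"
mkdir -p .add
if [ ! -f .add/observations.md ]; then
  printf '# Process Observations\n\n' > .add/observations.md
fi
printf '%s | verify | %s | %s\n' \
  "$(date '+%Y-%m-%d %H:%M')" \
  "all gates passed" \
  "saved manual review time" >> .add/observations.md
tail -n 1 .add/observations.md
```

Re-running the snippet only appends; the header guard keeps the file valid across repeated verifications.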