From terraphim-engineering-skills
Right-side-of-V verification/validation orchestration for a change or PR. Produces a single Quality Gate Report with evidence covering: code review, security audit, performance regression risk, requirements traceability, acceptance/UAT scenarios, and (when UI changes) visual regression testing. Use when preparing a PR for merge/release, doing a “ready?” check, or enforcing an engineering quality gate.
`npx claudepluginhub terraphim/terraphim-skills --plugin terraphim-engineering-skills`

This skill uses the workspace's default tool permissions.
You are a verification-and-validation lead. Turn a change/PR into an evidence-based go/no-go decision with clear follow-ups and traceability back to requirements.
Always run:
- code-review skill
- ubs-scanner skill (automated bug detection with UBS)
- requirements-traceability skill

Conditionally run:

- security-audit: if touching untrusted input, authn/authz, crypto, secrets, networking, deserialization, filesystem, sandboxing, or unsafe code.
- rust-performance: if touching hot paths, algorithms, allocations, concurrency, DB queries, serialization, or anything with latency/throughput budgets.
- acceptance-testing: if user-visible behavior, workflows, or API contracts change.
- visual-testing: if UI layout, styling, components, or rendering changes.

If unsure, default to "run the gate" and document assumptions.
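As a sketch of this routing logic, the conditional passes can be selected from the changed file paths. The glob patterns below are illustrative assumptions, not the skill's actual matching rules; real triage should inspect the diff contents, not just paths.

```shell
#!/bin/sh
# Illustrative sketch: pick conditional passes from changed file paths.
# The patterns are assumptions; adjust them to your repository layout.
select_passes() {
  passes=""
  for f in "$@"; do
    case "$f" in
      *auth*|*crypto*|*secret*|*net*) passes="$passes security-audit" ;;
    esac
    case "$f" in
      *bench*|*query*|*.sql)          passes="$passes rust-performance" ;;
    esac
    case "$f" in
      *api*|*workflow*)               passes="$passes acceptance-testing" ;;
    esac
    case "$f" in
      *.css|*component*|*layout*)     passes="$passes visual-testing" ;;
    esac
  done
  # "If unsure, default to run the gate": no match means run everything.
  [ -z "$passes" ] && passes="security-audit rust-performance acceptance-testing visual-testing"
  # Deduplicate while preserving order.
  echo "$passes" | tr ' ' '\n' | awk 'NF && !seen[$0]++'
}
```

For example, a diff touching `src/auth/login.rs` and `ui/components/nav.css` would select the security-audit and visual-testing passes.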
Every quality gate run includes an essentialism check. Before running specialist passes, evaluate:
| Check | Question | Status |
|---|---|---|
| Vital Few | Is this change essential to core goals? | |
| Scope Discipline | Was "Avoid At All Cost" list honored? | |
| Simplicity | Is this the simplest solution that works? | |
| Elimination | Were alternatives properly rejected? | |
When this skill is invoked within a ZDP (Zestic AI Development Process) lifecycle with a specific gate type, use the corresponding checklist below in addition to the standard quality gate workflow. This section can be ignored for standalone usage.
Each checklist item can be assessed with an epistemic status. Contested or Underdetermined items trigger escalation rather than a forced pass/fail. Use the perspective-investigation skill (if available) for governance-grade assessment of contested items.
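A minimal sketch of that escalation rule. Only "Contested" and "Underdetermined" come from the text above; treating every other status as a normal pass/fail assessment is an assumption.

```shell
#!/bin/sh
# Sketch: route a checklist item by its epistemic status.
route_item() {
  case "$1" in
    Contested|Underdetermined) echo "escalate" ;;  # hand to perspective-investigation
    *)                         echo "assess"   ;;  # normal pass/fail assessment
  esac
}
```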
Gate checklists reference these skills: /via-negativa-analysis, /product-vision, /business-scenario-design, /architecture, /acceptance-testing, /responsible-ai, /ai-config-management, /mlops-monitoring, /prompt-agent-spec.

The gate runs in four steps:

1. Intake + Risk Profile
2. Run the Specialist Passes
3. Synthesize
4. Produce the Quality Gate Report
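One way to sketch the Synthesize step, folding per-pass results into the report's three-way decision. The `pass|fail|followup` status vocabulary is an assumption drawn from the report's Decision line, not a defined interface.

```shell
#!/bin/sh
# Sketch: fold per-pass results into the gate decision.
# Input: "name:status" pairs, status one of pass|fail|followup (assumed).
decide() {
  decision="pass"
  for r in "$@"; do
    case "${r#*:}" in
      fail)     decision="fail" ;;  # any failing pass blocks the gate
      followup) if [ "$decision" = "pass" ]; then decision="pass-with-followups"; fi ;;
    esac
  done
  echo "$decision"
}
```

Under this sketch, `decide code-review:pass security:followup` yields "pass-with-followups", while any `fail` status dominates and yields "fail".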
# Quality Gate Report: {change-title}
## Decision
**Status**: ✅ Pass | ⚠️ Pass with Follow-ups | ❌ Fail
### Top Risks (max 5)
- {risk} -- {why it matters} -- {mitigation}
### Essentialism Status
- **Vital Few Alignment**: [Aligned / Not Aligned / Unclear]
- **Scope Discipline**: [Clean / Scope Creep Detected]
- **Simplicity Assessment**: [Optimal / Over-Engineered / Under-Designed]
- **Elimination Documentation**: [Complete / Incomplete / Missing]
## Scope
- **Changed areas**: {modules/files}
- **User impact**: {who/what changes}
- **Requirements in scope**: {REQ-...}
- **Out of scope**: {explicitly not covered}
## Verification Results
### Code Review
- **Findings**: {critical/important/suggestions summary}
- **Evidence**: {commands run, notes}
### Static Analysis (UBS)
- **Status**: {pass/fail}
- **Findings**: {critical}/{high}/{medium} issues
- **Command**: `ubs scan <scope> --severity=high,critical`
- **Blocking issues**: {list or "none"}
### Security
- **Findings**: {severity summary}
- **Evidence**: {audit steps, tools, outputs}
### Performance
- **Risk assessment**: {what could regress and why}
- **Benchmarks/profiles**: {before/after or "not run"}
- **Budgets**: {SLOs/perf targets and status}
### Requirements Traceability
- **Matrix**: {path/link}
- **Coverage summary**: {#reqs covered, #gaps}
### Acceptance (UAT)
- **Scenarios**: {count + reference}
- **Status**: {pass/fail/not run}
### Visual Regression
- **Screens covered**: {list}
- **Status**: {pass/fail/not run}
## Follow-ups
### Must Fix (Blocking)
- {item}
### Should Fix (Non-blocking)
- {item}
## Evidence Pack
- {logs, reports, commands, screenshots}