Evidence-before-claims verification gate — test suite, acceptance criteria, regression check, evidence collection
Verifies code changes against test suites and acceptance criteria to produce evidence-based pass/fail decisions.
Install with:

```shell
npx claudepluginhub jugrajsingh/skillgarden
```

This skill is limited to using a restricted set of tools.
Run evidence-based verification: test suite execution, acceptance criteria checking, regression detection, and gate decision.
$ARGUMENTS = optional scope (slug or "current plan").
If $ARGUMENTS is empty:
Detect the test runner from project files:
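One way to sketch this detection in shell. The file names and precedence below are common conventions, not an exhaustive or guaranteed list:

```shell
# Minimal sketch: infer the test runner from well-known project files.
# File names and precedence here are assumptions, not a complete list.
detect_runner() {
  if [ -f pytest.ini ] || [ -f pyproject.toml ] || [ -f setup.cfg ]; then
    echo "pytest"
  elif [ -f package.json ]; then
    echo "npm test"
  elif [ -f go.mod ]; then
    echo "go test ./..."
  elif [ -f Cargo.toml ]; then
    echo "cargo test"
  else
    echo "unknown"
  fi
}

detect_runner
```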
Execute the test suite and capture full output (stdout + stderr):
```shell
# Example for Python:
pytest --tb=short -v 2>&1
```
Record results:
If the test suite fails to run at all (import errors, configuration issues), report this as a critical failure immediately.
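For pytest, the "fails to run at all" case can be distinguished from ordinary test failures by exit code. A sketch, assuming pytest is the runner:

```shell
# Sketch: separate "tests failed" from "suite could not run at all",
# using pytest's documented exit codes:
#   0 = all passed   1 = some tests failed   2 = interrupted
#   3 = internal error   4 = usage error   5 = no tests collected
classify_pytest_exit() {
  case "$1" in
    0) echo "PASS" ;;
    1) echo "FAIL: some tests failed" ;;
    5) echo "CRITICAL: no tests collected" ;;
    *) echo "CRITICAL: suite failed to run (exit code $1)" ;;
  esac
}

# Usage: pytest --tb=short -v > test_output.log 2>&1; classify_pytest_exit "$?"
classify_pytest_exit 1
```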
Source acceptance criteria based on available context:
If slug provided and plan exists:
If no plan available:
Extract criteria from commit messages:
```shell
git log develop...HEAD --format="%B" | grep -i "acceptance\|criteria\|verify"
```
Extract from PR description if one exists
If no criteria found, report: "No acceptance criteria found. Verification limited to test suite results."
List each criterion with a number for reference.
For EACH acceptance criterion, collect concrete evidence:
Types of evidence:
Test output — grep test results for tests that exercise this criterion
```shell
pytest -v -k "{related_test_name}" 2>&1
```
File diff — show the implementation that satisfies the criterion
```shell
git diff develop...HEAD -- {relevant_file}
```
Command output — run a command that demonstrates the result
```shell
# e.g., python -c "from module import feature; print(feature())"
```
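The per-criterion evidence above can be captured mechanically. A sketch that saves each evidence command and its output to a numbered file; the `evidence/` layout is a hypothetical convention, not part of the skill:

```shell
# Sketch: run an evidence command for criterion N and save its output,
# prefixed with the command line, under evidence/criterion-N.txt.
# The evidence/ directory layout is a hypothetical convention.
collect_evidence() {
  n="$1"; shift
  mkdir -p evidence
  { printf '$ %s\n' "$*"; "$@" 2>&1; } > "evidence/criterion-${n}.txt"
}

collect_evidence 1 echo "feature output"
```

The saved file then serves as the concrete evidence reference cited in the report table.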
For each criterion, record:
Evidence rules:
Run the broader test suite (not just tests related to new changes):
```shell
# Full suite
pytest --tb=short 2>&1
```
Check for:
New warnings — compare test output for warning messages that weren't present before
Performance degradation — if test execution time is available from CI or previous runs, check for >2x increase
Flaky tests — if any test failed, run it again to check for flakiness:
```shell
pytest {failed_test} -v --count=2 2>&1
```
(If pytest-repeat is not available, run the test twice manually)
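The >2x performance threshold above can be checked mechanically when baseline and current suite durations (in seconds) are available; how those numbers are obtained is left to CI:

```shell
# Sketch: flag performance degradation when the current suite duration is
# more than 2x a recorded baseline (both in seconds; inputs assumed to
# come from CI or a previous run).
check_perf() {
  baseline="$1"; current="$2"
  if awk -v b="$baseline" -v c="$current" 'BEGIN { exit !(c > 2 * b) }'; then
    echo "degraded"
  else
    echo "normal"
  fi
}

check_perf 60 75   # prints "normal": 75s is under the 120s threshold
```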
Record regression check results:
## Verification Report
### Test Suite
- Runner: {runner_name}
- Status: {PASS or FAIL}
- Tests: {passed}/{total} ({skipped} skipped)
- Duration: {time}
- Warnings: {count}
### Acceptance Criteria
| # | Criterion | Status | Evidence |
|---|-----------|--------|----------|
| 1 | {criterion text} | VERIFIED | {concrete evidence reference} |
| 2 | {criterion text} | FAILED | {what is wrong} |
### Regression Check
- Broader suite: {PASS or FAIL}
- New warnings: {count}
- Performance: {normal or degraded}
- Flaky tests: {count}
### Gate Decision
{PASS or BLOCKED}
The gate is PASS only when ALL of the following are true:
The gate is BLOCKED when ANY of the following are true:
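Assuming the conditions are the clean states of the three report sections (suite PASS, every criterion VERIFIED, no regressions) — an inference from the report template, not a stated spec — the decision reduces to:

```shell
# Sketch: gate decision from the three report sections. The inputs and
# PASS conditions are inferred from the report template, not an official spec.
gate_decision() {
  suite_status="$1"     # PASS or FAIL
  failed_criteria="$2"  # count of criteria not VERIFIED
  regressions="$3"      # count of regressions found (treated as blocking)
  if [ "$suite_status" = "PASS" ] && [ "$failed_criteria" -eq 0 ] \
      && [ "$regressions" -eq 0 ]; then
    echo "PASS"
  else
    echo "BLOCKED"
  fi
}

gate_decision PASS 0 0   # prints "PASS"
```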
If BLOCKED, list every failure explicitly:
### Gate Decision
BLOCKED
Failures:
1. Test suite: {N} tests failing
2. Criterion #2: {what failed}
3. Regression: {what regressed}
Recommended actions:
1. {specific action to fix failure 1}
2. {specific action to fix failure 2}