From agent-capability-standard
Check correctness against tests, specs, or invariants; produce pass/fail evidence. Use when validating changes, testing hypotheses, checking invariants, or confirming behavior matches expectations.
```
npx claudepluginhub synaptiai/synapti-marketplace --plugin agent-capability-standard
```

This skill is limited to using the following tools: Bash (for test execution) and Read (for evidence files).
Before verification, gather current state:
```bash
pwd
git status --short 2>/dev/null || echo "Not a git repo"
git log --oneline -5 2>/dev/null || echo "No git history"
find . -type f -mmin -30 -not -path './.git/*' 2>/dev/null | head -10 || echo "None"
```

Execute verify to determine whether an artifact (code, configuration, state, output) conforms to a specification, passes tests, or satisfies declared invariants.
Success criteria:
Compatible schemas:
schemas/output_schema.yaml

| Parameter | Required | Type | Description |
|---|---|---|---|
| spec | Yes | string \| object | The specification, test suite, or invariants to check against |
| artifact | Yes | string \| object | The target being verified (file path, function, API endpoint, state) |
| constraints | No | object | Verification constraints: timeout, test filter, coverage threshold |
| check_types | No | array | Types of checks: assertion, invariant, regression, schema |
1. Load specification: Read the spec/test/invariant definition
2. Examine artifact: Inspect the target being verified
3. Execute checks: Run each verification check systematically
4. Analyze results: Determine verdict from check outcomes (a sketch of this step follows the list)
5. Ground claims: Attach evidence anchors to all findings, in the form `file:line`, `tool:bash:<command>`, or `test:<test_name>`
6. Format output: Structure results per output contract
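As a rough illustration, the check loop and verdict analysis might look like the following Python sketch. Everything here is schematic: `CheckResult` and the `analyze` helper are hypothetical stand-ins for the skill's internal logic, not its actual implementation.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str      # check identifier, e.g. "test_divide_by_zero"
    result: str    # "PASS" or "FAIL"
    evidence: str  # evidence anchor, e.g. "test:test_divide_by_zero"

def analyze(results: list[CheckResult], attempted: int) -> str:
    """Derive an overall verdict from individual check outcomes."""
    if attempted == 0 or len(results) < attempted:
        return "INCONCLUSIVE"  # some checks could not be executed
    if any(r.result == "FAIL" for r in results):
        return "FAIL"
    return "PASS"
```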
Return a structured object (a typed sketch of this contract follows the block):

```yaml
verdict: PASS | FAIL | INCONCLUSIVE
checks_run:
  - name: string                 # Check identifier
    type: assertion | invariant | regression | schema
    target: string               # What was checked
    result: PASS | FAIL
    evidence: string             # Output or file reference
failures:
  - check: string                # Which check failed
    expected: string             # What was expected
    actual: string               # What was observed
    severity: low | medium | high | critical
fix_suggestions:
  - failure_ref: string          # Reference to failure
    suggestion: string           # How to fix
    confidence: number           # 0.0-1.0 for suggestion quality
coverage:
  checked: integer               # Number of checks run
  total: integer                 # Total checks possible
  percentage: number             # Coverage percentage
confidence: number               # 0.0-1.0
evidence_anchors: ["file:line", "tool:bash:...", "test:..."]
assumptions: []                  # Explicit assumptions made
```
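For consumers that want to handle this object programmatically, the same contract can be written down as types. The following Python sketch is one possible encoding; the class names are illustrative, not part of the published schema.

```python
from typing import Literal, TypedDict

class Check(TypedDict):
    name: str
    type: Literal["assertion", "invariant", "regression", "schema"]
    target: str
    result: Literal["PASS", "FAIL"]
    evidence: str

class Failure(TypedDict):
    check: str
    expected: str
    actual: str
    severity: Literal["low", "medium", "high", "critical"]

class FixSuggestion(TypedDict):
    failure_ref: str
    suggestion: str
    confidence: float  # 0.0-1.0

class Coverage(TypedDict):
    checked: int
    total: int
    percentage: float

class VerifyOutput(TypedDict):
    verdict: Literal["PASS", "FAIL", "INCONCLUSIVE"]
    checks_run: list[Check]
    failures: list[Failure]
    fix_suggestions: list[FixSuggestion]
    coverage: Coverage
    confidence: float  # 0.0-1.0
    evidence_anchors: list[str]
    assumptions: list[str]
```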
| Field | Type | Description |
|---|---|---|
| verdict | enum | Overall verification result |
| checks_run | array | List of all checks executed with results |
| failures | array | Details of failed checks |
| fix_suggestions | array | Actionable suggestions to fix failures |
| coverage | object | Metrics about verification coverage |
| confidence | number | 0.0-1.0 based on evidence quality and check coverage |
| evidence_anchors | array | file:line, URLs, or tool output references |
| assumptions | array | Explicit assumptions about expected behavior |
Input:

```yaml
spec: "pytest tests/test_calculator.py -v"
artifact: "src/calculator.py"
constraints:
  timeout: "60s"
check_types: ["regression", "assertion"]
```
Output:

```yaml
verdict: FAIL
checks_run:
  - name: test_add_positive
    type: assertion
    target: Calculator.add
    result: PASS
    evidence: "tool:bash:pytest - PASSED"
  - name: test_divide_by_zero
    type: assertion
    target: Calculator.divide
    result: FAIL
    evidence: "tool:bash:pytest - AssertionError: Expected ZeroDivisionError"
failures:
  - check: test_divide_by_zero
    expected: "ZeroDivisionError raised"
    actual: "Returns infinity"
    severity: high
fix_suggestions:
  - failure_ref: test_divide_by_zero
    suggestion: "Add check for divisor == 0 before division in Calculator.divide()"
    confidence: 0.9
coverage:
  checked: 5
  total: 5
  percentage: 100.0
confidence: 0.95
evidence_anchors:
  - "tool:bash:pytest tests/test_calculator.py -v"
  - "src/calculator.py:42"
assumptions:
  - "Test environment has pytest installed"
  - "Tests are deterministic"
```
Evidence pattern: Test output captured from pytest execution, line numbers from stack traces.
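To make the suggested fix concrete, here is a sketch of what it could look like, assuming Calculator.divide is a plain Python method (the example never shows the class itself):

```python
class Calculator:
    def divide(self, a: float, b: float) -> float:
        # Guard before dividing, so the failing test's expectation holds.
        if b == 0:
            raise ZeroDivisionError("division by zero")
        return a / b
```

Plain Python division already raises ZeroDivisionError for numeric operands; the explicit guard matters when the original code swallows the error or returns float('inf'), as the observed "Returns infinity" behavior suggests.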
Input:

```yaml
spec:
  type: schema
  schema_path: "schemas/config.json"
artifact: "config/production.yaml"
```
Output:

```yaml
verdict: PASS
checks_run:
  - name: schema_validation
    type: schema
    target: config/production.yaml
    result: PASS
    evidence: "All 12 required fields present, types valid"
failures: []
fix_suggestions: []
coverage:
  checked: 1
  total: 1
  percentage: 100.0
confidence: 1.0
evidence_anchors:
  - "config/production.yaml:1-45"
  - "schemas/config.json:1"
assumptions:
  - "JSON Schema draft-07 validation semantics"
```
Apply the following verification patterns:
Verification tools: Bash (for test execution), Read (for evidence files)
```yaml
mutation: false
requires_checkpoint: false
requires_approval: false
risk: medium
```

Capability-specific rules:
This skill includes utility scripts for automated verification:
Located at: scripts/verify-state.sh
Usage:
```
./scripts/verify-state.sh [--git] [--files] [--tests] [--all]
```
Options:
- `--git` - Verify git state (uncommitted changes, branch status)
- `--files` - Verify file integrity (broken symlinks, empty files)
- `--tests` - Execute project tests (auto-detects npm/pytest/rspec)
- `--all` - Run all verification checks (default)

Output:

`.verification-report.json`

Example:
```bash
# Run all checks
./scripts/verify-state.sh --all

# Check only git and file integrity
./scripts/verify-state.sh --git --files
```
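For intuition, the `--git` check amounts to asking git whether the working tree is clean and recording the answer in the report. The sketch below is an illustrative Python equivalent, not the script itself; the report structure shown here is assumed, not taken from verify-state.sh.

```python
import json
import subprocess

# Ask git for a machine-readable summary of uncommitted changes.
status = subprocess.run(
    ["git", "status", "--porcelain"],
    capture_output=True, text=True, check=False,
)
dirty = bool(status.stdout.strip())

report = {
    "check": "git_state",
    "result": "FAIL" if dirty else "PASS",
    "evidence": "tool:bash:git status --porcelain",
}
with open(".verification-report.json", "w") as f:
    json.dump(report, f, indent=2)
```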
Commonly follows:

- model-schema - Provides the invariants/spec to verify against (REQUIRED)
- act-plan - Verify the outcome of executed changes
- plan - Provides expected postconditions to verify

Commonly precedes:
- audit - Record verification results for compliance
- rollback - If verify returns FAIL, trigger rollback (CAVR pattern; see the sketch after this list)
- critique - Analyze root cause of failures
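The rollback composition mentioned above can be pictured as a small control loop. A schematic sketch, with act, verify, and rollback as hypothetical callables standing in for the corresponding capabilities:

```python
from typing import Callable

def change_with_verification(
    act: Callable[[], None],
    verify: Callable[[], str],  # returns "PASS" | "FAIL" | "INCONCLUSIVE"
    rollback: Callable[[], None],
) -> str:
    """Apply a change, verify it, and roll back unless verification passes."""
    act()
    verdict = verify()
    if verdict != "PASS":
        rollback()  # undo on FAIL or INCONCLUSIVE
    return verdict
```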
Anti-patterns:

Workflow references:
- reference/composition_patterns.md#debug-code-change for the CAVR pattern
- reference/composition_patterns.md#digital-twin-sync-loop for verify-after-act usage