```
npx claudepluginhub caphtech/claude-marketplace --plugin disposable-plugin
```

Want just this skill? Then install:

```
npx claudepluginhub u/[userId]/[slug]
```
Analyze a disposable prototype across 10 quality axes using static analysis, test results, and Codex MCP triangulation. Produces structured autopsy report with scored findings and recommendations. Part of H-DGM cycle. Use after disposable-spike completes.
This skill uses the workspace's default tool permissions.
Disposable Autopsy — Phase 2: Analyze
Perform 10-axis analysis of a disposable prototype, combining quantitative metrics with qualitative AI review.
Prerequisites
- Completed spike: `.disposable/cycles/cycle_{N}/spike-complete.json` must exist
- Spike branch `disposable/cycle_{N}` must exist
- Codex MCP available for triangulated review (optional but recommended)
Procedure
Step 1: Load Spike Context
- Determine cycle: use `$ARGUMENTS` if provided, otherwise read the latest from `.disposable/history.json`
- Load metrics from `.disposable/cycles/cycle_{N}/spike-complete.json`
- Check out the spike branch: `git checkout disposable/cycle_{N}`
- Read the generated source files for analysis
Step 2: Static Analysis (Quantitative)
Extract quantitative signals from metrics:
| Metric | Maps to Axis |
|---|---|
| lint.error count | correctness, readability |
| tests.failed | correctness, error-handling |
| tests.passed / tests.total | testability |
| coverage.line.pct | testability, maintainability |
| coverage.branch.pct | error-handling |
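The table above can be sketched as a small helper. Note that the metric field names (`lint.errors`, `tests.*`, `coverage.*.pct`) are assumptions inferred from the table, not a confirmed `spike-complete.json` layout:

```javascript
// Derive per-axis quantitative signals from spike metrics.
// Field names are illustrative; adjust them to the real spike-complete.json.
function staticSignals(metrics) {
  const { lint, tests, coverage } = metrics;
  const passRate = tests.total > 0 ? tests.passed / tests.total : 0;
  return {
    correctness: { lintErrors: lint.errors, testsFailed: tests.failed },
    readability: { lintErrors: lint.errors },
    "error-handling": { testsFailed: tests.failed, branchPct: coverage.branch.pct },
    testability: { passRate, linePct: coverage.line.pct },
    maintainability: { linePct: coverage.line.pct },
  };
}

const signals = staticSignals({
  lint: { errors: 2 },
  tests: { total: 10, passed: 9, failed: 1 },
  coverage: { line: { pct: 81.5 }, branch: { pct: 64.0 } },
});
console.log(signals.testability.passRate); // 0.9
```

These signals seed the qualitative scoring in Step 3; they are inputs, not scores themselves.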
Step 3: Qualitative Analysis (10 Axes)
Analyze the prototype code against each of the following axes:
- correctness — Does the code do what was specified? Check requirements coverage, logic errors
- architecture — Module boundaries, dependency direction, separation of concerns
- security — Input validation, injection risks, auth boundaries, secret handling
- performance — Algorithmic complexity, unnecessary allocations, N+1 patterns
- testability — Test isolation, mock-ability, deterministic behavior
- readability — Naming, function length, cognitive complexity
- maintainability — DRY, coupling metrics, change amplification risk
- error-handling — Error propagation, recovery paths, fail-fast behavior
- dependency-hygiene — Minimal dependencies, version constraints, license compatibility
- documentation — API contracts, non-obvious behavior, setup instructions
For each axis, assign:
- `status`: `scored` | `na` | `insufficient-evidence`
- `score`: 1-5 (when scored)
  - 1 = Critical issues, fundamentally broken
  - 2 = Major issues, significant rework needed
  - 3 = Acceptable, typical for rapid prototype
  - 4 = Good, minor improvements only
  - 5 = Excellent, production-ready quality
- `findings[]`: specific issues with severity and evidence reference
- `recommendations[]`: actionable improvements with priority
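As a sketch, a single scored axis entry might look like the following. The authoritative shape is defined by references/autopsy-schema.json, so treat every field here as illustrative:

```javascript
// Illustrative axis entry; the authoritative shape is autopsy-schema.json.
const securityAxis = {
  status: "scored",          // "scored" | "na" | "insufficient-evidence"
  score: 3,                  // 1-5, present only when status is "scored"
  findings: [
    {
      id: "SEC-001",
      severity: "major",
      description: "User input passed to a shell command without escaping",
      evidenceRef: "src/run.js:42",
    },
  ],
  recommendations: [
    { priority: "high", action: "Use execFile with an argument array instead of exec" },
  ],
};
console.log(securityAxis.score); // 3
```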
Step 4: Triangulated Review (Optional)
If Codex MCP is available, request independent review:
```
mcp__codex__codex(
  prompt: "Review the following disposable prototype for {axis}.
           Focus on: {axis-specific criteria}.
           Report findings as JSON array with id, severity, description, evidenceRef fields.
           Files: {file list}",
  model: "gpt-5.4",
  config: { "model_reasoning_effort": "xhigh" },
  cwd: "{project_root}"
)
```
Merge Codex findings with Claude findings:
- Findings reported by both → increase confidence (severity stays or escalates)
- Findings reported by only one → keep but flag as single-source
- Contradictions → note in findings, use Claude's judgment for final score
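The three merge rules above can be sketched as a function. Matching findings by `id` is an assumption for illustration; in practice, findings from two independent reviewers would need fuzzier matching (e.g. by file and description similarity):

```javascript
// Merge findings from two reviewers per the triangulation rules:
// both report it -> corroborated; one reports it -> flagged single-source.
function mergeFindings(claudeFindings, codexFindings) {
  const codexById = new Map(codexFindings.map((f) => [f.id, f]));
  const merged = [];
  for (const f of claudeFindings) {
    if (codexById.has(f.id)) {
      merged.push({ ...f, confidence: "corroborated" });
      codexById.delete(f.id);
    } else {
      merged.push({ ...f, confidence: "single-source", source: "claude" });
    }
  }
  for (const f of codexById.values()) {
    merged.push({ ...f, confidence: "single-source", source: "codex" });
  }
  return merged;
}

const merged = mergeFindings(
  [{ id: "A1", severity: "major" }, { id: "A2", severity: "minor" }],
  [{ id: "A1", severity: "major" }, { id: "B1", severity: "minor" }]
);
console.log(merged.length); // 3
```

Contradiction handling (the third rule) is deliberately left to judgment and is not encoded here.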
Step 5: Determine Verdict
Apply quality gates from references/quality-gates.md:
- Calculate `averageScore` from all `scored` axes
- Check each gate condition against metrics and scores
- Assign verdict: `PASS` | `CALIBRATE` | `FAIL`
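The `averageScore` calculation can be sketched as follows; the actual gate conditions live in references/quality-gates.md and are not reproduced here:

```javascript
// Average only the axes with status "scored"; axes marked "na" or
// "insufficient-evidence" are excluded from the mean.
function averageScore(axes) {
  const scored = Object.values(axes).filter((a) => a.status === "scored");
  if (scored.length === 0) return null;
  const sum = scored.reduce((acc, a) => acc + a.score, 0);
  return Number((sum / scored.length).toFixed(1));
}

const avg = averageScore({
  correctness: { status: "scored", score: 4 },
  security: { status: "scored", score: 3 },
  performance: { status: "insufficient-evidence" },
});
console.log(avg); // 3.5
```

Excluding unscored axes matters: averaging in a default value for `insufficient-evidence` axes would silently distort the verdict.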
Step 6: Generate Report
Construct autopsy report following references/autopsy-schema.json:
```
{
  "schemaVersion": "1.0.0",
  "rubricVersion": "1.0.0",
  "cycleId": "cycle_{N}",
  "timestamp": "{ISO 8601}",
  "metricsRef": "spike-complete.json",
  "axes": { ... },
  "summary": {
    "verdict": "PASS|CALIBRATE|FAIL",
    "strengths": [...],
    "criticalIssues": [...],
    "averageScore": N.N
  }
}
```
Step 7: Save, Validate & Mask
- Save report to `.disposable/cycles/cycle_{N}/autopsy-report.json`
- Validate report against schema:

  ```
  node {plugin_root}/scripts/dist/validate-report.mjs \
    .disposable/cycles/cycle_{N}/autopsy-report.json \
    --schema {plugin_root}/skills/disposable-cycle/references/autopsy-schema.json
  ```

- If validation fails: fix the report structure and re-validate (max 2 retries)
- Mask sensitive data:

  ```
  node {plugin_root}/scripts/dist/mask-sensitive.mjs \
    .disposable/cycles/cycle_{N}/autopsy-report.json --in-place
  ```

- Return to the original branch: `git checkout -`
Step 8: Report to User
Present summary:
- Verdict with confidence level
- Top 3 strengths
- Critical issues requiring attention
- Axis scores table
- Recommendation for next step: `/disposable-distill`, or `/disposable-cycle` to iterate
Output
- `.disposable/cycles/cycle_{N}/autopsy-report.json` — validated autopsy report
- Ready for `/disposable-distill`
Error Handling
- If the metrics file is missing: check data completeness. If test results are unavailable, set verdict to FAIL per quality-gates.md. If only lint or coverage data is missing, mark the affected axes as `insufficient-evidence` and continue
- If Codex MCP is unavailable: proceed with Claude-only analysis and note this in the report
- If schema validation fails: fix report structure, re-validate (max 2 retries)