From crucible
Performs adversarial audits of code subsystems or non-code artifacts (designs, plans, concepts) using parallel analytical lenses tailored to artifact type. Synthesizes findings and offers issue tracker filing.
npx claudepluginhub raddue/crucible

This skill uses the workspace's default tool permissions.
Adversarial review of code subsystems or non-code artifacts. Dispatches parallel analysis agents across four lenses adapted to the artifact type, synthesizes findings, and offers to file them in the user's issue tracker.
Announce at start: "Running audit on [target name] (type: [artifact type])."
Skill type: Rigid -- follow exactly, no shortcuts.
Purpose: Review existing subsystems in a repo and report findings. Distinct from quality-gate (which fixes artifacts in a loop) -- audit is find-and-report only.
Model: Opus (orchestrator and analysis agents). Sonnet (scoping exploration). If the orchestrator session is not running Opus, warn: "Audit requires Opus-level reasoning for synthesis. Results may be degraded."
All subagent dispatches use disk-mediated dispatch. See shared/dispatch-convention.md for the full protocol.
Audit supports 4 artifact types, each with tailored analytical lenses:
| Artifact Type | Lens 1 | Lens 2 | Lens 3 | Lens 4 |
|---|---|---|---|---|
| code (default) | Correctness | Robustness | Consistency | Architecture |
| design | Technical Soundness | Integration Impact | Edge Cases | Scope Clarity |
| plan | Feasibility | Risk & Dependencies | Completeness | Assumptions |
| concept | Problem-Solution Fit | Feasibility & Cost | Stakeholder Alignment | Blind Assumptions |
/audit save/load # code (default)
/audit docs/plans/2026-04-01-auth-design.md # auto-detects design
/audit docs/plans/2026-04-01-plan.md artifact_type: plan # explicit type
/audit "We should build a CLI tool that..." # auto-detects concept
Parameters:
- target (required) — subsystem name, file path, or freeform text
- artifact_type (optional) — code | design | plan | concept. Auto-detected if omitted.

Priority chain when artifact_type is not provided:
1. Subsystem name or directory in the repo → code (existing behavior)
2. File path with a code extension (.py, .ts, .go, etc.) → code
3. Frontmatter source: "design" or source: "spec" → design
4. Frontmatter source: "plan" or title contains "implementation plan" → plan
5. Freeform text → concept
6. Still ambiguous → ask the user

Limitation: Frontmatter-based detection relies on Crucible's source field convention. Repos without this convention will hit the "ambiguous → ask user" fallback more often. The explicit artifact_type parameter is the reliable path for any repo.
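The chain above is a heuristic, not an API. A minimal Python sketch, assuming a repo-relative target and Crucible's frontmatter source convention (the extension set, helper name, and "ask-user" return value are illustrative):

```python
from pathlib import Path
import re

CODE_EXTENSIONS = {".py", ".ts", ".go", ".rs", ".java", ".c", ".cpp"}  # representative, not exhaustive

def detect_artifact_type(target: str, repo_root: Path = Path(".")) -> str:
    """Apply the auto-detection priority chain when artifact_type is not provided (sketch)."""
    path = repo_root / target
    if path.is_dir():                                   # 1. subsystem name / repo directory -> code
        return "code"
    if Path(target).suffix.lower() in CODE_EXTENSIONS:  # 2. code file path -> code
        return "code"
    if path.is_file():                                  # 3-4. document: inspect frontmatter / title
        text = path.read_text(encoding="utf-8", errors="replace")
        frontmatter = text.split("---")[1] if text.startswith("---") else ""
        title = next((l for l in text.splitlines() if l.lstrip().startswith("#")), "")
        if re.search(r'source:\s*"?(design|spec)"?', frontmatter):
            return "design"
        if re.search(r'source:\s*"?plan"?', frontmatter) or "implementation plan" in title.lower():
            return "plan"
        return "ask-user"                               # ambiguous document -> ask the user
    return "concept"                                    # 5. freeform text -> concept
```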
design — Design Documents

| Lens | Core Question | Focus Areas | Exclusions |
|---|---|---|---|
| Technical Soundness | "Are the technical decisions well-reasoned?" | Trade-off analysis quality, constraint identification, decision-evidence alignment, alternative exploration depth | Integration concerns (Integration Impact lens), boundary conditions (Edge Cases lens), scope questions (Scope Clarity lens) |
| Integration Impact | "How does this design interact with existing systems?" | Breaking changes identified, migration path, dependency awareness, blast radius assessment | Decision quality (Technical Soundness lens), boundary conditions (Edge Cases lens), scope questions (Scope Clarity lens) |
| Edge Cases | "What happens at the boundaries?" | Failure modes addressed, boundary conditions, concurrent usage, data edge cases, degraded-mode behavior | Decision quality (Technical Soundness lens), integration concerns (Integration Impact lens), scope questions (Scope Clarity lens) |
| Scope Clarity | "Is the scope well-defined and appropriate?" | Non-goals stated, scope-to-problem fit, YAGNI compliance, acceptance criteria testability | Decision quality (Technical Soundness lens), integration concerns (Integration Impact lens), boundary conditions (Edge Cases lens) |
plan — Strategic Plans, Implementation Plans, PRDs

| Lens | Core Question | Focus Areas | Exclusions |
|---|---|---|---|
| Feasibility | "Can this actually be executed as described?" | Resource requirements vs availability, timeline realism, skill/capability assumptions, tooling prerequisites | Risk identification (Risk & Dependencies lens), missing sections (Completeness lens), environmental assumptions (Assumptions lens) |
| Risk & Dependencies | "What could derail execution?" | External dependency risks, sequencing risks, single points of failure, rollback provisions, blast radius of partial failure | Execution feasibility (Feasibility lens), missing sections (Completeness lens), environmental assumptions (Assumptions lens) |
| Completeness | "What's missing from this plan?" | Phases covered, milestones defined, success criteria measurable, testing strategy present, communication plan | Execution feasibility (Feasibility lens), risk identification (Risk & Dependencies lens), environmental assumptions (Assumptions lens) |
| Assumptions | "What's being taken for granted?" | Environmental assumptions, team capacity assumptions, technical assumptions, timeline assumptions, stakeholder alignment assumptions | Execution feasibility (Feasibility lens), risk identification (Risk & Dependencies lens), missing sections (Completeness lens) |
concept — Product Concepts, Proposals, Early-Stage Ideas

| Lens | Core Question | Focus Areas | Exclusions |
|---|---|---|---|
| Problem-Solution Fit | "Does this concept solve a real problem?" | Problem definition clarity, target audience identified, value proposition specificity, differentiation from existing solutions | Build feasibility (Feasibility & Cost lens), stakeholder concerns (Stakeholder Alignment lens), hidden assumptions (Blind Assumptions lens) |
| Feasibility & Cost | "Is this achievable and worth the investment?" | Build vs buy analysis, resource requirements, timeline expectations, opportunity cost, maintenance burden | Problem-solution fit (Problem-Solution Fit lens), stakeholder concerns (Stakeholder Alignment lens), hidden assumptions (Blind Assumptions lens) |
| Stakeholder Alignment | "Who needs to agree and will they?" | Decision-makers identified, conflicting incentives surfaced, adoption path realistic, organizational readiness | Problem-solution fit (Problem-Solution Fit lens), build feasibility (Feasibility & Cost lens), hidden assumptions (Blind Assumptions lens) |
| Blind Assumptions | "What is this concept taking for granted?" | Market assumptions, user behavior assumptions, technical assumptions, competitive landscape assumptions, sustainability assumptions | Problem-solution fit (Problem-Solution Fit lens), build feasibility (Feasibility & Cost lens), stakeholder concerns (Stakeholder Alignment lens) |
Non-code findings use the same severity levels (Fatal/Significant/Minor) but replace code-specific fields:
| Field | Code | Non-Code |
|---|---|---|
| Location | file + line_range | section (nearest markdown heading, e.g., ## Key Decisions > DEC-3) |
| Lens-specific | scenario, failure_scenario, convention_violated, impact | concern |
| Evidence | Code quotes | Document text quotes |
For artifacts without markdown headings, section uses a brief quoted phrase from the opening of the relevant paragraph.
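A minimal sketch of that location rule, assuming the finding carries a character offset into the document; the 8-word quote length and the function name are illustrative choices, not part of the spec:

```python
def finding_section(document: str, offset: int) -> str:
    """Location for a non-code finding: nearest preceding heading, else a quoted opening phrase."""
    before = document[:offset]
    headings = [line for line in before.splitlines() if line.lstrip().startswith("#")]
    if headings:
        return headings[-1].lstrip("# ").strip()
    # No headings at all: quote the opening words of the paragraph containing the finding
    paragraph = before.rsplit("\n\n", 1)[-1] + document[offset:].split("\n\n", 1)[0]
    return '"' + " ".join(paragraph.split()[:8]) + '..."'
```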
When auditing non-code artifacts, the blind-spots agent hunts for document-level gaps:
Per-task quality gates (red-team, inquisitor) review artifacts produced during development. But the bugs that accumulate in stable code -- the ones nobody's looked at critically in months -- live in subsystems that passed their original review but have drifted, accrued inconsistencies, or developed subtle failure modes. The audit skill performs a focused adversarial review of any existing subsystem on demand.
| Skill | Reviews | When | Fixes? | Scope |
|---|---|---|---|---|
| red-team | A single artifact just produced | During creation | Yes (loop) | One doc/plan/impl |
| inquisitor | A complete implementation diff | During build phase 4 | Yes (automated fix cycle) | Changes only (diffs) |
| audit | Existing code subsystems or non-code artifacts | On demand | No (reports only) | Existing codebase or documents |
Between every agent dispatch and every agent completion, output a status update to the user. This is NOT optional -- the user cannot see agent activity without your narration.
Every status update must include:
After compaction: Re-read the scratch directory and current state before continuing. See Compaction Recovery below.
Examples of GOOD narration:
"Phase 2: Correctness and Robustness lenses complete (4 findings, 2 findings). Architecture still in flight. Consistency Agent A returned -- flagged 6 files, dispatching Agent B."
"Phase 2 complete. All 4 lenses reported: 14 total findings. Moving to Phase 3 synthesis."
"Phase 2 (design audit): Technical Soundness and Integration Impact complete (3 findings, 1 finding). Edge Cases and Scope Clarity still in flight."
Write a status file to ~/.claude/projects/<hash>/memory/pipeline-status.md at every narration point. This file is overwritten (not appended) and provides ambient awareness for the user in a second terminal.
Write the status file at every point where the Communication Requirement mandates narration: before dispatch, after completion, phase transitions, health changes, escalations, and after compaction recovery.
The status file uses this structure (overwritten in full each time):
# Pipeline Status
**Updated:** <current timestamp>
**Started:** <timestamp from first write — persisted across compaction>
**Skill:** audit
**Phase:** <current phase, e.g. "2 — Analysis">
**Health:** <GREEN|YELLOW|RED>
**Suggested Action:** <omit when GREEN; concrete one-sentence action when YELLOW/RED>
**Elapsed:** <computed from Started>
## Recent Events
- [HH:MM] <most recent event>
- [HH:MM] <previous event>
(last 5 events, newest first)
Append after the shared header:
## Lenses (code audit)
- Correctness: DONE (4 findings)
- Robustness: DONE (2 findings)
- Architecture: IN PROGRESS
- Consistency: PENDING
- Blind-spots: PENDING
## Lenses (design audit — example)
- Technical Soundness: DONE (3 findings)
- Integration Impact: DONE (1 finding)
- Edge Cases: IN PROGRESS
- Scope Clarity: PENDING
- Blind-spots: PENDING
Use the lens names matching the current artifact type.
Health transitions are one-directional within a phase: GREEN -> YELLOW -> RED. Phase boundaries reset to GREEN.
When health is YELLOW or RED, include **Suggested Action:** with a concrete, context-specific sentence (e.g., "Architecture lens running >10 minutes. May be processing a large subsystem — check if scope needs narrowing.").
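A minimal sketch of the status-file writer and the one-directional health rule; the class name is illustrative, and the Elapsed computation and lens appendix are omitted for brevity:

```python
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path

HEALTH_ORDER = ["GREEN", "YELLOW", "RED"]   # one-directional within a phase

@dataclass
class PipelineStatus:
    path: Path
    started: str = field(default_factory=lambda: datetime.now().strftime("%Y-%m-%d %H:%M"))
    phase: str = "1 — Scoping"
    health: str = "GREEN"
    events: list[str] = field(default_factory=list)   # newest first, capped at 5

    def escalate(self, health: str) -> None:
        # Health never improves within a phase; only a phase transition resets it
        if HEALTH_ORDER.index(health) > HEALTH_ORDER.index(self.health):
            self.health = health

    def new_phase(self, phase: str) -> None:
        self.phase, self.health = phase, "GREEN"

    def write(self, event: str, suggested_action: str | None = None) -> None:
        now = datetime.now()
        self.events = ([f"- [{now.strftime('%H:%M')}] {event}"] + self.events)[:5]
        lines = [
            "# Pipeline Status",
            f"**Updated:** {now.strftime('%Y-%m-%d %H:%M')}",
            f"**Started:** {self.started}",
            "**Skill:** audit",
            f"**Phase:** {self.phase}",
            f"**Health:** {self.health}",
        ]
        if suggested_action and self.health != "GREEN":
            lines.append(f"**Suggested Action:** {suggested_action}")
        lines += ["", "## Recent Events", *self.events]
        # Overwritten in full each time, as required above
        self.path.write_text("\n".join(lines) + "\n", encoding="utf-8")
```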
Output concise inline status alongside the status file write:
Phase 2 [3/5 lenses] Robustness complete (2 findings) | GREEN | 22m

Keep the inline status to a single line, with no --- separators.

After compaction, before re-writing the status file, re-read pipeline-status.md to recover the Started timestamp and the Recent Events buffer.

Stored in ~/.claude/projects/<project-hash>/memory/audit/preferences.md:
## Issue Tracker
- Tracker: github
- Project: owner/repo
First audit run: ask the user which tracker and project. Persist for future runs. Update if user indicates a change.
Canonical path: ~/.claude/projects/<project-hash>/memory/audit/scratch/<run-id>/
The <run-id> is a timestamp generated at the start of Phase 1 (e.g., 2026-03-15T14-30-00). This same identifier is used for all scratch files and session logs throughout the run.
All relative paths in this document (e.g., scratch/<run-id>/manifest.md) are relative to ~/.claude/projects/<project-hash>/memory/audit/.
Stale cleanup: At the start of each audit run, delete scratch directories whose timestamps are older than 1 hour. Do not delete recent directories (could belong to concurrent sessions).
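A minimal sketch of the stale-cleanup rule, assuming the <run-id> timestamp format shown above; the function name is illustrative:

```python
from datetime import datetime, timedelta
from pathlib import Path
import shutil

def clean_stale_scratch(audit_dir: Path, max_age: timedelta = timedelta(hours=1)) -> None:
    """Delete scratch/<run-id>/ directories older than one hour; keep recent ones."""
    scratch_root = audit_dir / "scratch"
    if not scratch_root.is_dir():
        return
    now = datetime.now()
    for run_dir in scratch_root.iterdir():
        try:
            # run-id format: 2026-03-15T14-30-00
            started = datetime.strptime(run_dir.name, "%Y-%m-%dT%H-%M-%S")
        except ValueError:
            continue  # not a run directory; leave it alone
        if now - started > max_age:
            shutil.rmtree(run_dir, ignore_errors=True)
```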
Session logs:
- /tmp/crucible-audit-metrics-<run-id>.log
- /tmp/crucible-audit-decisions-<run-id>.log

The <run-id> is the same timestamp used for the scratch directory.
After context compaction, the orchestrator must first determine whether this is a code or non-code audit:
Read scratch/<run-id>/artifact-type.md. If present and not code, follow non-code recovery. If absent, follow code recovery (existing behavior).
List the contents of scratch/<run-id>/ to determine current state (a sketch of the resulting dispatch decision follows this list):
- manifest.md exists → Phase 1 scoping is complete (whether produced by recon's subsystem-manifest or the fallback scoping agent -- both write the same format)
- gate-approved.md exists → user confirmed scope, Phase 2 can proceed
- <lens>-partition.md files → those lenses' Tier 2 source partitions are recorded
- <lens>-findings.md files → those lenses have reported
- consistency-a-findings.md without consistency-b-findings.md → Agent B still needed
- blindspots-findings.md exists → Phase 2.5 is complete
- report.md exists → Phase 3 synthesis is complete, proceed to Phase 4
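Combining these markers, a minimal sketch of how the orchestrator might pick its next step; the returned strings are illustrative labels, and the phase-specific rules below are authoritative:

```python
from pathlib import Path

def next_code_audit_step(run_dir: Path) -> str:
    """Map the scratch markers above to the orchestrator's next action for a code audit (sketch)."""
    def have(name: str) -> bool:
        return (run_dir / name).is_file()

    if have("report.md"):
        return "Phase 4: re-read the report, continue cross-referencing and filing"
    if have("blindspots-findings.md"):
        return "Phase 3: synthesize findings"
    if not have("gate-approved.md"):
        return ("Phase 1: re-present the manifest for user confirmation"
                if have("manifest.md") else "Phase 1: scoping")
    missing = [lens for lens in ("correctness", "robustness", "architecture")
               if not have(f"{lens}-findings.md")]
    if have("consistency-a-findings.md") and not have("consistency-b-findings.md"):
        missing.append("consistency (Agent B)")
    elif not have("consistency-a-findings.md"):
        missing.append("consistency (Agent A)")
    if missing:
        return "Phase 2: dispatch remaining lenses: " + ", ".join(missing)
    return "Phase 2.5: rebuild the coverage map and dispatch the blind-spots agent"
```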
Phase-specific recovery (code):
- If manifest.md exists but gate-approved.md does not, re-present the manifest to the user for confirmation.
- If all lens findings files exist but blindspots-findings.md does not, rebuild the coverage map from partition records and findings files (see Coverage Map Construction), then dispatch the blind-spots agent. If blindspots-findings.md exists, Phase 2.5 is complete.
- If report.md exists, re-read it and continue with cross-referencing/filing.

Phase-specific recovery (non-code):
- Read artifact-type.md to recover the artifact type.
- If artifact-type.md exists but gate-approved.md does not, re-present the scope summary to the user for confirmation.
- Check for <lens-name-kebab>-findings.md files matching the type's lens names (e.g., technical-soundness-findings.md for design). Dispatch any lenses that don't have findings files.
- If all lenses have reported but noncode-blindspots-findings.md does not exist, build the lens summary and dispatch the non-code blind-spots agent. If noncode-blindspots-findings.md exists, Phase 2.5 is complete.

Phase 1 Scoping (Code Path): the user names a subsystem ("save/load", "UI", "networking").
Consult cartographer data if it exists for subsystem boundaries
Dispatch recon with subsystem-manifest module:
/recon
task: "Subsystem manifest for audit: <subsystem name>"
scope: "<subsystem-path or cartographer-identified boundary>"
modules: ["subsystem-manifest"]
Parse the subsystem manifest from recon's brief to produce the file list + role descriptions for the USER GATE. Write to scratch/<run-id>/manifest.md in the same format the scoping agent produces (file paths + brief role descriptions). This format compatibility ensures all downstream code (Phase 2, compaction recovery) works without modification.
On recon failure: "Recon failed: [reason]. Falling back to scoping exploration agent." Dispatch the fallback scoping agent: Agent tool (subagent_type: Explore, model: sonnet) using audit-scoping-prompt.md (existing behavior).
If the subsystem cannot be cleanly scoped (files share no common dependency chain, naming convention, or functional cohesion), report the scoping difficulty to the user and ask for clarification or a file list.
Output: A manifest of files belonging to the subsystem (paths + brief role descriptions). Write to scratch/<run-id>/manifest.md.
USER GATE: Present the manifest to the user. Do not proceed to Phase 2 until the user confirms the scope is correct. User may add/remove files or refine the boundary. When the user approves, write scratch/<run-id>/gate-approved.md (contents: timestamp + user confirmation) as a compaction recovery marker.
If the user removes all files or the manifest is empty: abort cleanly with "No files in scope -- audit cancelled."
No scoping agent needed — the artifact IS the scope. The orchestrator:
1. Detects, or accepts the provided, artifact_type.
2. Writes scratch/<run-id>/artifact-type.md containing the detected type. This file is the compaction recovery marker for non-code audits.
3. Gathers supporting context by resolving references in the artifact (sketched below): markdown links ([text](path)), bare file paths (path/to/file.ext), and issue references (#NNN, via gh issue view). Soft cap: 2000 lines total. If exceeded: prioritize files referenced in decision-critical sections (Key Decisions, Risk Areas) over background references. Truncate with note: "[truncated — 2000-line context cap reached]". If no references found: proceed with artifact-only context.
4. USER GATE: presents the scope summary to the user; on approval, writes scratch/<run-id>/gate-approved.md (same as code path).
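Step 3 is the most mechanical part of this path. A minimal sketch of the reference resolution and the 2000-line soft cap; the regexes are illustrative, and the prioritization of decision-critical sections is left out:

```python
from pathlib import Path
import re
import subprocess

CONTEXT_CAP = 2000  # soft cap, in lines

def gather_supporting_context(artifact: str, repo_root: Path) -> str:
    """Resolve markdown links, bare file paths, and #NNN issue references (sketch)."""
    chunks: list[str] = []
    refs = re.findall(r"\[[^\]]*\]\(([^)]+)\)", artifact)                      # [text](path)
    refs += re.findall(r"(?<![\w(])((?:[\w.-]+/)+[\w.-]+\.\w+)", artifact)     # path/to/file.ext
    for ref in dict.fromkeys(refs):                                            # de-duplicate, keep order
        candidate = repo_root / ref
        if candidate.is_file():
            chunks.append(f"## {ref}\n" + candidate.read_text(encoding="utf-8", errors="replace"))
    for issue in dict.fromkeys(re.findall(r"#(\d+)", artifact)):               # issue references (#NNN)
        result = subprocess.run(["gh", "issue", "view", issue], capture_output=True, text=True)
        if result.returncode == 0:
            chunks.append(f"## Issue #{issue}\n{result.stdout}")
    lines = "\n".join(chunks).splitlines()
    if len(lines) > CONTEXT_CAP:
        lines = lines[:CONTEXT_CAP] + ["[truncated — 2000-line context cap reached]"]
    return "\n".join(lines)
```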
Dispatch: Task tool (general-purpose, model: opus) per lens, in parallel, using audit-noncode-lens-prompt.md with lens-specific instruction injection.

For each of the 4 lenses matching the artifact type (see Artifact Types table):
- Fill the template placeholders: {{LENS_NAME}}, {{LENS_QUESTION}}, {{LENS_FOCUS_AREAS}}, {{LENS_EXCLUSIONS}}, {{ARTIFACT_TYPE}}, {{ARTIFACT_CONTENT}}, {{SUPPORTING_CONTEXT}}
- On completion, write findings to scratch/<run-id>/<lens-name-kebab>-findings.md (e.g., technical-soundness-findings.md)

Key differences from code path:
- Findings report a section instead of file + line_range, and a concern instead of the lens-specific code fields

After all 4 lenses complete, proceed to Phase 2.5 (non-code blind-spots).
Dispatch: Task tool (general-purpose, model: opus) per lens, in parallel (matching inquisitor pattern). Fallback if parallel dispatch fails: dispatch sequentially via Task tool (general-purpose, model: opus), with a one-time note to user: "Parallel dispatch unavailable -- running analysis lenses sequentially."
Write-on-complete: As each agent completes, immediately write its findings to scratch/<run-id>/<lens>-findings.md. Do not wait for Phase 3. For the Consistency lens, use distinct filenames: consistency-a-findings.md for Agent A's triage output, consistency-b-findings.md for Agent B's confirmed findings.
Write partition records: Before dispatching each lens, write the list of files sent as full source (not overflow summaries) to scratch/<run-id>/<lens>-partition.md (one file path per line). For Consistency, write only consistency-b-partition.md (Agent A receives the Tier 1 overview, not a Tier 2 source partition, so no partition record is needed for Agent A). These records are used by Phase 2.5 to build the coverage map and must survive compaction. Files sent as 2-3 line overflow summaries are NOT included in partition records -- those files count as never-examined for blind-spots purposes.
Note on Consistency Agent A triage: Agent A reads the Tier 1 overview and triages all manifest files, flagging some for Agent B. Files Agent A did not flag appear as "never-examined" in the coverage map. This is intentional -- overview-level triage (reading a 1-line role description) is categorically different from source-level examination. The blind-spots agent examining those files for security, performance, and concurrency issues is valuable regardless of Consistency triage.
Tier 1 -- Overview: The orchestrator builds a condensed summary of the subsystem: file manifest with role descriptions, key public interfaces/contracts, dependency graph. Target: 500 lines. Flexible up to 800 lines for subsystems with complex API surfaces. If the subsystem exceeds what can be summarized in 800 lines, chunk the subsystem (see Chunking below).
Tier 2 -- Deep dive: The orchestrator partitions source files across agents by relevance to their lens. Hard cap: 1500 lines of total prompt content per agent (Tier 1 overview + Tier 2 source + prompt template). If a lens requires more files than fit, the orchestrator generates brief summaries of overflow files (2-3 lines per file: path, responsibility, key interfaces) and includes those instead of full source. If an agent's findings reference a summarized file, the orchestrator may dispatch a follow-up agent for that lens with the flagged files at full source.
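A minimal sketch of the Tier 2 packing rule, assuming the orchestrator has already ordered files by relevance to the lens; the function name and the summary placeholder text are illustrative:

```python
from pathlib import Path

HARD_CAP = 1500  # total prompt lines per agent: Tier 1 overview + Tier 2 source + template

def build_tier2_partition(files: list[Path], fixed_lines: int) -> tuple[list[Path], list[str]]:
    """Pack relevance-ordered files as full source under the cap; summarize the overflow (sketch).

    fixed_lines is the line count of the Tier 1 overview plus the prompt template.
    """
    budget = HARD_CAP - fixed_lines
    full_source: list[Path] = []
    overflow_summaries: list[str] = []
    for path in files:
        n_lines = len(path.read_text(encoding="utf-8", errors="replace").splitlines())
        if n_lines <= budget:
            full_source.append(path)      # recorded in <lens>-partition.md
            budget -= n_lines
        else:
            # 2-3 line summary per overflow file: path, responsibility, key interfaces
            overflow_summaries.append(f"{path}: <responsibility>, <key interfaces>")
    return full_source, overflow_summaries
```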
If the subsystem is too large to summarize within the 800-line Tier 1 cap:
Each lens is dispatched as a parallel agent using its prompt template.
All lenses output structured findings with these common fields: {severity, file, line_range, evidence, description}. Individual lenses add lens-specific fields (e.g., Correctness adds scenario, Robustness adds failure_scenario, Architecture adds impact, Consistency adds convention_violated). The orchestrator's Phase 3 deduplication uses the common fields for matching; lens-specific fields are preserved in the final report.
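A minimal sketch of the finding record as a typed dictionary; only the field names listed above come from the spec, and the grouping comments are annotations:

```python
from typing import TypedDict

class Finding(TypedDict, total=False):
    # Common fields (used by Phase 3 deduplication)
    severity: str             # Fatal | Significant | Minor
    file: str
    line_range: str           # e.g. "120-134"
    evidence: str             # quoted code (or document text for non-code audits)
    description: str
    # Lens-specific fields (preserved in the final report)
    scenario: str             # Correctness
    failure_scenario: str     # Robustness
    impact: str               # Architecture
    convention_violated: str  # Consistency
    section: str              # non-code audits, replaces file + line_range
    concern: str              # non-code audits, replaces the lens-specific code fields
```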
Prompt: audit-correctness-prompt.md
Question: "What's actually broken or will break?"
Looks for: Bugs, race conditions, edge cases, logic errors, off-by-one, null dereferences, unreachable code paths.
Gets: Files with core logic, state management, data flow.
Dispatch: Single agent.
Prompt: audit-robustness-prompt.md
Question: "What happens when things go wrong?"
Looks for: Missing error handling at boundaries, unhandled failure modes, missing validation, silent data corruption, resource leaks.
Gets: Files at system boundaries, I/O, serialization.
Dispatch: Single agent.
Prompt: audit-consistency-prompt.md
Question: "Does this code follow its own patterns?"
Looks for: Pattern violations, naming drift, convention breaks, inconsistent error handling styles, mixed paradigms.
Dispatch: Two sequential agents (orchestrator dispatches Agent A, reads results, then dispatches a separate Agent B).
Prompt: audit-architecture-prompt.md
Question: "Is this well-structured?"
Looks for: Coupling issues, abstraction leaks, missing contracts, dependency direction violations, god objects, circular dependencies.
Gets: Tier 1 overview + public API surfaces.
Dispatch: Single agent.
Dispatch: Task tool (general-purpose, model: opus) using audit-noncode-blindspots-prompt.md. Runs AFTER all Phase 2 non-code lenses have reported, BEFORE Phase 3 synthesis.
No coverage map needed — all lenses see the full artifact. Instead, the orchestrator builds a lens summary with this format:
## Lens Summary
- **[Lens Name]** — [Core Question]. Findings: N (Fatal: N, Significant: N, Minor: N). Focus areas: [brief list].
[repeat for each lens]
The blind-spots agent receives the full artifact content + lens summary and hunts for document-level gaps (see Non-Code Blind-Spots Categories above). Write findings to scratch/<run-id>/noncode-blindspots-findings.md.
No follow-up dispatches for non-code (the artifact is fully visible to the blind-spots agent — there are no "never-examined files").
Dispatch: Task tool (general-purpose, model: opus) using audit-blindspots-prompt.md. Runs AFTER all Phase 2 lenses have reported (including Consistency Agent B), BEFORE Phase 3 synthesis.
Purpose: The four lenses share structural blind spots -- issues that fall between lenses, emerge from combinations of findings, or belong to categories no single lens covers (security, performance, concurrency, silent failures). A fresh agent hunts specifically in those gaps.
Write-on-complete: Write findings to scratch/<run-id>/blindspots-findings.md.
The blind-spots agent does NOT receive raw findings from the other lenses. Instead, the orchestrator builds a coverage map -- a condensed summary of where the other lenses looked, without the evidence details that cause anchoring. This preserves independent judgment while directing attention to uncovered areas.
Coverage map format (orchestrator generates this from the lens findings files and Tier 2 partition records):
## Coverage Map
### Files Examined by Lens (included in Tier 2 source)
- path/to/file.ext: Correctness (2 findings), Architecture (1 finding)
- path/to/other.ext: Robustness (1 finding), Correctness (0 findings)
- path/to/examined-clean.ext: Architecture (0 findings)
### Files Never Examined (in manifest but not in any Tier 2 source)
- path/to/genuinely-unseen.ext
- path/to/another-unseen.ext
Target: 30-50 lines. No finding summaries, no concern category descriptions (the agent already knows the four lenses' domains from its prompt). Just the file-to-lens mapping and the examined/never-examined distinction. This maximizes source code budget.
To build the coverage map:
1. Read correctness-partition.md, robustness-partition.md, consistency-b-partition.md, architecture-partition.md. These list the files each lens received as full source (written during Phase 2). The union of all partition files is the examined set.

The blind-spots agent receives:
Priority order (strict -- not a judgment call):
If there are no never-examined files (every manifest file was in at least one Tier 2 partition), allocate the full budget to multi-lens interaction points.
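A minimal sketch of the coverage-map construction from partition records and findings files; the assumption that each finding records its location as "file: <path>" is illustrative, not part of the spec:

```python
from collections import defaultdict
from pathlib import Path
import re

LENSES = ("correctness", "robustness", "consistency-b", "architecture")

def build_coverage_map(run_dir: Path, manifest_files: list[str]) -> str:
    """Assemble the Coverage Map from <lens>-partition.md and <lens>-findings.md files (sketch)."""
    examined: dict[str, list[str]] = defaultdict(list)
    for lens in LENSES:
        partition = run_dir / f"{lens}-partition.md"
        if not partition.is_file():
            continue
        findings_file = run_dir / f"{lens}-findings.md"
        findings = findings_file.read_text() if findings_file.is_file() else ""
        label = lens.removesuffix("-b").capitalize()
        for path in filter(None, (p.strip() for p in partition.read_text().splitlines())):
            # Assumption: each finding records its location as "file: <path>"
            count = len(re.findall(rf"file:\s*{re.escape(path)}\b", findings))
            examined[path].append(f"{label} ({count} finding{'s' if count != 1 else ''})")
    lines = ["## Coverage Map", "", "### Files Examined by Lens (included in Tier 2 source)"]
    lines += [f"- {path}: {', '.join(entries)}" for path, entries in examined.items()]
    lines += ["", "### Files Never Examined (in manifest but not in any Tier 2 source)"]
    never = [f for f in manifest_files if f not in examined]
    lines += [f"- {f}" for f in never] or ["- (none)"]
    return "\n".join(lines)
```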
Narration: Status update when dispatching ("Phase 2.5: All 4 lenses complete. Dispatching blind-spots agent to hunt cross-cutting concerns.") and when it completes ("Phase 2.5 complete. Blind-spots agent found N additional findings. Moving to Phase 3 synthesis.").
If the blind-spots agent lists files in "Files Needing Deeper Inspection" AND the audit is under the ~20 agent budget, dispatch one follow-up blind-spots agent with those files at full source. The follow-up receives the same coverage map but new source files. Write follow-up findings to scratch/<run-id>/blindspots-followup-findings.md. Phase 3 synthesis reads this file if it exists.
If the audit is at or near the agent budget, skip the follow-up and include the "Files Needing Deeper Inspection" list in the Phase 3 report as "Areas not fully covered."
For chunked subsystems, the blind-spots agent runs once per chunk (not once for all chunks), receiving that chunk's coverage map + cross-chunk interface section. This keeps each dispatch within the 1500-line hard cap.
Cross-chunk blind spots: After all per-chunk blind-spots agents complete, dispatch one additional cross-chunk blind-spots agent. This agent receives a purpose-built cross-chunk overview (NOT all individual coverage maps stacked):
Per-chunk interior coverage is irrelevant to cross-chunk analysis -- keep it out. This agent targets issues that span chunk boundaries (e.g., one chunk deserializes input, another trusts it without validation). Write findings to scratch/<run-id>/blindspots-crosschunk-findings.md. Skip this dispatch if the subsystem is single-chunk.
Cross-chunk boundary overview construction (orchestrator):
- path/file.ext: Chunk A [Correctness (1), Robustness (0)], Chunk B [Architecture (2)]

After all blind-spots agents complete, findings from all chunks (including cross-chunk) flow into Phase 3 synthesis.
The blind-spots agent does NOT analyze compounding risks from existing findings. That responsibility belongs to Phase 3 synthesis, which already reads all findings and deduplicates. Adding a synthesis step for compounding is natural and costs zero additional agents. See Phase 3 below.
Code audits: Read correctness-findings.md, robustness-findings.md, consistency-b-findings.md, architecture-findings.md, blindspots-findings.md, and if they exist: blindspots-followup-findings.md, blindspots-crosschunk-findings.md. Do NOT read consistency-a-findings.md (triage data, not confirmed findings).
Non-code audits: Read <lens-name-kebab>-findings.md for each of the 4 type-specific lenses (e.g., technical-soundness-findings.md, integration-impact-findings.md, edge-cases-findings.md, scope-clarity-findings.md for design), plus noncode-blindspots-findings.md.
1. Deduplicate findings that describe the same issue. For code audits, match on identical file + line_range; for non-code audits, match on identical section headings (a minimal key sketch follows this list). Use common fields (severity, evidence, description) for similarity comparison. Preserve lens-specific fields from both. Tie-breaking rule: when in doubt, keep both findings as separate items but note they may be related. Err on the side of presenting more findings rather than silently merging.
2. Write the synthesized report to scratch/<run-id>/report.md.
3. Present the ranked, grouped findings to the user.
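A minimal sketch of the exact-match half of deduplication (the similarity comparison on common fields is left to judgment); field names follow the finding schema above:

```python
def dedup_key(finding: dict) -> tuple:
    """Exact-match location key: file + line_range for code findings, section for non-code."""
    if "section" in finding:
        return ("section", finding["section"])
    return ("code", finding.get("file"), finding.get("line_range"))

def flag_possible_duplicates(findings: list[dict]) -> list[dict]:
    """Group by location and mark co-located findings as possibly related; never silently merge."""
    by_location: dict[tuple, list[dict]] = {}
    for f in findings:
        by_location.setdefault(dedup_key(f), []).append(f)
    for group in by_location.values():
        for f in group:
            others = [g["description"] for g in group if g is not f]
            if others:
                f["possibly_related"] = others
    return findings
```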
Cross-reference existing issues: Using whatever tools are available in the environment (MCP servers, CLIs, etc.), search for existing open issues using specific file paths and error descriptions from findings as search terms.
Ask user: "File as individual issues, one umbrella issue with checklist, or skip filing?"
Record to cartographer (code audits only): After completion, dispatch cartographer recorder (Mode 1) with the Phase 1 manifest only. The manifest was deliberately scoped during exploration and is reliable structural data. Do NOT feed incidental observations from Phase 2 bug-hunting agents to cartographer -- those are unverified structural inferences. Skip for non-code audits — no subsystem manifest to record.
Cleanup: Delete the scratch/<run-id>/ directory only after ALL Phase 4 actions are complete (issue filing, cartographer recording). Do not clean up prematurely -- the report on disk is needed for compaction recovery during Phase 4.
- audit-scoping-prompt.md -- Phase 1 subsystem scoping dispatch (Agent tool, subagent_type: Explore, model: sonnet)

Analysis lens templates (all use Task tool, general-purpose, model: opus):
- audit-correctness-prompt.md -- Correctness lens dispatch
- audit-robustness-prompt.md -- Robustness lens dispatch
- audit-consistency-prompt.md -- Consistency lens dispatch (documents two-agent protocol)
- audit-architecture-prompt.md -- Architecture lens dispatch

Blind-spots template (Task tool, general-purpose, model: opus):
- audit-blindspots-prompt.md -- Phase 2.5 gap-hunting dispatch (receives coverage map)

Non-code templates (Task tool, general-purpose, model: opus):
- audit-noncode-lens-prompt.md -- Parameterized lens dispatch for all non-code artifact types. Orchestrator fills {{LENS_NAME}}, {{LENS_QUESTION}}, {{LENS_FOCUS_AREAS}}, {{LENS_EXCLUSIONS}}, {{ARTIFACT_TYPE}}, {{ARTIFACT_CONTENT}}, {{SUPPORTING_CONTEXT}}.
- audit-noncode-blindspots-prompt.md -- Non-code blind-spots dispatch (receives lens summary, not coverage map)

Each analysis template includes:
- Dispatch method: Task tool (general-purpose, model: opus)
- Output format: structured findings with the common fields (severity, file, line_range, evidence, description) plus lens-specific fields

Analysis agents must NOT:
The orchestrator must NOT:
| Skill | How Used | When |
|---|---|---|
| crucible:recon | Subsystem-manifest module | Phase 1 Code Path (subsystem scoping via structured manifest). Fallback: dispatch scoping agent via audit-scoping-prompt.md. |
| crucible:cartographer | Consult mode | Phase 1 (subsystem scoping and conventions) |
| crucible:cartographer | Record mode | Phase 4 (Phase 1 manifest only) |
- audit-scoping-prompt.md (fallback scoping dispatch).
- crucible:forge -- audit findings could inform a retrospective if they reveal systemic patterns.
- Not used: crucible:quality-gate (audit is not a fix loop), crucible:red-team (designed for single artifacts), crucible:assay (audit is find-and-report, not decision evaluation).