Audits runtime code behavior by injecting tracked logs, capturing execution data, generating reports, and verifying dynamic flows for debugging and process audits.
```
npx claudepluginhub schuettc/claude-code-plugins --plugin feature-workflow
```

This skill uses the workspace's default tool permissions.
You are executing the **RUNTIME AUDIT** workflow - a process that bridges static analysis with actual execution observation. Unlike static code analysis, this command actively injects logs, captures runtime data, and produces verifiable reports.
$ARGUMENTS
If no specific process was provided above, you will help the user identify what they want to audit.
- **Static analysis** (code-archaeologist): reads code, infers behavior
- **Runtime audit** (this command): injects logs, observes actual behavior, confirms expectations
Key capability: "I think this code does X" → run audit → "Confirmed: this code actually does X"
This provides evidence-based verification rather than inference.
All audit artifacts are stored in:
```
docs/audits/
├── registry.json                # Index of all audits
└── [audit-id]/
    ├── report.md                # Final audit report
    ├── session.json             # Audit metadata
    ├── injections.json          # Tracks all injected logs for cleanup
    └── logs/
        └── captured-[timestamp].log
```
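A minimal Python sketch of how this layout could be initialized; `init_audit` and the registry fields `id`/`status` are illustrative assumptions, not a documented schema:

```python
import json
from pathlib import Path

def init_audit(root: Path, audit_id: str) -> Path:
    """Create the docs/audits/ layout for a new audit and index it in registry.json."""
    audits = root / "docs" / "audits"
    audit_dir = audits / audit_id
    (audit_dir / "logs").mkdir(parents=True, exist_ok=True)

    # registry.json indexes all audits; start it if this is the first audit
    registry_path = audits / "registry.json"
    registry = json.loads(registry_path.read_text()) if registry_path.exists() else {"audits": []}
    registry["audits"].append({"id": audit_id, "status": "active"})
    registry_path.write_text(json.dumps(registry, indent=2))
    return audit_dir

audit_dir = init_audit(Path("."), "auth-flow-001")
print(audit_dir)  # docs/audits/auth-flow-001
```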
Key Principles: every injected log is tracked in injections.json so the code can be restored to its exact pre-audit state.

This command orchestrates a 7-phase workflow:
| Phase | Name | Purpose |
|---|---|---|
| 1 | Target Identification | User describes process to audit, identify entry points |
| 2 | Code Exploration | Map execution paths, identify strategic log points |
| 3 | Injection Strategy | Plan non-invasive logs, get user approval |
| 4 | Log Injection | Add approved log statements (tracked for cleanup) |
| 5 | Runtime Capture | User executes process, capture log output |
| 6 | Analysis & Report | Analyze data, verify behavior, generate report |
| 7 | Cleanup | Remove injected logs, restore code to pre-audit state |
See: target.md
See: exploration.md
See: injection-active.md
See: runtime-capture.md
See: analysis.md
See: cleanup.md
Every injection is recorded in injections.json for reliable cleanup.

| Language | Log Pattern | Marker |
|---|---|---|
| TypeScript/JS | `console.log('[AUDIT:id:N]', data);` | `// AUDIT-INJECTED` |
| Python | `print(f'[AUDIT:id:N] {data}')` | `# AUDIT-INJECTED` |
| Go | `fmt.Printf("[AUDIT:id:N] %v\n", data)` | `// AUDIT-INJECTED` |
| Rust | `println!("[AUDIT:{}:{}] {:?}", id, n, data);` | `// AUDIT-INJECTED` |
| Java | `System.out.println("[AUDIT:id:N] " + data);` | `// AUDIT-INJECTED` |
The language is detected from the file extension and the appropriate template is applied.
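That detection step can be sketched in Python; `LOG_TEMPLATES` and `render_log` are hypothetical names, and only a few of the supported languages are shown:

```python
from pathlib import Path

# Hypothetical mapping from file extension to an audit-log template.
# {id}, {n}, and {data} are placeholders filled in at injection time.
LOG_TEMPLATES = {
    ".ts": "console.log('[AUDIT:{id}:{n}]', {data}); // AUDIT-INJECTED",
    ".js": "console.log('[AUDIT:{id}:{n}]', {data}); // AUDIT-INJECTED",
    ".py": "print(f'[AUDIT:{id}:{n}] {{{data}}}')  # AUDIT-INJECTED",
    ".go": 'fmt.Printf("[AUDIT:{id}:{n}] %v\\n", {data}) // AUDIT-INJECTED',
}

def render_log(path: str, audit_id: str, n: int, data: str) -> str:
    """Pick a template by file extension and fill in the audit marker fields."""
    ext = Path(path).suffix
    return LOG_TEMPLATES[ext].format(id=audit_id, n=n, data=data)

print(render_log("src/auth/login.ts", "auth-flow-001", 1, "user"))
# console.log('[AUDIT:auth-flow-001:1]', user); // AUDIT-INJECTED
```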
Every modification is tracked in injections.json:
```json
{
  "auditId": "auth-flow-001",
  "injections": [
    {
      "id": 1,
      "file": "src/auth/login.ts",
      "line": 42,
      "originalContent": "",
      "injectedContent": "console.log('[AUDIT:auth-flow-001:1]', user);",
      "purpose": "Log user object at login entry"
    }
  ]
}
```
Cleanup phase uses this manifest to restore exact original state.
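A Python sketch of that restore step, assuming the manifest format shown above (the `cleanup` helper is illustrative, not the command's actual implementation):

```python
import json
from pathlib import Path

def cleanup(manifest_path: str) -> None:
    """Restore each audited file by replacing injected lines with their originals."""
    manifest = json.loads(Path(manifest_path).read_text())
    # Process in reverse order so removals don't shift earlier line numbers
    for inj in reversed(manifest["injections"]):
        path = Path(inj["file"])
        lines = path.read_text().splitlines(keepends=True)
        idx = inj["line"] - 1  # manifest line numbers are 1-based
        if inj["injectedContent"] in lines[idx]:
            if inj["originalContent"]:
                lines[idx] = inj["originalContent"] + "\n"
            else:
                del lines[idx]  # pure insertion: just drop the injected line
        path.write_text("".join(lines))
```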
The user chooses a capture method per audit:
| Method | Use When | How It Works |
|---|---|---|
| Paste output | Complex environments, CI/CD, remote systems | User runs process externally, pastes logs back |
| Direct execution | Local development, simple commands | Command runs via Bash, captures stdout/stderr |
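For the direct-execution method, the capture step might look like this in Python; `capture` is a hypothetical helper that filters process output down to the injected `[AUDIT:...]` lines:

```python
import re
import subprocess
import sys

def capture(cmd: list[str], audit_id: str) -> list[str]:
    """Run the target process and keep only this audit's injected log lines."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    marker = re.compile(rf"\[AUDIT:{re.escape(audit_id)}:\d+\]")
    combined = result.stdout.splitlines() + result.stderr.splitlines()
    return [line for line in combined if marker.search(line)]

lines = capture([sys.executable, "-c", "print('[AUDIT:auth-flow-001:1] hello')"], "auth-flow-001")
print(lines)  # ['[AUDIT:auth-flow-001:1] hello']
```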
| Error | Resolution |
|---|---|
| Cannot identify entry point | Ask user for more specific process description |
| Injection causes compile error | Rollback that injection, try different approach |
| No logs captured | Verify process was executed with injected code |
| Partial cleanup failure | Show remaining injections, offer manual cleanup |
| Audit directory missing | Create docs/audits/ if needed |
/feature-plan → /feature-implement → /feature-audit → /feature-submit → /feature-ship

Static analysis tells you what code should do. Runtime auditing shows you what code actually does.
This command helps you replace inference with evidence about how your code actually behaves.
Let's identify what you want to audit!