From vini-workflow
Audits Claude workflow system by analyzing error patterns from evolution data, model performance metrics, rule effectiveness, and configuration staleness. Run monthly or on phrases like 'audit workflow'.
Install with:

```
npx claudepluginhub vinicius91carvalho/.claude --plugin vini-workflow
```

This skill uses the workspace's default tool permissions.
Audit the workflow system for effectiveness, staleness, and optimization opportunities.
This skill reads from the evolution data (~/.claude/evolution/) and evaluates whether
the system is improving, stagnating, or accumulating dead weight.
Read these files:

- ~/.claude/evolution/error-registry.json — cross-project error patterns
- ~/.claude/evolution/model-performance.json — model success rates
- ~/.claude/evolution/workflow-changelog.md — recent system changes
- ~/.claude/evolution/session-postmortems/ — recent session reports
- ~/.claude/CLAUDE.md — current system rules
- ~/.claude/projects/-root/memory/MEMORY.md — cross-project memory index

From error-registry.json:
- Count total registered patterns and their top categories, and flag any pattern marked `auto_preventable: true` that is still recurring — those need action.
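A minimal sketch of this check, assuming each registry entry carries a `count` and an `auto_preventable` flag (the real schema may differ):

```python
import json
from pathlib import Path

REGISTRY = Path.home() / ".claude" / "evolution" / "error-registry.json"

def recurring_unprevented(registry_path=REGISTRY, min_count=2):
    """Return error patterns seen at least min_count times that were
    not auto-prevented.

    Assumes entries shaped like:
      {"pattern": "...", "count": 3, "auto_preventable": true}
    """
    entries = json.loads(registry_path.read_text())
    return [
        e for e in entries
        if e.get("count", 0) >= min_count and not e.get("auto_preventable")
    ]
```

Patterns returned here feed the "Recurring (not auto-prevented)" line of the report.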
From model-performance.json:
Success rate table:
| Model | Task Type | Attempts | 1st Try Success | Rate | Recommendation |
|--------|------------------|----------|-----------------|-------|----------------|
| sonnet | implementation | 25 | 20 | 80% | Keep |
| sonnet | bug_fix | 12 | 7 | 58% | Upgrade→opus |
| haiku | simple_fixes | 30 | 28 | 93% | Keep |
Adaptation proposals (only if min_samples threshold met):
- first_try_success_rate < 70% → propose upgrade
- first_try_success_rate > 90% → propose downgrade to save cost
- Cost estimate: based on approximate token costs per model, estimate monthly savings from proposed downgrades.
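The adaptation thresholds can be sketched as below; the row shape (`attempts`, `first_try_success`) and the sample threshold are illustrative assumptions, and cost estimation is left out since real per-token prices vary by model:

```python
MIN_SAMPLES = 10  # only propose changes once enough attempts are logged

def propose_adaptations(rows, min_samples=MIN_SAMPLES):
    """Yield (model, task_type, rate, proposal) for rows that meet the
    sample threshold.

    Assumes rows shaped like:
      {"model": "sonnet", "task_type": "bug_fix",
       "attempts": 12, "first_try_success": 7}
    """
    for r in rows:
        if r["attempts"] < min_samples:
            continue  # not enough data to judge this model/task pair
        rate = r["first_try_success"] / r["attempts"]
        if rate < 0.70:
            yield r["model"], r["task_type"], rate, "upgrade"
        elif rate > 0.90:
            yield r["model"], r["task_type"], rate, "downgrade"
```

On the sample table above, this would propose an upgrade for sonnet/bug_fix (58%) and a downgrade for haiku/simple_fixes (93%), while leaving sonnet/implementation (80%) untouched.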
From CLAUDE.md and memory files: assess rule health — total rules, rules with supporting evidence, rules potentially stale (>90 days, never triggered), and conflicts between rules.

From workflow-changelog.md: assess evolution velocity — the number of system changes in the last 30 days.

From session-postmortems/: extract session trends — compound completion rate, average retries, and the most frequently blocked phase.
Present findings as a structured report:
## Workflow Audit Report — [date]
### Health Score: [1-10]
### Error Patterns
- Total registered: N
- Recurring (not auto-prevented): N — ACTION NEEDED
- Top 3 categories: [list]
### Model Performance
- Upgrade candidates: [list with data]
- Downgrade candidates (cost savings): [list with data]
- Estimated monthly token savings: $X
### Rule Health
- Total rules: N
- Rules with evidence: N
- Potentially stale (>90 days, never triggered): N — REVIEW
- Conflicts found: N
### Evolution Velocity
- Changes last 30 days: N
- Assessment: [healthy / stagnating / thrashing]
### Session Trends
- Compound completion rate: N%
- Average retries: N
- Most blocked phase: [phase]
### Recommendations (prioritized)
1. [action] — [evidence] — [expected impact]
2. ...
For each recommendation that is adopted, record the change in workflow-changelog.md.
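A minimal sketch of appending an adopted recommendation to workflow-changelog.md (the bullet format is an assumption, not a prescribed schema):

```python
from datetime import date
from pathlib import Path

CHANGELOG = Path.home() / ".claude" / "evolution" / "workflow-changelog.md"

def log_recommendation(action, evidence, impact, path=CHANGELOG):
    """Append one audit recommendation as a dated changelog bullet."""
    entry = (
        f"- {date.today().isoformat()} audit: {action} "
        f"| evidence: {evidence} | expected impact: {impact}\n"
    )
    with path.open("a") as f:
        f.write(entry)
```

Appending rather than rewriting keeps the changelog an ordered history that the next audit's evolution-velocity check can count.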