From melodic-software
Applies fixes from /audit or /review findings by scanning conversation for actionable items, prioritizing by severity, categorizing Claude Code vs external content, and implementing with tools or research.
npx claudepluginhub melodic-software/claude-code-plugins --plugin melodic-software
Apply all fixes, improvements, recommendations, and suggestions from audit/review findings in the current conversation.
$ARGUMENTS may contain:

- `--dry-run`: Show what would be implemented without making changes
- `--research-first`: Force MCP research even for Claude Code content (adds an extra validation layer)

Scan the current conversation for actionable items:
Create a prioritized list of items to implement, grouped by severity (CRITICAL, HIGH, MEDIUM, LOW, INFO):
If no actionable items are found, report "No actionable findings found in current conversation" and STOP.
If --dry-run is specified, report the prioritized list and STOP here without making changes.
Categorize each finding as either Claude Code content or external content.
Detection criteria for Claude Code content:

- `plugins/*/`
- `.claude/`
- `CLAUDE.md`

Research is automatic by default; no flag is needed. The approach varies by content type:
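The path-based detection criteria above can be sketched as a small classifier. This is an illustrative sketch, not the command's actual implementation; the function name and logic are assumptions based on the listed patterns.

```python
from pathlib import PurePosixPath

def is_claude_code_content(path: str) -> bool:
    """Hypothetical sketch: classify a finding's file path as Claude Code
    content (plugins/*/, .claude/, CLAUDE.md) vs. external content."""
    p = PurePosixPath(path)
    parts = p.parts
    if p.name == "CLAUDE.md":
        return True
    if ".claude" in parts:
        return True
    # plugins/*/ -> anything under a top-level plugins/ directory
    if parts and parts[0] == "plugins":
        return True
    return False

print(is_claude_code_content(".claude/settings.json"))       # True
print(is_claude_code_content("plugins/my-plugin/SKILL.md"))  # True
print(is_claude_code_content("src/app.ts"))                  # False
```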
Try claude-ecosystem skills first:
Attempt to invoke docs-management skill for official documentation
Spawn claude-code-guide subagent in parallel for live web verification:
Task(claude-code-guide): "WebFetch https://code.claude.com/docs/en/claude_code_docs_map.md
to find relevant pages about [detected component types]. Then WebFetch those specific pages.
Return key findings with source URLs. Do NOT use Skill tool."
Load relevant development skills based on component type:
- skill-development
- subagent-development
- hook-management
- output-customization
- mcp-integration
- memory-management

Fallback to MCP if:

- The docs-management skill is not available (claude-ecosystem plugin not installed)
- The `--research-first` flag was specified (adds MCP as an extra validation layer)

The fallback uses the mcp-research agent with the query: "Claude Code [component type] best practices [specific topic]"
Use MCP research when findings involve:
Skip MCP research for trivial fixes:
When research is needed, delegate to mcp-research agent:
Task(mcp-research): "Research best practices for [implementation topics from findings].
Use microsoft-learn for .NET/Azure, context7+ref for libraries, perplexity for general validation.
Return actionable guidance with citations."
This validates concepts and approaches against current documentation without over-researching mechanical fixes.
For each item in priority order (CRITICAL -> HIGH -> MEDIUM -> LOW -> INFO):
State the item being implemented with its source (e.g., "Audit finding #3: Missing description field")
Apply the fix/improvement using appropriate tools:

- Edit for code modifications
- Write for new files
- Bash for commands/scripts

Verify the change:
Mark the item's status: implemented ✅, skipped ⏭️, or failed ❌.
Continue through all items, logging each result.
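The loop described above can be sketched as follows. This is a hypothetical illustration of the severity-ordered processing, not the command's actual data model; field names and the `apply_fix` placeholder are assumptions.

```python
# Process findings in severity order and record a status for each.
SEVERITY_ORDER = ["CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"]

findings = [
    {"desc": "Missing description field", "severity": "HIGH"},
    {"desc": "Stale doc link", "severity": "INFO"},
    {"desc": "Overly broad permission", "severity": "CRITICAL"},
]

results = []
for item in sorted(findings, key=lambda f: SEVERITY_ORDER.index(f["severity"])):
    try:
        # apply_fix(item) would dispatch to Edit/Write/Bash here
        status = "implemented"
    except Exception as exc:
        status = f"failed: {exc}"
    results.append((item["desc"], status))

for desc, status in results:
    print(f"{desc}: {status}")
```

Sorting by the index into `SEVERITY_ORDER` keeps the CRITICAL → INFO ordering stable without a custom comparator.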
After all items processed, provide a summary:
## Apply Findings Summary
**Total items found:** X
**Implemented:** Y ✅
**Skipped:** Z ⏭️
**Failed:** W ❌
### Implemented
- [item 1 description]
- [item 2 description]
### Skipped (if any)
- [item] - Reason: [why skipped]
### Failed (if any)
- [item] - Error: [what went wrong]
### Follow-up Actions (if any)
- [suggested manual steps for failed items]
- [verification commands to run]
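As an illustration, the summary template above could be assembled from recorded item statuses like this. The function and status labels are hypothetical; only the output format mirrors the template.

```python
def render_summary(items):
    """Sketch: render the Apply Findings Summary from (description, status) pairs."""
    implemented = [d for d, s in items if s == "implemented"]
    skipped = [d for d, s in items if s == "skipped"]
    failed = [d for d, s in items if s == "failed"]
    lines = [
        "## Apply Findings Summary",
        f"**Total items found:** {len(items)}",
        f"**Implemented:** {len(implemented)} ✅",
        f"**Skipped:** {len(skipped)} ⏭️",
        f"**Failed:** {len(failed)} ❌",
    ]
    if implemented:
        lines.append("### Implemented")
        lines += [f"- {d}" for d in implemented]
    if skipped:
        lines.append("### Skipped")
        lines += [f"- {d}" for d in skipped]
    if failed:
        lines.append("### Failed")
        lines += [f"- {d}" for d in failed]
    return "\n".join(lines)

print(render_summary([("Fix A", "implemented"), ("Fix B", "skipped")]))
```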
/claude-ecosystem:audit-skills my-skill
# ... audit output with findings ...
/apply-findings
# Preview the prioritized list without making changes
/apply-findings --dry-run
# When you want MCP to double-check Claude Code skill/docs-management guidance
/apply-findings --research-first
# Review finds issues...
/apply-findings
--research-first adds MCP as an extra validation layer for Claude Code content