Fix ALL issues via parallel agents with zero tolerance quality enforcement. Use when user says "fix", "fix issues", "fix errors", "fix all", "fix bugs", "fix lint", "fix tests", or wants to resolve code problems.
From dev-workflow. Install: `npx claudepluginhub alexei-led/cc-thingz --plugin dev-workflow`
Execute until clean. Parallel analysis, sequential fixes.
Parse $ARGUMENTS:
- `investigate` → 5-Why root cause analysis before fixing (for recurring or mysterious bugs)
- `team` → Agent team mode: analysts compete to find root causes and debate solutions

Use TodoWrite to track these 6 phases:
```shell
make lint 2>&1 | head -100
make test 2>&1 | head -100
```
No Makefile? Detect language and run:
```shell
# Go
golangci-lint run ./... 2>&1 | head -100 && go test -race ./... 2>&1 | head -100
# Python
ruff check . 2>&1 | head -100 && pytest 2>&1 | head -100
# TypeScript (bun)
bun lint 2>&1 | head -100 && bun test 2>&1 | head -100
# Web
bunx html-validate "**/*.html" 2>&1 | head -50 && bunx stylelint "**/*.css" 2>&1 | head -50 && bunx eslint "**/*.js" 2>&1 | head -50
```

If all pass: report "All checks pass" → stop.
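The language detection step can be sketched as follows. The marker files (`go.mod`, `pyproject.toml`, `package.json`) are conventional assumptions, not mandated by this skill; adjust for your repo layout.

```shell
#!/bin/sh
# Minimal sketch: pick a toolchain by looking for conventional marker files.
detect_stack() {
  dir="$1"
  if   [ -f "$dir/Makefile" ];       then echo make
  elif [ -f "$dir/go.mod" ];         then echo go
  elif [ -f "$dir/pyproject.toml" ]; then echo python
  elif [ -f "$dir/package.json" ];   then echo bun
  else echo unknown
  fi
}
```

A repo may contain several markers; this sketch returns the first match, so order the checks by precedence (`Makefile` first, since `make lint`/`make test` override per-language commands).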
If `mcp__plugin_claude-mem_mcp-search__search` is available, query it for known issues on failing files:
```
search({ query: "<failing file paths>", type: "gotcha OR problem-solution", limit: 5 })
```
If relevant past observations exist, fetch with get_observations and attach findings to Phase 2 agent prompts. Skip silently if unavailable.
If team NOT in $ARGUMENTS (Subagent mode - default):
Spawn ALL relevant language agents IN ONE MESSAGE for parallel execution.
If team in $ARGUMENTS (Team mode):
Create an agent team where analysts compete to find root causes and debate solutions. This mode is better for complex or mysterious issues where multiple perspectives help.
Based on detected languages with issues, spawn analysis agents:
```
Task(
  subagent_type="go-qa",
  run_in_background=true,
  description="Go issue analysis",
  prompt="Analyze these Go issues. DO NOT FIX - analysis only.
    Issues:
    {lint/test output}
    Return structured analysis:
    - Root cause for each issue
    - Suggested fix approach
    - File:line references
    - Priority (critical/important/minor)"
)

Task(
  subagent_type="py-qa",
  run_in_background=true,
  description="Python issue analysis",
  prompt="Analyze these Python issues. DO NOT FIX - analysis only.
    Issues:
    {lint/test output}
    Return structured analysis:
    - Root cause for each issue
    - Suggested fix approach
    - File:line references
    - Priority (critical/important/minor)"
)

Task(
  subagent_type="web-qa",
  run_in_background=true,
  description="Web frontend issue analysis",
  prompt="Analyze these web frontend issues. DO NOT FIX - analysis only.
    Issues:
    {lint/test output}
    Return structured analysis:
    - Root cause for each issue
    - Suggested fix approach
    - File:line references
    - Priority (critical/important/minor)"
)
```
If `team` in $ARGUMENTS, create an agent team for competing analysis:
Create an agent team to analyze these issues. Spawn analysts for each language:
{If Go issues}:
- go-qa: Analyze Go issues from security/performance angle
- go-impl: Analyze from implementation/architecture angle
- go-tests: Analyze from testability angle
{If Python issues}:
- py-qa: Analyze Python issues from security/performance angle
- py-impl: Analyze from implementation angle
- py-tests: Analyze from test perspective
{If TypeScript issues}:
- ts-qa: Analyze TypeScript issues from security/performance angle
- ts-impl: Analyze from implementation angle
- ts-tests: Analyze from test perspective
{If Web issues}:
- web-qa: Analyze from security/performance/a11y angle
- web-impl: Analyze from implementation angle
Have analysts:
1. Independently diagnose root causes
2. Compete to find the most likely explanation
3. Debate proposed solutions
4. Challenge each other's assumptions
5. Converge on consensus root cause + fix approach
Return prioritized list with confidence levels and dissenting opinions.
The team lead will synthesize competing analyses into prioritized action plan.
Skip this phase unless `investigate` is in $ARGUMENTS.
For each critical issue from Phase 2, apply 5-Why root cause analysis:
Output per issue: root cause category (design | process | knowledge | tooling).

Attach the 5-Why analysis to the issue before proceeding to fixes. This ensures fixes address root causes, not just symptoms.
```
TaskOutput(task_id=<go_qa_id>, block=true)
TaskOutput(task_id=<py_qa_id>, block=true)
TaskOutput(task_id=<web_qa_id>, block=true)
```
Merge and prioritize issues:
Fix issues ONE AT A TIME to avoid conflicts:
For each issue (in priority order): apply the fix, then run `make lint && make test` (or equivalent). If the fix causes new issues: revert and try an alternative approach.
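The revert-on-regression step can be sketched as below. Both `validate` (standing in for `make lint && make test`) and the inline "fix" are hypothetical placeholders; real fixes are edits, and a git checkpoint (`git stash create`) would serve equally well as the `.bak` copy used here.

```shell
#!/bin/sh
# Sketch: checkpoint a file, apply one fix, validate, revert on regression.
# `validate` is a hypothetical stand-in for the project's lint+test command.
try_fix() {
  file="$1"; newtext="$2"
  cp "$file" "$file.bak"                # checkpoint before the fix
  printf '%s\n' "$newtext" > "$file"    # placeholder for the actual edit
  if validate "$file"; then
    rm -f "$file.bak"                   # validation clean: keep the fix
    return 0
  fi
  mv "$file.bak" "$file"                # fix caused new issues: revert
  return 1
}
```

Fixing one issue at a time keeps each checkpoint small, so a revert never discards an earlier, already-validated fix.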
`make lint && make test`
Loop back to Phase 2 if issues remain.
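The validate-then-loop-back cycle can be sketched as follows. `run_checks` stands in for `make lint && make test`, and the safety cap on rounds is an assumption added here so the loop cannot spin forever; the original simply says to loop while issues remain.

```shell
#!/bin/sh
# Sketch: re-run checks until clean, or stop after a capped number of rounds.
# `run_checks` is a hypothetical stand-in for the project's lint+test command.
validate_until_clean() {
  max_rounds=${1:-5}
  round=1
  while ! run_checks; do
    [ "$round" -ge "$max_rounds" ] && return 1   # give up: NEEDS ATTENTION
    round=$((round + 1))                         # loop back to analyze + fix
  done
  return 0                                       # CLEAN
}
```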
```
FIX COMPLETE
============
Mode: {Subagent Analysis | Team Analysis}
Analysis: {N} issues identified by {M} agents
Fixed: {X} issues
Remaining: {Y} non-blocking (if any)
Status: CLEAN | NEEDS ATTENTION
Changes:
- file1.go:42 - Fixed null pointer check
- file2.py:15 - Added missing type hint
```
```shell
/fixing-code                    # Subagent mode: parallel analysis
/fixing-code investigate        # Deep root cause analysis with 5-Why
/fixing-code team               # Team mode: competing analysis with debate
/fixing-code investigate team   # Both: 5-Why + competing analysts
```
Execute validation now.