From godmode
Use when repeated fix attempts fail, the agent appears stuck in a loop, or complexity is increasing without progress.
`npx claudepluginhub noobygains/godmode --plugin godmode`

This skill uses the workspace's default tool permissions.
AI agents get stuck. They try the same approach repeatedly, add complexity to fix complexity, and rationalize "one more attempt" long past the point of diminishing returns. This skill detects stuck patterns **proactively** and forces recovery before wasted effort compounds.
Implements a structured self-debugging workflow for AI agent failures: capture errors, diagnose patterns like loops or context overflow, apply contained recoveries, and generate introspection reports.
Core principle: Track every failed attempt. Escalate at defined thresholds. Never allow unbounded retries.
NO CONTINUED ATTEMPTS WITHOUT ACKNOWLEDGING FAILURE COUNT
Before EVERY fix attempt, state: "This is attempt N of the current issue." If N >= 3, you are not authorized to continue without user direction.
This skill activates automatically when any of the stuck patterns listed below appear.
This skill overrides optimism. When triggered, it takes priority over whatever strategy is currently failing.
You MUST maintain an internal failure counter for each distinct issue:
```
FAILURE LOG
Issue: [description]
Attempt 1: [what you tried] -> [result]
Attempt 2: [what you tried] -> [result]
Attempt 3: [what you tried] -> [result] <- STOP HERE
```
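The log format above maps directly onto a counter with hard thresholds. A minimal Python sketch; the class and method names are illustrative, only the 2/3/4+ thresholds come from this skill:

```python
# Minimal failure tracker: one counter per distinct issue, hard thresholds.
# Class and method names are illustrative; the skill only mandates the behavior.

class FailureLog:
    THRESHOLDS = {2: "YELLOW", 3: "ORANGE", 4: "RED"}

    def __init__(self):
        self.attempts = {}  # issue description -> list of (tried, result)

    def announce(self, issue):
        """The statement required before every fix attempt."""
        n = len(self.attempts.get(issue, [])) + 1
        return f"This is attempt {n} of the current issue."

    def record(self, issue, tried, result):
        """Log a failed attempt; return the escalation level it triggers."""
        log = self.attempts.setdefault(issue, [])
        log.append((tried, result))
        # 4+ failures stay RED until the user provides direction
        return self.THRESHOLDS.get(min(len(log), 4), "GREEN")
```

Calling `record` a third time for the same issue returns `"ORANGE"`, the point at which this skill forbids further attempts without re-analysis.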
Rules:

- Count every failed attempt, including "different" issues that appeared while fixing the original.
- Reset the counter only when the issue is resolved or the user provides direction.
```dot
digraph recovery {
  rankdir=TB
  node [shape=box]
  start [label="Fix attempt fails"]
  count [label="Increment failure counter"]
  check [label="How many failures?"]
  yellow [label="YELLOW: Log concern\nTry fundamentally different approach"]
  orange [label="ORANGE: STOP\nRe-analyze from scratch\nPresent revised analysis"]
  red [label="RED: HALT\nPresent honest assessment\nWait for user direction"]
  resolved [label="Issue resolved\nReset counter"]
  user [label="User provides direction\nReset to Yellow"]
  start -> count
  count -> check
  check -> yellow [label="2"]
  check -> orange [label="3"]
  check -> red [label="4+"]
  yellow -> start [label="Next attempt fails"]
  yellow -> resolved [label="Fix works"]
  orange -> start [label="New hypothesis\nattempt fails"]
  orange -> resolved [label="Fix works"]
  red -> user [label="User responds"]
  user -> start [label="New attempt"]
}
```
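The diagram above reduces to a small state machine. A sketch assuming each fix attempt reports success as a boolean; `run_recovery` and its cap on total attempts are illustrative, not part of the skill:

```python
# Sketch of the recovery loop in the diagram. Each attempt reports success
# as a boolean; the names here are illustrative.

def recovery_state(failures):
    """Map a failure count to the escalation state in the diagram."""
    if failures >= 4:
        return "RED"     # HALT: honest assessment, wait for user direction
    if failures == 3:
        return "ORANGE"  # STOP: re-analyze from scratch
    if failures == 2:
        return "YELLOW"  # log concern, try a fundamentally different approach
    return "GREEN"

def run_recovery(attempt, max_attempts=10):
    """Drive attempts until the fix works or RED halts the loop."""
    failures = 0
    for _ in range(max_attempts):
        if attempt(failures):
            return "resolved"  # issue resolved: counter resets by exiting
        failures += 1
        if recovery_state(failures) == "RED":
            return "awaiting user direction"
    return "awaiting user direction"
```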
When a fix fails, apply these in order:

1. **Re-read the actual error message.** Not your interpretation of it -- the literal text. Errors frequently contain the answer.
2. **Try a fundamentally different approach.** Not a variation. If you were editing config, try code. If you were patching, try replacing. If you were adding, try removing.
3. **Simplify ruthlessly.** Strip the problem to its smallest reproducible case. Remove everything non-essential. Test the simplest possible version.
4. **Verify your assumptions.** Print values. Check types. Confirm the file you think you are editing is the one actually being executed. Confirm the function you think is being called is actually being called.
5. **Escalate with honesty.** Tell the user: "I have tried N approaches. None worked. Here is what I know and what I recommend."
6. **Roll back to the last known-good state.** If you have made things worse, undo all changes and return to where things last worked. Start fresh from there.
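The "verify your assumptions" strategy above, made concrete with Python's standard library standing in for the code under debate:

```python
# Verify assumptions with evidence, not expectations. `json` stands in for
# whatever module you believe you are editing.
import inspect
import json

# Which file is actually being imported? (It may not be the one you edited.)
print(json.__file__)

# What is the value really? JSON numbers with an exponent parse as float,
# whatever you expected.
value = json.loads("1e3")
print(type(value), value)

# Is the function you think is being called defined where you think it is?
print(inspect.getsourcefile(json.loads))
```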
These patterns indicate the agent is stuck and MUST trigger this skill:
| Anti-Pattern | What to Do Instead |
|---|---|
| "Let me try one more thing" (after 3+ failures) | STOP. You said that last time. Escalate. |
| Adding complexity to fix complexity | Simplify. Remove code. Reduce moving parts. |
| Ignoring the error message, trying random changes | Read the error. It is telling you something specific. |
| Blaming the environment ("must be a cache issue") | Verify the claim. Run a clean build. Actually check. |
| Widening scope instead of narrowing it | Focus on the smallest failing case, not the whole system. |
| Editing the same file repeatedly | Step back. The problem may not be in that file. |
| "It should work" without verifying | Run it. Check output. Trust evidence, not expectations. |
| Fixing the test instead of the code | The test is probably right. Fix what it is testing. |
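"Focus on the smallest failing case" from the table above can be mechanized. A rough sketch of greedy input minimization in the spirit of ddmin; `still_fails` is a stand-in for whatever reproduces your bug:

```python
# Greedy minimization: repeatedly drop elements of the failing input while
# the failure still reproduces. A sketch, not a full ddmin implementation.

def minimize(items, still_fails):
    """Return a smaller subset of `items` that still triggers the failure."""
    changed = True
    while changed:
        changed = False
        for i in range(len(items)):
            candidate = items[:i] + items[i + 1:]
            if candidate and still_fails(candidate):
                items = candidate  # the dropped element was not needed
                changed = True
                break
    return items

# Example: the "bug" fires whenever both 3 and 7 are in the input.
repro = minimize(list(range(10)), lambda xs: 3 in xs and 7 in xs)
print(repro)  # reduced from ten elements to the two that matter
```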

| Rationalization | Reality |
|---|---|
| "This is a different issue" | If it appeared while fixing the original, it is the same issue. Count it. |
| "I almost had it last time" | Almost does not count. Two near-misses is a pattern, not progress. |
| "The approach is right, just needs tweaking" | Three tweaks of the same approach is not three different attempts. |
| "I need to understand the whole system first" | You need to understand the failing part. Scope down, not up. |
| "Let me refactor first, then fix" | Refactoring while debugging creates two problems. Fix first. |
| "This is an edge case" | If it blocks completion, it is a primary case. |
| "One more log statement will reveal it" | If three log statements did not help, you are looking in the wrong place. |
STOP and activate this skill if you observe: