From superpowers-plus
Breaks AI coding assistant spirals by consulting a fresh sub-agent with zero context. Triggers on stuck signals like repeated errors, circular reasoning, or phrases such as 'stuck in a loop'.
```shell
npx claudepluginhub bordenet/superpowers-plus --plugin superpowers-plus
```

This skill uses the workspace's default tool permissions.
> **Wrong skill?** Research a topic → `perplexity-research`. Brainstorm solutions → `brainstorming`. Debug a specific error → `systematic-debugging`.
Break through blockers by consulting a fresh perspective.
1. Generate consultation prompt (see `references/consultation-prompt-template.md`): problem statement, technical context, what was tried + outcomes, exact error messages, minimal code snippet, constraints, specific ask. Must be self-contained, <2000 tokens.
2. Ask user: "Want to review the prompt before I dispatch, or send now?"
3. Dispatch (in priority order):
   - `sub-agent` (explore) — free, instant
   - `reason` — only if `THINK_TWICE_USE_PERPLEXITY=true` in `.env` (~$0.01/query)
4. Score response: Relevance (30%) + Novelty (25%) + Specificity (25%) + Feasibility (20%). Report score, key recommendations, suggested next step.
If score <50: Offer retry with refined prompt (max 1 retry) or proceed with best suggestion.
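As a rough sketch, the scoring step can be read as a weighted sum over the four dimensions. Only the dimension names and weights come from the skill text; the 0-100 rating scale and the function shape below are assumptions for illustration:

```python
# Weights from the skill text; the 0-100 per-dimension scale is an assumption.
WEIGHTS = {"relevance": 0.30, "novelty": 0.25, "specificity": 0.25, "feasibility": 0.20}

def score_response(ratings: dict) -> float:
    """Combine per-dimension ratings (0-100) into a weighted total (0-100)."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

score = score_response({"relevance": 60, "novelty": 40, "specificity": 50, "feasibility": 70})
# A total below 50 triggers the retry offer (max 1 retry with a refined prompt).
needs_retry = score < 50
```

With the illustrative ratings above, the total is 54.5, so no retry is offered.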
Continuously monitor for these signals. When cumulative score ≥ 7, invoke think-twice automatically:
| Signal | Weight |
|---|---|
| Same fix tried 3+ times | 3 |
| Circular reasoning (referencing own failed output) | 3 |
| Same error 3+ times after fixes | 3 |
| Exhaustion language ("I've tried everything") | 3 |
| Uncertainty hedging ("I'm not sure why") | 2 |
| Approach change without rationale | 2 |
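A minimal sketch of the monitor, assuming each signal is tracked as a boolean flag. The weights mirror the table above; the flag names are illustrative, not part of the skill:

```python
# Weights mirror the stuck-signal table; flag names are illustrative.
SIGNAL_WEIGHTS = {
    "same_fix_3x": 3,
    "circular_reasoning": 3,
    "same_error_3x": 3,
    "exhaustion_language": 3,
    "uncertainty_hedging": 2,
    "approach_change_no_rationale": 2,
}

def should_invoke_think_twice(observed: set) -> bool:
    """Invoke automatically once the cumulative weight reaches 7."""
    return sum(SIGNAL_WEIGHTS[s] for s in observed) >= 7
```

Two weight-3 signals alone sum to 6 and do not trigger; any third signal tips the total past the threshold.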

| Problem Type | First | Escalate To |
|---|---|---|
| Reasoning (logic, approach, design) | think-twice | perplexity-research |
| Knowledge (API docs, error codes, facts) | perplexity-research | think-twice for fresh reasoning |
| Both (stuck + need facts) | think-twice | perplexity-research with refined query |
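The routing above reduces to a simple lookup. The problem-type keys below are illustrative; the tool names come from the table:

```python
# First-choice and escalation tool per problem type, per the routing table.
ESCALATION = {
    "reasoning": ("think-twice", "perplexity-research"),
    "knowledge": ("perplexity-research", "think-twice"),
    "both": ("think-twice", "perplexity-research"),
}

def route(problem_type: str) -> tuple:
    """Return (first_tool, escalation_tool) for a classified problem."""
    return ESCALATION[problem_type]
```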
⚠️ Cost gate: `perplexity-research` calls a paid API. Before escalating, confirm the knowledge gap cannot be resolved with a web search or by re-reading existing context. If escalating, inform the user that a paid API call is being made.
The prompt sent to the sub-agent determines outcome quality. MUST include:
| Element | Required? | Why |
|---|---|---|
| Problem statement | ✅ | What's broken or stuck |
| Technical context | ✅ | Stack, versions, constraints |
| What was tried + outcomes | ✅ | Prevents re-trying failed approaches |
| Exact error messages | ✅ | Enables pattern matching |
| Minimal code snippet | ✅ | Concrete not abstract |
| Constraints | ✅ | What CAN'T change |
| Specific ask | ✅ | "What else could cause X?" not "help" |
Total: <2000 tokens. Self-contained. No references to "above" or "earlier."
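A hypothetical assembly step for the consultation prompt. The seven required elements come from the table above; the field names, section layout, and the rough 4-characters-per-token budget estimate are assumptions:

```python
# Field names follow the required-elements table; naming is illustrative.
REQUIRED_FIELDS = [
    "problem_statement", "technical_context", "tried_and_outcomes",
    "error_messages", "code_snippet", "constraints", "specific_ask",
]

def build_consultation_prompt(fields: dict) -> str:
    """Assemble a self-contained consultation prompt, rejecting gaps and overruns."""
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f)]
    if missing:
        raise ValueError(f"consultation prompt incomplete: {missing}")
    body = "\n\n".join(f"## {f}\n{fields[f]}" for f in REQUIRED_FIELDS)
    if len(body) / 4 > 2000:  # crude ~4-chars-per-token estimate
        raise ValueError("prompt exceeds the 2000-token budget")
    return body
```

Failing fast on a missing element keeps the dispatched prompt self-contained, which is the point of the checklist.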
- `references/consultation-prompt-template.md` — Prompt template
- `references/scoring-rubric.md` — Scoring dimensions
- `prompts/consultant-persona.md` — Sub-agent persona

| Failure | Fix |
|---|---|
| Sub-agent inherits same flawed assumptions | Provide raw symptoms only, not prior conclusions |
| Agent ignores stuck signals and keeps looping | Enforce cumulative score threshold — 7+ is mandatory |
| Fresh perspective is too shallow | Sub-agent must produce root-cause hypothesis, not just "try X" |
```shell
# Example: invoke think-twice when stuck
node ~/.codex/superpowers-augment/superpowers-augment.js use-skill think-twice
```