From superpowers-plus
EXPERIMENTAL: Writes comprehensive context-free prompts for code analysis to reset perspective and reduce bias from context pollution. Validated in a 20-round experiment; always verify outputs manually.
```shell
npx claudepluginhub bordenet/superpowers-plus --plugin superpowers-plus
```

This skill uses the workspace's default tool permissions.
> **Wrong skill?** Getting unstuck → `think-twice`. Research → `perplexity-research`. Brainstorming → `brainstorming`.
**WARNING:** This skill is EXPERIMENTAL. It has been validated in a controlled experiment but is NOT production-ready. Expect a ~20% false positive rate. ALWAYS verify outputs manually before acting on findings.
Winner: Condition B (Reframe-Self) - Write prompt, answer yourself (no external model)
| Condition | VH | HR | Avg VH/Round | HR Rate |
|---|---|---|---|---|
| A: Direct | 19 | 1 | 3.8 | 20% |
| B: Reframe-Self | 21 | 1 | 4.2 | 20% ← WINNER |
| C: Direct-External | 23 | 4 | 4.6 | 80% |
| D: Reframe-External | 18 | 6 | 3.6 | 100% ← WORST |
Key Insight: Reframing helps Claude (+10% VH), but HURTS external models (+400% HR).
Condition D (reframe + Gemini) had 100% hallucination rate.
Every single round had at least one false positive. The detailed prompts give external models more rope to hallucinate confidently.
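The 20 rounds in the experiment split evenly across the four conditions, five per condition, which is where the Avg VH/Round column comes from. A quick recomputation (a sketch; the 5-rounds-per-condition split is inferred from the averages rather than stated directly):

```shell
# Recompute the Avg VH/Round column from the VH totals,
# assuming 5 rounds per condition (20 rounds / 4 conditions).
awk -F'|' '{ printf "%s: %.1f VH/round\n", $1, $2 / 5 }' <<'EOF'
A|19
B|21
C|23
D|18
EOF
```

Running this reproduces the table's 3.8 / 4.2 / 4.6 / 3.6 column, confirming the per-condition round count.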
Even with Claude answering its own prompts, expect 1 in 5 findings to be wrong. Always verify findings before acting on them.
This skill was validated on 5 genesis-tools projects and may not generalize to other codebases.
| Trigger | Description |
|---|---|
| Complex system review | Multi-component systems with alignment concerns |
| Adversarial analysis | Looking for gaming vulnerabilities or edge cases |
| Independent verification | Verify claims from external sources (Gemini, GPT) |
| Pre-commit review | Final check before major commits |
Explicit invocation is required:

```
Use the experimental-self-prompting skill to analyze [system]
```
Create a context-free prompt that any engineer could pick up cold:
```
You are an expert [ROLE] performing [TASK TYPE] on [SYSTEM].

## CONTEXT
[Explain the system, its components, and their relationships]

## THE PROBLEM
[What misalignment/issue pattern you're looking for]

## YOUR TASK
[Specific things to check]

## VERIFICATION REQUIREMENTS
For EACH finding:
1. State the claim
2. Cite exact file and line number
3. Show evidence (grep/code)
4. Categorize: VERIFIED | FALSE POSITIVE | NEEDS INVESTIGATION

## FILES TO EXAMINE
[List specific files with full paths]

Focus on ACTIONABLE findings with EVIDENCE. No speculation.
```
Treat the prompt as if you've never seen the code before. Answer systematically.
CRITICAL: Never trust findings without verification.
For each finding, re-run the verification steps (state the claim, cite the exact file and line, show evidence, categorize) before accepting it.
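The categorization step can be mechanized with a small shell check: confirm that the cited line actually contains the claimed code before marking a finding VERIFIED. A minimal sketch (the function name and demo file are illustrative, not part of the skill):

```shell
# verify_finding FILE LINE PATTERN
# Prints VERIFIED if the cited line matches the claimed pattern,
# FALSE-POSITIVE otherwise (missing files also fail the check).
verify_finding() {
  if sed -n "${2}p" "$1" 2>/dev/null | grep -q "$3"; then
    echo "VERIFIED"
  else
    echo "FALSE-POSITIVE"
  fi
}

# Demo on a throwaway file.
tmp=$(mktemp)
printf 'alpha\nbeta\ngamma\n' > "$tmp"
verify_finding "$tmp" 2 'beta'    # cited line matches the claim
verify_finding "$tmp" 2 'delta'   # claim is not on that line
rm -f "$tmp"
```

This catches the most common hallucination mode observed in the experiment: a confident citation to a file and line that does not contain the claimed code.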
Create summary with:
This skill will be promoted to production when:
Related docs:

- `superpowers-plus/docs/plans/experiment-results-v2/`
- `superpowers-plus/docs/plans/experiment-results-v2/STATISTICAL_ANALYSIS.md`
- `superpowers-plus/docs/SKILL_COMPARISON_self-prompting_vs_think-twice.md`

```shell
# Generate a context-free prompt for fresh analysis
echo "Analyze this codebase for [specific concern].
Constraints: [list constraints].
Output format: [specify format].
Do NOT reference prior analysis." > /tmp/self-prompt.md
```
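The `echo` form works in most shells, but multi-line double-quoted strings are easy to break; a quoted here-doc writes the same prompt file with less quoting risk (same placeholders as above):

```shell
# Same template via a quoted here-doc: no expansion, no quoting surprises.
cat > /tmp/self-prompt.md <<'EOF'
Analyze this codebase for [specific concern].
Constraints: [list constraints].
Output format: [specify format].
Do NOT reference prior analysis.
EOF
```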