Deep analysis debugging mode for complex issues. Activates methodical investigation protocol with evidence gathering, hypothesis testing, and rigorous verification. Use when standard troubleshooting fails or when issues require systematic root cause analysis.
/plugin marketplace add glittercowboy/taches-cc-resources
/plugin install taches-cc-resources@taches-cc-resources
This skill inherits all available tools. When active, it can use any tool Claude has access to.
references/debugging-mindset.md
references/hypothesis-testing.md
references/investigation-techniques.md
references/verification-patterns.md
references/when-to-research.md
The skill emphasizes treating code you wrote with MORE skepticism than unfamiliar code: cognitive biases about "how it should work" can blind you to actual implementation errors. Use the scientific method to systematically identify root causes rather than applying quick fixes. </objective>
<context_scan> Run on every invocation to detect domain-specific debugging expertise:
# What files are we debugging?
echo "FILE_TYPES:"
find . -maxdepth 2 -type f 2>/dev/null | grep -E '\.(py|js|jsx|ts|tsx|rs|swift|c|cpp|go|java)$' | head -10
# Check for domain indicators
[ -f "package.json" ] && echo "DETECTED: JavaScript/Node project"
[ -f "Cargo.toml" ] && echo "DETECTED: Rust project"
{ [ -f "setup.py" ] || [ -f "pyproject.toml" ]; } && echo "DETECTED: Python project"
{ ls -d *.xcodeproj >/dev/null 2>&1 || [ -f "Package.swift" ]; } && echo "DETECTED: Swift/macOS project"
[ -f "go.mod" ] && echo "DETECTED: Go project"
# Scan for available domain expertise
echo "EXPERTISE_SKILLS:"
ls ~/.claude/skills/expertise/ 2>/dev/null | head -5
Present findings before starting investigation. </context_scan>
<domain_expertise>
Domain-specific expertise lives in ~/.claude/skills/expertise/
Domain skills contain comprehensive knowledge including debugging, testing, performance, and common pitfalls. Before investigation, determine if domain expertise should be loaded.
<scan_domains>
ls ~/.claude/skills/expertise/ 2>/dev/null
This reveals available domain expertise (e.g., macos-apps, iphone-apps, python-games, unity-games).
If no expertise skills found: Proceed without domain expertise (graceful degradation). The skill works fine with general debugging methodology. </scan_domains>
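The scan-plus-fallback above can be sketched as one small helper (the directory layout is the one described in this section; the function name is illustrative):

```shell
# check_expertise DIR - list available domain skills, or fall back gracefully
# to general debugging methodology when none are installed.
check_expertise() {
  dir="$1"
  if [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "AVAILABLE:"
    ls "$dir"
  else
    echo "NONE: proceeding with general debugging methodology"
  fi
}

check_expertise "$HOME/.claude/skills/expertise"
```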
<inference_rules> If user's description or codebase contains domain keywords, INFER the domain:
| Keywords/Files | Domain Skill |
|---|---|
| "Python", "game", "pygame", ".py" + game loop | expertise/python-games |
| "React", "Next.js", ".jsx/.tsx" | expertise/nextjs-ecommerce |
| "Rust", "cargo", ".rs" files | expertise/rust-systems |
| "Swift", "macOS", ".swift" + AppKit/SwiftUI | expertise/macos-apps |
| "iOS", "iPhone", ".swift" + UIKit | expertise/iphone-apps |
| "Unity", ".cs" + Unity imports | expertise/unity-games |
| "SuperCollider", ".sc", ".scd" | expertise/supercollider |
| "Agent SDK", "claude-agent" | expertise/with-agent-sdk |
If domain inferred, confirm:
Detected: [domain] issue → expertise/[skill-name]
Load this debugging expertise? (Y / see other options / none)
</inference_rules>
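The file-based half of the table above can be sketched as a shell probe (the keyword half happens in conversation). The `has` helper, the probe order, and the UIKit grep are illustrative choices, not a prescribed implementation:

```shell
# has GLOB - succeed if any file matches; relies on unmatched globs staying literal.
has() { ls $1 >/dev/null 2>&1; }

infer_domain() {
  if   has "Cargo.toml";           then echo "expertise/rust-systems"
  elif has "*.sc" || has "*.scd";  then echo "expertise/supercollider"
  elif has "*.tsx" || has "*.jsx"; then echo "expertise/nextjs-ecommerce"
  elif has "*.swift"; then
    # UIKit vs AppKit/SwiftUI distinguishes iOS from macOS per the table
    if grep -q "import UIKit" *.swift 2>/dev/null; then echo "expertise/iphone-apps"
    else echo "expertise/macos-apps"; fi
  elif has "*.cs"; then echo "expertise/unity-games"   # table also wants Unity imports
  elif has "*.py"; then echo "expertise/python-games"  # table also wants a game loop
  else echo "none"
  fi
}

infer_domain
```

Per the table, the `.cs` and `.py` branches are over-broad on their own; the conversational keywords ("Unity", "pygame", a game loop) supply the missing confirmation before the domain is proposed.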
<no_inference> If no domain obvious, present options:
What type of project are you debugging?
Available domain expertise:
1. macos-apps - macOS Swift (SwiftUI, AppKit, debugging, testing)
2. iphone-apps - iOS Swift (UIKit, debugging, performance)
3. python-games - Python games (Pygame, physics, performance)
4. unity-games - Unity (C#, debugging, optimization)
[... any others found in expertise/]
N. None - proceed with general debugging methodology
C. Create domain expertise for this domain
Select:
</no_inference>
<load_domain> When domain selected, READ all references from that skill:
cat ~/.claude/skills/expertise/[domain]/references/*.md 2>/dev/null
This loads comprehensive domain knowledge BEFORE investigation:
Announce: "Loaded [domain] expertise. Investigating with domain-specific context."
If domain skill not found: Inform user and offer to proceed with general methodology or create the expertise. </load_domain>
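The load-with-fallback behavior described above can be sketched as a single function (the path comes from this section; the function name and messages are illustrative):

```shell
# load_domain NAME - read every reference file for a domain skill,
# falling back to general methodology when the skill does not exist.
load_domain() {
  dir="$HOME/.claude/skills/expertise/$1/references"
  if [ -d "$dir" ]; then
    cat "$dir"/*.md 2>/dev/null
    echo "Loaded $1 expertise. Investigating with domain-specific context."
  else
    echo "No expertise found for '$1' - offering general methodology instead."
  fi
}
```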
<when_to_load> Domain expertise should be loaded BEFORE investigation when domain is known.
Domain expertise is NOT needed for:
</when_to_load>
Important: If you wrote or modified any of the code being debugged, you have cognitive biases about how it works. Your mental model of "how it should work" may be wrong. Treat code you wrote with MORE skepticism than unfamiliar code - you're blind to your own assumptions. </context>
<core_principle> VERIFY, DON'T ASSUME. Every hypothesis must be tested. Every "fix" must be validated. No solutions without evidence.
ESPECIALLY: Code you designed or implemented is guilty until proven innocent. Your intent doesn't matter - only the code's actual behavior matters. Question your own design decisions as rigorously as you'd question anyone else's. </core_principle>
<quick_start>
<evidence_gathering>
Before proposing any solution:
A. Document Current State
B. Map the System
C. Gather External Knowledge (when needed)
See references/when-to-research.md for detailed guidance on research strategy.
</evidence_gathering>
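Step A above can be sketched as a snapshot script; `./repro.sh` is a hypothetical stand-in for whatever command reproduces the bug:

```shell
# snapshot_evidence - record the current failing state before changing anything:
# the exact failure output, recent commits, and working-tree status.
snapshot_evidence() {
  out="debug-evidence.txt"
  {
    echo "=== Captured: $(date) ==="
    echo "=== Exact failure output ==="
    ./repro.sh 2>&1 || true          # hypothetical reproduction command
    echo "=== Recent changes ==="
    git log --oneline -10 2>/dev/null
    echo "=== Working tree state ==="
    git status --short 2>/dev/null
  } > "$out"
  echo "Evidence written to $out"
}
```

Capturing evidence to a file before touching anything keeps the baseline stable while hypotheses are tested.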
<root_cause_analysis>
A. Form Hypotheses
Based on evidence, list possible causes:
B. Test Each Hypothesis
For each hypothesis:
See references/hypothesis-testing.md for scientific method application.
C. Eliminate or Confirm
Don't move forward until you can answer:
</root_cause_analysis>
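Steps A and B above can be sketched as falsifiable probes with recorded verdicts; the hypothesis names and probe commands below are illustrative, not prescribed:

```shell
# check_hypothesis NAME CMD... - run a probe and record whether the
# evidence supports or rejects the hypothesis.
check_hypothesis() {
  name="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "SUPPORTED: $name"
  else
    echo "REJECTED: $name"
  fi
}

# Example probes for a hypothetical "server fails to start" bug:
check_hypothesis "config file is missing" test ! -f config.json
check_hypothesis "required interpreter absent" sh -c '! command -v python3 >/dev/null'
```

Turning each hypothesis into a command that can fail makes "eliminate or confirm" mechanical rather than impressionistic.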
<solution_development>
Only after confirming root cause:
A. Design Solution
B. Implement with Verification
C. Test Thoroughly
See references/verification-patterns.md for comprehensive verification approaches.
</solution_development>
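Steps B and C above can be sketched as a two-gate check; both commands are hypothetical stand-ins (e.g. `./repro.sh` and `make test`) for your own repro and suite:

```shell
# verify_fix REPRO_CMD SUITE_CMD - a fix counts only if the original
# reproduction now passes AND the existing test suite still passes.
verify_fix() {
  repro="$1"; suite="$2"
  if ! eval "$repro"; then
    echo "FAIL: original bug still reproduces"; return 1
  fi
  if ! eval "$suite"; then
    echo "FAIL: fix introduced a regression"; return 1
  fi
  echo "VERIFIED: repro passes and test suite is green"
}
```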
</quick_start>
<critical_rules>
</critical_rules>
<success_criteria>
Before starting:
During investigation:
If you can't answer "yes" to all of these, keep investigating.
CRITICAL: Do NOT mark debugging tasks as complete until this checklist passes.
</success_criteria>
<output_format>
## Issue: [Problem Description]
### Evidence
[What you observed - exact errors, behaviors, outputs]
### Investigation
[What you checked, what you found, what you ruled out]
### Root Cause
[The actual underlying problem with evidence]
### Solution
[What you changed and WHY it addresses the root cause]
### Verification
[How you confirmed this works and doesn't break anything else]
</output_format>
<advanced_topics>
For deeper topics, see reference files:
Debugging mindset: references/debugging-mindset.md
Investigation techniques: references/investigation-techniques.md
Hypothesis testing: references/hypothesis-testing.md
Verification patterns: references/verification-patterns.md
Research strategy: references/when-to-research.md
</advanced_topics>