Classify problems into Cynefin Framework domains (Clear, Complicated, Complex, Chaotic, Confusion) and recommend appropriate response strategies. Use when unsure how to approach a problem, facing analysis paralysis, or needing to choose between expert analysis and experimentation.
```
/plugin marketplace add rjmurillo/ai-agents
/plugin install project-toolkit@ai-agents
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Classify problems into the correct Cynefin domain and recommend the appropriate response strategy. This prevents applying the wrong cognitive approach to problems.
Activate when the user says things like:

- "classify this problem"
- "cynefin analysis"
- "what approach should we take"
- "should we analyze or experiment"
- "is this complex or complicated"

Use this skill when:
Use decision-critic instead when:
```
                   UNORDERED                          ORDERED
      ┌─────────────────────────────────┬─────────────────────────────────┐
      │             COMPLEX             │           COMPLICATED           │
      │                                 │                                 │
      │  Cause-effect visible only      │  Cause-effect discoverable      │
      │  in retrospect                  │  through expert analysis        │
      │                                 │                                 │
      │  Response: PROBE-SENSE-RESPOND  │  Response: SENSE-ANALYZE-RESPOND│
      │  • Safe-to-fail experiments     │  • Expert consultation          │
NOVEL │  • Emergent practice            │  • Root cause analysis          │ KNOWN
      │  • Amplify what works           │  • Good practice                │
      ├─────────────────────────────────┼─────────────────────────────────┤
      │             CHAOTIC             │              CLEAR              │
      │                                 │                                 │
      │  No discernible cause-effect    │  Cause-effect obvious to all    │
      │  No time for analysis           │                                 │
      │                                 │  Response: SENSE-CATEGORIZE-    │
      │  Response: ACT-SENSE-RESPOND    │            RESPOND              │
      │  • Stabilize first              │  • Apply best practice          │
      │  • Novel practice               │  • Follow procedures            │
      │  • Then move to complex         │  • Standardize                  │
      └─────────────────────────────────┴─────────────────────────────────┘

                               CONFUSION (center)
                        Domain unknown - gather information
```
Ask: "Can we predict the outcome of an action?"
| If... | Then Domain is Likely... |
|---|---|
| Anyone can predict outcome | Clear |
| Experts can predict outcome | Complicated |
| Outcome only knowable after action | Complex |
| No one can predict, crisis mode | Chaotic |
| Insufficient information to determine | Confusion |
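The quick test above can be sketched as a small lookup helper. This is an illustrative sketch only; the answer keys (`"anyone"`, `"experts"`, `"retrospect"`, `"no-one"`, `"unknown"`) are hypothetical names, not part of the skill's actual interface.

```python
def classify(predictability: str) -> str:
    """Map the 'can we predict the outcome?' quick test to a likely domain.

    `predictability` is one of:
      "anyone"     - anyone can predict the outcome
      "experts"    - experts can predict after analysis
      "retrospect" - outcome is knowable only after acting
      "no-one"     - no one can predict; crisis mode
      "unknown"    - insufficient information
    """
    domains = {
        "anyone": "Clear",
        "experts": "Complicated",
        "retrospect": "Complex",
        "no-one": "Chaotic",
        "unknown": "Confusion",
    }
    # Anything we cannot place defaults to Confusion: gather information first.
    return domains.get(predictability, "Confusion")

print(classify("experts"))  # Complicated
```

Note the default: when the predictability answer itself is unclear, the safest classification is Confusion, which prompts information gathering rather than a guessed strategy.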
Problems can move between domains:
Clear Domain Indicators:
Complicated Domain Indicators:
Complex Domain Indicators:
Chaotic Domain Indicators:
## Cynefin Classification
**Problem**: [Restate the problem concisely]
### Domain: [CLEAR | COMPLICATED | COMPLEX | CHAOTIC | CONFUSION]
**Confidence**: [HIGH | MEDIUM | LOW]
### Rationale
[2-3 sentences explaining why this domain based on cause-effect relationship]
### Response Strategy
**Approach**: [Sense-Categorize-Respond | Sense-Analyze-Respond | Probe-Sense-Respond | Act-Sense-Respond | Gather Information]
### Recommended Actions
1. [First specific action]
2. [Second specific action]
3. [Third specific action]
### Pitfall Warning
[Domain-specific anti-pattern to avoid]
### Related Considerations
- **Temporal**: [Will domain likely shift? When?]
- **Boundary**: [Is this near a domain boundary?]
- **Compound**: [Are sub-problems in different domains?]
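As an illustration, the report skeleton above could be filled from a structured result. The `Classification` dataclass and its field names are hypothetical, chosen here only to mirror the template:

```python
from dataclasses import dataclass, field


@dataclass
class Classification:
    problem: str
    domain: str        # CLEAR | COMPLICATED | COMPLEX | CHAOTIC | CONFUSION
    confidence: str    # HIGH | MEDIUM | LOW
    rationale: str
    approach: str
    actions: list = field(default_factory=list)
    pitfall: str = ""


def render(c: Classification) -> str:
    """Render the Markdown report in the format shown above."""
    actions = "\n".join(f"{i}. {a}" for i, a in enumerate(c.actions, 1))
    return (
        f"## Cynefin Classification\n\n"
        f"**Problem**: {c.problem}\n\n"
        f"### Domain: {c.domain}\n\n"
        f"**Confidence**: {c.confidence}\n\n"
        f"### Rationale\n{c.rationale}\n\n"
        f"### Response Strategy\n**Approach**: {c.approach}\n\n"
        f"### Recommended Actions\n{actions}\n\n"
        f"### Pitfall Warning\n{c.pitfall}\n"
    )


report = render(Classification(
    problem="Tests pass locally but fail randomly in CI",
    domain="COMPLEX",
    confidence="MEDIUM",
    rationale="Multiple interacting factors; cause-effect visible only in retrospect.",
    approach="Probe-Sense-Respond",
    actions=["Reproduce under controlled timing", "Vary one factor per run"],
    pitfall="Trying to fully analyze before acting.",
))
print(report)
```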
### Clear

**When you see it**: Bug with known fix, style violation, typo, standard CRUD operation.

**Response**: Apply best practice immediately. Don't over-engineer.

**Pitfall**: Over-complicating simple problems. Creating abstractions where none are needed.
Software Examples:
### Complicated

**When you see it**: Performance issue, security vulnerability assessment, architecture evaluation.

**Response**: Gather experts, analyze thoroughly, then act decisively.

**Pitfall**: Analysis paralysis OR acting without sufficient expertise.
Software Examples:
### Complex

**When you see it**: User behavior prediction, team dynamics, new technology adoption, architectural decisions with uncertainty.

**Response**: Run safe-to-fail experiments. Probe, sense patterns, respond. Amplify what works.

**Pitfall**: Trying to fully analyze before acting. Expecting predictable outcomes.
Software Examples:
### Chaotic

**When you see it**: Production outage, data breach, critical security incident.

**Response**: Act immediately to stabilize. Restore order first. Analyze later.

**Pitfall**: Forming committees. Waiting for consensus. Deep analysis during crisis.
Software Examples:
### Confusion

**When you see it**: Insufficient information to classify. Contradictory signals. Unknown unknowns.

**Response**: Gather information. Break the problem into smaller pieces. Reclassify components.

**Pitfall**: Assuming a domain without evidence. Paralysis from uncertainty.
Software Examples:
| Skill | Integration Point |
|---|---|
| decision-critic | After classifying as Complicated, use decision-critic to validate analysis |
| milestone-planner | After classifying as Complex, use milestone-planner to design experiments |
| architect | Complicated architectural decisions benefit from ADR process |
| analyst | Confusion domain benefits from analyst investigation |
When a problem spans multiple domains:
Structured classification with validation:

```bash
python3 .claude/skills/cynefin-classifier/scripts/classify.py \
  --problem "Description of the problem" \
  --context "Additional context about constraints, environment"
```
Exit Codes:
Escalate to human or senior decision-maker when:
Input: "Tests pass locally but fail randomly in CI"
Classification: COMPLEX
Rationale: Multiple interacting factors (timing, environment, dependencies, parallelism) make cause-effect unclear. Analysis alone won't solve this.
Strategy: Probe-Sense-Respond
Pitfall: Don't spend weeks trying to "root cause" before experimenting.
Input: "Production database is unresponsive, customers cannot access the site"
Classification: CHAOTIC
Rationale: Active harm occurring. No time for analysis. Stabilization required.
Strategy: Act-Sense-Respond
Pitfall: Don't form a committee. Don't start analyzing before acting.
Input: "Should we use React or Vue for our new frontend?"
Classification: COMPLEX
Rationale: Team dynamics, learning curves, ecosystem fit, and long-term maintainability only emerge through experience. Trade-off analysis alone is insufficient.
Strategy: Probe-Sense-Respond
Pitfall: Don't try to "perfectly analyze" all trade-offs in a spreadsheet.
Input: "Application memory grows steadily over 24 hours"
Classification: COMPLICATED
Rationale: Cause-effect is discoverable through expert analysis. Heap dumps, profiling, and code review will reveal the source.
Strategy: Sense-Analyze-Respond
Pitfall: Don't guess and patch. Systematic analysis will find root cause.
Input: "The app feels slow sometimes"
Classification: CONFUSION
Rationale: Insufficient information to determine domain. Could be Clear (known fix), Complicated (needs profiling), or Complex (user perception).
Strategy: Gather Information
Next Step: Reclassify once information gathered.
After classification:
| Anti-Pattern | Description | Consequence |
|---|---|---|
| Complicated-izing Complexity | Applying analysis to emergent problems | Analysis paralysis, wasted effort |
| Simplifying Complicated | Skipping expert analysis for nuanced problems | Rework, technical debt |
| Analyzing Chaos | Forming committees during crisis | Prolonged outage, increased damage |
| Experimenting on Clear | Running A/B tests on solved problems | Wasted time, unnecessary risk |
| Guessing Confusion | Assuming domain without evidence | Wrong approach, compounded problems |
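Each anti-pattern in the table reduces to pairing a domain with the wrong response strategy. A minimal mismatch check might look like the following sketch (the names `CORRECT_STRATEGY` and `is_mismatch` are illustrative, not part of the skill):

```python
# Correct response strategy for each Cynefin domain, per the table above.
CORRECT_STRATEGY = {
    "Clear": "Sense-Categorize-Respond",
    "Complicated": "Sense-Analyze-Respond",
    "Complex": "Probe-Sense-Respond",
    "Chaotic": "Act-Sense-Respond",
    "Confusion": "Gather Information",
}


def is_mismatch(domain: str, strategy: str) -> bool:
    """True when the chosen strategy does not fit the domain.

    Example: applying Sense-Analyze-Respond (expert analysis) to a
    Complex problem is the "Complicated-izing Complexity" anti-pattern.
    """
    return CORRECT_STRATEGY.get(domain) != strategy


print(is_mismatch("Complex", "Sense-Analyze-Respond"))  # True
print(is_mismatch("Chaotic", "Act-Sense-Respond"))      # False
```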