From dstoic
Sense-making before action. Classify problem using Cynefin to route to the right skill chain. Use when: frame, what approach, how should I start, which skill, where to begin, unsure what to do. NOT for known tasks — just do them.
npx claudepluginhub digital-stoic-org/agent-skills --plugin dstoic

This skill is limited to using the following tools:
Creates isolated Git worktrees for feature branches with prioritized directory selection, gitignore safety checks, auto project setup for Node/Python/Rust/Go, and baseline verification.
Executes implementation plans in current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance then code quality.
Dispatches parallel agents to independently tackle 2+ tasks like separate test failures or subsystems without shared state or dependencies.
Classify → route to right skill chain. Domain determines agent pattern, not just skill.
Framing: $ARGUMENTS
Read $ARGUMENTS. Attempt Cynefin domain classification using constraint language.
If confidence ≥80%: Propose classification — do NOT decide unilaterally.
🎯 Auto-classified: [Domain] (constraint: [type])
→ Verb: [probe|analyze|execute|act|decompose]
→ Suggested route: [skill chain]
Confirm? [Yes / Re-classify manually]
If confidence <80% or no $ARGUMENTS: Skip to Q1 (full qualification).
For Complicated domain with ≥80% confidence: also determine evolving vs degraded from context if possible. If clear → skip Q1.1 and route directly.
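The confidence-gated flow above can be sketched as follows. This is a hypothetical illustration only: the `classify` callable, the return shape `(domain, confidence)`, and the route strings are assumptions for the sketch, not part of the skill's actual implementation.

```python
# Hypothetical sketch of the confidence-gated auto-classification flow.
CONFIDENCE_THRESHOLD = 0.80

def route(arguments, classify):
    """Decide whether to propose a classification or fall back to Q1."""
    if not arguments:
        return "ask_q1"                       # no $ARGUMENTS: full qualification
    domain, confidence = classify(arguments)  # e.g. ("Complex", 0.85)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"propose:{domain}"            # propose -- never decide unilaterally
    return "ask_q1"                           # low confidence: full qualification
```

Note that even at high confidence the sketch only *proposes* the domain; the user still confirms or re-classifies manually.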
CRITICAL: After EVERY AskUserQuestion call, check if answers are empty/blank. Known Claude Code bug: outside Plan Mode, AskUserQuestion silently returns empty answers without showing UI.
If answers are empty: DO NOT proceed with assumptions. Instead, re-ask the questions as plain inline text and wait for the user's reply.
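The empty-answer check above can be expressed as a small guard. This is a sketch under assumptions: `guard_answers` and its dict-shaped input are hypothetical names for illustration, not the tool's real API.

```python
def guard_answers(answers):
    """Reject AskUserQuestion output whose answers are all empty.

    An empty result means the question UI never rendered (known bug
    outside Plan Mode), so the caller must re-ask rather than assume.
    """
    if not answers or all(v in ("", None) for v in answers.values()):
        raise RuntimeError("AskUserQuestion returned empty answers; re-ask inline")
    return answers
```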
Question Refinement (before classifying): If $ARGUMENTS is vague or broad, generate 2-3 clarifying sub-questions that sharpen the problem statement. Present them inline before the AskUserQuestion call. Skip if $ARGUMENTS is already specific.
AskUserQuestion — both questions in one call:
Q1 "Constraints" (5 options, constraint-based):
Q1.1 "Complicated sub-question" (only if Q1 = Governing/Complicated): evolving → /investigate, degraded → /troubleshoot. (Skip Q1.1 if auto-classify already determined evolving vs degraded.)
Q2 "Scale" (3 options, skip if Chaotic):
Map constraint type → domain → verb → skill chain:
| Domain | Constraint | Verb | Scale | Route | OpenSpec? |
|---|---|---|---|---|---|
| Clear | Rigid | execute | Pebble | Just code it | No |
| Clear | Rigid | execute | Boulder | /openspec-develop directly | Yes |
| Complicated | Governing/Evolving | analyze | Any | /investigate → /openspec-plan | Boulder: yes |
| Complicated | Governing/Degraded | analyze | Any | /troubleshoot → stabilize → re-frame | No |
| Complex | Enabling/no hypothesis | probe | Any | /brainstorm → /probe → /openspec-plan | Yes |
| Complex | Enabling/has hypothesis | probe | Any | /probe → sense → /openspec-plan | Yes |
| Chaotic | Absent | act | — | /experiment → stabilize → /frame-problem | No |
| Confused | Unknown | decompose | Any | /frame-problem (re-classify after clarifying) | — |
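The routing table above can be encoded as a plain lookup. A minimal sketch, assuming a flat constraint-keyed dict; `ROUTES` and `lookup` are hypothetical names, and the Complex rows are collapsed to the has-hypothesis route for brevity.

```python
# Hypothetical encoding of the constraint -> (domain, verb, route) table.
ROUTES = {
    "Rigid":              ("Clear",       "execute",   "just code it / /openspec-develop"),
    "Governing/Evolving": ("Complicated", "analyze",   "/investigate -> /openspec-plan"),
    "Governing/Degraded": ("Complicated", "analyze",   "/troubleshoot -> stabilize -> re-frame"),
    "Enabling":           ("Complex",     "probe",     "/probe -> sense -> /openspec-plan"),
    "Absent":             ("Chaotic",     "act",       "/experiment -> stabilize -> /frame-problem"),
    "Unknown":            ("Confused",    "decompose", "/frame-problem"),
}

def lookup(constraint):
    """Map a constraint type to its (domain, verb, skill chain) triple."""
    return ROUTES[constraint]
```

A dict keeps the mapping declarative, so adding a domain is a one-line change rather than a new branch.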
Present: 🎯 [Domain] → [Verb] → [skill chain] | OpenSpec: [yes/no] | Scale: [boulder/pebble]
AskUserQuestion "Proceed?": Start chain / Re-frame / Skip framing.
On confirm → invoke first skill with $ARGUMENTS.