Recommends epistemic protocols from recent sessions or guides users through scenario-based learning, trials, and quizzes for quick onboarding.
From epistemic-cooperative. Install: `npx claudepluginhub jongwony/epistemic-protocols --plugin epistemic-cooperative`. This skill uses the workspace's default tool permissions.
References: references/advanced-usage.md, references/scenarios.md, references/workflow.md
Start with a quick recommendation based on recent sessions, then optionally continue to guided learning — so users experience value first, learn second.
Invoke this skill when:
Skip when:
Quick Proof: ENTRY → QUICKSCAN → PICK-1 → EVIDENCE → TRIAL → INSIGHT → NEXT
Targeted: ENTRY → QUICKSCAN → MAP → SCENARIO → TRIAL → QUIZ → GUIDE
Targeted + std: ENTRY → SCENARIO → TRIAL → QUIZ → GUIDE
| Phase | Owner | Tool | Purpose |
|---|---|---|---|
| 0. Entry | Main | Gate | Path selection: quick/targeted |
| 1. Quick Scan | Main | Glob, Read | User Context Profile extraction |
| 2a. Pick-1 | Main | — | Quick path: select 1 recommendation |
| 2b. Evidence | Main | — | Quick path: show 1 evidence card |
| 2. Map | Main | — | Targeted path: Profile → Protocol matching |
| 3. Scenario | Main | Gate | Targeted path: context-personalized intervention point |
| 4. Trial | Main | Gate | Real protocol execution (quick: mini trial, targeted: full trial) |
| 4→Q. Insight | Main | — | Quick path: post-trial insight card |
| 4→Q. Next | Main | Gate | Quick path: simplified navigation |
| 4→5 LOOP | Main | Gate | Targeted path: post-trial navigation |
| 5. Quiz | Main | Gate | Targeted path: Socratic protocol recognition quiz |
| 6. Guide | Main | Gate | Targeted path: summary + /report CTA |
Compact mapping for inline use. For full Primary/Secondary/Tertiary tables with detection methods and rationale, refer to /report SKILL.md.
| Protocol | Cluster | When to Use | Key Patterns |
|---|---|---|---|
| Hermeneia /clarify | Planning | AI keeps misunderstanding your intent | Same file 3+ edits (same intent), misunderstood_request friction |
| Telos /goal | Planning | You have a desire but no clear goal | Vague first prompts ("improve", "optimize", "ideas for"), wrong_approach friction |
| Aitesis /inquire | Planning | AI is about to execute without enough context | context_loss friction |
| Prothesis /frame | Analysis | Unsure which analytical perspective to use | Exploration ratio 3:1+ (Read+Grep+Glob vs Edit+Write) |
| Analogia /ground | Analysis | Checking if abstract advice fits your situation | Abstract pattern application without domain validation |
| Syneidesis /gap | Decision | Right before committing, checking for blind spots | Same file 3+ edits (different concerns), excessive_changes friction |
| Prosoche /attend | Execution | Checking execution readiness and controlling risky actions | Bash deploy/push/apply keywords, upstream deficit signals, wrong_file_edited friction |
| Epharmoge /contextualize | Verification | Output is correct but doesn't fit the context | Post-execution environment mismatch |
| Horismos /bound | Cross-cutting | Deciding what to delegate to AI | Boundary probe, domain classification, BoundaryMap |
| Katalepsis /grasp | Cross-cutting | You approved AI work but didn't fully understand it | Verification keywords in firstPrompt ("explain", "what did you do") |
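The compact mapping above can be restated as a lookup table for inline use. This is a convenience sketch, not part of the skill spec; the dict name and tuple layout are assumptions.

```python
# Compact protocol mapping as a lookup table (illustrative restatement of the
# Data Sources table; structure is assumed, not defined by the skill).
# Each value: (Greek name, Cluster, "When to Use" summary)
PROTOCOLS = {
    "/clarify":       ("Hermeneia",  "Planning",      "AI keeps misunderstanding your intent"),
    "/goal":          ("Telos",      "Planning",      "You have a desire but no clear goal"),
    "/inquire":       ("Aitesis",    "Planning",      "AI is about to execute without enough context"),
    "/frame":         ("Prothesis",  "Analysis",      "Unsure which analytical perspective to use"),
    "/ground":        ("Analogia",   "Analysis",      "Checking if abstract advice fits your situation"),
    "/gap":           ("Syneidesis", "Decision",      "Right before committing, checking for blind spots"),
    "/attend":        ("Prosoche",   "Execution",     "Checking execution readiness and controlling risky actions"),
    "/contextualize": ("Epharmoge",  "Verification",  "Output is correct but doesn't fit the context"),
    "/bound":         ("Horismos",   "Cross-cutting", "Deciding what to delegate to AI"),
    "/grasp":         ("Katalepsis", "Cross-cutting", "You approved AI work but didn't fully understand it"),
}

def by_cluster(cluster):
    """List commands in a given cluster, preserving catalog order."""
    return [cmd for cmd, (_, c, _) in PROTOCOLS.items() if c == cluster]
```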
Do NOT present the full protocol catalog upfront. Start with a concise welcome and path selection.
Gate #1:
If Quick recommendation: set path = quick, proceed to Phase 1.
If Browse all: Present the protocol catalog (check installation status via Glob ~/.claude/plugins/cache/epistemic-protocols/*/, then render the 10 protocols from Data Sources as a numbered list grouped by Cluster with name + "When to Use" + installation badge). After catalog, present:
Then proceed based on selection.
If Targeted + Other contains protocol name: proceed directly to session source question.
If Targeted + no protocol specified:
Present a condensed catalog as text output: render the Data Sources table grouped by Cluster, each protocol as /command — When to Use description.
Then Gate #2:
Gate #3 (Targeted only, session source):
State after Phase 0:
- path: quick | targeted
- target_protocol: (targeted only) selected protocol name
- session_source: (targeted only) scan | standard — Quick path always runs Quick Scan
- Skip rule: If targeted + standard → skip Phases 1-2, jump to Phase 3 with preset scenarios from references/scenarios.md.
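The installation-status check in the Browse-all branch of Phase 0 can be sketched as follows. The cache path comes from the Glob pattern in the spec; the assumption that each installed protocol is a directory directly under that path is mine.

```python
# Sketch of the installation-badge check (assumed cache layout: one directory
# per installed protocol under the epistemic-protocols cache path).
import glob
import os

def installed_protocols(cache="~/.claude/plugins/cache/epistemic-protocols/*/"):
    """Return the set of directory names matching the plugin cache pattern."""
    return {os.path.basename(p.rstrip("/"))
            for p in glob.glob(os.path.expanduser(cache))}

def badge(name, installed):
    """Render the installation badge for one catalog entry."""
    return "installed" if name in installed else "not installed"
```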
Build a User Context Profile from recent session metadata. Runs inline with Glob + Read (no subagent delegation). Both Quick and Targeted paths share this phase.
Step 1: Collect session metadata
Glob ~/.claude/projects/*/sessions-index.json (exclude directories containing -worktrees-). Read the 2-3 most recently modified indexes. For each, parse entries and extract the 5 most recent entries' firstPrompt and summary fields.
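Step 1 above can be sketched as a small collector. This is a minimal illustration of the stated rules (exclude `-worktrees-` paths, read the 2-3 most recently modified indexes, take each index's 5 most recent entries); the function name, the `entries` key location, and the assumption that the newest entries sit at the end of the array are mine, not guaranteed by the index format.

```python
# Sketch of Phase 1, Step 1: collect firstPrompt/summary from recent session
# indexes. Index schema details (an "entries" array, newest last) are assumed.
import glob
import json
import os

def collect_session_metadata(pattern="~/.claude/projects/*/sessions-index.json",
                             max_indexes=3, max_entries=5):
    """Gather firstPrompt/summary pairs from the most recent session indexes."""
    paths = [p for p in glob.glob(os.path.expanduser(pattern))
             if "-worktrees-" not in p]              # exclude worktree directories
    paths.sort(key=os.path.getmtime, reverse=True)   # most recently modified first
    records = []
    for path in paths[:max_indexes]:                 # read the 2-3 newest indexes
        with open(path) as f:
            entries = json.load(f).get("entries", [])
        for entry in entries[-max_entries:]:         # 5 most recent entries
            records.append({"firstPrompt": entry.get("firstPrompt", ""),
                            "summary": entry.get("summary", "")})
    return records
```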
Step 2: Infer User Context Profile
From collected metadata, infer:
If no sessions-index.json files found: Quick path proceeds to Pick-1 with fallback (/goal); Targeted path falls back to Onboarding Pool (/goal, /gap, /frame).
Output for Phase 2: User Context Profile (work domains, conversation patterns, task types). Quick Scan infers user context for protocol matching and scenario personalization — behavioral pattern extraction and session diagnostics belong in /report.
Quick path only. Select exactly 1 protocol recommendation from the auto-recommend pool.
Onboarding Pool: /goal (Telos), /gap (Syneidesis), /frame (Prothesis). These three are chosen because users can quickly experience their value. Protocols like /clarify and /grasp are user-initiated by nature and should not be proactively suggested in the first encounter.
Recommendation rules (applied to Quick Scan Profile):
| Protocol | Signal patterns | Priority |
|---|---|---|
| /goal | Vague first prompts ("improve", "optimize", "ideas for", "make it better", "help me plan"); desire without success criteria | Highest (also fallback) |
| /gap | Multiple revisions on same topic in summary; finalization language ("wrap up", "ready", "finalize", "ship", "merge") | Medium |
| /frame | Exploration/comparison language ("approach", "options", "tradeoffs", "compare", "architecture", "which way") | Medium |
Decision logic:
- Match the signal patterns against each entry's firstPrompt and summary fields.
- If multiple protocols match, apply priority: /goal > /gap > /frame.
- If nothing matches, fall back to /goal.

Output: Present recommendation as a single sentence. Do NOT present multiple recommendations or a ranked list.
Format: Present as a single sentence stating which protocol is most likely to help right now.
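The Pick-1 decision logic can be sketched as a simple priority match. The signal lists below contain only the example phrases named in the recommendation rules table; the function name and substring-matching strategy are assumptions (the skill itself infers matches from context, not literal string search).

```python
# Sketch of Pick-1: match signals, apply priority /goal > /gap > /frame,
# fall back to /goal. Substring matching is an illustrative simplification.
GOAL_SIGNALS  = ["improve", "optimize", "ideas for", "make it better", "help me plan"]
GAP_SIGNALS   = ["wrap up", "ready", "finalize", "ship", "merge"]
FRAME_SIGNALS = ["approach", "options", "tradeoffs", "compare", "architecture", "which way"]

def pick_one(first_prompts, summaries):
    """Return exactly one protocol recommendation for the Quick path."""
    text = " ".join(first_prompts + summaries).lower()
    matched = []
    if any(s in text for s in GOAL_SIGNALS):
        matched.append("/goal")
    if any(s in text for s in GAP_SIGNALS):
        matched.append("/gap")
    if any(s in text for s in FRAME_SIGNALS):
        matched.append("/frame")
    for proto in ("/goal", "/gap", "/frame"):  # priority order on multiple matches
        if proto in matched:
            return proto
    return "/goal"                             # fallback when nothing matches
```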
Quick path only. Present exactly 1 evidence card explaining why this recommendation was made.
Evidence generation (per protocol, referencing Data Sources table):
Fallback (no session data): State that no patterns were detected, then cite the protocol's core value proposition from Data Sources "When to Use."
Rules:
After presenting evidence, present:
Branch:
- Try it now → Phase 4 (quick trial)
- Learn more about this recommendation → show Data Sources row for the recommended protocol, then re-ask
- See a different recommendation → pick next from pool and re-present from Phase 2a
- Go to full learning path → set path = targeted and go to Phase 0 targeted flow
Targeted path only. Apply User Context Profile to match protocols to the user's context.
- Sparse Profile: fall back to the Onboarding Pool (/goal, /gap, /frame).
- No session data: use the Onboarding Pool (/goal, /gap, /frame). Proceed immediately without blocking the onboarding flow.

For detailed mapping logic (Primary/Secondary/Tertiary tables, session diagnostics, anti-pattern detection), refer to /report SKILL.md.
Targeted path only. Present a concrete scenario showing where the protocol would have helped.
Scenario construction (2-tier fallback):
- Tier 1 (Profile available): personalize presets from references/scenarios.md using Profile data.
- Tier 2 (no Profile): use presets as-is from references/scenarios.md.

Present scenarios for each of the top 2-3 protocols sequentially.
Scenario format:
Scenario: /X (Protocol Name)
[Situation]: [Concrete situation grounded in user's work context — or preset from scenarios.md]
[Intervention]: If you had called /X at this point:
- [what the protocol would have done — step 1]
- [step 2]
Expected outcome: [e.g., reduced rework, clearer direction]
Clarity rule: Scenarios must present clear-cut protocol fits where the mapping is unambiguous. If a situation could plausibly map to multiple protocols (e.g., "exploration" could be /goal or /frame), do NOT use it as a scenario — reserve it for Phase 5 quiz material instead. The scenario phase builds confidence through recognition; the quiz phase builds discrimination through ambiguity.
Anti-pattern: Scenarios must be self-contained (situation + intervention) with unambiguous protocol fit. Ambiguous patterns belong in Phase 5 quiz.
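The clarity rule above amounts to a partition: a candidate situation becomes a scenario only if it maps to exactly one protocol; multi-match situations are reserved as quiz material. A minimal sketch, where `match_protocols` is a hypothetical classifier not defined by the skill:

```python
# Sketch of the clarity rule: single-fit situations become scenarios,
# ambiguous (multi-fit) situations are held back for the Phase 5 quiz.
# match_protocols(situation) -> list of plausible protocol commands (assumed).
def partition_situations(situations, match_protocols):
    scenarios, quiz_pool = [], []
    for situation in situations:
        fits = match_protocols(situation)
        if len(fits) == 1:
            scenarios.append(situation)   # unambiguous: builds recognition
        else:
            quiz_pool.append(situation)   # ambiguous: builds discrimination
    return scenarios, quiz_pool
```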
Present each scenario as regular text output (Tier 1/2 format above). Then present for navigation only:
Gate (per scenario):
Guide the user through a real, abbreviated protocol experience.
Mini practice prompt: Present a single realistic request (one sentence) that naturally triggers the selected protocol's deficit. When User Context Profile is available, adapt the domain to match the user's work context. Source from references/scenarios.md Trial prompt field, or generate from Data Sources context. Follow with gate interaction:
Execution: The user invokes the actual protocol (e.g., type /goal). The protocol runs in the same session with the mini prompt as context. Trial ends when the invoked protocol reaches its natural termination (convergence or user Esc). After protocol termination, proceed to Quick Post-Trial below.
Quick Post-Trial Insight (2 lines max):
Generate from the protocol just experienced:
- The core principle of the protocol just experienced (sourced from the references/scenarios.md Philosophy field)

Epistemic Ink Tip (1 line, after Post-Trial Insight): "Tip: Run /config to enable Epistemic Ink — an Output Style that enhances protocol interactions with structured formatting."
Quick Post-Trial Navigation:
Present via gate interaction:
Branch:
- That's enough for today → end session with brief closing (include text mention: For deeper analysis, try /report.)
- Try a different protocol → check pool exhaustion: if unrecommended protocols remain in the Onboarding Pool, pick the next and restart from Phase 2a; if the pool is exhausted (all 3 recommended this session), present "You've experienced all core recommendations" and offer the Targeted transition
- Continue to full onboarding → set path = targeted and go to Phase 2 MAP with Quick Scan results
"Try it" selection from Phase 3 already signals intent — enter trial directly without additional confirmation.
Mini practice prompts (scoped for 2-3 exchanges): Use the Trial prompt field from references/scenarios.md for the target protocol. Present the trial guidance as regular text output.
Execution: Prompt the user to invoke the actual protocol (e.g., type /clarify). The protocol runs in the same session with the mini prompt as context. Trial ends when the invoked protocol reaches its natural termination (convergence or user Esc). After protocol termination, present Post-Trial Insight and LOOP.
Offer trial for the top-recommended protocol first. If user completes it, optionally offer trial for the second recommendation.
Post-Trial Insight (presented after trial completion):
After each trial, present a brief insight card sourced from the Philosophy field in references/scenarios.md. Structure:
Protocol Insight: /X (Greek name)
[Core principle — one sentence]
[Workflow position — where this protocol sits and why]
[Game feel — the experiential pattern you just went through]
Post-Trial LOOP:
After the Post-Trial Insight, present:
Branch:
- Quiz → Phase 5
- Another scenario → Phase 3
- Different protocol → Phase 3 with next MAP protocol, or Phase 0 with cached MAP
- Guide → Phase 6
Test protocol recognition through situation-based questions. Question format differs by path.
Question sourcing (in priority order):
1. Ambiguous patterns reserved from Phase 3 (situations that could plausibly map to multiple protocols, e.g., /goal or /frame)
2. Presets from references/scenarios.md

Type 1 — Binary recognition (2-3 questions):
Present via gate interaction for each:
Question template: "Is this a /X situation?"

Type 2 — Reverse recognition (1 question):
Present via gate interaction:
Question template: "Which of these are /X situations?"

Type 3 — Design thinking (1 question):
Present via gate interaction:
Type 1 — Situation recognition (3-4 questions):
Present via gate interaction for each:
Type 2 — Design thinking (1 question):
Same format as Targeted Path Type 3.
Immediate feedback after each question:
Correct: Reinforce with the core principle + why the distinction matters. "Correct — /gap surfaces blind spots at decision points (what you haven't considered), while /attend checks execution readiness (Phase -1 upstream scan) and gates execution risks (what could go wrong when you act). /gap audits before action (decision quality), /attend ensures readiness + gates during action (execution safety)."
Incorrect (reasoning inquiry → targeted correction):
Example correction: "/inquire catches missing context before execution (User→AI), while /contextualize checks context fit after (AI→User)."

Reasoning inquiry cap: Apply reasoning inquiry for the first 2 incorrect answers per quiz session. Subsequent incorrect answers receive direct targeted correction (step 2 only) without the reasoning inquiry step.
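The reasoning-inquiry cap reduces to a counter check. A minimal sketch, with an assumed function name:

```python
# Sketch of the reasoning-inquiry cap: full inquiry for the first 2 wrong
# answers in a quiz session, direct targeted correction afterwards.
def feedback_mode(prior_incorrect):
    """prior_incorrect = wrong answers already given this quiz session."""
    return "reasoning_inquiry" if prior_incorrect < 2 else "direct_correction"
```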
Distinction depth: Quiz feedback should go beyond "A, not B" to explain the design dimension that separates confused pairs. Reference the distractor pairs from Quiz Design section. The goal is that even wrong answers teach — the user leaves understanding why two protocols that sound similar serve different purposes.
Summarize the learning experience, connect it to the broader epistemic workflow, and provide actionable next steps.
Learning summary:
- Connect what the user already does intuitively to the protocol: "/X formalizes this"

Epistemic Map (connect the dots):
Present the Epistemic Concern Clusters from references/workflow.md. Highlight protocols the user experienced with emphasis (e.g., bold or ★).
Report CTA: "Run /report for a comprehensive analysis with evidence-backed recommendations and an HTML profile."
Next protocol suggestion: Based on quiz results and MAP data, suggest the next protocol to explore — preferring related protocols in the same cluster.
Advanced Usage (bonus tips after main guide):
Present 3-5 tips from references/advanced-usage.md (protocol chaining, multi-protocol sessions, invocation techniques, etc.), prioritizing tips related to protocols from TRIAL and QUIZ. If they quizzed on /gap vs /attend, show the three-step chain: context → decision audit → risk-aware execution (inquire → gap → attend).
Continue exploring (when MAP results contain unexplored protocols):
Present via gate interaction:
If "Yes" → return to Phase 3, using the next recommended protocol from MAP results.
Difficulty progression: Start with high-contrast pairs (e.g., /goal vs /attend), progress to subtle distinctions (e.g., /clarify vs /goal, /gap vs /attend).
Distractor selection: Choose protocols that share surface similarity with the correct answer:
- /clarify ↔ /gap: both surface "something wrong" but different targets — /clarify fixes expression before work begins (Planning: "I said X but meant Y"), /gap audits blind spots at a decision point (Decision: "Am I overlooking something?")
- /clarify ↔ /goal: both about "unclear starting point" but different deficits (expression vs. existence)
- /gap ↔ /attend: both about risk awareness, but /gap audits decision quality before committing, while /attend checks execution readiness (Phase -1 upstream scan) and gates execution-time risks
- /inquire ↔ /contextualize: both about "context" but different timing (pre vs. post execution)
- /frame ↔ /ground: both about structuring how to think about a problem, but different operations (lens selection vs. mapping validation)
- /bound ↔ /inquire: both pre-execution and AI-directed, but different targets (ownership boundaries vs. missing context)

Path-specific question counts:
Quick path targets 3-4 calls. Targeted path targets 6-12 calls.
| Phase | Calls (Quick) | Calls (Targeted) | Purpose |
|---|---|---|---|
| 0. Entry | 1-2 | 2-3 | Path + protocol + session source |
| 2b. Evidence | 1 | — | Trial confirmation |
| 3. Scenario | — | 1-2 | Navigation after scenario text |
| 4. Trial | 1 | 0 | Quick: situation choice. Targeted: direct entry |
| 4→Q. Next | 1 | — | Quick: post-trial navigation |
| 4→5 LOOP | — | 1 | Targeted: post-trial navigation |
| 5. Quiz | — | 4-7 | MC/design or binary/reverse/design + reasoning inquiry |
| 6. Guide | — | 0-1 | Optional continue exploring |
- /goal, /gap, /frame are the unified recommendation set for both Quick path auto-recommend and Targeted path fallback. User-initiated protocols (/clarify, /grasp, /attend) and specialized protocols (/contextualize) are excluded. When the pool is exhausted in the Quick path, transition to the Targeted path.
- Behavioral pattern extraction and session diagnostics belong in /report.
- Session data: read only sessions-index.json via Glob + Read. Parse entries for firstPrompt and summary fields only. Never Read entire session JSONL files.
- The preset fallback in references/scenarios.md ensures every user gets a complete experience regardless of session history availability.