By junjslee
Run safe, long-running AI agent workflows for coding projects using skills, agents, and hooks that enforce epistemic posture: structured reasoning traces, risk pre-mortems, governance gates, progress handoffs in memory docs, parallel git branches, and strict code reviews. Failure-mode counters keep automation reliable across sessions.
npx claudepluginhub junjslee/episteme --plugin episteme

Update project memory docs so the next agent or session can resume cleanly.
Define and maintain structural layers, entity boundaries, invariants, and vocabulary so execution stays conceptually coherent.
Anchor work to real domain outcomes, user utility, and adoption metrics so the system delivers value beyond infrastructure.
Enforce operational governance, risk policy, promotion gates, and rollback readiness before high-impact changes.
Execute bounded implementation work with clear file ownership and a concrete verification plan.
Coordinate multi-agent execution while preserving macro-context and shared objectives.
Use proactively for multi-step or ambiguous work. Produces phased plans, risks, likely files, and verification strategy before implementation.
Audit decision quality by enforcing known/unknown/assumptions/disconfirmation before implementation or promotion.
Investigate unknown territory before implementation decisions. Use primary sources to form hypothesis-first conclusions, and clearly distinguish verified facts from inferences.
Review changes for bugs, regressions, risky assumptions, missing tests, and migration issues.
Run targeted verification commands, summarize failures, and identify the smallest next fix.
Design safe unattended or long-running loops with explicit limits, checkpoints, and handoff artifacts.
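The shape of such a bounded loop can be sketched in a few lines of Python. Everything here (the checkpoint file name, the step logic, the iteration cap) is an illustrative assumption, not the skill's actual mechanism:

```python
import json
from pathlib import Path

MAX_ITERATIONS = 10                    # explicit limit: never loop unbounded
CHECKPOINT = Path("checkpoint.json")   # handoff artifact for the next session

def run_step(step: int) -> bool:
    """Placeholder for one unit of agent work; True means the task is done."""
    return step >= 3                   # pretend the task finishes on step 3

state = {"step": 0, "done": False}
if CHECKPOINT.exists():                # resume where the last session stopped
    state = json.loads(CHECKPOINT.read_text())

while not state["done"] and state["step"] < MAX_ITERATIONS:
    state["done"] = run_step(state["step"])
    state["step"] += 1
    CHECKPOINT.write_text(json.dumps(state))   # checkpoint after every step

print(f"stopped at step {state['step']}, done={state['done']}")
```

The key design point is that the checkpoint is written after every step, so an interrupted run can always be resumed by the next session without re-deriving context.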
Update progress and next-step docs so the next agent or session can resume with minimal context loss.
Bootstrap a repository with the standard episteme scaffold and verify the core memory files exist.
Turn project requirements into a staged implementation plan with verification and handoff criteria.
Synthesize external sources into actionable project decisions, docs, or implementation guidance.
Apply a strict review gate before merge, release, or project handoff.
Decompose parallelizable work into bounded branches and create safe git worktrees.
Private Claude-only lab skill for iteratively improving a copied skill candidate with explicit evaluation goals and manual promotion.
Create a Product Requirements Document using a comprehensive 8-section template covering problem, objectives, segments, value propositions, solution, and release planning. Use when writing a PRD, documenting product requirements, preparing a feature spec, or reviewing an existing PRD.
Run a pre-mortem risk analysis on a PRD or launch plan. Categorizes risks as Tigers (real problems), Paper Tigers (overblown concerns), and Elephants (unspoken worries), then classifies each as launch-blocking, fast-follow, or track. Use when preparing for launch, stress-testing a product plan, or identifying what could go wrong.
Reference guide to 9 prioritization frameworks with formulas, when-to-use guidance, and templates — RICE, ICE, Kano, MoSCoW, Opportunity Score, and more. Use when selecting a prioritization method, comparing frameworks like RICE vs ICE, or learning how different prioritization approaches work.
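The two formula-based frameworks in that list reduce to simple arithmetic; a quick sketch (scales and sample numbers are illustrative, not the guide's templates):

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE: (Reach x Impact x Confidence) / Effort.
    Reach is users per period, impact is on a 0.25-3 scale,
    confidence is a fraction (e.g. 0.8), effort is person-months."""
    return reach * impact * confidence / effort

def ice(impact: float, confidence: float, ease: float) -> float:
    """ICE: Impact x Confidence x Ease, each typically scored 1-10."""
    return impact * confidence * ease

# Compare two hypothetical features:
print(rice(reach=500, impact=2, confidence=0.8, effort=2))  # 400.0
print(ice(impact=7, confidence=6, ease=8))                  # 336
```

RICE penalizes effort explicitly, which is why it tends to rank small wins above large bets; ICE is faster to score but easier to game.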
Generate user-facing release notes from tickets, PRDs, or changelogs. Creates clear, engaging summaries organized by category (new features, improvements, fixes). Use when writing release notes, creating changelogs, announcing product updates, or summarizing what shipped.
Facilitate a structured sprint retrospective — what went well, what didn't, and prioritized action items with owners and deadlines. Use when running a retrospective, reflecting on a sprint, creating action items from team feedback, or learning how to run effective retros.
Plan a sprint with capacity estimation, story selection, dependency mapping, and risk identification. Use when preparing for sprint planning, estimating team capacity, selecting stories, or balancing sprint scope against velocity.
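Capacity estimation of this kind is usually plain arithmetic; a minimal sketch, assuming a per-day hour budget and a focus factor that discounts meetings and interruptions (both numbers are illustrative, not the skill's defaults):

```python
def sprint_capacity(members: int, sprint_days: int,
                    hours_per_day: float = 6.0,
                    focus_factor: float = 0.75) -> float:
    """Raw team hours discounted by a focus factor for non-feature work."""
    return members * sprint_days * hours_per_day * focus_factor

# Five people, a two-week (10 working-day) sprint:
print(sprint_capacity(members=5, sprint_days=10))  # 225.0
```

Story selection then becomes a packing problem against that number, with dependencies and risk deciding the order, not just the total.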
Prevents premature execution on ambiguous requests. Analyzes request clarity using 5W1H decomposition, surfaces hidden assumptions, and generates structured clarifying questions before work begins. Use at the start of any non-trivial task, or when a request could be interpreted multiple ways. Triggers on "뭘 원하는건지", "요구사항 정리", "clarify", "what exactly", "scope", "requirements", "정확히 뭘", "before we start".
Prospective failure analysis using Gary Klein's swing-mortem technique. Assumes complete failure, works backward to identify risks, leading indicators, and circuit breakers. Counters optimism bias by forcing systematic exploration of failure modes before they materialize. Use for project plans, architecture decisions, technology adoption, business strategy, or feature launches. Triggers on "리스크", "위험", "실패하면", "swing-mortem", "뭐가 잘못될 수 있어", "risk", "what could go wrong", "걱정되는 점", "failure modes", "리스크 분석", "위험 분석".
Generate probability-weighted alternative options that challenge default thinking. Forces unconventional alternatives and exposes hidden assumptions behind the "obvious" choice. For decision-point analysis, NOT full design exploration (use brainstorming for that). Triggers on "대안", "alternatives", "옵션 뽑아", "options", "어떤 방법이", "아이디어", "다른 방법", "선택지".
Deep research with cross-verification and source tiering. Use when investigating technologies, comparing tools, fact-checking claims, evaluating architectures, or any task requiring verified information. Triggers on "조사해줘", "리서치", "research", "investigate", "fact-check", "비교 분석", "검증해줘".
Devil's Advocate stress-testing for code, architecture, PRs, and decisions. Surfaces hidden flaws through structured adversarial analysis with metacognitive depth. Use for high-stakes review, stress-testing choices, or when the user wants problems found deliberately. NOT for routine code review (use engineering:code-review). Triggers on "스트레스 테스트", "stress test", "devil's advocate", "반론", "이거 괜찮아", "문제 없을까", "깊은 리뷰", "critical review", "adversarial".
Exposes Claude's reasoning chain as an auditable, decomposable artifact. Quick mode (default) gives assumption inventory + weakest-link in 2 stages. Full mode (--full) adds decision branching, confidence decomposition, and falsification conditions. Triggers on "왜 그렇게 생각해", "reasoning", "근거", "show your work", "어떻게 그 결론이", "trace", "판단 근거", "why do you think that".
Create comprehensive test scenarios from user stories with test objectives, starting conditions, user roles, step-by-step actions, and expected outcomes. Use when writing QA test cases, creating test plans, defining acceptance tests, or preparing for feature validation.
Executes bash commands
Hook triggers when Bash tool is used
Modifies files
Hook triggers on file write and edit operations
Share bugs, ideas, or general feedback.
Thoughtbox observability, protocol enforcement, and CLI for Claude Code
The operational layer for coding agents. Bookkeeping, validation, and flows that compound knowledge between sessions.
Implementation of the Ralph Wiggum technique: continuous self-referential AI loops for iterative development. Runs Claude in a while-true loop with the same prompt until the task completes.
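The loop itself is simple; a hedged Python sketch of the idea (the callbacks and the run limit are assumptions for illustration, not this plugin's API):

```python
def ralph_loop(run_once, is_done, max_runs: int = 50) -> int:
    """Re-run the same prompt/session until a completion check passes.
    max_runs is the safety valve that keeps "while true" bounded."""
    runs = 0
    while runs < max_runs:
        run_once()          # one full agent session with the same prompt
        runs += 1
        if is_done():       # check repo/task state, not the agent's claim
            break
    return runs

# Toy stand-ins for a real agent session and its completion check:
progress = {"count": 0}
runs = ralph_loop(lambda: progress.update(count=progress["count"] + 1),
                  lambda: progress["count"] >= 3)
print(runs)  # 3
```

The important choice is that `is_done` inspects external state (tests passing, a sentinel file) rather than trusting the model's own report of completion.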
Self-improving AI workflow system. Crystallize requirements before execution with Socratic interview, ambiguity scoring, and 3-stage evaluation.
Self-evolving Claude Code system that learns from corrections, manages context, and improves with every session.
Uses power tools
Uses Bash, Write, or Edit tools