By ether-moon
A knowledge distillation system that delivers only verified knowledge to AI coding agents. A three-layer architecture with a convention-based air gap.
npx claudepluginhub ether-moon/knowledge-distillery --plugin knowledge-distillery

Orchestrates the Stage B distillation pipeline: discovers merged PRs labeled knowledge:pending, runs per-PR evidence collection → candidate extraction → quality gate, writes a changeset file for accepted entries, and creates a report PR for human review. Triggered on a schedule (weekly/biweekly) or by manual dispatch. Use when you need to process accumulated knowledge from merged PRs, run the refinement pipeline, or manually trigger a batch distillation cycle.
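The orchestrator's per-PR loop can be sketched in shell. The PR numbers below are hard-coded stand-ins; in a real run they would be discovered via the GitHub CLI (e.g. gh pr list --state merged --label "knowledge:pending" --json number):

```shell
# Sketch of the batch-refine loop with placeholder PR numbers.
# A real run would discover them via the GitHub CLI, e.g.:
#   gh pr list --state merged --label "knowledge:pending" --json number
for pr in 101 102; do
  echo "PR #$pr: collect evidence -> extract candidates -> quality gate"
done
# Accepted entries would then be written to a changeset file,
# and a report PR opened for human review.
```

Each loop iteration corresponds to Stage B steps 1–3 described below.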
Collects the actual content of all evidence sources identified in a PR's Evidence Bundle Manifest and produces a structured Evidence Bundle. Stage B step 1 — transforms identifier references into full content for downstream candidate extraction. Called by batch-refine orchestrator per PR.
Processes reviewer feedback on Report PRs to selectively accept, reject, or modify changeset entries. Triggered by a /curate comment on knowledge/batch-* PRs. Reads PR comments, interprets natural-language feedback, updates the changeset file, regenerates the report, and commits.
Analyzes an Evidence Bundle and extracts knowledge candidates — the core LLM extraction step of the distillation pipeline. Stage B step 2. Transforms raw evidence into structured vault entry candidates by identifying confirmed team decisions, anti-patterns from incidents, and established conventions.
Queries team-verified knowledge from the Knowledge Vault. A UserPromptSubmit hook reminds you when active vault entries exist — use this skill to query and interpret them before planning code changes.
Extracts evidence identifiers from a merged PR and posts an Evidence Bundle Manifest comment. Stage A of the distillation pipeline — lightweight, identifier-only, no content fetching. Triggered on PR merge or manual invocation. Use after a PR merge to begin knowledge tracking, or manually with a specific PR number to retroactively mark evidence.
Commits changes with an auto-generated message and attaches a structured session summary as a git note for the Knowledge Distillery evidence pipeline. Replaces the default commit workflow in knowledge-distillery-enabled projects. Use whenever the user says 'commit', 'save changes', 'git commit', or invokes any commit action — this ensures every commit carries session context for downstream knowledge extraction.
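The commit-plus-note mechanics can be reproduced with plain git. This is a minimal sketch in a throwaway repository; the JSON summary is a placeholder, not the skill's actual message schema:

```shell
# Minimal sketch: commit, then attach a session summary as a git note.
# The JSON below is a placeholder, not the skill's real summary format.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "feat: example change"
# Attach the summary to the commit on refs/notes/commits (git's default notes ref)
git notes add -m '{"decisions": ["attach session context via git notes"]}'
git notes show HEAD   # prints the JSON summary attached above
```

Notes live outside the commit object, so attaching them never rewrites history — which is why the pipeline can add evidence after the fact.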
Generates a structured session summary for git notes on refs/notes/commits. Extracts decisions, problems, constraints, and open questions from an AI coding session transcript for use as evidence in the Knowledge Distillery refinement pipeline. Use when generating a memento summary for a commit, or when the post-commit hook requests a session summary.
Validates knowledge candidates against quality rules before vault insertion. Stage B step 3. Two-layer verification: deterministic rule checks (schema, R3, R5) followed by LLM-based semantic judgment (R1 evidence sufficiency, R6 duplicate detection, R7 directly-derivable heuristic).
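The deterministic first layer can be approximated as simple field checks before any LLM call. The candidate format and field names below are illustrative assumptions; the plugin's actual schema and the content of the R-rules are not specified here:

```shell
# Toy deterministic gate: reject a candidate missing required fields.
# Field names are assumptions; the real schema and R3/R5 rules may differ.
candidate='title=Use retry with backoff
evidence=PR review thread'
for field in title evidence; do
  echo "$candidate" | grep -q "^$field=" || { echo "REJECT: missing $field"; exit 0; }
done
echo "PASS deterministic checks; forward to LLM semantic judgment"
```

Only candidates that survive the cheap deterministic layer reach the more expensive LLM-based judgment (R1, R6, R7).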
Records a project decision as a committed markdown file under .knowledge/decisions/ for the knowledge distillery pipeline. Auto-triggered when clear project decisions are detected during a session — scope decisions, architectural choices, confirmed constraints, or direction after deliberation all qualify. Do not wait for the user to ask; invoke proactively when a decision moment is observed.
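A committed decision file might look like the following. The .knowledge/decisions/ path comes from the description above, but the filename convention and markdown fields are illustrative assumptions, not the skill's documented schema:

```shell
# Illustrative only: path from the skill description; filename and fields assumed.
mkdir -p .knowledge/decisions
cat > .knowledge/decisions/2024-06-01-adopt-git-notes.md <<'EOF'
# Decision: attach session summaries as git notes
- Status: accepted
- Context: commits need machine-readable session context for knowledge extraction
- Decision: every commit gets a structured summary on refs/notes/commits
EOF
cat .knowledge/decisions/2024-06-01-adopt-git-notes.md
```

Because decisions are committed markdown, they ride along in PR diffs and become Stage A evidence automatically.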
Sets up or updates Knowledge Distillery in a project. Creates vault, workflows, directive sections, and permissions — always converging to the latest expected state. Safe to re-run after plugin upgrades. Use when setting up, updating, or troubleshooting a Knowledge Distillery installation — any mention of 'initialize', 'set up', 'install', 'bootstrap', or 'update' knowledge distillery should trigger this.