Turn Claude Code into a structured development partner — session lifecycle, named code reviewers, research pipelines, and 23 workflow skills that enforce quality from plan through execution.
npx claudepluginhub oduffy-delphi/coordinator-claude
Bootstrap or refresh the architecture atlas via a multi-phase agent pipeline (Haiku scouts → Sonnet analysts → Opus synthesizer)
Run the weekly architecture audit rotation — score systems, audit the highest-priority target, apply findings, and update the health ledger
Toggle autonomous execution mode — suppresses /handoff nudges from the context pressure hook when the PM wants the EM to continue through compaction
Systematic codebase bug hunt — find and fix all AI-fixable bugs in-session, defer blocked ones to backlog
Night-shift code health review — scans today's commits, dispatches a reviewer, applies findings, and updates health tracking for next session-start
Dispatch enriched stubs to executor agents
Distill accumulated session artifacts (plans, handoffs, completed work) into evergreen wiki documents (docs/guides/, docs/decisions/), then delete source material. Extract knowledge before pruning — the pipeline that bridges artifact-consolidation and wiki maintenance.
Run enrichment pipeline on chunk directories
Execute a PM-approved implementation plan directly in the coordinator session
Generate a ranked repository map for LLM context injection
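One possible ranking signal is commit churn: files touched most often recently tend to matter most in an injected context map. The skill's actual scoring criteria are not documented here, so treat this sketch as illustrative only; the repo it builds is a throwaway fixture.

```shell
# Illustrative only: rank files by how often they appear in history.
# (The real skill's ranking criteria may differ.)
set -eu
cd "$(mktemp -d)"

# Build a tiny throwaway repo so the pipeline below has data to rank.
git init -q
git config user.email demo@example.com
git config user.name Demo
echo a >  hot.txt;  git add .; git commit -qm c1
echo b >> hot.txt;  git commit -qam c2
echo c >  cold.txt; git add .; git commit -qm c3

# Count appearances per file across commits: a higher count means a
# hotter file, which ranks earlier in the injected context map.
git log --name-only --pretty=format: \
  | grep -v '^$' | sort | uniq -c | sort -rn
```

Real repo maps usually blend several signals (churn, file size, import fan-in); churn alone is just the simplest one to compute.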
Save session state for next session handoff
Autonomous backlog execution — gathers all ready work items, builds a compaction-proof flight recorder, executes sequentially without stopping for input, then tails with /update-docs (or /update-docs + hibernate in overnight mode)
Resume work from a handoff — grab the baton and run
Route artifacts to the right reviewer
Wrap up finished work — capture lessons, update docs
Orient session — preflight, load context, choose work
Staff session — Agent Teams-based collaborative planning and review. Two modes: plan (craft detailed plan from objectives) and review (critique existing artifact). Configurable tiers: lightweight (single reviewer), standard (2 debaters + synthesizer), full (3-5 debaters + synthesizer).
Create and run structured research campaigns — batch research across multiple subjects with the same topics, acceptance criteria, and output schema. Use when the PM asks for research on N entities with repeating structure (teams, companies, tools, etc.), when you need schema-conforming data (not prose), when existing data needs re-verification against fresh sources, or when a research spec already exists and needs execution.
Repo-wide documentation maintenance and sync
End-of-day orchestration — update docs, consolidate branches, run health survey
Morning orientation — triage handoffs, surface staleness, align priorities
Use this agent when Patrik's review recommends conservative approaches (patching, deferring, YAGNI) and a backstop challenge is warranted. Zolí challenges whether we should be more ambitious given AI execution capacity. He is NOT a standalone reviewer — he operates only as a backstop to Patrik. Examples: <example> Context: Patrik recommends patching a camera system issue rather than refactoring. user: "Patrik suggests patching the camera controls again. Let me get Zolí's perspective." assistant: "Since Patrik is recommending a conservative fix on an issue that's been patched before, I'll dispatch Zolí as backstop to challenge whether refactoring is warranted." <commentary> Patrik's recommendation involves another incremental patch on a system with accumulated patches. Zolí should challenge whether a clean refactor is now the better investment given AI implementation capacity. </commentary> </example> <example> Context: Patrik's review at High effort — mandatory backstop. user: "This is a High effort architectural review. Zolí backstop is mandatory." assistant: "Dispatching Zolí for mandatory backstop on Patrik's High effort review." <commentary> At High effort, the backstop is mandatory per protocol. </commentary> </example>
Use this agent to verify API references in artifacts (plans, code, stubs) against authoritative documentation before dispatching expensive Opus reviewers. The docs-checker systematically scans an artifact, identifies every external API claim (class names, function signatures, header includes, library APIs), and verifies each against holodeck-docs (UE) or Context7 (other libraries). Returns a structured verification table — not a review. Use as a pre-review pass to let Patrik/Sid skip mechanical verification and focus on architecture. Examples: <example> Context: A camera system implementation needs review but Sid hasn't been dispatched yet. user: "Run docs-checker on the camera system before Sid reviews it" assistant: "Dispatching docs-checker to verify all UE API references in the camera system before routing to Sid." <commentary> Pre-review API verification pass — docs-checker catches incorrect headers, wrong signatures, and nonexistent functions before the expensive Opus reviewer sees the artifact. </commentary> </example> <example> Context: An enriched stub for the movement system is ready for review. user: "Verify the API claims in the enriched movement system stub" assistant: "Dispatching docs-checker to scan the stub for external API claims and verify each one." <commentary> Enriched stubs often contain AI-generated API references that may be hallucinated. Docs-checker validates these before they reach a reviewer. </commentary> </example> <example> Context: A payment module uses the Stripe SDK heavily. user: "Check the Stripe SDK usage in the payment module against Context7" assistant: "Dispatching docs-checker to verify Stripe SDK API usage via Context7." <commentary> Non-UE library verification — docs-checker uses Context7 for external SDK documentation rather than holodeck-docs. </commentary> </example>
Zolí — Director of Engineering synthesizer for staff sessions. Spawned as a teammate by the /staff-session command. Blocked until all debater tasks complete, then reads their position documents from disk, cross-references across perspectives, and writes the final plan (plan mode) or synthesized findings (review mode) through Zolí's ambition-calibrated lens. Represents all positions fairly but resolves contested topics with an eye toward what's achievable with AI execution capacity. Examples: <example> Context: A Patrik+Sid staff session in plan mode has completed. Both debaters have written position documents and sent DONE. user: "Synthesize the debater positions into a consensus plan" assistant: "I'll wait for all DONE messages, read the position documents, cross-reference where they agreed and diverged, and produce the final plan. Where positions conflict, I'll assess which approach best serves our ambition given AI execution capacity — representing both sides fairly but not defaulting to the conservative option." <commentary> Zolí reads all positions, represents them faithfully, but when resolving dissent, applies his calibration: the old YAGNI/defer heuristics were set for human implementation costs. AI execution changes the calculus. </commentary> </example> <example> Context: A staff session in review mode has completed. Patrik and Sid reviewed an existing plan and sent DONE. user: "Produce synthesized review findings from the debater positions" assistant: "I'll check TaskList for all completed debater tasks, read each position document, identify reinforced findings, unique catches, and contested findings. For contested items, I'll assess whether the conservative position reflects genuine engineering prudence or legacy-calibrated caution." <commentary> In review mode, Zolí still organizes findings into Reinforced/Unique/Contested sections. His lens applies specifically to contested findings and the advisory — not to reinforced or unique findings, which stand on their evidence. </commentary> </example> <example> Context: A debater crashed mid-session and did not write a position document. user: "One debater appears to have crashed. Should the session be abandoned?" assistant: "I'll check how many debater tasks completed. If the majority are present, I'll synthesize from the available positions and note the missing perspective. Only if the majority crashed would I escalate to the EM for fallback." <commentary> Synthesizer works with partial positions when a minority of debaters failed. It notes 'Missing perspective: {persona}.' in the output. Majority failure triggers EM escalation, not self-synthesis. </commentary> </example>
Use this agent when plan stub documents need enrichment with research findings. The enricher reads codebases, surveys assets, traces dependencies, and writes findings back into stub documents in-place. It does NOT make architectural decisions — it flags them for the Coordinator. Examples: <example> Context: A stub document has 'Enrichment Needed' items requiring codebase research. user: "Enrich chunk-2A with file paths and implementation steps" assistant: "This requires codebase research to fill in the stub spec. Let me dispatch the enricher agent." <commentary> The stub needs factual research (file paths, code patterns, dependency mapping) that the enricher handles. </commentary> </example> <example> Context: A stub needs an asset survey of an external marketplace pack. user: "Survey the content of the BigBuy marketplace pack for chunk-0J" assistant: "Asset surveying is enricher work. Let me dispatch the enricher to inventory the pack contents." <commentary> Surveying external assets (file counts, asset types, Blueprint inventory) is the enricher's survey sub-phase. </commentary> </example> <example> Context: Multiple independent stubs need enrichment. user: "Enrich all Phase 2 stubs" assistant: "I'll dispatch enricher agents in parallel for the independent Phase 2 stubs." <commentary> Multiple independent stubs can be enriched in parallel by separate enricher agents. </commentary> </example>
Use this agent when enriched and reviewed stub specifications are ready for implementation. The executor follows specs precisely, runs validation after each edit, and stops to report back if specs are unclear or validation fails. It is the typist, not the architect. Examples: <example> Context: A stub has been enriched and reviewed, ready for implementation. user: "Execute chunk-2A — it's been enriched and reviewed" assistant: "This stub is ready for implementation. Let me dispatch the executor agent to implement it." <commentary> The stub has been through enrichment and review. The executor can implement it directly. </commentary> </example> <example> Context: Multiple independent stubs are ready for execution. user: "Execute all Phase 2 stubs — they're all enriched and reviewed" assistant: "I'll dispatch executor agents in parallel for the independent stubs." <commentary> Independent stubs can be executed in parallel by separate executor agents. </commentary> </example> <example> Context: An executor has reported a block and the spec has been updated. user: "Re-execute chunk-3A — I've updated the spec to resolve the ambiguity" assistant: "The spec has been updated. Let me re-dispatch the executor." <commentary> After the Coordinator resolves a block by updating the spec, the executor can be re-dispatched. </commentary> </example>
Use this agent to apply reviewer findings to artifacts after a review dispatch. The review-integrator receives structured findings from any reviewer (Patrik, Sid, Camelia, Palí, Fru) and applies them to the target artifact with annotations explaining the reviewer's reasoning. It escalates disagreements rather than silently skipping findings. Distinct from the 'Opus tech lead' pattern in delegate-execution (which decomposes large stubs). Examples: <example> Context: Patrik has returned findings from a code review. user: "Patrik returned 8 findings on the auth module. Apply them." assistant: "Dispatching the review-integrator to apply Patrik's findings to the auth module." <commentary> Reviewer findings need to be applied to code. The review-integrator applies all findings with annotations, escalating any disagreements. </commentary> </example> <example> Context: Sequential review pipeline — Reviewer 1 findings need application before Reviewer 2. user: "Sid reviewed the camera system. Apply findings before sending to Patrik." assistant: "Dispatching the review-integrator to apply Sid's findings. Once clean, I'll route to Patrik." <commentary> Between sequential reviewers, the review-integrator ensures the next reviewer sees a clean artifact. </commentary> </example>
Use this agent when you need rigorous, uncompromising review from the perspective of a senior staff engineer with exacting standards. Patrik reviews code, plans, architectural decisions, documentation, and any artifact where quality matters. He is the generalist reviewer — equally at home critiquing an implementation plan as a pull request. Particularly valuable when working on LLM-assisted projects where the bar for quality should be higher since AI can handle the overhead. Examples: <example> Context: The user has just written a new utility function and wants it reviewed before committing. user: "I just wrote this helper function to parse configuration files" assistant: "Let me have Patrik review this code to ensure it meets our quality standards." <commentary> New code was written that should be reviewed for quality — launch the staff-eng agent. </commentary> </example> <example> Context: A staff session needs a generalist debater for an implementation plan. user: "We need to plan the auth middleware rewrite" assistant: "Patrik will bring architectural rigor and quality standards to the planning session." <commentary> Patrik is a generalist reviewer used in staff sessions for planning, not just code review. </commentary> </example> <example> Context: The user asks for a code quality assessment. user: "Can you review the code I just pushed?" assistant: "Absolutely. I'll invoke Patrik for a thorough, uncompromising review." <commentary> Explicit code review request — launch the staff-eng agent. </commentary> </example> <example> Context: Documentation has been written or updated. user: "I updated the README with the new API endpoints" assistant: "Let me have Patrik review the documentation to ensure it's comprehensive and precise." <commentary> Documentation changes should be reviewed with the same rigor as code. </commentary> </example>
Use when artifact directories are bloated, during periodic maintenance, or when disk usage from session debris is excessive. Prunes and consolidates accumulated session artifacts — plans/, archive/handoffs/, stale task dirs. Supports dry-run mode. Standalone invocation only — not part of /update-docs.
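The dry-run mode can be sketched as a simple flag guard: report what would be pruned without touching anything. The flag name, paths, and file names below are assumptions for illustration, not the skill's actual interface.

```shell
# Hypothetical sketch of the dry-run guard; the real skill's flag
# name and pruning rules may differ.
set -eu
cd "$(mktemp -d)"
DRY_RUN=1   # 1 = report only; 0 = actually delete

# Simulated session debris.
mkdir -p plans archive/handoffs
touch plans/old-plan.md archive/handoffs/old-handoff.md

for f in plans/*.md archive/handoffs/*.md; do
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would prune: $f"   # dry run: report, leave files in place
  else
    rm -- "$f"
  fi
done
```

Running with `DRY_RUN=1` first makes the prune auditable before any destructive pass.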
Check today's changed files against the architecture atlas file-index.md — flags unmapped files as potential new systems. This skill should be used when verifying that new or changed files are mapped in the architecture atlas, or after adding new modules or directories. Invoked by /update-docs (Phase 11) or standalone.
Use when the repo has multiple stale branches that need cleaning up — inventories all branches, absorbs unique commits into the current branch, deletes stale branches, and merges to main. This skill should be used when the user asks to "clean up branches", "consolidate branches", "consolidate git", "merge all branches", or mentions stale/old branches that need cleanup.
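The absorb-and-delete step for a single stale branch might look like the following; the branch and file names are invented for illustration, and a real run would loop over the full branch inventory before finishing with a merge to main.

```shell
# Hypothetical sketch of absorbing one stale branch's unique commit.
set -eu
cd "$(mktemp -d)"
git init -q
git checkout -qb main
git config user.email demo@example.com
git config user.name Demo
echo base > file.txt; git add .; git commit -qm base

# A stale branch holding one unique commit.
git checkout -qb stale-feature
echo extra >> file.txt; git commit -qam "unique work"
git checkout -q main

# Absorb the unique commit into the current branch, then delete
# the now-redundant stale branch.
git merge -q --no-edit stale-feature
git branch -qd stale-feature
```

`git branch -d` (not `-D`) is the safe choice here: it refuses to delete a branch whose commits have not been absorbed.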
Use when the technical debt backlog needs review and prioritization, on demand, or when the backlog exceeds 20 open items. This is an EM-PM conversation, not a dispatched agent — the EM reads the backlog, applies judgment, and presents recommendations. Note: the weekly-architecture-audit skill will insist with increasing force as the count grows — mild concern at >20, visible disappointment at >30, and a full coffee-down stare-down at >40.
Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - presents structured options for merge, PR, or cleanup to guide completion
Archive consumed handoffs — moves tasks/handoffs/*.md files older than 48 hours to archive/handoffs/ and checks .gitignore safety. This skill should be used when cleaning up old handoffs, when the handoffs directory is cluttered, or as part of periodic maintenance. Invoked by /update-docs (Phase 8) or standalone.
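The 48-hour cutoff maps directly onto find's `-mmin` test (48 h = 2880 minutes). A minimal sketch of the move, with invented file names standing in for real handoffs:

```shell
# Sketch of the age-based archive step; 48 hours = 2880 minutes.
set -eu
cd "$(mktemp -d)"
mkdir -p tasks/handoffs archive/handoffs

# Simulate one consumed (old) and one fresh handoff.
touch tasks/handoffs/fresh.md
touch -t 202001010000 tasks/handoffs/consumed.md   # backdated mtime

# Move anything older than 48 hours into the archive.
find tasks/handoffs -name '*.md' -mmin +2880 \
  -exec mv {} archive/handoffs/ \;
```

Fresh handoffs survive the pass untouched; only files whose modification time is past the cutoff are moved.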
Periodic maintenance of lessons files — trims stale entries, merges duplicates, and deletes exhausted feature-scoped files. Implements the 'Periodic trim' rule from CLAUDE.md's Self-Improvement Loop. This skill should be used when lessons.md is getting long, when a feature is complete and its lessons file should be cleaned up, or when periodic housekeeping is needed. Invoked by /update-docs (Phase 6) or standalone.
Use when work on a branch is ready to merge to main — creates PR, waits for CI, merges, cleans up.
Use when starting work in a new project repository, when /update-docs reports tracker_missing, or when a marketplace user runs the coordinator plugin for the first time.
Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation
Use when completing tasks, implementing major features, or before merging to verify work meets requirements
Use when the PM asks for staff input on a plan or review, when the EM needs multi-perspective planning or critique, or when deciding between /staff-session and /review-dispatch. Guides tier selection, team composition, and scoping.
This skill should be used when unsure which skill or command applies to a task, or when the user asks 'what skills exist', 'what commands are available', or 'help me find the right tool'.
This skill should be used when detecting repetitive actions, oscillating between approaches, or stalling without progress — the three stuck patterns. Referenced by agent prompts for self-monitoring.
This skill should be used when encountering any bug, test failure, or unexpected behavior — before proposing fixes. Triggers on: 'something is broken', 'test is failing', 'unexpected behavior', 'debug this'.
Use when implementing any feature or bugfix, before writing implementation code
Maintain the unified project tracker at docs/project-tracker.md — marks completion, archives shipped work, updates dependencies, and sweeps for untracked commits. This skill should be used when the user asks to clean up or update the project tracker, archive completed work, check for untracked commits, or resolve stale dependencies. Invoked by /update-docs (Phase 5) or standalone.
Use when starting feature work that needs true branch-level isolation (separate PRs, different base branches, long-lived parallel features) - NOT for avoiding file conflicts during parallel agent dispatch (use sequential execution instead)
Use when about to commit, before /merge-to-main, before /workday-complete, or when the user asks to validate the repo state. Runs all CI validation checks locally.
Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output before making any success claims; evidence before assertions, always
This skill should be used when requirements are clear and the task needs decomposition into executable chunks — before touching code. Triggers on: 'write a plan', 'break this down', 'plan the implementation'.
Use when creating new skills, editing existing skills, or verifying skills work before deployment
Team-oriented workflow plugin with role agents, 27 specialist agents, ECC-inspired commands, layered rules, and hooks skeleton.
Matches all tools
Hooks run on every tool call, not just specific ones
Executes bash commands
Hook triggers when Bash tool is used
Uses power tools
Uses Bash, Write, or Edit tools
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Manus-style persistent markdown files for planning, progress tracking, and knowledge storage. Works with Claude Code, Kiro, Clawd CLI, Gemini CLI, Cursor, Continue, Hermes, and 17+ AI coding assistants. Now with Arabic, German, Spanish, and Chinese (Simplified & Traditional) support.
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
Tools to maintain and improve CLAUDE.md files - audit quality, capture session learnings, and keep project memory current.
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.