By cianos95-dev
The orchestration hub for AI-assisted software delivery — ownership boundaries, adversarial review gates, multi-agent routing, drift prevention, and evidence-based closure from spec to ship.
npx claudepluginhub cianos95-dev/claude-command-centre --plugin claude-command-centre

Re-anchor to the active spec by re-reading source artifacts and checking for drift. Rebuilds ground truth from the spec, git state, issue state, and review comments rather than relying on accumulated session context. Use when sessions run long, after context compaction, before resuming paused work, when implementation feels misaligned, or before claiming completion. Trigger with phrases like "anchor to spec", "re-read the spec", "am I drifting", "check alignment", "reload context".
Capture session state for structured handoff before a context boundary or session split. Writes progress to .ccc-progress.md, optionally snapshots git state, updates the Linear issue in place, and prints a continuation prompt for the next session. Use when approaching context limits, stepping away mid-task, or explicitly handing off to another session. Integrates with /compact and /resume as a CCC-layer complement — /compact reduces context, /ccc:checkpoint preserves task state. Trigger with phrases like "checkpoint", "save progress", "hand off session", "save state", "continuation prompt", "prepare for session split", "pre-exit checkpoint".
Evaluate and execute evidence-based issue closure following the agent ownership protocol. Universal closure entry point — called by humans, session-exit, branch-finish (via closure-ready flag), webhooks, and Factory. Use when implementation is complete and you need to close an issue, verify closure conditions are met, or check if an issue qualifies for auto-close vs requires human confirmation. Trigger with phrases like "close this issue", "is this ready to close", "mark as done", "closure evidence for", "can I auto-close this", "wrap up this task".
Manage CCC preferences for the current project. Customise gates, execution, prompts, planning, eval, style, session, review, scoring, Cowork, and replan behavior. Use to view current config, reset to defaults, or set individual preferences. In Cowork: generates an interactive artifact with live YAML preview and preset buttons. Trigger with phrases like "configure ccc", "show ccc preferences", "set gate preferences", "ccc config", "customise workflow", "change execution defaults", "set prioritization framework", "configure eval", "change scoring format".
Break an epic or spec into atomic, implementable tasks with execution mode assignments. Use when a spec is approved and needs to be broken into work items, an epic needs task decomposition, or you need to plan implementation order with dependency tracking. Trigger with phrases like "break this into tasks", "decompose this epic", "create subtasks for", "plan the implementation of", "what tasks do I need for", "split this into work items".
Manage and visualize issue dependency relations. View dependencies for an issue, generate milestone dependency graphs, add/remove relations safely, and detect implicit dependencies from descriptions. Use for dependency management, relation creation, blocker visualization, and dependency detection. Trigger with phrases like "show dependencies", "add blocker", "dependency graph", "what blocks this", "detect dependencies", "remove relation", "milestone graph".
Unified entry point for the CCC workflow. Auto-detects context and routes to the correct funnel stage. Use to start new work, resume in-progress tasks, check status, or enter quick mode for small fixes. Trigger with phrases like "let's go", "what should I work on", "resume work", "start building", "quick fix", "show status", "where was I", "continue working".
Audit project tracker issues for label consistency, stale items, and missing metadata. Use when running periodic issue health checks, cleaning up stale backlog items, fixing missing labels or project assignments, or triaging old issues interactively. Trigger with phrases like "audit my issues", "issue hygiene check", "clean up stale issues", "fix missing labels", "triage the backlog", "project health score".
Scan and index the current repository to produce a structured map of modules, patterns, and integration points. The index feeds into spec writing and prevents redundant implementations. Use when onboarding to a new codebase, before writing a PR/FAQ, after major refactors, or when you need to understand what already exists. Trigger with phrases like "index the codebase", "scan the repo", "what patterns exist", "map the modules", "what's in this repo".
Archive a Claude Code Insights report and extract actionable patterns. Use when you receive a new Insights report from Anthropic, want to review past insights, or want to check improvement trends. Trigger with phrases like "archive my insights", "process insights report", "what did my last insights say", "insights trend".
Manage CCC plans: promote session plans to durable Linear Documents, list promoted plans. Use --promote to elevate the current session plan to a Linear Document accessible from Code, Cowork, and Linear. Use --list to see all promoted plan documents for the active project. Trigger with phrases like "promote plan", "save plan", "list plans", "plan --promote", "plan --list".
Trigger an adversarial review of a spec using one of four review architecture options. Use when a spec is ready for critical evaluation, you want structured pushback before implementation, or you need multi-perspective analysis of assumptions and risks. Trigger with phrases like "review this spec", "challenge my proposal", "adversarial review of", "is this spec solid", "find weaknesses in this plan", "stress test this design".
Run zero-cost in-session plugin validation. Enumerates all skills, agents, and commands from the plugin manifest, verifies file existence, validates frontmatter, checks trigger phrase quality, flags ambiguous overlaps, and generates synthetic test prompts. Use when validating plugin health, checking component coverage, auditing trigger descriptions, or verifying no components are missing or broken. Trigger with phrases like "self-test", "validate plugin", "check plugin health", "plugin coverage report", "test plugin components", "audit plugin triggers".
Begin implementation of a task with automatic execution mode routing and status tracking. Use when starting work on a specific issue, picking up the next unblocked task, or beginning a coding session with proper status tracking and context loading. Trigger with phrases like "start working on", "begin implementation of", "pick up the next task", "implement this issue", "start coding", "what should I work on next".
Post project or initiative status updates to Linear using the two-tier architecture. Project updates go to the native Updates tab via the GraphQL projectUpdateCreate mutation. Initiative updates go to the native Updates tab via the save_status_update MCP tool. Use when you want to post a status update, check project health, generate an initiative roll-up, or preview what an update would contain before posting. Trigger with phrases like "post status update", "project status", "initiative roll-up", "status report", "what changed today".
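As a hedged illustration, the project-update call might be assembled like this. The mutation shape follows Linear's public projectUpdateCreate GraphQL mutation; the helper function, its parameters, and the default health value are assumptions of this sketch:

```python
import json

# Mutation shape per Linear's public GraphQL API (projectUpdateCreate).
PROJECT_UPDATE_MUTATION = """
mutation ProjectUpdateCreate($input: ProjectUpdateCreateInput!) {
  projectUpdateCreate(input: $input) {
    success
    projectUpdate { id url }
  }
}
"""

def build_project_update_request(project_id, body, health="onTrack"):
    """Assemble the GraphQL request body for posting to a project's
    native Updates tab. Sending it (auth headers, the HTTP POST to
    Linear's GraphQL endpoint) is deliberately left out of this sketch."""
    return json.dumps({
        "query": PROJECT_UPDATE_MUTATION,
        "variables": {
            "input": {"projectId": project_id, "body": body, "health": health},
        },
    })
```

Initiative updates take the other tier and go through the save_status_update MCP tool instead of raw GraphQL.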
Bootstrap a fresh Linear workspace with all CCC templates, labels, and project structure. Reads template manifests from templates/*.json, resolves symbolic names to workspace IDs, creates templates via GraphQL, and updates manifests with returned linearIds. Idempotent — safe to run on already-configured workspaces (skips existing). Use when setting up a new workspace, verifying workspace completeness, or after resetting templates. Trigger with phrases like "bootstrap workspace", "provision workspace", "setup templates", "sync templates".
Sync template manifest files to the Linear workspace. Reads all templates/*.json manifests, resolves symbolic names to workspace UUIDs, compares against live templates, creates missing templates, and detects/fixes drift. The canonical one-way sync: manifests are the source of truth. Trigger with phrases like "sync templates", "push templates", "template sync", "update workspace templates".
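The diff step of that one-way sync can be sketched as a pure function. The manifest and template shapes here (name plus body fields) are illustrative, not the plugin's actual schema:

```python
def diff_templates(manifests, live_templates):
    """Compare manifest entries (the source of truth) against templates
    fetched from the live workspace.

    Both arguments are lists of dicts with "name" and "body" keys
    (an assumed, simplified shape). Returns (missing, drifted):
    template names to create, and names whose live body has drifted
    from the manifest.
    """
    live_by_name = {t["name"]: t for t in live_templates}
    missing = [m["name"] for m in manifests if m["name"] not in live_by_name]
    drifted = [
        m["name"]
        for m in manifests
        if m["name"] in live_by_name and live_by_name[m["name"]]["body"] != m["body"]
    ]
    return missing, drifted
```

Because manifests win, the fix for a drifted template is always an update of the workspace copy, never an edit to the manifest.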
Validate Linear templates against current workspace state. Queries all templates via GraphQL, cross-references labelIds/stateId/teamId against live workspace data, reports stale references and drift. Use when checking template health, auditing label references, or after modifying workspace labels. Trigger with phrases like "validate templates", "check template health", "template audit", "template drift check".
Draft a PR/FAQ spec using the Working Backwards method, selecting the appropriate template based on scope. Use when starting a new feature spec, writing a press release for a proposed change, creating acceptance criteria, or structuring a proposal with pre-mortem and FAQ sections. Trigger with phrases like "write a spec for", "draft a PR/FAQ", "new feature proposal", "working backwards document for", "spec this idea", "create acceptance criteria for".
Reconciliation agent for structured adversarial debates. Reads all Round 1 (independent review) and Round 2 (cross-examination) outputs from the 4 persona reviewers, then produces a unified synthesis that maps consensus, surfaces genuine disagreements, and escalates unresolvable splits for human decision. This agent does NOT add new findings — it only consolidates, reconciles, and scores. <example> Context: Four persona reviewers have completed Round 1 and Round 2 on a spec for a new plugin hook system. The Security Skeptic flagged credential exposure; the Architectural Purist flagged coupling; both agree the hook lifecycle is underspecified. The Performance Pragmatist and UX Advocate disagree on whether lazy loading is worth the complexity. user: "Synthesize the debate results for CIA-294." assistant: "The debate-synthesizer produces a reconciled report: 2 UNANIMOUS findings (hook lifecycle underspecified, missing error recovery), 1 MAJORITY finding (3/4 agree on credential isolation), 1 SPLIT finding (Performance Pragmatist argues lazy loading prevents startup cost at scale; UX Advocate argues lazy loading creates unpredictable latency that hurts discoverability — both cite spec section 3.2 but draw opposite conclusions). 1 item ESCALATED: whether hooks should have access to the full application context or a scoped subset (Architectural Purist and Security Skeptic fundamentally disagree on trust boundary placement). Overall quality: 3.8/5 (CI: 3.4-4.2)." <commentary> The synthesizer identified genuine disagreement (lazy loading trade-off) vs. consensus (lifecycle gaps) and correctly escalated the unresolvable trust boundary question rather than forcing a premature resolution. The confidence interval reflects uncertainty from the split finding. </commentary> </example> <example> Context: A straightforward spec where all 4 personas largely agree but differ on severity ratings. user: "Synthesize the debate on CIA-391." 
assistant: "The debate-synthesizer finds strong consensus: 4 UNANIMOUS findings, 2 MAJORITY findings, 0 SPLIT findings. Severity calibration disagreement on finding I2: Security Skeptic rates it Critical (attack vector exists), Performance Pragmatist rates it Consider (low probability at current scale). Synthesizer assigns IMPORTANT as median severity with a note that severity should be re-evaluated if scale increases. No escalations needed. Overall quality: 4.2/5 (CI: 3.9-4.5)." <commentary> When personas agree on substance but disagree on severity, the synthesizer takes the median and documents the range — it does not simply pick the highest severity (that would be bias toward caution rather than accuracy). </commentary> </example> <example> Context: A complex architectural spec where Round 2 cross-examination changed positions. user: "Run synthesis on the architecture debate." assistant: "The debate-synthesizer notes 2 position changes between Round 1 and Round 2: (1) The UX Advocate initially rated the API design as acceptable but reversed after the Architectural Purist's cross-examination showed the API naming creates a false mental model — changed from AGREE to CONTRADICT with evidence. (2) The Performance Pragmatist initially flagged caching as Critical but downgraded to Consider after the Security Skeptic demonstrated that the caching strategy creates a data staleness window that undermines the security guarantees — a COMPLEMENT response that revealed a deeper trade-off. These position changes are the highest-value debate outcomes and are highlighted in the synthesis." <commentary> The synthesizer specifically tracks position changes between rounds — these represent genuine value-add from the debate format over independent review. Position changes are the primary signal that cross-examination worked. </commentary> </example>
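The median rule from the second example can be sketched as follows. The three-step severity ladder (Consider, Important, Critical) is inferred from the ratings mentioned above and is an assumption:

```python
SEVERITY_ORDER = ["Consider", "Important", "Critical"]  # assumed ascending ladder

def median_severity(ratings):
    """Take the median of persona severity ratings rather than the maximum,
    so a single cautious reviewer cannot unilaterally escalate a finding.
    Uses the lower median for even-sized panels."""
    ranks = sorted(SEVERITY_ORDER.index(r) for r in ratings)
    mid = ranks[(len(ranks) - 1) // 2]
    return SEVERITY_ORDER[mid]
```

With the ratings from the example (one Critical, one Consider, and Important from the remaining two personas), the median lands on Important, matching the synthesizer's assigned severity.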
Use this agent when a spec is ready for adversarial review (CCC Stage 4). The reviewer challenges assumptions, identifies gaps, rates findings by severity, and produces a structured review that the spec must pass before implementation begins. This is Gate 2 of the CCC workflow. <example> Context: A spec has been marked spec:ready and needs adversarial review before implementation. user: "CIA-312 spec is ready. Run adversarial review." assistant: "I'll use the reviewer agent to perform a structured adversarial review of CIA-312's spec, challenging assumptions, identifying gaps, and producing a severity-rated findings report." <commentary> The spec is at spec:ready, which is the exact trigger for adversarial review. The reviewer agent performs Gate 2 evaluation with structured findings. </commentary> </example> <example> Context: User wants to validate a spec's technical feasibility before committing to implementation. user: "Before we build the real-time sync feature, can you poke holes in the spec?" assistant: "I'll use the reviewer agent to adversarially review the sync feature spec — challenging the architecture, identifying edge cases, and flagging risks before we commit to implementation." <commentary> "Poke holes" is a natural language trigger for adversarial review. The reviewer agent systematically challenges the spec rather than casually commenting. </commentary> </example> <example> Context: A reviewed spec was sent back to draft and has been revised. It needs re-review. user: "I've updated CIA-289 based on the review findings. Can you re-review it?" assistant: "I'll use the reviewer agent to re-review CIA-289, focusing on whether the previous findings have been adequately addressed and checking for any new issues introduced by the revisions." <commentary> Re-review after spec:draft return is part of the review cycle. The reviewer checks previous findings resolution and scans for new issues. </commentary> </example>
Use this agent when work needs to move from idea to approved specification (CCC Stages 0-3). This covers intake from any surface (voice memos, cowork sessions, code sessions, direct input), normalization into verb-first issues, PR/FAQ drafting, research grounding, and spec completion through Gate 1 approval. <example> Context: User has a rough idea from a brainstorming session that needs to become a tracked, specced issue. user: "I had an idea during our cowork session about adding a safety monitor to the agent hub. Can you spec this out?" assistant: "I'll use the spec-author agent to intake this idea, create a properly formatted Linear issue, select the right PR/FAQ template, and draft a spec for review." <commentary> The user has an unstructured idea that needs the full intake-to-spec pipeline. This is the spec-author agent's core workflow: normalize, classify, template-select, draft, and ground in research. </commentary> </example> <example> Context: A batch of voice memo transcriptions needs to be processed into Linear issues with specs. user: "I have 5 voice memos from my commute. Process them into issues." assistant: "I'll use the spec-author agent to process each voice memo through the intake pipeline: extract intent, deduplicate against existing issues, create verb-first issues, and draft specs with appropriate PR/FAQ templates." <commentary> Batch intake from voice memos is a classic spec-author task. The agent handles deduplication, normalization, template selection, and initial spec drafting for each item. </commentary> </example> <example> Context: An existing draft spec needs research grounding before it can pass Gate 1. user: "CIA-234 has a draft spec but no research citations. Can you ground it?" assistant: "I'll use the spec-author agent to search for relevant literature, add citations to the spec's Research Base section, and advance the research label from needs-grounding toward literature-mapped." 
<commentary> Research grounding is a pre-Gate 1 requirement. The spec-author agent handles literature search, citation formatting, and research label progression. </commentary> </example>
Use this agent when a major project step has been completed and needs to be reviewed against the original plan and coding standards. The code-reviewer operates at CCC Stage 6: it reviews actual code against the spec's acceptance criteria, detects implementation drift from spec promises, and produces structured findings with severity ratings and spec references. Dispatched by the pr-dispatch skill, with feedback handled by the review-response skill. <example> Context: Implementation of a feature is complete and the developer wants a spec-aware code review before merge. user: "CIA-312 implementation is done. Review the code against the spec." assistant: "I'll use the code-reviewer agent to review CIA-312's implementation against its spec acceptance criteria, checking each AC is satisfied, detecting any drift from spec promises, and producing structured findings." <commentary> The implementation is complete and ready for Stage 6 review. The code-reviewer reads the spec's acceptance criteria and evaluates the diff against each one, producing findings categorized by severity with spec references. </commentary> </example> <example> Context: A PR has been opened and needs spec-aware review before merge. The pr-dispatch skill has assembled the review context. user: "Review this PR for CIA-445 — the dispatch context is ready." assistant: "I'll use the code-reviewer agent to evaluate the PR diff against CIA-445's acceptance criteria, verify each criterion is addressed, check for scope drift, and report findings in CCC severity format." <commentary> PR dispatch has assembled the spec context, git diff, and verification evidence. The code-reviewer consumes this structured input and produces a spec-anchored review, not a generic code quality pass. </commentary> </example> <example> Context: After implementation, the user suspects the code may have drifted from what the spec promised. user: "I think the search feature drifted from the spec during implementation. 
Can you check the code against the acceptance criteria?" assistant: "I'll use the code-reviewer agent to perform a drift check — comparing the current implementation against each acceptance criterion in the spec, flagging any divergence where the code does more, less, or different than what the spec promised." <commentary> Drift detection is a core code-reviewer capability. The agent systematically compares implementation against spec rather than just checking general code quality. Each drift finding cites the specific AC that was violated or exceeded. </commentary> </example>
Use this agent when a spec has passed review and is ready for implementation (CCC Stages 5-7.5). The implementer handles execution mode routing, the build-test-verify loop, drift prevention against the spec, issue status management, and closure with evidence. This is the Gate 3 pathway. <example> Context: A spec passed adversarial review and is ready for implementation. user: "CIA-345 passed review. Implement it." assistant: "I'll use the implementer agent to execute CIA-345. It will route to the correct execution mode based on the estimate, run the implementation loop with spec drift checks, and manage the issue through to closure." <commentary> Post-review implementation is the implementer's primary trigger. It handles the full Stages 5-7.5 pipeline: mode selection, implementation, verification, and closure. </commentary> </example> <example> Context: An implementation is in progress but appears to have drifted from the spec. user: "I think the auth implementation has drifted from what the spec says. Can you check?" assistant: "I'll use the implementer agent to perform a drift check — comparing the current implementation against the spec's acceptance criteria and flagging any divergence." <commentary> Drift detection is a core implementer responsibility. The agent compares implementation state against spec acceptance criteria and flags gaps. </commentary> </example> <example> Context: Implementation is complete and needs to go through the closure protocol. user: "The PR for CIA-367 is merged and deployed. Close the issue." assistant: "I'll use the implementer agent to verify closure criteria (PR merged, deploy green, acceptance criteria met) and close the issue with proper evidence in the closing comment." <commentary> Issue closure (Stage 7.5) requires evidence-based closing comments. The implementer verifies all criteria before auto-closing or proposing closure per the ownership rules. </commentary> </example>
Adversarial spec review methodology with multiple reviewer perspectives and architecture options for automated review pipelines. Use when a spec needs critical evaluation before implementation, when you want structured pushback on assumptions, or when setting up automated multi-perspective review. Trigger with phrases like "review my spec", "adversarial review", "challenge this proposal", "devil's advocate analysis", "security review of spec", "is this spec ready for implementation".
Parse and route intents from Linear agent dispatch events — @mention comments, delegateId handoffs, and assignee-based triggers. Defines the v2 intent schema (with mechanism detection, trigger block, and issue-state inference), parsing rules, routing table (review, implement, gate2, dispatch, status, expand, help, close, spike, spec-author), and integration points for Factory and Claude Code consumers. Use when building or extending webhook handlers that respond to any Linear agent dispatch mechanism. Works with the mechanism-router skill for unified entry-point routing. Trigger with phrases like "agent session webhook", "parse @mention intent", "route agent intent", "webhook intent parsing", "linear agent dispatch", "implement intent handler", "review intent handler", "agent-session event", "mechanism detection", "delegateId intent", "assignee dispatch", "state-based inference".
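A hedged sketch of what a v2 intent object might look like. The field names are inferred from the description (mechanism detection, trigger block, issue-state inference, routing table) and are illustrative rather than the skill's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Routing table entries taken from the skill description above.
ROUTES = {"review", "implement", "gate2", "dispatch", "status",
          "expand", "help", "close", "spike", "spec-author"}

@dataclass
class AgentIntent:
    mechanism: str                # e.g. "mention" | "delegateId" | "assignee" (assumed values)
    route: str                    # one of ROUTES
    issue_id: str
    trigger_text: Optional[str] = None   # raw @mention comment, if any
    inferred_from_state: bool = False    # True when intent came from issue-state inference

    def __post_init__(self):
        if self.route not in ROUTES:
            raise ValueError(f"unknown route: {self.route}")
```

Validating the route at construction time keeps malformed webhook payloads from reaching a handler.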
Complete a CCC development branch with git operations and pre-completion verification. Handles 4 completion modes (merge, PR, park, abandon) with 8 pre-completion checks. Merge mode marks "closure-ready" — actual issue closure is handled by `/close`. Use when finishing implementation work and transitioning from code to project management. Trigger with phrases like "finish branch", "branch done", "wrap up branch", "complete this branch", "merge and close", "create PR and close", "park branch", "abandon branch", "branch cleanup", "finish implementation", "ready to merge", "ship this branch", "close out the work", "branch-finish".
Repository scanning and indexing protocol that produces a structured map of modules, patterns, and integration points. Feeds into spec writing to prevent redundant implementations. Use when onboarding to a new codebase, before writing a PR/FAQ for a new feature, when the codebase index is stale, or when you need to understand existing patterns before implementation. Trigger with phrases like "index the codebase", "scan the repo", "what patterns does this project use", "map the modules", "update the codebase index", "what exists already".
Context window management strategies for multi-tool AI agents. Covers a 3-tier delegation model for controlling what enters the main conversation, context budget thresholds, subagent return discipline, and model mixing recommendations. Prevents context exhaustion during complex sessions. Use when planning subagent delegation, managing long sessions, deciding what to delegate vs handle directly, or choosing model tiers for subtasks. Trigger with phrases like "context is getting long", "should I delegate this", "subagent return format", "model mixing strategy", "context budget", "session splitting", "when to use haiku vs opus".
Spec-aware systematic debugging methodology for CCC Stage 5-6 implementation. Uses acceptance criteria and .ccc-state.json task context to scope root cause investigation and prevent shotgun debugging. Enforces a 4-phase loop: scope, hypothesize, test, verify — anchored to the active spec rather than ad hoc guessing. Use when a test fails during implementation, when behavior diverges from spec expectations, when a bug is discovered during code review, or when debugging a regression. Trigger with phrases like "debug this", "systematic debugging", "root cause analysis", "why is this failing", "spec-aware debugging", "hypothesis test verify", "shotgun debugging", "narrow the scope", "what's the root cause".
Detect issues that have become unblocked and are ready for dispatch. Uses an inverted scan protocol: starts from recently completed (Done) issues and works outward to find downstream issues whose blockers have all cleared. Enforces a 20-call API budget and 30-minute result cache. Use when checking for newly unblocked work, during session start, or when planning next tasks. Trigger with phrases like "scan for unblocked issues", "dispatch readiness", "what's ready to work on", "check blocked issues", "unblocked scan", "ready for dispatch", "--scan".
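In spirit, the scan reduces to a readiness predicate over the dependency graph. This sketch assumes the relevant issues are already in memory (the real skill walks outward from Done issues precisely to stay within its API budget), and the issue shape is illustrative:

```python
def unblocked_scan(issues):
    """Return IDs of issues that are newly ready for dispatch: still in
    Todo, have at least one blocker, and every blocker is now Done.
    Issues with no blockers were never blocked, so they are not "newly
    unblocked" and are skipped."""
    done = {i["id"] for i in issues if i["state"] == "Done"}
    ready = []
    for issue in issues:
        if issue["state"] != "Todo":
            continue
        blockers = issue.get("blocked_by", [])
        if blockers and all(b in done for b in blockers):
            ready.append(issue["id"])
    return ready
```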
Manage Linear document lifecycle: create structural documents, detect staleness, update on triggers, and enforce safety rules for document content. Wires up create_document, update_document, get_document, and list_documents MCP tools with validation, pagination, and dry-run support. Use when creating project documents, checking document freshness, updating Key Resources or Decision Log, rotating Decision Log entries, or auditing document health during hygiene runs. Trigger with phrases like "create project documents", "check document staleness", "update key resources", "rotate decision log", "document hygiene", "list project documents", "create decision log", "stale documents", "missing documents".
Session anchoring protocol that prevents spec drift in long-running implementation sessions. Re-reads active spec, git state, issue state, and review comments to rebuild ground truth from source artifacts rather than relying on accumulated session context. Use when sessions exceed 30 minutes, after context compaction, before resuming paused work, when implementation feels misaligned with acceptance criteria, or when switching between tasks. Trigger with phrases like "anchor to spec", "re-read the spec", "am I drifting", "check alignment", "reload context", "what was I working on", "session too long".
Autonomous task execution loop powered by a stop hook. Dispatches decomposed tasks one at a time with fresh context per task, respects human approval gates, syncs Linear status, and maintains an append-only progress log. Use when understanding the task loop, debugging execution state, resuming interrupted sessions, or configuring execution parameters. Trigger with phrases like "how does the execution loop work", "task loop configuration", "stop hook behavior", "resume execution", "execution state", "progress tracking", "TASK_COMPLETE signal", "task iteration budget".
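The loop's control flow can be sketched as follows. The runner callback, iteration budget handling, and approval-gate field are assumptions of this sketch; the TASK_COMPLETE signal name comes from the description above:

```python
def run_task_loop(tasks, run_task, max_iterations=25, log=None):
    """Dispatch decomposed tasks one at a time. Each task gets fresh
    context via the runner callback; the loop halts at a human approval
    gate or when the iteration budget is exhausted, and keeps an
    append-only progress log."""
    log = log if log is not None else []
    completed = []
    for i, task in enumerate(tasks):
        if i >= max_iterations:
            log.append(f"budget exhausted after {i} tasks")
            break
        if task.get("needs_approval"):
            log.append(f"paused at gate: {task['id']}")  # wait for human
            break
        signal = run_task(task)  # fresh-context execution, returns a signal string
        log.append(f"{task['id']}: {signal}")
        if signal == "TASK_COMPLETE":
            completed.append(task["id"])
    return completed, log
```

In the real plugin this loop is driven by a stop hook rather than a Python for-loop, but the gating and logging order is the same idea.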
Taxonomy of 5 execution modes for AI-assisted development. Provides a decision heuristic for selecting the right mode based on scope clarity, risk level, parallelizability, and testability. Covers model routing for subagent delegation. Use when deciding how to implement a task, choosing between TDD and direct coding, routing work to subagents, or determining if a task needs human-in-the-loop pairing. Trigger with phrases like "what execution mode should I use", "should I use TDD or quick mode", "how should I implement this task", "is this a swarm task", "pair programming setup", "which model for subagents".
Dispatch a well-specified CCC issue to Factory for background agent execution. Handles dispatch surface selection (Linear delegation vs REST API), Cloud Template routing, and post-dispatch monitoring. Use when an issue is well-specified (exec:quick or exec:tdd), the task doesn't require interactive pairing, and you want async background execution. Trigger with phrases like "dispatch to factory", "factory this", "send to factory", "background implement", "delegate to factory", "async dispatch", "factory dispatch", "run this in background", "factory execute".
Documents the Claude Code hooks shipped with the CCC plugin and what each one enforces. Covers session-start checks, pre/post-tool-use gates, stop hygiene, circuit breaker, conformance auditing, prompt enrichment, style injection, and Agent Teams hooks. Use when configuring hooks, understanding what a hook enforces, debugging hook failures, or choosing which hooks to enable for a project.
Guide for archiving Claude Code Insights HTML reports as structured Markdown and extracting actionable patterns to improve CLAUDE.md and workflows. Use when archiving an Insights report, reviewing past archives, extracting CLAUDE.md improvement candidates, or comparing trends across reports. Trigger with phrases like "archive insights report", "review insights", "insights trend", "what did insights suggest", "insights to CLAUDE.md".
Unified issue and project lifecycle management. Defines agent/human ownership boundaries, closure rules, session hygiene, spec lifecycle, project maintenance, status updates, and dependency management. Use when determining what the agent can change vs what requires human approval, closing issues, updating issue status, managing labels, handling session-end cleanup, maintaining project descriptions, posting project updates, managing project resources, cleaning up projects, posting status updates, managing dependencies between issues, detecting duplicates, or performing bulk operations. Trigger with phrases like "can I close this issue", "who owns priority", "issue ownership rules", "session cleanup protocol", "what labels should I set", "closure evidence requirements", "project description stale", "post project update", "add resource to project", "update project summary", "clean up this project", "normalize project issues", "apply CCC conventions", "post status update", "project health", "initiative update", "add dependency", "blocks", "blocked by", "dependency graph", "detect dependencies", "link issues", "show blockers", "duplicate issues", "bulk update".
Unified entry point for all Linear agent dispatch mechanisms (delegateId, @mention, assignee). Detects how dispatch was triggered, extracts or infers intent, validates preconditions, and routes to the appropriate handler. Defines the handler registration contract and agent selection tree. Source of truth: CIA-575 Unified Agent Dispatch Architecture. Use when processing any Linear event that should trigger agent action. Trigger with phrases like "mechanism router", "dispatch routing", "agent selection", "delegateId handler", "assignee dispatch", "intent routing", "handler registration".
Velocity-based milestone completion date projection using Linear cycle history. This skill should be used when the user asks to "forecast milestone completion", "predict when milestone finishes", "project milestone dates", "estimate milestone timeline", "milestone velocity report", "when will this milestone be done", "milestone ETA", or mentions velocity-based date projection for Linear milestones.
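The projection itself is simple arithmetic over cycle throughput. This sketch assumes a fixed cycle length and an illustrative input shape:

```python
from datetime import date, timedelta
from statistics import mean

def forecast_completion(remaining_points, cycle_points, cycle_days=14, today=None):
    """Project a milestone completion date from recent cycle throughput.

    remaining_points: estimate points still open on the milestone.
    cycle_points: points completed in each recent cycle (most recent last).
    Returns the projected date, or None when there is no velocity signal.
    """
    velocity = mean(cycle_points) if cycle_points else 0
    if velocity <= 0:
        return None
    cycles_needed = remaining_points / velocity
    today = today or date.today()
    return today + timedelta(days=cycles_needed * cycle_days)
```

For example, 20 remaining points against a trailing velocity of 10 points per two-week cycle projects completion roughly four weeks out.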
Milestone lifecycle automation using Linear MCP tools. Handles assignment inference, completion checks, health reporting, carry-forward, and orphan detection. Use when creating issues that need milestone assignment, when issues transition to Done, when reporting milestone health at session exit, or when milestone target dates pass. Trigger with phrases like "assign milestone", "milestone health", "check milestone", "carry forward", "milestone report", "orphaned issues", "milestone status".
Stage 7 verification tool selection, three-layer plugin monitoring stack, and structural validation integration. Covers when to use PostHog vs Sentry vs Honeycomb vs Vercel Analytics, how cc-plugin-eval gates releases, and how runtime /insights data feeds the adaptive methodology loop. Use when choosing observability tools for Stage 7 verification, setting up plugin structural validation, configuring CI gates for plugin releases, or understanding the monitoring stack layers. Trigger with phrases like "which monitoring tool", "Stage 7 verification", "observability setup", "plugin validation", "cc-plugin-eval", "structural validation", "monitoring stack", "analytics vs error tracking", "release gates", "plugin health check".
Layer 6 business outcome validation using sequential persona passes. Evaluates whether a completed feature actually delivered its intended business outcome before closure. Four personas — Customer Advocate, CFO Lens, Product Strategist, and Skeptic — each produce an independent sub-verdict with evidence. The final consolidated verdict (ACHIEVED / PARTIALLY_ACHIEVED / NOT_ACHIEVED / UNDETERMINABLE) feeds into quality-scoring and appears in the closing comment. Integration point: runs between Stage 7 (Verification) and Stage 7.5 (Closure) in /ccc:close. Automatically skipped for type:chore, type:spike, --quick flag, and exec:quick issues <=2pt. Use when closing a feature or bug issue that went through the full CCC funnel. Trigger with phrases like "outcome validation", "business outcome check", "did this achieve its goal", "Layer 6 validation", "persona review", "outcome verdict", "validate business outcome".
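One plausible rule for consolidating the four persona sub-verdicts into the final verdict is sketched below. The precedence order is an assumption; the skill's actual consolidation logic may weight personas differently.

```python
def consolidate(verdicts: dict[str, str]) -> str:
    """Hypothetical consolidation of persona sub-verdicts.

    Assumed precedence: any UNDETERMINABLE poisons the result; unanimity is
    required for ACHIEVED; any mixed signal degrades to PARTIALLY_ACHIEVED.
    """
    vals = set(verdicts.values())
    if "UNDETERMINABLE" in vals:
        return "UNDETERMINABLE"
    if vals == {"ACHIEVED"}:
        return "ACHIEVED"
    if "ACHIEVED" in vals or "PARTIALLY_ACHIEVED" in vals:
        return "PARTIALLY_ACHIEVED"
    return "NOT_ACHIEVED"
```

A single dissenting CFO Lens verdict, for example, would pull an otherwise-unanimous ACHIEVED down to PARTIALLY_ACHIEVED before it reaches quality-scoring.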
Rules for dispatching and coordinating multiple parallel Claude Code sessions from a master plan. Covers the decision tree for parallel vs. sequential phasing, session mode mapping, dispatch prompt templates, naming conventions, feedback routing, and coordination protocol. Use when launching parallel sessions from a master plan, deciding whether phases can run concurrently, writing dispatch prompts for new sessions, or coordinating outputs across concurrent sessions. Trigger with phrases like "dispatch parallel sessions", "can these phases run in parallel", "launch sessions from master plan", "session dispatch template", "parallel vs sequential", "coordinate multiple sessions", "multi-session dispatch".
Cross-session friction pattern aggregation for the CCC Insights Platform. Parses archived insights reports, normalizes friction types, calculates trends, and produces a structured patterns.json for downstream consumers. Graduated approach: Phase 0 (flat archives), Phase 1 (patterns.json), Phase 2 (SQLite). Use when analyzing cross-session friction trends, running pattern aggregation, checking which friction types recur, or preparing data for the adaptive-methodology skill. Trigger with phrases like "aggregate patterns", "cross-session patterns", "friction trends", "pattern aggregation", "run pattern aggregation", "what keeps going wrong", "recurring friction", "pattern trends", "insights patterns".
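The Phase 1 step (normalize friction labels, count recurrences, emit patterns.json) might look like the following. The normalization map and output schema are hypothetical, not the skill's documented format.

```python
import json
from collections import Counter

# Assumed normalization map: free-text labels -> canonical friction types.
NORMALIZE = {"context overflow": "context_limit", "ctx blowup": "context_limit",
             "flaky test": "test_flakiness", "test flake": "test_flakiness"}

def aggregate(reports: list[list[str]]) -> str:
    """Count normalized friction types across archived reports -> patterns.json."""
    counts = Counter()
    for report in reports:
        for label in report:
            counts[NORMALIZE.get(label.lower(), label.lower())] += 1
    patterns = [{"type": t, "count": c, "recurring": c > 1}
                for t, c in counts.most_common()]
    return json.dumps({"patterns": patterns}, indent=2)
```

Normalization is what makes "context overflow" and "ctx blowup" count as the same recurring pattern rather than two one-offs, which is the trend signal downstream consumers need.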
Automated context gathering protocol that runs before planning phases. Produces a Planning Context Bundle with codebase state, issue overlap detection, strategic zoom-out, and timeline validation. Use before writing specs, during plan mode, or when entering any planning phase. Trigger with phrases like "preflight check", "what exists already", "check for overlapping issues", "planning context", "landscape scan".
Recommends the optimal Claude platform (Code, Cowork, Desktop Chat) for each CCC workflow stage. Provides hook-free exit checklists for non-CLI contexts and Desktop Chat project patterns for client context routing. Use when starting a new workflow stage, asking where to do something, beginning spec drafting, triage, or implementation, ending a session in Cowork or Desktop Chat, or setting up a new client or project context. Trigger with phrases like "where should I do this", "which platform for spec drafting", "should I use Cowork or Code", "set up a Desktop Chat project", "what's the exit checklist for Cowork".
CCC Stage 6 PR review dispatch with spec context injection and code-reviewer agent orchestration. Gathers git SHAs, spec acceptance criteria, and .ccc-state.json task context, then dispatches the CCC code-reviewer agent with a structured, spec-aware review prompt. Replaces generic code review dispatch by anchoring every review to the active spec's acceptance criteria and detecting drift. Use when implementation is complete and ready for review in the CCC workflow. Trigger with phrases like "dispatch review", "request code review", "PR review", "send to reviewer", "review my changes", "stage 6 review", "spec-aware review", "run code review", "review dispatch", "CCC review", "is this ready for review", "pre-merge review".
Working Backwards PR/FAQ methodology for spec drafting with 4 templates, interactive drafting guidance, and structured questioning techniques. Use when writing a new spec, choosing a PR/FAQ template, drafting a press release, defining acceptance criteria, or structuring a feature proposal with pre-mortem analysis. Trigger with phrases like "write a spec", "PR/FAQ for this feature", "working backwards document", "which template should I use", "draft a press release", "pre-mortem analysis".
Deterministic quality rubric for evaluating issue completion across test coverage, acceptance criteria coverage, and review resolution. Produces a star-graded score that drives closure decisions. Use when evaluating whether an issue is ready to close, understanding why closure was blocked, calibrating quality expectations for a project, or customizing scoring weights. Trigger with phrases like "quality score", "is this ready to close", "why was closure blocked", "evaluate completion", "score this issue", "quality rubric", "closure criteria".
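A weighted rubric over the three dimensions, mapped to a star grade, might be sketched as follows. The weights and half-star rounding are assumptions for illustration; the skill's actual weights are customizable.

```python
def quality_score(test_cov: float, ac_cov: float, review_resolved: float,
                  weights: tuple[float, float, float] = (0.3, 0.5, 0.2)
                  ) -> tuple[float, str]:
    """Hypothetical rubric: weighted mean of three [0, 1] dimensions,
    rounded to the nearest half star out of five."""
    score = (weights[0] * test_cov + weights[1] * ac_cov
             + weights[2] * review_resolved)
    stars = round(score * 5 * 2) / 2
    grade = "★" * int(stars) + ("½" if stars % 1 else "")
    return stars, grade
```

Because the output is deterministic for given inputs, the same evidence always yields the same grade, which is what lets the score drive closure decisions reproducibly.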
Research readiness progression for issues that require academic evidence. Defines the needs-grounding to expert-reviewed label hierarchy, grounding requirements for PR/FAQ specs, and citation standards for research-heavy features. Use when writing specs for research-backed features, evaluating research readiness of issues, deciding whether an issue needs literature review, or ensuring PR/FAQs have adequate citations. Trigger with phrases like "is this grounded", "needs literature review", "research readiness", "add citations to spec", "research labels", "grounding requirements", "methodology validation".
End-to-end academic research pipeline: discovery via Semantic Scholar, arXiv, and OpenAlex; supplementary resource discovery via HuggingFace, Kaggle, and CatalyzeX; storage and enrichment via Zotero; literature notes via Obsidian; synthesis via NotebookLM. Use when starting a literature review, finding papers on a topic, discovering code/datasets for a paper, creating literature notes, or understanding the research tool stack. Trigger with phrases like "find papers on", "literature review", "what research tools do we have", "discover datasets for", "find code implementations", "research pipeline", "supplementary resources".
Detect stale resources across the CCC ecosystem: project descriptions, initiative status updates, milestone health, Linear documents, plugin reference docs (README, CONNECTORS, plugin-manifest), and execution context freshness (ctx:* label stale detection for In Progress/In Review issues). Compares actual plugin state from disk against documented state and flags discrepancies. Produces a freshness report with Error/Warning/Info severity ratings. Use when running periodic health checks, auditing resource staleness, checking for drift between plugin state and documentation, detecting stale autonomous agents, or as part of the /ccc:hygiene pipeline. Trigger with phrases like "check resource freshness", "stale resources", "freshness audit", "resource drift", "are my docs stale", "plugin manifest drift", "check project descriptions", "stale agents", "execution context check".
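The severity rating could be a simple age-threshold ladder like the one below; the specific day counts are assumptions, not the skill's documented thresholds.

```python
from datetime import datetime, timedelta

# Assumed age thresholds, checked from most to least severe.
THRESHOLDS = [(timedelta(days=30), "Error"),
              (timedelta(days=14), "Warning"),
              (timedelta(days=7), "Info")]

def freshness(last_updated: datetime, now: datetime) -> str:
    """Map a resource's age to an Error/Warning/Info severity, else Fresh."""
    age = now - last_updated
    for limit, severity in THRESHOLDS:
        if age >= limit:
            return severity
    return "Fresh"
```

Running this over every project description, document, and manifest yields the freshness report the description mentions, with each stale resource tagged by severity.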
Spec-drift-aware review feedback handling for CCC Stage 6. When receiving PR review comments or adversarial review findings, cross-references each item against the active spec's acceptance criteria to determine if feedback is in-scope, represents drift, or reveals a legitimate spec gap. Follows the READ-UNDERSTAND-VERIFY-EVALUATE-RESPOND-IMPLEMENT protocol. Integrates with adversarial-review output and .ccc-state.json task context. Use when receiving code review comments, adversarial review findings, PR feedback, or any post-implementation critique that may require code changes. Trigger with phrases like "handle review feedback", "respond to review", "PR comments", "review findings", "is this feedback in scope", "spec drift from review", "adversarial review response", "triage review comments", "reviewer suggests".
End-of-session normalization protocol for AI agent sessions. Covers issue status normalization, closing comments with evidence, daily project updates, session summary tables, and context budget warnings. Ensures no session ends with stale issue statuses, missing evidence, or untracked work. Use when ending a working session, preparing session summaries, normalizing issue statuses, writing closing comments, or checking context budget thresholds before session exit. Trigger with phrases like "session exit", "end of session", "session summary", "normalize statuses", "closing comments", "session cleanup", "wrap up session", "context budget check", "session handoff".
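The context budget warning could follow a threshold scheme like this sketch; the percentages and action names are assumptions.

```python
def budget_action(used_tokens: int, limit: int) -> str:
    """Hypothetical context-budget check: suggest or force a checkpoint
    as usage approaches the session's context limit."""
    pct = used_tokens / limit
    if pct >= 0.9:
        return "checkpoint-now"
    if pct >= 0.75:
        return "warn"
    return "ok"
```

Checking this before session exit is what prevents the failure mode the description targets: a session that runs out of context before statuses are normalized and evidence is posted.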
Complete 9-stage spec-driven development funnel from ideation through deployment, with 3 approval gates, universal intake protocol, plan promotion to durable documents, and issue closure rules. Use when understanding the full development workflow, checking what stage a feature is in, determining next steps for an issue, promoting plans to Linear Documents, or onboarding to the spec-driven process. Trigger with phrases like "what stage is this in", "development workflow overview", "what are the approval gates", "how does the funnel work", "intake process", "what happens after spec approval", "promote this plan", "save plan to Linear", "make plan durable".
Enforce TDD red-green-refactor discipline during CCC Stage 5-6 implementation. Derives test cases from spec acceptance criteria and PR/FAQ documents rather than generic test suggestions. Blocks implementation code before a failing test exists. Tracks cycle state across the session and integrates with .ccc-state.json task context. Use when implementing features with testable acceptance criteria in the CCC workflow, when the execution mode is exec:tdd, or when you need to enforce test-first discipline. Trigger with phrases like "enforce TDD", "red green refactor", "test first", "write a failing test", "no implementation yet", "TDD cycle", "exec:tdd mode", "derive tests from spec", "acceptance criteria to tests".
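The blocking behavior (no implementation code before a failing test exists) amounts to a small state machine. The state names below are hypothetical; the skill tracks its cycle state in .ccc-state.json rather than in memory.

```python
class TddGate:
    """Minimal sketch of the red-green-refactor gate."""

    def __init__(self):
        self.state = "red-pending"   # no failing test observed yet

    def record_test_run(self, passed: bool) -> None:
        if self.state == "red-pending" and not passed:
            self.state = "red"       # failing test exists: implementation allowed
        elif self.state == "red" and passed:
            self.state = "green"     # tests pass: refactor, then cycle restarts

    def may_edit_implementation(self) -> bool:
        return self.state in ("red", "green")
```

In the initial red-pending state any implementation edit is rejected; only observing a failing test run opens the gate, which is the test-first discipline being enforced.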
Canonical Zotero library management workflow: plugin sequencing, metadata enrichment, Linter/Cita settings, safety rules, JS verification, and anti-patterns. Use when performing any Zotero operation, enriching metadata, resolving DOIs, deduplicating, syncing to Supabase, or diagnosing library health issues. Trigger with phrases like "enrich Zotero metadata", "resolve DOIs", "run Linter", "Zotero health check", "deduplicate library", "sync to Supabase", "Zotero plugin sequence".
Production-ready Claude Code configuration with role-based workflows (PM→Lead→Designer→Dev→QA), safety hooks, 44 commands, 19 skills, 8 agents, 43 rules, 30 hook scripts across 19 events, an auto-learning pipeline, hook profiles, and multi-language coding standards.
Matches all tools
Hooks run on every tool call, not just specific ones
Executes bash commands
Hook triggers when Bash tool is used
Share bugs, ideas, or general feedback.
Corca Workflow Framework — consolidated hooks and skill orchestration for structured development sessions
Describe your goal, approve the spec, then step away — Claude and Codex loop together until it's right.
Development workflow automation plugin: specify → open → execute pipeline with parallel research agents, hook-based guards, and PR state management
Persona-driven AI development team: orchestrator, team agents, review agents, skills, slash commands, and advisory hooks for Claude Code
Agent Alchemy Dev Tools — dev utilities, debugging, and workflow enhancements
Modifies files
Hook triggers on file write and edit operations
External network access
Connects to servers outside your machine
Uses power tools
Uses Bash, Write, or Edit tools