By elb-pr
Orchestrate AI-agent-driven software development cycles: iteratively outline plans with human checkpoints; execute tasks in isolated git branches via specialized agents for implementation, review, verification, and simplification; enforce strict runtime evidence checks; update docs; and ship via approved PRs.
`npx claudepluginhub elb-pr/claudikins-marketplace --plugin claudikins-kernel`

- Execute validated plans with isolated agents and two-stage review
- Iterative planning with human checkpoints at every phase
- Final shipping gate: PR creation, documentation updates, and merge with human approval
- Post-execution verification gate: tests, lint, type-check, then see it working
---
Output verification agent for /claudikins-kernel:verify command. SEES code working by running apps, curling endpoints, capturing screenshots, and executing CLI commands. This is the feedback loop that makes Claude's code actually work. Use this agent during /claudikins-kernel:verify Phase 2 to gather evidence that code works. The agent detects project type, chooses the appropriate verification method, captures evidence, and reports structured results.

<example>
Context: Web app implementation complete, need to verify it renders correctly
user: "Verify the login page renders and works"
assistant: "I'll spawn catastrophiser to start the dev server, screenshot the login page, and test the flow"
<commentary>
Web verification. catastrophiser starts the server, uses Playwright for screenshots, checks the console for errors.
</commentary>
</example>

<example>
Context: API endpoints implemented, need to verify responses
user: "Check if the auth endpoints work correctly"
assistant: "Spawning catastrophiser to curl the auth endpoints and verify response shapes"
<commentary>
API verification. catastrophiser curls each endpoint, checks status codes, validates response bodies.
</commentary>
</example>

<example>
Context: CLI tool implemented, need to verify it runs
user: "Make sure the CLI works as expected"
assistant: "Spawning catastrophiser to run the CLI commands and capture output"
<commentary>
CLI verification. catastrophiser runs commands with various inputs, checks exit codes and stdout.
</commentary>
</example>
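The API-verification path above can be sketched end-to-end. Everything here is a stand-in, not part of the plugin: a throwaway Python file server plays the role of the app under test, and port 8099 and the /health path are invented for illustration. The point is the shape of the evidence: a captured status code and response body, not a claim that the code "should" work.

```shell
# Sketch of catastrophiser-style endpoint verification (server, port, and
# endpoint are hypothetical stand-ins for the real app under test).
srvdir=$(mktemp -d)
echo '{"ok": true}' > "$srvdir/health"            # fake endpoint payload
python3 -m http.server 8099 --directory "$srvdir" >/dev/null 2>&1 &
srvpid=$!
sleep 1                                           # give the server a moment
status=$(curl -s -o /tmp/evidence-body.json -w '%{http_code}' http://127.0.0.1:8099/health)
kill "$srvpid"
echo "HTTP $status"
cat /tmp/evidence-body.json                       # captured body = the evidence
```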
Code quality reviewer for /claudikins-kernel:execute command. Reviews code quality, patterns, and maintainability. This is stage 2 of two-stage review - it checks quality, NOT compliance (spec-reviewer handles that). Use this agent after spec-reviewer passes. The agent receives the implementation diff and reviews for quality issues, using confidence scoring to filter noise.

<example>
Context: Reviewing code quality after spec-reviewer passed
user: "Code review task 3 implementation"
assistant: "I'll use code-reviewer to assess the code quality and maintainability"
<commentary>
Second stage of review. code-reviewer uses opus for judgement calls about quality, not mechanical spec checking.
</commentary>
</example>

<example>
Context: Implementation passed spec but seems complex
user: "The auth middleware passed spec review but looks complicated"
assistant: "code-reviewer will evaluate the implementation for unnecessary complexity"
<commentary>
Quality assessment. Code might meet spec but be overly complex or hard to maintain.
</commentary>
</example>

<example>
Context: Checking for security issues in new endpoint
user: "Review task 5 for security concerns"
assistant: "code-reviewer will check for security vulnerabilities and proper error handling"
<commentary>
Security review. Even if spec is met, code might have injection vulnerabilities or other issues.
</commentary>
</example>
Merge conflict resolution agent for /claudikins-kernel:execute command. Analyses git merge conflicts and proposes resolutions. Read-only analysis with proposed patches - does not apply changes directly. Use this agent when merge conflicts are detected during batch merge phase. The agent examines both sides of the conflict, understands intent, and proposes a resolution for human approval.

<example>
Context: Merge conflict detected during batch merge
user: "Conflict in src/services/user.ts during merge"
assistant: "I'll use conflict-resolver to analyse the conflict and propose a resolution"
<commentary>
Merge phase conflict. Agent reads both versions, understands the changes, proposes unified resolution.
</commentary>
</example>

<example>
Context: Multiple files have conflicts
user: "3 files have merge conflicts after batch 2"
assistant: "conflict-resolver will analyse each conflict and propose resolutions"
<commentary>
Multiple conflicts. Agent handles each file, provides per-file resolution proposals.
</commentary>
</example>

<example>
Context: Semantic conflict where both changes are needed
user: "Both branches added different functions to the same file"
assistant: "conflict-resolver will determine how to combine both additions correctly"
<commentary>
Additive conflict. Both sides added code - agent proposes keeping both in logical order.
</commentary>
</example>
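The kind of conflict this agent analyses can be reproduced in a throwaway repo. The file name, branch names, and values below are all hypothetical; the recognisable part is the marker block the agent reads (it proposes a patch against this, it never stages a resolution itself).

```shell
# Minimal repro of a both-sides-modified merge conflict (names hypothetical).
repo=$(mktemp -d) && cd "$repo"
git init -q && git config user.email dev@example.com && git config user.name dev
echo 'export const limit = 10' > config.ts
git add config.ts && git commit -qm 'base'
git checkout -qb feature
echo 'export const limit = 50' > config.ts && git commit -qam 'feature change'
git checkout -q -                                  # back to the original branch
echo 'export const limit = 25' > config.ts && git commit -qam 'main change'
git merge feature >/dev/null 2>&1 || true          # conflict is expected here
cat config.ts                                      # <<<<<<< / ======= / >>>>>>> markers
```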
Code simplification agent for /claudikins-kernel:verify command. Performs an optional polish pass after verification succeeds. Simplifies code without changing behaviour - tests must still pass after each change. Use this agent during /claudikins-kernel:verify Phase 3 (optional) to clean up implementation. The agent identifies simplification opportunities, makes changes one at a time, verifies tests still pass, and reverts if they don't.

<example>
Context: Verification passed, code is functional but complex
user: "The code works but could be cleaner, run a polish pass"
assistant: "I'll spawn cynic to simplify the implementation while preserving behaviour"
<commentary>
Polish pass. cynic identifies unnecessary abstraction, inlines helpers, improves naming - all while keeping tests green.
</commentary>
</example>

<example>
Context: Implementation has dead code and unclear naming
user: "Clean up the auth module before we ship"
assistant: "Spawning cynic to remove dead code and improve clarity"
<commentary>
Cleanup task. cynic removes unused functions, renames unclear variables, flattens nesting.
</commentary>
</example>

<example>
Context: Code works but has over-engineered abstractions
user: "This is way too complicated for what it does"
assistant: "Spawning cynic to inline unnecessary abstractions"
<commentary>
Simplification. cynic inlines single-use helpers, removes wrapper classes, reduces indirection.
</commentary>
</example>
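The change-verify-revert loop can be sketched as follows. The repo, the file, and the grep standing in for a real test suite are all hypothetical; the structural point is one candidate change at a time, committed only while tests stay green, reverted otherwise.

```shell
# Sketch of cynic's one-change-at-a-time polish loop (names hypothetical;
# a grep stands in for running the real test suite).
repo=$(mktemp -d) && cd "$repo"
git init -q && git config user.email dev@example.com && git config user.name dev
echo 'verbose implementation' > module.txt
git add module.txt && git commit -qm 'verified implementation'
echo 'simpler implementation' > module.txt         # one candidate simplification
if grep -q 'implementation' module.txt; then       # stand-in for the test run
  git commit -qam 'polish: simplify module'        # green: keep the change
else
  git checkout -q -- .                             # red: revert, try another change
fi
git log --oneline
```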
Documentation perfectionist for /claudikins-kernel:ship command. Updates README, CHANGELOG, and version files using GRFP-style section-by-section approval. This agent CAN write - it's responsible for making docs match the shipped code. Use this agent during /claudikins-kernel:ship Stage 3 to update documentation. The agent reads current docs, identifies gaps from changes, drafts updates section-by-section, and gets human approval for each.

<example>
Context: Shipping a new authentication feature
user: "Update the docs for the auth middleware we're shipping"
assistant: "I'll spawn git-perfectionist to update README and CHANGELOG with GRFP-style approval"
<commentary>
Documentation update. git-perfectionist reads current docs, identifies what needs updating, drafts each section, gets approval.
</commentary>
</example>

<example>
Context: CHANGELOG needs new version entry
user: "Add the changelog entry for v1.2.0"
assistant: "Spawning git-perfectionist to draft the changelog in Keep a Changelog format"
<commentary>
Changelog update. git-perfectionist follows Keep a Changelog format, categorises changes, gets human approval.
</commentary>
</example>

<example>
Context: README is outdated after feature additions
user: "The README doesn't mention the new CLI commands"
assistant: "git-perfectionist will identify gaps and draft README updates section-by-section"
<commentary>
README gap analysis. git-perfectionist compares current README against implementation, drafts missing sections.
</commentary>
</example>
Specification compliance reviewer for /claudikins-kernel:execute command. Verifies implementation matches the plan spec. This is stage 1 of two-stage review - it checks compliance, NOT quality. Use this agent after babyclaude completes a task, before code-reviewer. The agent receives task description, acceptance criteria, and implementation diff, then verifies each criterion is met.

<example>
Context: Reviewing babyclaude's implementation of auth middleware
user: "Review task 3 implementation against spec"
assistant: "I'll use spec-reviewer to verify the auth middleware meets all acceptance criteria"
<commentary>
First stage of two-stage review. spec-reviewer checks compliance with requirements, not code quality.
</commentary>
</example>

<example>
Context: Reviewing a refactoring task
user: "Verify task 7 - AuthService extraction"
assistant: "Using spec-reviewer to confirm the extraction meets the specified criteria"
<commentary>
Spec review for refactoring. Checks that the refactor achieved its stated goals.
</commentary>
</example>

<example>
Context: Implementation seems to have extra features
user: "Review task 5 - it looks like more was added than requested"
assistant: "spec-reviewer will identify any scope creep beyond the original requirements"
<commentary>
Scope creep detection. spec-reviewer flags additions that weren't in the spec.
</commentary>
</example>
Research agent for /claudikins-kernel:outline command. Explores codebase, documentation, or external sources to gather context before planning decisions. This agent is READ-ONLY - it cannot modify files. Use this agent when you need to research before making planning decisions. Spawn 2-3 instances in parallel with different modes for comprehensive coverage.

<example>
Context: User wants to plan adding OAuth to their application
user: "I need to plan adding OAuth support"
assistant: "I'll spawn taxonomy-extremist agents to research OAuth patterns in your codebase and current best practices before we design the approach."
<commentary>
Planning task requires research. taxonomy-extremist gathers context without modifying anything, returns findings for human review at checkpoint.
</commentary>
</example>

<example>
Context: User wants to understand existing architecture before refactoring
user: "Before we plan the refactor, what's the current state of the auth module?"
assistant: "I'll use taxonomy-extremist in codebase mode to map the authentication module structure and dependencies."
<commentary>
Research task focused on existing code. Agent uses Serena/Grep to map architecture, returns structured findings.
</commentary>
</example>

<example>
Context: User is evaluating a library they haven't used before
user: "Research Prisma ORM before we plan the database migration"
assistant: "I'll spawn taxonomy-extremist in docs and external modes to gather Prisma documentation and community patterns."
<commentary>
External research needed. Agent uses Context7 for official docs, Gemini for best practices analysis.
</commentary>
</example>
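A codebase-mode pass can be sketched as a purely read-only scan. The project layout and search pattern below are invented for illustration; the constraint being demonstrated is that findings are written to a report, never to any project file.

```shell
# Read-only research sketch (paths and pattern hypothetical): inventory
# where auth-related code lives without modifying the project.
proj=$(mktemp -d)
mkdir -p "$proj/src"
printf 'import oauth\n' > "$proj/src/login.py"
printf 'def register(): pass\n' > "$proj/src/signup.py"
grep -rln 'oauth' "$proj/src" > "$proj/findings.txt"   # findings, not edits
cat "$proj/findings.txt"
```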
Use when running claudikins-kernel:outline, brainstorming implementation approaches, gathering requirements iteratively, structuring complex technical plans, or facing analysis paralysis with too many options — provides iterative human-in-the-loop planning with explicit checkpoints and trade-off presentation
Use when running claudikins-kernel:execute, decomposing plans into tasks, setting up two-stage review, deciding batch sizes, or handling stuck agents — enforces isolation, verification, and human checkpoints; prevents runaway parallelization and context death
Use when running claudikins-kernel:ship, preparing PRs, writing changelogs, deciding merge strategy, or handling CI failures — enforces GRFP-style iterative approval, code integrity validation, and human-gated merges
Use when running claudikins-kernel:verify, checking implementation quality, deciding pass/fail verdicts, or enforcing cross-command gates — requires actual evidence of code working, not just passing tests
Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner - agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use
Matches all tools
Hooks run on every tool call, not just specific ones
Executes bash commands
Hook triggers when Bash tool is used
Modifies files
Hook triggers on file write and edit operations
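Taken together, the matchers above correspond to a hooks config along these lines. The field names reflect my reading of Claude Code's settings schema, so treat them as assumptions, and the two audit scripts are hypothetical.

```shell
# Sketch of the hook wiring the matchers imply (schema assumed from Claude
# Code's settings format; audit-bash.sh and audit-writes.sh are hypothetical).
cat > /tmp/hooks-sketch.json <<'EOF'
{
  "hooks": {
    "PreToolUse": [
      { "matcher": "Bash",
        "hooks": [{ "type": "command", "command": "./audit-bash.sh" }] },
      { "matcher": "Write|Edit",
        "hooks": [{ "type": "command", "command": "./audit-writes.sh" }] }
    ]
  }
}
EOF
grep -o '"matcher": "[^"]*"' /tmp/hooks-sketch.json
```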
Team-oriented workflow plugin with role agents, 27 specialist agents, ECC-inspired commands, layered rules, and hooks skeleton.
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
AI-supervised issue tracker for coding workflows. Manage tasks, discover work, and maintain context with simple CLI commands.
Manus-style persistent markdown files for planning, progress tracking, and knowledge storage. Works with Claude Code, Kiro, Clawd CLI, Gemini CLI, Cursor, Continue, Hermes, and 17+ AI coding assistants. Now with Arabic, German, Spanish, and Chinese (Simplified & Traditional) support.
Context-Driven Development plugin that transforms Claude Code into a project management tool with structured workflow: Context → Spec & Plan → Implement
Uses power tools
Uses Bash, Write, or Edit tools