By duthaho
Automate end-to-end development workflows in Claude Code using 35 skills and 24 agents across 6 phases: plan/review features with multi-perspective scorecards, build via TDD subagents and parallel execution, ship with git commits/PRs/CI-CD, maintain through debugging/security/performance audits, and set up project rules/modes via an interactive wizard.
npx claudepluginhub duthaho/claudekit --plugin claudekit

Designs RESTful and GraphQL APIs, creates OpenAPI specifications, and ensures API best practices. <example> Context: User needs to design a new API. user: "I need to design a REST API for our order management system" assistant: "I'll use the api-designer agent to create a well-structured API design with OpenAPI spec" <commentary>API design work goes to the api-designer agent.</commentary> </example>
Use this agent to brainstorm software solutions, evaluate architectural approaches, or debate technical decisions before implementation. <example> Context: User wants to add a new feature. user: "I want to add real-time notifications to my web app" assistant: "Let me use the brainstormer agent to explore the best approaches for real-time notifications" <commentary>The user needs architectural guidance — use the brainstormer to evaluate options.</commentary> </example> <example> Context: User is considering a major refactoring decision. user: "Should I migrate from REST to GraphQL for my API?" assistant: "I'll engage the brainstormer agent to analyze this architectural decision" <commentary>Evaluating trade-offs and debating pros/cons is perfect for the brainstormer.</commentary> </example>
Use when reviewing a written implementation plan for strategic ambition, scope, demand reality, and future-fit. Returns a 5-dimension 0-10 scorecard with concrete fixes. <example> Context: User has written a plan and wants a strategic review. user: "Think bigger on this plan" assistant: "I'll dispatch the ceo-reviewer agent to score ambition and suggest scope expansions" <commentary>Strategic/scope review of a plan doc — use ceo-reviewer.</commentary> </example> <example> Context: User is unsure if a plan is ambitious enough. user: "Is this 10-star or 2-star?" assistant: "Let me run the ceo-reviewer agent to score ambition and future-fit" <commentary>Strategic framing question — dispatch ceo-reviewer.</commentary> </example>
Manages CI/CD pipelines, deployments, and release automation for GitHub Actions and other platforms. <example> Context: User needs to set up a CI pipeline. user: "Set up a GitHub Actions CI pipeline for our Node.js project" assistant: "I'll use the cicd-manager agent to create the CI workflow" <commentary>CI/CD pipeline creation goes to the cicd-manager agent.</commentary> </example>
Comprehensive code review with focus on quality, security, performance, and maintainability. Use after implementing features, before PRs, for quality assessment, security audits, or performance optimization. <example> Context: The user has finished implementing a new feature. user: "I've finished the user authentication system" assistant: "Let me use the code-reviewer agent to review the implementation" <commentary>Since code has been written, use the code-reviewer agent to validate quality, security, and completeness.</commentary> </example> <example> Context: The user wants a security-focused review before merging. user: "Can you review this PR for security issues before I merge?" assistant: "I'll use the code-reviewer agent to perform a security-focused code review" <commentary>Security review requests should go to the code-reviewer agent.</commentary> </example>
Creates marketing copy, release notes, changelogs, product descriptions, and user-facing content. <example> Context: User needs release notes for a new version. user: "Write release notes for v2.3.0 based on the recent commits" assistant: "I'll use the copywriter agent to create polished release notes" <commentary>User-facing content creation goes to the copywriter agent.</commentary> </example>
Handles database schema design, migrations, query optimization, and data modeling for PostgreSQL and MongoDB. <example> Context: User needs to design a new database schema. user: "Design the database schema for our multi-tenant SaaS app" assistant: "I'll use the database-admin agent to design an efficient schema with proper indexing" <commentary>Schema design work goes to the database-admin agent.</commentary> </example>
Use this agent when you need to investigate issues, analyze system behavior, diagnose performance problems, trace root causes, or debug test failures. <example> Context: The user needs to investigate why an API endpoint is returning 500 errors. user: "The /api/users endpoint is throwing 500 errors" assistant: "I'll use the debugger agent to investigate this issue" <commentary>Since this involves investigating an issue, use the debugger agent.</commentary> </example> <example> Context: The user notices test failures after changes. user: "Tests are failing after my refactor but I can't figure out why" assistant: "Let me use the debugger agent to analyze the test failures and trace the root cause" <commentary>Test failure analysis requires the debugger agent.</commentary> </example>
Use when reviewing a written implementation plan for UX and visual design: information hierarchy, visual consistency, state coverage, accessibility, and polish. Returns a 5-dimension 0-10 scorecard with concrete fixes. <example> Context: User has a plan with UI components and wants a design critique before implementation. user: "Review the design in this plan" assistant: "I'll dispatch the design-reviewer agent to audit hierarchy, states, and accessibility" <commentary>Pre-implementation design review of a plan — use design-reviewer.</commentary> </example> <example> Context: User suspects AI-slop design patterns in a plan. user: "Does this look generic?" assistant: "Running the design-reviewer agent — it flags gradient-everywhere and generic patterns" <commentary>Visual-quality audit — dispatch design-reviewer.</commentary> </example>
Use when reviewing a written implementation plan for developer experience: Time to Hello World, API/CLI ergonomics, error copy, docs structure, and magical moments. Returns a 5-dimension 0-10 scorecard with concrete fixes. For plans that ship developer-facing products (APIs, CLIs, SDKs, libraries). <example> Context: User is building a CLI and wants a DX review of the plan. user: "How's the DX of this plan?" assistant: "I'll dispatch the devex-reviewer agent to score TTHW and error copy" <commentary>DX pressure test on a plan — use devex-reviewer.</commentary> </example> <example> Context: User is designing an SDK and wants pre-implementation feedback. user: "Is this SDK ergonomic?" assistant: "Running the devex-reviewer agent — it checks naming, defaults, and error surfaces" <commentary>SDK ergonomics review — dispatch devex-reviewer.</commentary> </example>
Generates and maintains documentation including API docs, READMEs, code comments, and technical specifications. Ensures docs match code reality. <example> Context: User wants to update documentation after code changes. user: "The API has changed, update the docs to match" assistant: "I'll use the docs-manager agent to synchronize documentation with the codebase" <commentary>Documentation maintenance goes to the docs-manager agent.</commentary> </example>
Use when reviewing a written implementation plan for architecture, data flow, failure modes, test matrix, and rollback strategy. Returns a 5-dimension 0-10 scorecard with concrete fixes. <example> Context: User wants an architecture pressure test on a plan. user: "Does this design make sense?" assistant: "I'll dispatch the eng-reviewer agent to score architecture and failure modes" <commentary>Architecture/execution review of a plan — use eng-reviewer.</commentary> </example> <example> Context: User is about to hand off a plan and wants a final check. user: "Lock in this architecture before we start coding" assistant: "Running the eng-reviewer agent to audit data flow, edge cases, and test coverage" <commentary>Pre-implementation architecture audit — dispatch eng-reviewer.</commentary> </example>
Stage, commit, and push code changes with conventional commits. Use when the user says "commit", "push", "PR", or finishes a feature/fix.
Maintains development journals, decision logs, and progress documentation with brutal honesty. Use when significant technical failures, difficult debugging sessions, or important architectural decisions occur. <example> Context: A critical bug was found in production. user: "We just found a security hole in the auth system" assistant: "Let me use the journal-writer agent to document this incident with full context" <commentary>Critical incidents should be documented honestly — use journal-writer.</commentary> </example> <example> Context: A major refactoring effort failed. user: "The database migration completely broke order processing, rolling back" assistant: "I'll use the journal-writer to capture what went wrong and lessons learned" <commentary>Significant setbacks need honest documentation for future developers.</commentary> </example>
Designs CI/CD pipeline architectures, optimizes build processes, and implements deployment strategies. Use for pipeline design and optimization (vs cicd-manager for operational pipeline management). <example> Context: User needs to redesign their CI/CD architecture. user: "Our CI pipeline takes 20 minutes, we need to get it under 5" assistant: "I'll use the pipeline-architect agent to redesign the pipeline with optimization" <commentary>Pipeline architecture and optimization goes to pipeline-architect.</commentary> </example>
Performs security audits, reviews code for vulnerabilities, and ensures OWASP compliance. Use for manual security review (vs vulnerability-scanner for automated scanning). <example> Context: User wants a security review before release. user: "We need a security audit before we go to production" assistant: "I'll use the security-auditor agent to perform a comprehensive security review" <commentary>Security audits and compliance reviews go to the security-auditor agent.</commentary> </example>
Use this agent when you need to research, analyze, and create comprehensive implementation plans for features, system architectures, or complex technical solutions. Invoke before starting any significant implementation work. <example> Context: User needs to implement a new authentication system. user: "I need to add OAuth2 authentication to our app" assistant: "I'll use the planner agent to research OAuth2 implementations and create a detailed plan" <commentary>Complex feature requiring research and planning — use the planner agent.</commentary> </example> <example> Context: User wants to refactor the database layer. user: "We need to migrate from SQLite to PostgreSQL" assistant: "Let me invoke the planner agent to analyze the migration requirements and create a plan" <commentary>Database migration requires careful planning.</commentary> </example>
Tracks project progress, manages roadmaps, monitors task completion, and provides status reports. <example> Context: User has completed a major feature and needs progress tracking. user: "I just finished the WebSocket feature. Can you check our progress?" assistant: "I'll use the project-manager agent to analyze progress against the plan" <commentary>Project oversight and progress tracking goes to project-manager.</commentary> </example> <example> Context: Multiple tasks completed, need consolidated status. user: "What's our overall project status?" assistant: "Let me use the project-manager agent to provide a comprehensive status report" <commentary>Consolidated status reports go to project-manager.</commentary> </example>
Use this agent for comprehensive research on technologies, libraries, frameworks, and best practices. Excels at synthesizing information from multiple sources into actionable reports. <example> Context: The user needs to research a new technology. user: "I need to understand React Server Components and best practices" assistant: "I'll use the researcher agent to conduct comprehensive research on RSC" <commentary>In-depth technical research goes to the researcher agent.</commentary> </example> <example> Context: The user wants to compare authentication libraries. user: "Research the top auth solutions for our stack with biometric support" assistant: "Let me deploy the researcher agent to investigate auth libraries" <commentary>Comparative technical research with specific requirements — use researcher.</commentary> </example>
Explores external resources, documentation, APIs, and open-source projects for research and integration. Use for outward-facing exploration (vs scout for internal codebase). <example> Context: User needs to understand an external API. user: "How do I integrate with the Stripe API for subscriptions?" assistant: "I'll use the scout-external agent to research the Stripe subscription API" <commentary>External API research goes to scout-external.</commentary> </example>
Rapidly explores and maps codebases to find files, patterns, dependencies, and answer structural questions. Use for internal codebase exploration. <example> Context: User needs to find where authentication is handled. user: "Where is the auth logic in this codebase?" assistant: "I'll use the scout agent to map the authentication-related code" <commentary>Finding code locations and understanding structure — use scout.</commentary> </example> <example> Context: User needs to understand a module's dependencies. user: "What depends on the UserService?" assistant: "Let me use the scout agent to trace the dependency graph for UserService" <commentary>Dependency tracing goes to the scout agent.</commentary> </example>
Use this agent to validate code quality through testing, including running test suites, analyzing coverage, validating error handling, and verifying builds. Call after implementing features or making significant code changes. <example> Context: The user has just finished implementing a new API endpoint. user: "I've implemented the new user authentication endpoint" assistant: "Let me use the tester agent to run the test suite and validate the implementation" <commentary>Since new code has been written, use the tester agent to ensure everything works.</commentary> </example> <example> Context: The user wants to check test coverage. user: "Can you check if our test coverage is still above 80%?" assistant: "I'll use the tester agent to analyze the current test coverage" <commentary>Coverage analysis requests go to the tester agent.</commentary> </example>
Converts design mockups to production code, generates UI components with Tailwind/shadcn, and implements responsive, accessible layouts. <example> Context: User wants to create a new landing page. user: "I need a modern landing page with hero section, features, and pricing" assistant: "I'll use the ui-ux-designer agent to create a polished landing page design and implementation" <commentary>UI/UX design and implementation goes to ui-ux-designer.</commentary> </example> <example> Context: User has design inconsistencies. user: "The buttons across pages look inconsistent" assistant: "I'll use the ui-ux-designer agent to audit and fix the design system" <commentary>Design system work goes to ui-ux-designer.</commentary> </example>
Scans code and dependencies for security vulnerabilities using automated tools. Provides CVE information and remediation guidance. <example> Context: User wants to check for dependency vulnerabilities. user: "Run a security scan on our dependencies" assistant: "I'll use the vulnerability-scanner agent to scan all dependencies for known CVEs" <commentary>Automated vulnerability scanning goes to vulnerability-scanner.</commentary> </example>
Use when the user wants a full multi-angle review of a written implementation plan — strategy, architecture, UX, and developer experience all at once. Activate for keywords like "autoplan", "auto review", "review everything", "full review", "run all reviews", "auto review this plan", "review from every angle", "run the review gauntlet". Dispatches all 4 reviewer agents (ceo-reviewer, eng-reviewer, design-reviewer, devex-reviewer) in parallel, merges scorecards, and gates all recommended fixes through a single multi-select AskUserQuestion prompt. Applies selected fixes to the plan and saves a consolidated review artifact.
Use when the user wants to design, explore, or ideate on ANY new feature, architecture decision, or unclear requirement. Activate for keywords like "brainstorm", "design", "explore", "what if", "how should we", "options for", "trade-offs", or any open-ended question about implementation approach. Also trigger when requirements are vague, ambiguous, or when multiple valid solutions exist -- err on the side of brainstorming before jumping into code.
Use when waiting on external conditions like CI pipeline runs, deployments, long builds, database migrations, or test suites. Trigger for keywords like "wait for", "check status", "poll", "monitor", "is it done", "build running", "deploy in progress", or when a background process needs to complete before the next step. Also activate when using run_in_background or Monitor tools in Claude Code.
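In the simplest case, the wait/poll behavior this skill covers reduces to a shell loop with a timeout. The `poll_until` helper and the commented `gh run view` example below are illustrative sketches, not part of the skill itself:

```shell
# Poll a command until it succeeds or a timeout expires.
# Usage: poll_until TIMEOUT_SECS INTERVAL_SECS COMMAND [ARGS...]
poll_until() {
  timeout=$1
  interval=$2
  shift 2
  elapsed=0
  until "$@"; do
    elapsed=$((elapsed + interval))
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out after ${timeout}s" >&2
      return 1
    fi
    sleep "$interval"
  done
}

# Example: wait up to 10 minutes for a (hypothetical) CI run to finish.
# poll_until 600 15 sh -c 'gh run view "$RUN_ID" --json status -q .status | grep -q completed'
```

The function returns 0 as soon as the wrapped command succeeds and 1 on timeout, so callers can branch on the result.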
Use when fixing any data-related bug, when building validation for critical data paths, or when a single validation point has already failed in production. Also activate whenever you hear "it slipped through," "the check was bypassed," or "it worked in tests but not production." Apply aggressively to any scenario involving data integrity, input validation across layers, or preventing bug recurrence through structural guarantees rather than single-point fixes.
Use when containerizing applications, configuring CI/CD pipelines, deploying to environments, or deploying to edge — including Docker, Dockerfile, docker-compose, multi-stage builds, GitHub Actions, workflow YAML, matrix builds, workflow_dispatch, Cloudflare Workers, Pages, R2, D1, KV, wrangler, container registries, or deployment workflows (staging, production, health checks, smoke tests).
Use when facing 3 or more independent failures across different domains, when multiple subsystems are broken with no shared state, or when test failures span unrelated modules. Also activate whenever you see independent bugs in auth, cart, user, or other separate domains that can be fixed concurrently. Use for launching parallel background tasks like research, analysis, or code review across independent areas. Activate aggressively for any scenario where parallel work would reduce total resolution time without creating merge conflicts.
Use when there is a written implementation plan ready to execute, or when the user says "execute", "run the plan", "implement the plan", "start building", or references a plan file. Also activate when using subagent-driven development with independent tasks, when the user wants automated execution with quality gates, or when picking up a previously written plan. If a plan document exists and no one is executing it yet, this is the skill to use.
Use when implementing a complete feature end-to-end — from requirements analysis through planning, implementation, testing, and review. Trigger for keywords like "feature", "implement", "build", "add functionality", "end-to-end", or any task that spans planning through delivery. Also activate when the user provides a feature description, issue reference, or requirement spec that needs a structured development workflow.
Use when implementation is complete and all tests pass, when ready to merge a feature branch, create a PR, or clean up after development. Use whenever you hear "ship it," "ready to merge," "branch is done," or "create a PR." Activate at the end of any feature, bugfix, or chore branch lifecycle to ensure proper verification, option presentation, and worktree cleanup.
Use when committing code, creating pull requests, shipping changes, or generating changelogs. Trigger for keywords like "commit", "push", "PR", "pull request", "ship", "merge", "changelog", "release notes", "conventional commits", or any git workflow beyond basic status/diff. Also activate when preparing code for review or automating the commit-to-PR pipeline.
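The conventional-commit flow this skill automates reduces to a few git commands. The demo below runs in a throwaway repo; the type/scope values and branch name are examples, and `gh pr create` assumes the GitHub CLI is installed:

```shell
# Demo of the conventional-commit flow in a throwaway repo;
# in a real project you would run these on your feature branch.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

# Conventional message format: <type>(<scope>): <summary>
git commit --allow-empty -q -m "feat(auth): add OAuth2 token refresh"
git log -1 --format=%s

# Pushing and opening a PR would follow (requires a remote and the GitHub CLI):
# git push -u origin feat/oauth2-refresh
# gh pr create --fill
```

Types like `feat`, `fix`, `docs`, `refactor`, and `chore` let changelog tooling group commits automatically.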
Interactive setup wizard for claudekit. Scaffolds rules, modes, hooks, and MCP server configs into the user's project. Run /claudekit:init to configure. Use when setting up a new project with claudekit or reconfiguring an existing one.
Use when the user wants to switch behavioral modes for the session — adjusting communication style, output format, and problem-solving approach. Trigger for keywords like "mode", "switch mode", "brainstorm mode", "token-efficient", "deep-research mode", "implementation mode", "review mode", "orchestration mode", or any request to change how Claude responds for the remainder of the session.
Use when reviewing code for security vulnerabilities, implementing authentication or authorization flows, handling user input validation, or building web endpoints exposed to untrusted data. Trigger on keywords like XSS, SQL injection, CSRF, input sanitization, password hashing, security headers, "security scan", "vulnerability scan", "npm audit", or "pip-audit". Also apply when auditing existing code for OWASP Top 10 compliance, scanning dependencies for known vulnerabilities, detecting hardcoded secrets, or conducting security-focused code reviews.
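One check mentioned above, detecting hardcoded secrets, can be approximated with a grep pass. The pattern below is a deliberately crude sketch; a real audit would use a dedicated scanner (e.g. gitleaks or trufflehog) plus `npm audit`/`pip-audit` for dependencies:

```shell
# Crude heuristic for hardcoded credentials in a source tree.
scan_secrets() {
  grep -rEn -i "(api[_-]?key|secret|password|token)[[:space:]]*[:=][[:space:]]*[\"']" "$1"
}

# Demo against a throwaway directory with a fake key:
dir=$(mktemp -d)
printf 'api_key = "sk-test-not-a-real-key"\n' > "$dir/config.py"
printf 'retries = 3\n' > "$dir/ok.py"
scan_secrets "$dir"
```

`grep` exits 0 when a match is found and 1 otherwise, so the helper doubles as a CI gate.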
Use when analyzing or optimizing code performance — including profiling, benchmarking, fixing N+1 queries, reducing bundle size, eliminating memory leaks, or improving algorithm complexity. Trigger for keywords like "slow", "performance", "optimize", "profiling", "memory leak", "bundle size", "N+1", "re-render", "benchmark", "latency", "throughput", or any request to make code faster. Also activate when investigating production performance issues or when code review flags performance concerns.
Use when the user wants strategic/scope review of a written implementation plan. Activate for keywords like "review my plan", "think bigger", "is this ambitious enough", "scope review", "strategy review", "expand scope", "10-star product", "what should we build", "is this worth building at this scope". Reviews a plan doc on 5 dimensions (ambition, problem clarity, wedge focus, demand reality, future-fit), scores 0-10 each, proposes concrete fixes, and applies user-selected fixes to the plan. Dispatches the ceo-reviewer agent for scoring.
Use when the user wants a UX/visual design review of a written implementation plan with UI components. Activate for keywords like "review the design plan", "design critique", "is the UX right", "check hierarchy", "visual review of the plan", "does this look generic", "avoid AI slop". Reviews a plan doc on 5 dimensions (information hierarchy, visual consistency, state coverage, accessibility, polish vs AI slop), scores 0-10 each, proposes concrete fixes, and applies user-selected fixes. Dispatches the design-reviewer agent.
Use when the user wants a developer-experience review of a written implementation plan for APIs, CLIs, SDKs, libraries, or docs. Activate for keywords like "review the DX", "is this SDK ergonomic", "devex review", "API design review", "time to hello world", "how's the CLI". Reviews a plan doc on 5 dimensions (Time to Hello World, API/CLI ergonomics, error copy, docs structure, magical moments), scores 0-10 each, proposes concrete fixes, and applies user-selected fixes. Dispatches the devex-reviewer agent.
Use when the user wants an architecture/execution review of a written implementation plan. Activate for keywords like "review the architecture", "does this design make sense", "lock in the plan", "engineering review", "architecture review", "audit this plan", "pre-implementation review". Reviews a plan doc on 5 dimensions (data flow, failure modes, edge cases & invariants, test matrix, rollback & migration), scores 0-10 each, proposes concrete fixes, and applies user-selected fixes. Dispatches the eng-reviewer agent for scoring.
Use when writing, debugging, or configuring E2E tests with Playwright. Trigger for any mention of end-to-end testing, browser automation, page objects, visual regression, storageState auth, playwright.config, or cross-browser testing. Also use when setting up E2E in CI, testing critical user flows, or debugging flaky browser tests.
Use when code review feedback is received, whether from human reviewers, automated tools, or PR comments. Use when processing review comments, handling review rejections, iterating on feedback cycles, or deciding how to prioritize critical vs minor issues. Activate aggressively any time review feedback arrives -- categorize, prioritize, fix critical issues first, and re-request review with a clear summary of changes made.
Use when improving code structure, readability, or maintainability without changing behavior. Trigger for keywords like "refactor", "clean up", "extract", "simplify", "rename", "restructure", "code smell", "technical debt", "DRY", or any request to improve code quality without adding features. Also activate when code reviews identify structural issues, when functions are too long, or when duplication needs elimination.
Use when completing any task, implementing a feature, fixing a critical bug, or before merging to a main branch. Use whenever code is ready for feedback, when unsure about an implementation approach, or when changes touch security, authentication, or data handling. Activate before any PR creation or branch merge to ensure reviewers have complete context, clear scope, and focused areas of concern.
Use when a bug manifests far from its origin, when stack traces show multiple layers of indirection, or when data corruption appears with no obvious source. Use for any scenario involving "it was already wrong by the time it got here," deep execution stack errors, constraint violations caused by upstream failures, or mysterious data state issues. Always prefer this over surface-level fixes when the error location differs from the bug location.
Use when facing any complex problem requiring careful step-by-step reasoning, evidence collection, and confidence tracking. Use when debugging has multiple possible causes, when making architecture decisions with trade-offs, during security analysis or audits, for performance investigations, or whenever decisions need explicit documentation. Activate aggressively for any scenario where jumping to conclusions would be risky or where the reasoning chain matters as much as the answer.
Use when managing session state — including saving/restoring checkpoints, generating project structure indexes, loading project components into context, or checking project status. Trigger for keywords like "checkpoint", "save state", "restore", "index", "project structure", "load context", "status", "what's the state", or any request to manage the working session. Also activate when resuming work from a previous session or when needing to understand the current project layout.
Use when executing implementation plans with independent tasks in the current session. Trigger when 3+ independent tasks exist, when a plan is ready to execute with the Agent tool, or when the user says "use subagents", "dispatch agents", "parallel implementation". Also activate when tasks touch different files/modules with no shared state, making them safe to parallelize via Claude Code's Agent tool.
Use when encountering ANY bug, error, test failure, or unexpected behavior. Activate for keywords like "bug", "error", "failing", "broken", "doesn't work", "unexpected", "crash", "exception", "TypeError", "undefined", stack traces, or any error message. Also trigger when tests fail unexpectedly, when behavior differs from expectations, when investigating production incidents, or when flaky/intermittent issues appear. ALWAYS investigate root cause before proposing fixes -- never guess at solutions.
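Root-cause investigation often starts by locating the commit that introduced a regression, and `git bisect` automates that binary search. This demo builds a throwaway history where commit 7 introduces the breakage; the file contents and commit count are fabricated for illustration:

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

# Build 10 commits; the "bug" lands in commit 7.
for i in 1 2 3 4 5 6 7 8 9 10; do
  if [ "$i" -lt 7 ]; then echo "version $i good" > app.txt; else echo "version $i broken" > app.txt; fi
  git add app.txt
  git commit -q -m "commit $i"
done

# Binary-search history: HEAD is bad, the first commit is known good.
first=$(git rev-list --max-parents=0 HEAD)
git bisect start HEAD "$first" >/dev/null
git bisect run grep -q good app.txt >/dev/null
bad=$(git log -1 --format=%s refs/bisect/bad)
echo "first bad commit: $bad"
git bisect reset >/dev/null
```

`git bisect run` treats exit 0 as good and non-zero as bad, so any test command (here a `grep`, in real use your test suite) drives the search.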
Use when writing new features, fixing bugs, or changing any behavior in production code. Activate for keywords like "implement", "add feature", "fix bug", "write code", "build", "create endpoint", "add functionality", or any task that will result in production code changes. Also trigger when the user asks to refactor existing code, when tests need to be written, or when someone says "TDD". This skill should be the default for ALL implementation work -- no production code without a failing test first.
Use when writing, reviewing, or debugging tests. Activate for keywords like "mock", "stub", "test helper", "flaky test", "test passes but bug ships", "false positive", "test coverage", or when tests seem unreliable. Also trigger when reviewing test code in PRs, when tests pass but production breaks, when someone proposes heavy mocking, or when test failures are intermittent. If any test smells wrong or feels like it is not actually verifying real behavior, this skill applies.
Use when writing, debugging, or configuring unit or integration tests with pytest, Vitest, or Jest. Also activate for fixtures, mocking, coverage, parametrization, jest.mock, vi.mock, jest.fn, vi.fn, conftest.py, vitest.config.ts, jest.config, Testing Library, @jest/globals, or any test configuration.
Use when starting feature work that needs isolation from the current workspace, before executing implementation plans, or when working on multiple branches simultaneously. Trigger for keywords like "worktree", "isolated branch", "parallel branches", "feature isolation", or when dispatching subagents that need separate working directories. Also activate when the user wants to test against main while developing on a feature branch.
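The isolation this skill relies on comes from `git worktree`. The branch and directory names below are illustrative, and the demo runs in a throwaway repo:

```shell
# Demo in a throwaway repo; branch and directory names are examples.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit --allow-empty -q -m "init"

# Each worktree is a separate directory sharing the same object store,
# so a subagent (or you) can build on a feature branch in isolation:
git worktree add -q -b feat/notifications "$repo-feature"
git worktree list

# After merging, remove the worktree and its branch:
# git worktree remove "$repo-feature"
# git branch -d feat/notifications
```

Because worktrees share one object store, creating them is cheap compared to a second clone.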
Use when about to claim ANY work is complete, fixed, passing, or done. Activate whenever you are tempted to say "done", "fixed", "tests pass", "build succeeds", "deployed", or any completion claim. Also trigger before committing code, before creating PRs, before responding to the user that a task is finished, or when reviewing agent-produced work. This is mandatory -- NEVER claim completion without running verification commands and reading their output. Evidence before assertions, always.
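The evidence-before-assertions rule above boils down to running the check and reading its real exit status. The `verify` wrapper is a sketch; `true`/`false` stand in for real commands like `npm test`:

```shell
# Run a check, read its real exit status, and only then make a claim.
verify() {
  out=$("$@" 2>&1)
  status=$?
  if [ "$status" -eq 0 ]; then
    echo "VERIFIED: $* (exit 0)"
  else
    echo "NOT DONE: $* failed (exit $status)"
    printf '%s\n' "$out"
  fi
  return "$status"
}

# Stand-in commands; in real use wrap your build/test/deploy checks.
verify true
verify false || echo "refusing to claim completion"
```

Since `verify` propagates the wrapped command's exit status, it composes with `&&`/`||` chains and CI steps.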
Use this skill when optimizing token usage, reducing response verbosity, or working in high-volume development sessions. Trigger for any mention of token savings, cost optimization, concise output, compressed responses, or the --format=concise/ultra flags. Also applies during repetitive tasks, quick iterations, simple clear requests, or when the user activates token-efficient mode. This is a cross-cutting optimization that applies to all other skills.
Use when a multi-step implementation task needs to be broken down before coding begins. Activate for keywords like "plan", "break down", "implementation steps", "task list", "how to implement", "write a plan", or when a feature spans multiple files or components. Also trigger when handing off work to another developer, when the user says "let's plan this out", or when a task is complex enough that jumping straight to code would be risky. If in doubt, plan first.
Use when creating new skills for this Claude Code kit, editing existing skills, or verifying skills work before deployment. Trigger for keywords like "create a skill", "new skill", "write a skill", "edit skill", "improve skill", "skill format", or when the user wants to add a new capability to the kit. Also activate when auditing skill quality or checking that descriptions trigger correctly.