Complete plugin development toolkit for creating, refactoring, and validating Claude Code plugins and agents. Use when creating new plugins/skills/agents, refactoring existing plugins/skills, validating frontmatter, or restructuring plugin components. Includes specialized agents for assessment, planning, execution, and validation workflows.
npx claudepluginhub jamie-bitflight/claude_skills --plugin plugin-creator
Creates Claude Code agent files from requirements — handles discovery, template selection, frontmatter generation, scope determination (project/user/plugin), and plugin.json updates. Use when the user asks to create an agent, generate an agent, add an agent to a plugin, or describes agent functionality they need. Trigger phrases — 'create an agent', 'add an agent', 'build a new agent', 'make me an agent that', 'I need an agent for'. Examples — <example>Context — User wants a code review agent. User says 'Create an agent that reviews code for quality issues'. I will use the agent-creator agent to generate the agent configuration. User requesting new agent creation triggers agent-creator.</example> <example>Context — User wants to add agent to plugin. User says 'Add an agent to my plugin that validates configurations'. I will use the agent-creator agent to generate a configuration validator agent. Plugin development with agent addition triggers agent-creator.</example>
Optimize prompts, SKILL.md, and CLAUDE.md files for better Claude comprehension using self-verifying methodology. Use when improving prompt effectiveness, rewriting instructions for AI consumption, analyzing ineffective prompts, or refining system prompts and agent configurations. Applies RT-ICA pre-check and CoVe post-check to ensure verified optimization with token impact reporting and structural enforcement recommendations.
Creates Claude Code hook scripts for plugins — generates Node.js .cjs files, wires hooks.json, selects correct event and scope. Use when creating hooks, wiring PostToolUse or PreToolUse logic, enforcing validation on tool calls, or building SessionStart context injection. Trigger phrases — create a hook, add a hook to my plugin, build a PostToolUse hook, I need a hook that. <example>User asks to block rm -rf with a PreToolUse hook — hook-creator generates the .cjs script and wires hooks.json.</example> <example>User asks to inject project context on SessionStart — hook-creator builds the context injection script.</example> <example>User asks to run prettier after every Write — hook-creator wires a PostToolUse formatter hook.</example>
Analyze Claude Code plugins for structural correctness, frontmatter optimization, schema compliance, and enhancement opportunities. Use when reviewing plugins before marketplace submission, auditing existing plugins, validating plugin structure, or identifying improvements. Handles large plugins with many reference files. Detects orphaned documentation, duplicate content, and missing cross-references.
Execute refactoring tasks from approved task files with parallel orchestration and dependency management. Use when implementing changes from refactoring plans, running specific tasks from task files, or executing approved refactoring work. Delegates to specialized agents based on task type (SKILL_SPLIT, AGENT_OPTIMIZE, DOC_IMPROVE) and tracks completion status. Handles failure recovery and generates execution reports.
Analyze plugin structure and create comprehensive executable refactoring plans with prioritized tasks and parallelization strategy. Use when planning plugin refactoring, breaking down large refactoring efforts into executable tasks, splitting oversized skills that exceed validator token thresholds (SK006/SK007), or assessing plugin quality before systematic improvements. Identifies refactoring opportunities, maps dependencies, and generates task files for execution.
Validate plugin refactoring completeness — verifies task completion, plugin structure integrity, and regression absence. Use when refactoring results need verification, when checking refactoring goals were achieved without content loss, when checking for regressions after changes, or when validating plugin structure after systematic improvements. Runs skilllint and generates comprehensive validation reports with quality metrics.
Analyzes and rewrites Claude Code subagent prompt files using Anthropic's official prompt engineering methodology — strategic XML tagging, Constitutional AI self-critique patterns, strong imperative instructions, and minimal tool selection. Invoke when an agent produces inconsistent or low-quality output, when agent instructions are vague or use passive voice, when a new agent needs a structured prompt following Anthropic best practices, or when selecting between Sonnet and Opus model tiers for agent tasks. Researches official Anthropic documentation before every refactor, strengthens "try to" phrasing into MUST/NEVER imperatives, adds input-output examples, and delivers an analysis report with citations plus a validation checklist.
Add an automated documentation updater to any Claude skill. Creates a Python sync script that downloads upstream docs, processes markdown for AI consumption, and maintains a local cache with configurable refresh. Collects template variables, then delegates implementation through a 5-phase workflow. Use when adding auto-updating reference documentation to plugins or skills.
Runs the description-drift experiment — spawns all Claude Code agents simultaneously to collect self-reported capabilities, then compares them against static frontmatter descriptions to reveal how reliable orchestrator routing based on descriptions actually is. Use when measuring description drift across the agent fleet, re-running the capability collection experiment, analyzing a specific agent's self-reported capabilities, or auditing whether frontmatter descriptions accurately reflect agent behavior.
Create high-quality Claude Code agents from scratch or by adapting existing agents as templates. Use when the user wants to create a new agent, modify agent configurations, build specialized subagents, or design agent architectures. Guides through requirements gathering, template selection, and agent file generation following Anthropic best practices (v2.1.63+).
Agent Skills Open Standard reference (agentskills.io). Use when creating portable skills for Claude Code, Cursor, Gemini CLI, OpenAI Codex, VS Code, Roo Code, and 20+ compatible agents. Covers frontmatter schema, naming rules, directory structure, progressive disclosure, validation, and authoring. Load before creating cross-agent skills.
Knowledge reference for Autonomous Refinement Loop research — pattern research into prerequisites for autonomous execution without synchronous human blocking gates. Defines failure categories, prerequisites, and conditions for replacing human judgment with machine-verifiable checks. Use when designing or evaluating autonomous agent loops, gate conditions, or HOOTL execution patterns.
Assess a plugin and create refactoring task files for parallel agent execution. Use when you need to analyze a plugin structure, score its quality, and generate a phased refactoring plan with design map and implementation tasks.
Audit agent lifecycle — validates agent execution capability against configuration. Accepts plugin path, runs 8 semantic audits (capability vs config alignment, skill loading correctness, inter-agent contracts, prompt contradictions, tool sufficiency, dead agents, scriptable patterns, pattern learning), writes reports to .claude/audits/. Use when auditing agent lifecycle, checking agent capabilities, verifying tool access, finding dead agents, validating agent contract alignment, or confirming agents can execute workflows.
Evaluate a single skill's quality against 8 completeness categories derived from Anthropic's official skills repository. Scores preparation, progression, verification, scripts, examples, anti-patterns, references, and assets. Generates scored report to .claude/audits/. Use when auditing skill quality, checking marketplace readiness, evaluating skill completeness score, performing pre-publication evaluation, or comparing to Anthropic skills.
Audit skill lifecycle by tracing call chains, detecting circular dependencies, finding instruction contradictions, identifying duplicated datasets, analyzing bidirectional coherence, discovering scriptable sequences, and learning patterns. Use when checking skill coherence, validating skill workflow, finding semantic gaps in plugin structure, or auditing plugin before marketplace submission. Generates audit reports to .claude/audits/ with findings by dimension.
Complete reference for Claude Code plugins system (January 2026). Use when creating plugins, understanding plugin.json schema, marketplace configuration, bundling skills/commands/agents/hooks/MCP/LSP servers, plugin caching, validation, or distribution. Covers plugin components, directory structure, installation scopes, environment variables, CLI commands, debugging, and enterprise features.
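To ground the plugin.json schema mentioned above, here is a minimal sketch of a manifest. The field values are illustrative, and the exact set of required fields should be checked against the schema section of this reference:

```json
{
  "name": "my-plugin",
  "version": "1.0.0",
  "description": "Short plugin summary shown during discovery and in marketplaces.",
  "author": { "name": "Jane Dev" }
}
```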
Reference guide for Claude Code skills system (March 2026). Use when creating, modifying, or understanding skills, SKILL.md format, frontmatter fields, hooks, context fork, or skill best practices.
Create and configure slash commands for Claude Code — the legacy .claude/commands/ format. Use when asked to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or for guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices. Commands are the legacy format superseded by skills — for new development prefer /plugin-creator:skill-creator.
Decide which plugin component type to use and how to organize components at scale. Covers the component lifecycle (discovery and activation phases), decision framework for choosing between commands, skills, agents, hooks, and MCP servers, and organization patterns for each component type. Use when asking "which component type should I use", "command vs skill vs agent", "when to use a hook vs MCP server", "component lifecycle", "how to organize plugin components", "plugin structure patterns", or "scale a plugin with many components".
Use when refactoring is complete and needs validation. Performs holistic review of completed plugin refactoring, validates improvements against original assessment score, checks for documentation drift, and creates follow-up task files if issues remain.
Autonomous feature research and gap analysis. Use when starting /add-new-feature or analyzing existing architecture documents. Explores codebase patterns, identifies ambiguities, and produces feature-context-{slug}.md for orchestrator RT-ICA phase. Does NOT make technical decisions.
Guide for creating Claude Code plugin hooks — Node.js .cjs scripts only, hooks.json configuration, event selection, prompt-based vs command hooks, ${CLAUDE_PLUGIN_ROOT} paths, stdio suppression, timeout sizing, and testing. Use when adding hooks to a plugin, creating PreToolUse/PostToolUse/Stop/SubagentStop/SessionStart/UserPromptSubmit hooks, or wiring hook scripts to hooks.json.
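As a sketch of the wiring described above, a hooks.json entry that runs a Node.js .cjs script on PreToolUse might look like the following. The script filename and timeout value are assumptions for illustration; verify field names and structure against the current hooks reference:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "node \"${CLAUDE_PLUGIN_ROOT}/hooks/block-rm.cjs\"",
            "timeout": 10
          }
        ]
      }
    ]
  }
}
```

The `${CLAUDE_PLUGIN_ROOT}` expansion keeps the script path portable across installation locations, as noted above.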
Hook system fundamentals — all events, configuration structure, matchers per event type, environment variables, execution behavior, security, and debugging. Use when creating hooks, understanding hook events, matchers, configuration locations, environment variables, or troubleshooting hook issues.
Cross-platform hooks reference for AI coding assistants — Claude Code, GitHub Copilot, Cursor, Windsurf, Amp. Covers hook authoring in Node.js CJS and Python, per-platform event schemas, inline-agent hooks and MCP in agent frontmatter, common JSON I/O, exit codes, best practices, and a fetch script to refresh docs from official sources. Use when writing, reviewing, or debugging hooks for any AI assistant.
Hook JSON input/output API reference — what data hooks receive via stdin and what JSON they can return to control Claude Code behavior. Use when writing hook scripts, checking exit code behavior, building JSON output for PreToolUse permissions, or understanding event-specific input schemas.
Hook recipes and working examples — plugin hooks, frontmatter hooks in skills/agents/commands, prompt-based LLM hooks, and complete code examples in Python and Node.js. Use when building hook scripts, integrating hooks into plugins, implementing prompt-based hooks, or looking for hook configuration patterns.
Use when a refactoring task file exists from /assessor and tasks need execution. Reads task files, resolves dependencies, delegates to specialist agents (SKILL_SPLIT, AGENT_OPTIMIZE, DOC_IMPROVE), and tracks completion with parallel orchestration.
Use when checking skill quality, validating frontmatter before commit, or diagnosing validator warnings. Runs the plugin validator on a skill, agent, or plugin directory — reports token complexity, broken links, frontmatter issues, and structural problems. Pass the path as an argument.
Integrate MCP servers into Claude Code plugins — covers .mcp.json configuration, plugin.json mcpServers field, server types (stdio, SSE, HTTP, WebSocket), environment variable expansion, tool naming conventions, OAuth and token authentication, security best practices, and testing workflows. Use when adding an MCP server to a plugin, configuring MCP authentication, debugging MCP tool discovery, setting up Model Context Protocol integration, or choosing between stdio and SSE transport types.
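A minimal .mcp.json sketch for a stdio server, as described above. The server name, script path, and environment variable are hypothetical; the `${VAR}` expansion syntax follows the environment-variable expansion behavior this reference covers:

```json
{
  "mcpServers": {
    "my-docs-server": {
      "command": "node",
      "args": ["${CLAUDE_PLUGIN_ROOT}/servers/my-server.js"],
      "env": {
        "DOCS_API_KEY": "${DOCS_API_KEY}"
      }
    }
  }
}
```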
Configure and manage Claude Code persistent memory (CLAUDE.md, auto memory, rules) across sessions. Use when setting up project memory, organizing .claude/rules/, managing auto memory files, creating CLAUDE.md with imports, debugging memory loading, or advising on memory hierarchy and best practices.
Define and develop plugin mission statements — purpose, values, anti-patterns, and trade-offs. Use when creating a new plugin, auditing an existing plugin's alignment, or providing a reference for the alignment check loop to evaluate decisions against. Produces mission.json with [draft] status and creates a backlog interview task for the human to refine it.
Optimize CLAUDE.md, SKILL.md, agent definitions, and other AI-facing files for Claude comprehension and economy. Measures baseline metrics, delegates to @contextual-ai-documentation-optimizer agent with file-type-specific context, runs independent verification via second agent, measures post-optimization metrics, and presents comprehensive before/after report. Supports iterative mode for large targets. Use when improving prompt effectiveness, reducing token waste, rewriting instructions for LLM consumption, or enhancing files with latest Claude Code features. Invoke with /optimize-claude-md <file-or-directory>.
Use when writing delegation instructions to subagents, authoring CLAUDE.md files, rules, skills, or agent definitions, or auditing existing AI-facing content for bloat. Activates on "write a rule for", "add to CLAUDE.md", "create an agent", "update memory", or any request to author AI-facing instruction content. Removes discoverable data, explained-away knowledge, invented constraints, and stale cached facts.
Configure Claude Code permissions — tool approval rules, permission modes, managed policies, and sandboxing. Use when setting up permission rules, configuring allow/deny/ask policies, debugging permission prompts, deploying managed settings for organizations, or controlling Bash/Read/Edit/WebFetch/MCP/Agent tool access.
Plugin creator documentation index. Load when needing to read about plugin validation error codes, skill creation, agent creation, or plugin structure.
Use when creating a new Claude Code plugin from scratch — orchestrates prerequisite check, user discussion, parallel research, design with verification, atomic implementation, multi-layer validation, documentation, and final verification. For existing plugin improvement, use /plugin-creator:plugin-lifecycle instead.
Orchestrate the full plugin development lifecycle from blank canvas to marketplace-ready. Use when creating a new plugin, improving an existing plugin, fixing validation errors, or taking a plugin through assessment, research, design, creation, debugging, optimization, and verification. Complements /plugin-creator:plugin-creator which provides the detailed new-plugin creation workflow with discussion capture, parallel research, and atomic implementation.
Per-project plugin configuration via .local.md files — covers the .claude/plugin-name.local.md pattern for storing user-configurable settings with YAML frontmatter and markdown body. Use when implementing plugin settings, reading YAML frontmatter from hooks, creating configuration-driven behavior, managing agent state files, or adding per-project plugin configuration. Covers file structure, parsing techniques, common patterns (temporarily active hooks, agent state management, configuration-driven behavior), security considerations, and best practices.
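A minimal sketch of parsing the .local.md pattern described above from a hook script. It assumes a YAML frontmatter block delimited by `---` lines followed by a markdown body, and handles only flat `key: value` pairs; a real implementation would use a proper YAML library:

```javascript
// Hypothetical .local.md parser: split frontmatter from markdown body.
function parseLocalMd(text) {
  const match = text.match(/^---\r?\n([\s\S]*?)\r?\n---\r?\n?([\s\S]*)$/);
  if (!match) return { frontmatter: {}, body: text };
  const frontmatter = {};
  for (const line of match[1].split(/\r?\n/)) {
    const idx = line.indexOf(":");
    if (idx === -1) continue; // skip non key:value lines
    frontmatter[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { frontmatter, body: match[2] };
}

// Example document following the described .claude/plugin-name.local.md shape:
const sample = `---
enabled: true
log_level: debug
---
Free-form notes for the plugin live here.
`;
```

A hook would read the file with `fs.readFileSync`, call `parseLocalMd`, and branch on the frontmatter values (for example, exiting early when `enabled` is not `"true"`).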
Optimize CLAUDE.md files and Skills for Claude Code CLI. Use when reviewing, creating, or improving system prompts, CLAUDE.md configurations, or Skill files. Transforms negative instructions into positive patterns following Anthropic's official best practices.
Start a complete plugin refactoring workflow that analyzes plugin structure, creates a refactoring plan with tasks, and guides through execution. Use when you need to refactor an entire plugin — triggers assessment, design, planning, and parallel agent execution phases.
Assess and refactor oversized or multi-domain skills. First determines whether splitting or references/ extraction is appropriate — then executes the correct action. Use when a skill exceeds token thresholds (SK006/SK007) or covers multiple independent domains. Performs candidate assessment before any structural changes; cohesive single-intent skills are redirected to references/ extraction instead of splitting. When splitting is warranted — domain analysis gate, split plan, new SKILL.md generation, validation, and backwards-compatible facade conversion.
Reverse Thinking - Information Completeness Assessment. Mandatory pre-planning checkpoint that blocks planning until prerequisites are verified. Use when receiving specs, PRDs, tickets, RFCs, architecture designs, or any multi-step engineering task. Integrates with CoVe-style planning pipelines. Invoke BEFORE creating plans, delegating to agents, or defining acceptance criteria.
Use when creating a new skill or updating an existing skill that extends Claude's capabilities with specialized knowledge, workflows, or tool integrations. Activates on "create a new skill", "add a skill to plugin", or "update existing skill".
Start or complete a specific refactoring task from a task file. Use when a sub-agent needs to pick up a refactoring task, update its status, implement acceptance criteria, and run verification steps.
Analysis criteria, transformation patterns, output format, and validation checklist for refactoring Claude Code agent prompt files. Load this skill when preparing to run the subagent-refactorer agent or when reviewing agent prompt files for structural, model optimization, or instruction quality improvements.
Write or rewrite frontmatter description fields for Claude Code skills and agents. Use when creating new skills/agents, description exceeds 1024 characters, description uses forbidden YAML multiline indicators (>-, |-), description contains colons that trigger quoting, description lacks trigger keywords, or when optimizing descriptions for AI tool selection. Ensures descriptions are single-line, complete, informative, third-person, front-loaded with trigger conditions.
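An illustrative frontmatter sketch following the rules above (single-line, third-person, trigger conditions front-loaded, no multiline indicators). The skill name and wording are hypothetical:

```yaml
name: frontmatter-validator
description: Validate plugin frontmatter before commit. Use when checking skill quality, diagnosing validator warnings, or fixing description formatting errors.
```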
Reliable automation, in-depth debugging, and performance analysis in Chrome using Chrome DevTools and Puppeteer.
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Claude Code skills for Godot 4.x game development - GDScript patterns, interactive MCP workflows, scene design, and shaders.
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Use this agent when you need expert assistance with React Native development tasks including code analysis, component creation, debugging, performance optimization, or architectural decisions. Examples: <example>Context: User is working on a React Native app and needs help with a navigation issue. user: 'My stack navigator isn't working properly when I try to navigate between screens' assistant: 'Let me use the react-native-dev agent to analyze your navigation setup and provide a solution' <commentary>Since this is a React Native specific issue, use the react-native-dev agent to provide expert guidance on navigation problems.</commentary></example> <example>Context: User wants to create a new component that follows the existing app structure. user: 'I need to create a custom button component that matches our app's design system' assistant: 'I'll use the react-native-dev agent to create a button component that aligns with your existing codebase structure and design patterns' <commentary>The user needs React Native component development that should follow existing patterns, so use the react-native-dev agent.</commentary></example>