Research-backed skill creation workflow with automated firecrawl research gathering, multi-tier validation, and comprehensive auditing. Use when the user asks to "create skills with research automation", "build research-backed skills", "validate skills end-to-end", or "automate skill research and creation", needs a 9-phase workflow from research through final audit, wants firecrawl-powered research combined with validation, or requires quality-assured skill creation following Anthropic specifications for Claude Code.
/plugin marketplace add basher83/lunar-claude
/plugin install meta-claude@lunar-claude

This skill inherits all available tools. When active, it can use any tool Claude has access to.
- references/design-principles.md
- references/error-handling.md
- references/troubleshooting.md
- references/workflow-architecture.md
- references/workflow-examples.md
- references/workflow-execution.md
- scripts/format_skill_research.py
- workflows/visual-guide.md

Comprehensive workflow orchestrator for creating high-quality Claude Code skills with automated research, content review, and multi-tier validation.
Use skill-factory when creating skills with automated research, running end-to-end validation, or producing quality-assured skills that follow Anthropic specifications.
Scope: Creates skills for ANY purpose (not limited to the meta-claude plugin).
The skill-factory provides 9 specialized commands for the create-review-validate lifecycle:
| Command | Purpose | Use When |
|---|---|---|
| /meta-claude:skill:research | Gather domain knowledge using firecrawl API | Need automated web scraping for skill research |
| /meta-claude:skill:format | Clean and structure research materials | Have raw research needing markdown formatting |
| /meta-claude:skill:create | Initialize skill structure with references | Ready to scaffold skill directory from research |
| /meta-claude:skill:write | Synthesize references into SKILL.md content | Skill initialized but needs content written |
| /meta-claude:skill:review-content | Validate content quality and clarity | Need content review before compliance check |
| /meta-claude:skill:review-compliance | Run quick_validate.py on SKILL.md | Validate YAML frontmatter and naming conventions |
| /meta-claude:skill:validate-runtime | Test skill loading in Claude context | Verify skill loads without syntax errors |
| /meta-claude:skill:validate-integration | Check for conflicts with existing skills | Ensure no duplicate names or overlaps |
| /meta-claude:skill:validate-audit | Invoke claude-skill-auditor agent | Get comprehensive audit against Anthropic specs |
Power user tip: Commands work standalone or orchestrated. Use individual commands for targeted fixes, or invoke the skill for full workflow automation.
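For example, a full run versus a targeted re-check might look like this (illustrative invocation only; the skill name comes from the walkthrough later in this document):

```
# Full workflow: orchestrate all phases for a new skill
skill-factory coderabbit docs/research/skills/coderabbit/

# Targeted fix: re-run only the compliance check on an existing skill
/meta-claude:skill:review-compliance <skill-path>
```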
Visual learners: See workflows/visual-guide.md for decision trees, state diagrams, and workflow visualizations.
Creating new skill (full workflow):
- skill-factory <skill-name> <research-path>
- skill-factory <skill-name> (includes firecrawl research)
- skill-factory <skill-name> → Select "Skip research"

Using individual commands (power users):
| Scenario | Command | Why |
|---|---|---|
| Need web research for skill topic | /meta-claude:skill:research <name> [sources] | Automated firecrawl scraping |
| Have messy research files | /meta-claude:skill:format <research-dir> | Clean markdown formatting |
| Ready to scaffold skill directory | /meta-claude:skill:create <name> <research-dir> | Creates structure with references |
| Skill initialized, needs content | /meta-claude:skill:write <skill-path> | Synthesizes references into SKILL.md |
| Content unclear or incomplete | /meta-claude:skill:review-content <skill-path> | Quality gate before compliance |
| Check frontmatter syntax | /meta-claude:skill:review-compliance <skill-path> | Runs quick_validate.py |
| Skill won't load in Claude | /meta-claude:skill:validate-runtime <skill-path> | Tests actual loading |
| Worried about name conflicts | /meta-claude:skill:validate-integration <skill-path> | Checks existing skills |
| Want Anthropic spec audit | /meta-claude:skill:validate-audit <skill-path> | Runs claude-skill-auditor |
- When to use full workflow: Creating new skills from scratch
- When to use individual commands: Fixing specific issues, power-user iteration
For full workflow details, see Quick Start section below.
If you have research materials ready:
# Research exists at docs/research/skills/<skill-name>/
skill-factory <skill-name> docs/research/skills/<skill-name>/
The skill will skip the research phase and proceed through formatting, creation, writing, and all validation phases.
If starting from scratch:
# Let skill-factory handle research
skill-factory <skill-name>
The skill will ask about research sources and proceed through full workflow.
User: "Create a skill for CodeRabbit code review best practices"
skill-factory detects that no research path was provided and asks:
"Have you already gathered research for this skill?
[Yes - I have research at <path>]
[No - Help me gather research]
[Skip - I'll create from knowledge only]"
User: "No - Help me gather research"
skill-factory proceeds through Path 2:
1. Research skill domain
2. Format research materials
3. Create skill structure
... (continues through all phases)
Your role: You are the skill-factory orchestrator. Your task is to guide the user through creating a high-quality, validated skill using 9 primitive slash commands.
Analyze the user's prompt to determine which workflow path to use:
If research path is explicitly provided:
User: "skill-factory coderabbit docs/research/skills/coderabbit/"
→ Use Path 1 (skip research phase)
If no research path is provided:
Ask the user using AskUserQuestion:
"Have you already gathered research for this skill?"
Options:
[Yes - I have research at a specific location]
[No - Help me gather research]
[Skip - I'll create from knowledge only]
Based on the user's response, select Path 1 (research exists or is skipped) or Path 2 (full workflow with research), as sketched below.
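A sketch of the resulting branch selection (paths as defined in the planning step below):

```
# Entry point → path selection (sketch)
skill-factory <skill-name> <research-path>   # research path supplied → Path 1 (skip research phase)
skill-factory <skill-name>                   # no path supplied → AskUserQuestion:
#   "Yes - I have research at <path>"    → Path 1, start at format
#   "No - Help me gather research"       → Path 2, start at research
#   "Skip - create from knowledge only"  → Path 1, start at create
```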
Create a TodoWrite list based on the selected path:
Path 2 (Full Workflow with Research):
TodoWrite([
{"content": "Research skill domain", "status": "pending", "activeForm": "Researching skill domain"},
{"content": "Format research materials", "status": "pending", "activeForm": "Formatting research materials"},
{"content": "Create skill structure", "status": "pending", "activeForm": "Creating skill structure"},
{"content": "Write skill content", "status": "pending", "activeForm": "Writing skill content"},
{"content": "Review content quality", "status": "pending", "activeForm": "Reviewing content quality"},
{"content": "Review technical compliance", "status": "pending", "activeForm": "Reviewing technical compliance"},
{"content": "Validate runtime loading", "status": "pending", "activeForm": "Validating runtime loading"},
{"content": "Validate integration", "status": "pending", "activeForm": "Validating integration"},
{"content": "Run comprehensive audit", "status": "pending", "activeForm": "Running comprehensive audit"},
{"content": "Complete workflow", "status": "pending", "activeForm": "Completing workflow"}
])
Path 1 (Research Exists or Skipped):
Omit the first "Research skill domain" task. Start with "Format research materials" or "Create skill structure" depending on whether research exists.
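For instance, a Path 1 list where research already exists simply mirrors the Path 2 list without the research task:

```
TodoWrite([
  {"content": "Format research materials", "status": "pending", "activeForm": "Formatting research materials"},
  {"content": "Create skill structure", "status": "pending", "activeForm": "Creating skill structure"},
  {"content": "Write skill content", "status": "pending", "activeForm": "Writing skill content"},
  {"content": "Review content quality", "status": "pending", "activeForm": "Reviewing content quality"},
  {"content": "Review technical compliance", "status": "pending", "activeForm": "Reviewing technical compliance"},
  {"content": "Validate runtime loading", "status": "pending", "activeForm": "Validating runtime loading"},
  {"content": "Validate integration", "status": "pending", "activeForm": "Validating integration"},
  {"content": "Run comprehensive audit", "status": "pending", "activeForm": "Running comprehensive audit"},
  {"content": "Complete workflow", "status": "pending", "activeForm": "Completing workflow"}
])
```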
For each phase in the workflow, follow this pattern:
Update the corresponding TodoWrite item to in_progress status.
Before running a command, verify that prior phases completed (see the dependency sketch after the command list below):
/meta-claude:skill:research <skill-name> [sources]
/meta-claude:skill:format <research-dir>
/meta-claude:skill:create <skill-name> <research-dir>
/meta-claude:skill:write <skill-path>
/meta-claude:skill:review-content <skill-path>
/meta-claude:skill:review-compliance <skill-path>
/meta-claude:skill:validate-runtime <skill-path>
/meta-claude:skill:validate-integration <skill-path>
/meta-claude:skill:validate-audit <skill-path>
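A hedged sketch of the prerequisites each command assumes (the authoritative dependency rules live in references/workflow-execution.md):

```
# Per-command prerequisites (sketch)
# research          → skill name (plus optional sources)
# format            → raw research directory exists
# create            → formatted research materials exist
# write             → scaffolded skill directory exists
# review-content    → SKILL.md content written
# review-compliance → content review passed (reviews are sequential)
# validate-*        → both reviews passed (validation phases are tiered)
```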
IMPORTANT: Wait for each command to complete before proceeding to the next phase. Do not invoke multiple commands in parallel.
Each command returns success or failure with specific error details.
The workflow uses a three-tier fix strategy:
One-shot policy: Each fix applied once, re-run once, then fail fast if still broken.
For complete tier definitions, issue categorization, examples, and fix workflows: See references/error-handling.md
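As a minimal sketch of that policy for a single failing phase (tier handling itself is defined in references/error-handling.md):

```
/meta-claude:skill:review-compliance <skill-path>   # phase fails with a fixable issue
# apply the suggested fix once
/meta-claude:skill:review-compliance <skill-path>   # re-run exactly once
# still failing → stop, report the error, and exit (no further retries)
```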
Update TodoWrite item to completed status.
Proceed to the next workflow phase, or exit if fail-fast triggered.
When all phases pass successfully:
Present completion summary:
✅ Skill created and validated successfully!
Location: <skill-output-path>/
Research materials: docs/research/skills/<skill-name>/
Ask about artifact cleanup:
Keep research materials? [Keep/Remove] (default: Keep)
Present next steps using AskUserQuestion:
Next steps - choose an option:
[Test in new session - Skills require session reload to be discoverable]
[Create PR - Submit skill to repository]
[Done - Exit workflow]
Execute the user's choice.
Note: Skills auto-discover based on directory structure - no plugin.json registration needed.
Sequential Execution: Do not run commands in parallel. Wait for each phase to complete before proceeding.
Context Window Protection: You are orchestrating commands, not sub-agents. Your context window is safe because you're invoking slash commands sequentially, not spawning multiple agents.
State Management: TodoWrite provides real-time progress visibility. Update it at every phase transition.
Fail Fast: When Tier 3 issues occur or user declines fixes, exit immediately with clear guidance. Don't attempt complex recovery.
Dependency Enforcement: Never skip dependency checks. Review phases are sequential, validation phases are tiered.
One-shot Fixes: Apply each fix once, re-run once, then fail if still broken. This prevents infinite loops.
User Communication: Report progress clearly. Show which phase is running, what the result was, and what's happening next.
Two paths based on research availability: Path 1 (research exists) and Path 2 (research needed). TodoWrite tracks progress through 8-10 phases. Entry point detection uses prompt analysis and AskUserQuestion.
Details: See references/workflow-architecture.md
Sequential phase invocation pattern: mark in_progress → check dependencies → invoke command → check result → apply fixes → mark completed → continue. Dependencies enforced (review sequential, validation tiered). Commands invoked via SlashCommand tool with wait-for-completion pattern.
Details: See references/workflow-execution.md
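A sketch of that per-phase pattern, using the write phase as an illustrative example:

```
# 1. TodoWrite: mark "Write skill content" as in_progress
# 2. Check dependencies: the create phase scaffolded the skill directory
/meta-claude:skill:write <skill-path>   # 3. invoke via SlashCommand; wait for completion
# 4. Check the result; on failure apply a one-shot fix and re-run once
# 5. TodoWrite: mark "Write skill content" as completed, then continue to review-content
```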
When all phases pass successfully:
✅ Skill created and validated successfully!
Location: <skill-output-path>/
Research materials: docs/research/skills/<skill-name>/
Keep research materials? [Keep/Remove] (default: Keep)
Artifact Cleanup:
Ask the user whether to keep or remove the research materials (default: Keep).
Next Steps:
Present options to user:
Next steps - choose an option:
[1] Test in new session - Skills require session reload to be discoverable
[2] Create PR - Submit skill to repository
[3] Done - Exit workflow
What would you like to do?
User Actions: carry out the selected option (test in a new session, create a PR, or exit).
Note: Skills auto-discover based on directory structure - no plugin.json registration needed.
Execute the user's choice, then exit cleanly.
The skill-factory workflow supports various scenarios:
Detailed Examples: See references/workflow-examples.md for complete walkthrough scenarios showing TodoWrite state transitions, command invocations, error handling, and success paths.
Six core principles: (1) Primitives First (slash commands foundation), (2) KISS State Management (TodoWrite only), (3) Fail Fast (no complex recovery), (4) Context-Aware Entry (prompt analysis), (5) Composable & Testable (standalone or orchestrated), (6) Quality Gates (sequential dependencies).
Details: See references/design-principles.md
skill-factory orchestrates 9 primitive slash commands through a sequential workflow:
Creation Phase:
- /meta-claude:skill:research → Gather domain knowledge via firecrawl
- /meta-claude:skill:format → Clean and structure research materials
- /meta-claude:skill:create → Scaffold skill directory with references (runs init_skill.py)
- /meta-claude:skill:write → Synthesize references into SKILL.md content

Validation Phase:

- /meta-claude:skill:review-content → Quality gate for clarity and completeness
- /meta-claude:skill:review-compliance → Technical validation via quick_validate.py
- /meta-claude:skill:validate-runtime → Test actual skill loading
- /meta-claude:skill:validate-integration → Check for naming conflicts
- /meta-claude:skill:validate-audit → Comprehensive audit via claude-skill-auditor agent

Each command is standalone and testable. skill-factory provides orchestration, not abstraction.
This skill provides:
Load sections as needed for your use case.
Common issues: research phase failures (check FIRECRAWL_API_KEY), content review loops (Tier 3 issues need redesign), compliance validation (run quick_validate.py manually), integration conflicts (check duplicate names).
Details: See references/troubleshooting.md
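For example, two quick manual checks (the quick_validate.py invocation is hypothetical; locate the script in the plugin's scripts/ directory and adjust the path):

```
# Confirm the firecrawl key is exported before re-running the research phase
echo "${FIRECRAWL_API_KEY:+FIRECRAWL_API_KEY is set}"

# Hypothetical manual compliance check on the generated SKILL.md
python quick_validate.py <skill-path>/SKILL.md
```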
You know skill-factory succeeds when:
- Commands are available in the /skill-* namespace