Multi-role requirement analysis and task breakdown workflow using 4 specialized AI agents (PM, UX, Tech, QA). Each agent conducts web research before analysis to gather industry best practices, case studies, and current trends. Supports Quick Mode (parallel, ~3 min, one Q&A session) and Deep Mode (serial, ~8 min, Q&A after EACH agent so answers inform subsequent analysis). Triggers on 'foreman-spec', 'spec feature', 'break down requirement', 'define tasks', 'spec this'.
/plugin marketplace add mylukin/agent-foreman
/plugin install agent-foreman@agent-foreman-plugins

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Multi-role requirement analysis using 4 specialized AI agents, each equipped with web research capabilities.
Transform a high-level requirement into fine-grained, implementable tasks through multi-perspective analysis.
Key Feature: Research-First Approach
Each agent conducts web research BEFORE analysis to gather industry best practices, case studies, and current trends.
Agents (all equipped with WebSearch): PM, UX, Tech, QA.
Modes: Quick Mode (parallel, ~3 min, one combined Q&A session) and Deep Mode (serial, ~8 min, Q&A after EACH agent).
Before any analysis, detect project state and ask user to choose mode.
Use Glob to detect project state:
Check if ai/tasks/ exists → EXISTING_PROJECT
Check if package.json or pyproject.toml exists → EXISTING_PROJECT
Otherwise → NEW_PROJECT
IF NEW_PROJECT OR COMPLEX:
recommendation = "Deep Mode"
ELSE:
recommendation = "Quick Mode"
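The detection and recommendation logic above can be sketched in Python. This is an illustrative sketch, not part of the skill itself; the function names and the `is_complex` flag are hypothetical, and the real workflow uses the Glob tool rather than direct filesystem access.

```python
from pathlib import Path

def detect_project_state(root: str) -> str:
    """Mirror the Glob checks: an ai/tasks/ tree or a package
    manifest marks the directory as an existing project."""
    root_path = Path(root)
    if (root_path / "ai" / "tasks").is_dir():
        return "EXISTING_PROJECT"
    if any((root_path / m).is_file() for m in ("package.json", "pyproject.toml")):
        return "EXISTING_PROJECT"
    return "NEW_PROJECT"

def recommend_mode(state: str, is_complex: bool) -> str:
    """New or complex work defaults to Deep Mode; otherwise Quick Mode."""
    return "Deep Mode" if state == "NEW_PROJECT" or is_complex else "Quick Mode"
```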
Use AskUserQuestion tool:
{
"question": "How would you like to analyze this requirement?",
"header": "Mode",
"options": [
{
"label": "Quick Mode (Recommended)" or "Quick Mode",
"description": "4 experts analyze in parallel, ~3 min, one combined Q&A at the end. Best for clear requirements."
},
{
"label": "Deep Mode (Recommended)" or "Deep Mode",
"description": "4 experts analyze sequentially, ~8 min, Q&A after EACH expert (answers inform next expert). Best for complex/new projects."
}
],
"multiSelect": false
}
Place "(Recommended)" on the recommended mode based on Step 3.
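One way to attach the "(Recommended)" label programmatically, as a sketch (the helper name is illustrative; descriptions are taken from the option text above):

```python
def build_mode_options(recommendation: str) -> list[dict]:
    """Attach "(Recommended)" to whichever mode was recommended above."""
    modes = {
        "Quick Mode": ("4 experts analyze in parallel, ~3 min, one combined "
                       "Q&A at the end. Best for clear requirements."),
        "Deep Mode": ("4 experts analyze sequentially, ~8 min, Q&A after EACH "
                      "expert (answers inform next expert). Best for complex/new projects."),
    }
    return [
        {"label": f"{mode} (Recommended)" if mode == recommendation else mode,
         "description": description}
        for mode, description in modes.items()
    ]
```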
Scan the project to understand existing patterns.
Use Glob to find key files:
README.md, ARCHITECTURE.md, CLAUDE.md
package.json, pyproject.toml, go.mod
src/**/*.ts, src/**/*.py, src/**/*.go (sample files)

Read project configuration to detect the tech stack and existing conventions.
Create context summary for agents:
Before launching agents, prepare research context based on the requirement.
Identify which areas need research based on the requirement:
| Requirement Type | Research Focus |
|---|---|
| New product | Market analysis, competitor products, industry trends |
| New feature | Similar implementations, UX patterns, technical approaches |
| Integration | API documentation, security best practices, compatibility |
| Performance | Benchmarks, optimization techniques, scalability patterns |
Extract keywords from the requirement for targeted searches:
Requirement: "Build a real-time chat application with end-to-end encryption"
Research keywords:
- "real-time chat architecture 2024 2025"
- "WebSocket vs Server-Sent Events comparison"
- "end-to-end encryption implementation best practices"
- "chat application UX patterns"
- "Signal Protocol implementation guide"
Include research keywords in agent prompts:
research_context = {
"domain": "[product domain]",
"keywords": ["keyword1", "keyword2", "keyword3"],
"tech_stack": "[detected or proposed stack]",
"competitors": ["competitor1", "competitor2"]
}
IMPORTANT: Each agent will conduct its own targeted research using these keywords. The research phase is built into each agent's workflow, not a separate step.
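Interpolating the shared contexts into each agent's prompt can be sketched like this (the helper name is hypothetical; the prompt wording follows the Task calls below):

```python
def build_agent_prompt(role_task: str, requirement: str,
                       codebase_context: str, research_context: dict) -> str:
    """Interpolate requirement, project, and research contexts into one prompt."""
    keywords = ", ".join(research_context.get("keywords", []))
    return (
        f"{role_task}: {requirement}. "
        f"Project context: {codebase_context}. "
        f"Research context: domain={research_context.get('domain')}, "
        f"keywords=[{keywords}]. "
        "IMPORTANT: Use WebSearch to research before analysis."
    )
```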
Launch all 4 agents IN PARALLEL using Task tool. Each agent will conduct web research before analysis:
Task(subagent_type="agent-foreman:pm", prompt="Analyze requirement: {requirement}. Project context: {codebase_context}. Research context: {research_context}. IMPORTANT: Use WebSearch to research industry best practices before analysis.")
Task(subagent_type="agent-foreman:ux", prompt="Design UX for: {requirement}. Project context: {codebase_context}. Research context: {research_context}. IMPORTANT: Use WebSearch to research UX patterns before design.")
Task(subagent_type="agent-foreman:tech", prompt="Design architecture for: {requirement}. Project context: {codebase_context}. Research context: {research_context}. IMPORTANT: Use WebSearch to research framework best practices before architecture.")
Task(subagent_type="agent-foreman:qa", prompt="Define QA strategy for: {requirement}. Project context: {codebase_context}. Research context: {research_context}. IMPORTANT: Use WebSearch to research testing strategies before planning.")
Wait for all to complete (~30-60 seconds).
Then: Merge questions from all 4 agents:
Then: Present merged questions to user in one AskUserQuestion call.
Launch agents ONE AT A TIME. CRITICAL: After each agent completes, immediately collect their questions, ask the user, and write answers to the spec file BEFORE launching the next agent.
This ensures each subsequent agent reads both the previous analyses and the user's answers, so earlier decisions inform later experts.
Step 2A: Product Manager
Task(subagent_type="agent-foreman:pm", prompt="Analyze requirement: {requirement}. Project context: {codebase_context}. Research context: {research_context}. CRITICAL: Use WebSearch FIRST to research industry best practices, market trends, and competitor approaches before starting your analysis.")
Wait for completion. PM will:
- Write analysis to ai/tasks/spec/PM.md
- Output questions in ---QUESTIONS FOR USER--- format

→ SKILL Orchestrator Actions (MANDATORY):
- Parse questions from the ---QUESTIONS FOR USER--- output
- Ask the user via AskUserQuestion
- Append the answers to ai/tasks/spec/PM.md

Step 2B: UX Designer
Task(subagent_type="agent-foreman:ux", prompt="Design UX for: {requirement}. IMPORTANT: First read ai/tasks/spec/PM.md to see PM's analysis AND user's answers to PM questions. Research context: {research_context}. CRITICAL: Use WebSearch FIRST to research UX patterns before starting your design.")
Wait for completion. UX will:
- Write design to ai/tasks/spec/UX.md
- Output questions in ---QUESTIONS FOR USER--- format

→ SKILL Orchestrator Actions (MANDATORY):
- Parse questions from the ---QUESTIONS FOR USER--- output
- Ask the user via AskUserQuestion
- Append the answers to ai/tasks/spec/UX.md

Step 2C: Technical Architect
Task(subagent_type="agent-foreman:tech", prompt="Design architecture for: {requirement}. IMPORTANT: First read ai/tasks/spec/PM.md and ai/tasks/spec/UX.md to see previous analyses AND user's answers. Research context: {research_context}. CRITICAL: Use WebSearch FIRST to research framework best practices before starting your design.")
Wait for completion. Tech will:
- Write architecture to ai/tasks/spec/TECH.md
- Output questions in ---QUESTIONS FOR USER--- format

→ SKILL Orchestrator Actions (MANDATORY):
- Parse questions from the ---QUESTIONS FOR USER--- output
- Ask the user via AskUserQuestion
- Append the answers to ai/tasks/spec/TECH.md

Step 2D: QA Manager
Task(subagent_type="agent-foreman:qa", prompt="Define QA strategy for: {requirement}. IMPORTANT: First read all spec files (PM.md, UX.md, TECH.md) including their Q&A sections. Research context: {research_context}. CRITICAL: Use WebSearch FIRST to research testing strategies before defining your strategy.")
Wait for completion. QA will:
- Write strategy to ai/tasks/spec/QA.md
- Output questions in ---QUESTIONS FOR USER--- format

→ SKILL Orchestrator Actions (MANDATORY):
- Parse questions from the ---QUESTIONS FOR USER--- output
- Ask the user via AskUserQuestion
- Append the answers to ai/tasks/spec/QA.md

This phase applies ONLY to Quick Mode. In Deep Mode, questions are handled inline after each agent (see above).
After all 4 agents complete in parallel, handle questions:
Each agent outputs questions using this format (NOT written to file):
---QUESTIONS FOR USER---
1. **[Question text]**
- Why: [reason]
- Options: A) ... B) ... C) ...
- Recommend: [option] because [rationale]
---END QUESTIONS---
Parse each agent's output and extract questions from the ---QUESTIONS FOR USER--- section.
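A minimal parsing sketch for the question block format above (illustrative only; it extracts just the bolded question text and ignores the Why/Options/Recommend sub-lines):

```python
import re

QUESTION_BLOCK = re.compile(
    r"---QUESTIONS FOR USER---\s*(.*?)\s*---END QUESTIONS---", re.DOTALL
)

def extract_questions(agent_output: str) -> list[str]:
    """Return the numbered, bolded question texts from an agent's output."""
    match = QUESTION_BLOCK.search(agent_output)
    if not match:
        return []
    return re.findall(r"^\d+\.\s+\*\*(.+?)\*\*", match.group(1), re.MULTILINE)
```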
Use AskUserQuestion tool to present merged questions interactively:
{
"questions": [
{
"question": "[Merged question text]",
"header": "[Topic - max 12 chars]",
"options": [
{"label": "[Option A] (Recommended)", "description": "[Why recommended]"},
{"label": "[Option B]", "description": "[What this means]"}
],
"multiSelect": false
}
]
}
After user answers, append Q&A section to EACH relevant spec file:
## Questions & Answers
### Q1: [Question text]
**Answer**: [User's selected option]
**Impact**: [How this affects this role's analysis]
### Q2: [Question text]
**Answer**: [User's selected option]
**Impact**: [How this affects this role's analysis]
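The append step can be sketched as follows (the dict keys `question`, `answer`, and `impact` are illustrative assumptions, not a defined schema):

```python
def append_qa_section(spec_path: str, qa_items: list[dict]) -> None:
    """Append a '## Questions & Answers' section to one spec file,
    following the template above, with blank lines between entries."""
    lines = ["", "## Questions & Answers", ""]
    for i, item in enumerate(qa_items, start=1):
        lines += [
            f"### Q{i}: {item['question']}",
            "",
            f"**Answer**: {item['answer']}",
            "",
            f"**Impact**: {item['impact']}",
            "",
        ]
    with open(spec_path, "a", encoding="utf-8") as f:
        f.write("\n".join(lines))
```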
For each file, only include questions relevant to that role:
PM.md: Business/scope questions
UX.md: Design/flow questions
TECH.md: Architecture/implementation questions
QA.md: Testing/quality questions

After Phase 2.5 (Q&A) completes, delegate to the breakdown-writer agent.
Phase 3 requires significant context (reading 4 spec files, creating N+2 task files). Delegating to a subagent preserves main session context for subsequent interactions.
Collect all Q&A from Phase 2/2.5 into a formatted summary:
qa_decisions = """
### Scope Decisions
- Q: [Question from PM] → A: [User answer]
- Q: [Question from PM] → A: [User answer]
### UX Decisions
- Q: [Question from UX] → A: [User answer]
### Technical Decisions
- Q: [Question from Tech] → A: [User answer]
### Quality Decisions
- Q: [Question from QA] → A: [User answer]
"""
Launch the breakdown-writer agent:
Task(
subagent_type="agent-foreman:breakdown-writer",
prompt="""
SPEC BREAKDOWN TASK
## Context
- Requirement: {requirement}
- Mode: {quick|deep}
- Date: {YYYY-MM-DD}
- Project: {codebase_context}
## Q&A Decisions
{qa_decisions}
## Your Mission
1. Read all spec files (PM.md, UX.md, TECH.md, QA.md)
2. Create OVERVIEW.md with executive summaries
3. Create BREAKDOWN tasks for all modules (devops first, integration last)
4. Run `agent-foreman status` to verify index update
5. Return structured result
## Output Format
Return result block at END:
---BREAKDOWN RESULT---
overview_created: true|false
modules_created: [devops, module1, ..., integration]
tasks_created: N
index_updated: true|false
status: success|partial|failed
errors: []
notes: "summary"
---END BREAKDOWN RESULT---
"""
)
Parse ---BREAKDOWN RESULT--- from agent output:
If status == "success":
→ Display success message with modules_created
→ Continue to Phase 4
If status == "partial":
→ Display warning with errors
→ Continue to Phase 4 (partial results may be usable)
If status == "failed":
→ Display error message
→ Show errors list
→ Stop workflow, user must investigate
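The result-block parsing can be sketched as a simple key/value extraction (illustrative; list-valued fields like `modules_created` and `errors` are kept as raw strings here rather than parsed):

```python
import re

RESULT_BLOCK = re.compile(
    r"---BREAKDOWN RESULT---\s*(.*?)\s*---END BREAKDOWN RESULT---", re.DOTALL
)

def parse_breakdown_result(agent_output: str) -> dict:
    """Parse the key: value lines of the result block into a dict.
    A missing block is treated as a failed run."""
    match = RESULT_BLOCK.search(agent_output)
    if not match:
        return {"status": "failed", "errors": ["no result block found"]}
    result = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            result[key.strip()] = value.strip()
    return result
```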
After successful delegation, output:
Spec breakdown complete!
Created:
- ai/tasks/spec/OVERVIEW.md (executive summaries)
- ai/tasks/devops/BREAKDOWN.md (priority: 0)
- ai/tasks/{module}/BREAKDOWN.md (priority: N)
- ...
- ai/tasks/integration/BREAKDOWN.md (priority: 999999)
Total: {tasks_created} BREAKDOWN tasks registered.
If delegation fails, you can manually execute Phase 3 by:
1. Reading all spec files (PM.md, UX.md, TECH.md, QA.md)
2. Creating OVERVIEW.md and the BREAKDOWN tasks for each module
3. Running agent-foreman status to verify the index update

After spec generation, guide the user to process all BREAKDOWN tasks.
⚠️ IMPORTANT: This output is displayed to the user in the console/terminal. It is NOT written to any file.
Display the following guidance to the user:
## Next Steps
To process all BREAKDOWN tasks and create fine-grained implementation tasks:
/agent-foreman:run
Alternatively, to process a specific module:
/agent-foreman:run {module}.BREAKDOWN
The /agent-foreman:run command uses the standard Bash workflow for each task:
# For each BREAKDOWN task, executes:
agent-foreman next <task_id> # 1. Get task details
# ... implement task ... # 2. Create implementation tasks
agent-foreman check <task_id> # 3. Verify
agent-foreman done <task_id> # 4. Complete + commit
# Loop to next BREAKDOWN
For each BREAKDOWN task, the AI will:
1. Run agent-foreman next to get task details and spec context
2. Read the spec files in ai/tasks/spec/ (PM.md, UX.md, TECH.md, QA.md, OVERVIEW.md)
3. Create fine-grained implementation tasks in ai/tasks/{module}/
4. Run agent-foreman check and agent-foreman done to verify and complete

Note: The done command automatically triggers validation instructions when all BREAKDOWNs complete.
After all BREAKDOWN tasks are complete, run validation:
agent-foreman validate
This spawns 4 validators in parallel to check task quality.
All generated tasks MUST follow agent-foreman format.
Task ID formats:
- {module}.BREAKDOWN (e.g., auth.BREAKDOWN)
- {module}.{task-name} (e.g., auth.oauth-google)

---
id: module.task-name
module: module-name
priority: N
status: failing
version: 1
origin: spec-workflow
dependsOn: []
tags: []
testRequirements:
unit:
required: false
pattern: "tests/{module}/**/*.test.*"
---
# Task Title
## Context
[Brief context from spec documents]
## Acceptance Criteria
1. [Specific, testable criterion]
2. [Specific, testable criterion]
3. [Error handling criterion]
## Technical Notes
- Reference: [From spec/OVERVIEW.md]
- UX: [From spec/UX.md]
- Test: [From spec/QA.md]
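A lightweight validation sketch for the frontmatter above (illustrative; the skill does not ship such a checker, and nested fields like testRequirements are only detected at the top level):

```python
import re

REQUIRED_FIELDS = {"id", "module", "priority", "status", "version", "origin"}

def validate_task_frontmatter(task_text: str) -> list[str]:
    """Return a list of problems with a task file's YAML frontmatter."""
    match = re.match(r"^---\n(.*?)\n---\n", task_text, re.DOTALL)
    if not match:
        return ["missing frontmatter block"]
    # Collect top-level keys (ignore indented sub-keys like `unit:`).
    keys = {
        line.split(":", 1)[0].strip()
        for line in match.group(1).splitlines()
        if ":" in line and not line.startswith((" ", "\t"))
    }
    return [f"missing required field: {f}" for f in sorted(REQUIRED_FIELDS - keys)]
```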
CRITICAL: Always include blank lines:
- Before each ## heading (blank line required)
- After each ## heading (blank line required)

Break into SMALLEST implementable units. Each task should be the smallest piece of work that can be implemented, verified, and completed on its own.

Key conventions:
- Agent analyses are written to ai/tasks/spec/{ROLE}.md
- Modules are ordered with devops (first) and integration (last)
- Agent questions use the ---QUESTIONS FOR USER--- format

Use specific, targeted queries:
| Agent | Good Query Examples |
|---|---|
| PM | "[industry] product metrics KPIs 2024", "[product type] market size trends" |
| UX | "[component] UX pattern best practices", "WCAG 2.2 [element] accessibility" |
| Tech | "[framework] architecture patterns", "OWASP [vulnerability] prevention" |
| QA | "[framework] testing best practices", "[tool] performance benchmarks" |
Each agent should run these targeted searches before starting its analysis and fold the findings into its output.