Transform feature descriptions into well-structured project plans
Transforms feature descriptions into comprehensive project plans with acceptance criteria, research, and implementation tasks.
Installation:
/plugin marketplace add majesticlabs-dev/majestic-marketplace
/plugin install majestic-engineer@majestic-marketplace
Argument hint: [feature description, bug report, or improvement idea]
Category: workflows
CRITICAL: This is a WORKFLOW SKILL, not Claude's built-in plan mode. Do NOT use EnterPlanMode or ExitPlanMode tools. Execute each step below using the specified tools.
Transform feature descriptions into well-structured markdown plans.
<feature_description> $ARGUMENTS </feature_description>
If empty: Ask user to describe the feature, bug fix, or improvement.
For complex or ambiguous features, offer a deep requirements interview first:
AskUserQuestion: "This feature could benefit from a requirements interview. Would you like to explore it in depth first?"
Options:
- "Yes, interview me first" → Skill(skill: "majestic:interview", args: "[feature description]"), then use interview output as refined input
- "No, proceed to planning" → Continue to Step 2
When to suggest interview:
Skip interview suggestion for:
MANDATORY: Ask the user what "done" means for this feature.
Acceptance Criteria describes feature behaviors, not technical quality (tests, code review, etc. are handled by other agents).
Execute:
AskUserQuestion(
  questions: [{
    question: "What behavior must work for this feature to be done?",
    header: "Done when",
    options: [
      { label: "User can perform action", description: "Feature enables a specific user action" },
      { label: "System responds correctly", description: "API/backend behaves as expected" },
      { label: "UI displays properly", description: "Visual elements render correctly" },
      { label: "Data is persisted", description: "Changes are saved to database" }
    ],
    multiSelect: true
  }]
)
Wait for user response before proceeding.
Good Acceptance Criteria examples (feature behavior):
Bad Acceptance Criteria examples (already handled by other agents):
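Illustrative examples (hypothetical, for orientation only):
- Good: "User can reset their password via an emailed link" (feature behavior)
- Good: "API returns 404 for a nonexistent record" (feature behavior)
- Bad: "All unit tests pass" (quality gate, handled by test agents)
- Bad: "Code review approved" (handled by the review workflow)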
For each criterion, capture how to verify:
| Criterion | Verification |
|---|---|
| User can login and redirect | curl -X POST /login or manual check |
| Form validates email | bundle exec rspec spec/features/signup_spec.rb |
| API returns 404 | curl /api/nonexistent returns 404 |
Store the Acceptance Criteria table - it will be used by the acceptance-criteria-verifier agent during the quality gate.
Detect feature type and delegate to specialists:
| Type | Detection | Action |
|---|---|---|
| UI | page, component, form, button, modal, design | Check design system |
| DevOps | terraform, ansible, infrastructure, cloud, docker | Delegate to devops-plan |
If UI feature:
Skill(skill: "config-reader", args: "design_system_path")docs/design/design-system.md/majestic:ux-brief firstIf DevOps feature:
Skill(skill: "majestic-devops:devops-plan")
Read config values (run in parallel):
Skill(skill: "config-reader", args: "tech_stack generic")
Skill(skill: "config-reader", args: "lessons_path .claude/lessons/")
Run both agents in parallel:
Task 1 (majestic-engineer:workflow:toolbox-resolver):
prompt: "Stage: blueprint | Tech Stack: [tech_stack from config-reader]"
Task 2 (majestic-engineer:workflow:lessons-discoverer):
prompt: "workflow_phase: planning | tech_stack: [tech_stack] | task: [feature description]"
Store outputs for subsequent steps:
research_hooks → use in Step 5
lessons_context → use in Step 7 (architect)
Error handling:
These are non-blocking - failures do not stop the workflow.
Core agents (always run):
Task(subagent_type="majestic-engineer:research:git-researcher", prompt="[feature]")
Task(subagent_type="majestic-engineer:research:docs-researcher", prompt="[feature]")
Task(subagent_type="majestic-engineer:research:best-practices-researcher", prompt="[feature]")
Stack-specific agents (from toolbox):
For each research_hook in toolbox config:
If triggers.any_substring matches the feature description:
Task(subagent_type="[hook.agent]", prompt="[feature] | Context: [hook.context]")
Cap at 5 total agents to avoid noise.
Collect results from all agents before proceeding.
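For illustration, a research_hook entry from the toolbox might look like the sketch below; the agent name, trigger words, and exact field names are hypothetical and may differ from the real toolbox format:
research_hook:
  agent: "majestic-rails:research:activerecord-researcher"
  triggers:
    any_substring: ["migration", "model", "schema"]
  context: "Prefer built-in Rails validations over custom callbacks"
If the feature description mentions "migration", this hook adds one stack-specific research agent alongside the core three.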
Run these two steps in parallel - they have no dependencies on each other:
# Run in parallel:
Task(subagent_type="majestic-engineer:plan:spec-reviewer", prompt="Feature: [feature] | Research findings: [combined research]")
Skill(skill: "[each skill from toolbox.coding_styles]")
Wait for BOTH to complete before proceeding to Step 7.
Store outputs:
spec_findings → gaps, edge cases, questions from spec-reviewer
skill_content → loaded coding style content
CRITICAL: Do NOT run in parallel with Step 6. Wait for spec-reviewer to complete.
The architect MUST receive spec findings to avoid designing for incomplete requirements.
Task(subagent_type="majestic-engineer:plan:architect",
prompt="Feature: [feature] | Research: [research] | Spec: [spec_findings] | Skills: [skill_content] | Lessons: [lessons_context from Step 4]")
The architect agent:
Lessons integration: If lessons_context contains relevant lessons, the architect should:
Document patterns and conventions discovered during planning in the closest AGENTS.md.
# Identify primary affected directory from architect output
PRIMARY_DIR = extract main folder from architect's file recommendations
# Find closest AGENTS.md
AGENTS_MD = walk up from PRIMARY_DIR until AGENTS.md found
If not found: AGENTS_MD = "AGENTS.md" (root)
# Extract learnings from research and architect phases
PLANNING_LEARNINGS = []
# From research (Step 5)
For each FINDING in research_outputs:
If FINDING describes existing pattern or convention:
PLANNING_LEARNINGS.append({
type: "pattern",
content: FINDING.pattern,
location: FINDING.file_path
})
# From architect (Step 7)
For each DECISION in architect_output.decisions:
PLANNING_LEARNINGS.append({
type: "decision",
content: DECISION.what,
rationale: DECISION.why
})
# Dedupe against existing AGENTS.md content
EXISTING = Read(AGENTS_MD) → extract ## Patterns section
NEW_LEARNINGS = PLANNING_LEARNINGS - EXISTING
# Append if new learnings found
If NEW_LEARNINGS not empty:
FORMAT = """
## Patterns (discovered {date})
| Pattern | Location | Notes |
|---------|----------|-------|
{for each L in NEW_LEARNINGS where L.type == "pattern":}
| {L.content} | `{L.location}` | |
{end for}
## Decisions
{for each L in NEW_LEARNINGS where L.type == "decision":}
- **{L.content}** — {L.rationale}
{end for}
"""
Edit(AGENTS_MD, append to relevant section or create if missing)
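For illustration, a filled-in append might look like the following; the pattern, location, decision, and date are hypothetical:
## Patterns (discovered 2025-03-15)
| Pattern | Location | Notes |
|---------|----------|-------|
| Service objects encapsulate multi-step writes | `app/services/` | |
## Decisions
- **Use a dedicated PasswordReset service** — keeps controllers thin and mirrors the existing service-object pattern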
Skip if:
Load the plan-builder skill for template guidance:
Skill(skill: "plan-builder")
Select template based on complexity:
Output: Write to docs/plans/[YYYYMMDDHHMMSS]_<title>.md
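For example, a plan created on 2025-03-15 at 14:30:00 for a hypothetical password-reset feature would be written to docs/plans/20250315143000_password_reset.md.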
Task(subagent_type="majestic-engineer:plan:plan-review", prompt="Review plan at docs/plans/<filename>.md")
Incorporate feedback and update the plan file.
Check auto_preview config:
Skill(skill: "config-reader", args: "auto_preview false")
If result is true, open the plan file:
Bash(command: "open docs/plans/<filename>.md")
MANDATORY: Use AskUserQuestion to present options:
Question: "Blueprint ready at docs/plans/<filename>.md. What next?"
| Option | Action |
|---|---|
| Build as single task | Go to Step 12.1 |
| Break into small tasks | Go to Step 12.2 |
| Create as single epic | Go to Step 12.3 |
| Deep dive into specifics | Go to Step 11.1 |
| Preview plan | Read and display plan content, return to Step 11 |
| Revise | Ask what to change, return to Step 9 |
IMPORTANT: After user selects an option, EXECUTE that action. Do not stop.
Ask user what aspect needs more research:
AskUserQuestion: "What aspect of the plan needs deeper research?"
Options: [Free text input expected]
Execute focused research:
Check toolbox research_hooks for relevant agents based on user's aspect, then run:
# Core deep-dive:
Task(subagent_type="majestic-engineer:research:best-practices-researcher", prompt="Deep dive: [user's aspect] for feature [feature]")
Task(subagent_type="majestic-engineer:research:web-research", prompt="[user's aspect] - patterns, examples, gotchas")
# Plus any matching research_hooks from toolbox
Update the plan:
Read(file_path="docs/plans/<filename>.md")
Edit(file_path="docs/plans/<filename>.md", ...)
Return to Step 11 to present options again.
Skill(skill: "majestic-engineer:workflows:build-task", args="docs/plans/<filename>.md")
End workflow.
Task(subagent_type="majestic-engineer:plan:task-breakdown", prompt="Plan: docs/plans/<filename>.md")
The agent appends an ## Implementation Tasks section with:
Check auto_create_task config:
Skill(skill: "config-reader", args: "blueprint.auto_create_task false")
If true: Skip to Step 13
If false: Ask user "Tasks added to plan. Create these in your task manager?"
Create a single task covering the entire plan:
Skill(skill: "backlog-manager")
Update the plan document with the task reference, then go to Step 14.
For each task in the Implementation Tasks section:
Skill(skill: "backlog-manager")Task ID formats by system:
| System | Format |
|---|---|
| GitHub Issues | #123 |
| Linear | PROJ-123 |
| Beads | BEADS-123 |
| File-based | TODO-123 |
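For illustration, after creation the plan's Implementation Tasks section might be annotated like this (hypothetical task titles and GitHub issue numbers):
## Implementation Tasks
- [ ] Add reset token model and migration (#124)
- [ ] Add password reset mailer and endpoint (#125)
- [ ] Wire up the reset form UI (#126)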
Use AskUserQuestion:
Question: "Tasks created. Start building?"
| Option | Action |
|---|---|
| Build all tasks now | Skill(skill: "majestic-engineer:workflows:run-blueprint", args="docs/plans/<filename>.md") |
| Build with ralph (autonomous) | Display ralph-loop command (see below) |
| Done for now | End workflow |
Ralph command to display:
/ralph-loop:ralph-loop "/majestic:run-blueprint docs/plans/<filename>.md" --max-iterations 50 --completion-promise "RUN_BLUEPRINT_COMPLETE"
| Scenario | Action |
|---|---|
| Toolbox resolution fails | Continue with core agents only |
| Research agent fails | Log warning, continue with available research |
| Plan-review fails | Log warning, continue with original plan |
| Task creation fails | Report error, ask user to create manually |
| Config read fails | Use default value (false) |
| User cancels at any step | End workflow gracefully |