Enhance a plan with parallel research agents for each section to add depth, best practices, and implementation details
Enhances existing plans with parallel research agents that add best practices, performance optimizations, and implementation details. Use this after creating a plan to make it production-ready with concrete code examples and real-world patterns.
/plugin marketplace add EveryInc/compound-engineering-plugin
/plugin install compound-engineering@every-marketplace

Argument: [path to plan file]

Note: The current year is 2025. Use this when searching for recent documentation and best practices.
This command takes an existing plan (from /workflows:plan) and enhances each section with parallel research agents. Each major element gets its own dedicated research sub-agent to find:
The result is a deeply grounded, production-ready plan with concrete implementation details.
<plan_path> #$ARGUMENTS </plan_path>
If the plan path above is empty:
List the available plans (`ls -la plans/`) and ask the user to provide a path (e.g., `plans/my-feature.md`). Do not proceed until you have a valid plan file path.
Read the plan file and extract its major sections, the technologies and frameworks it mentions, and the implementation details worth researching.
Create a section manifest:
Section 1: [Title] - [Brief description of what to research]
Section 2: [Title] - [Brief description of what to research]
...
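The manifest's section titles can be pulled mechanically from the plan; a minimal sketch, assuming the plan uses standard `##` markdown headings (the helper name is illustrative):

```shell
# Hypothetical helper: derive the section manifest from the plan's
# level-2 markdown headings; research descriptions are added afterwards.
build_section_manifest() {
  awk '/^## /{n++; sub(/^## /, ""); print "Section " n ": " $0}' "$1"
}

# Example: build_section_manifest plans/my-feature.md
```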
Step 1: Discover ALL available skills from ALL sources
# 1. Project-local skills (highest priority - project-specific)
ls .claude/skills/
# 2. User's global skills (~/.claude/)
ls ~/.claude/skills/
# 3. compound-engineering plugin skills
ls ~/.claude/plugins/cache/*/compound-engineering/*/skills/
# 4. ALL other installed plugins - check every plugin for skills
find ~/.claude/plugins/cache -type d -name "skills" 2>/dev/null
# 5. Also check installed_plugins.json for all plugin locations
cat ~/.claude/plugins/installed_plugins.json
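The five lookups above can be folded into a single sweep; a sketch under the standard `~/.claude` layout (the helper name is illustrative, adjust roots to your install):

```shell
# Hypothetical helper: sweep a set of roots for SKILL.md files in one
# pass, deduplicated, so each skill can then be read and matched.
discover_skills() {
  find "$@" -name "SKILL.md" -type f 2>/dev/null | sort -u
}

# Example:
# discover_skills .claude/skills ~/.claude/skills ~/.claude/plugins/cache
```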
Important: Check EVERY source. Don't assume compound-engineering is the only plugin. Use skills from ANY installed plugin that's relevant.
Step 2: For each discovered skill, read its SKILL.md to understand what it does
# For each skill directory found, read its documentation
cat [skill-path]/SKILL.md
Step 3: Match skills to plan content
For each skill discovered, check whether its domain matches any section, technology, or pattern in the plan.
Step 4: Spawn a sub-agent for EVERY matched skill
CRITICAL: For EACH skill that matches, spawn a separate sub-agent and instruct it to USE that skill.
For each matched skill:
Task general-purpose: "You have the [skill-name] skill available at [skill-path].
YOUR JOB: Use this skill on the plan.
1. Read the skill: cat [skill-path]/SKILL.md
2. Follow the skill's instructions exactly
3. Apply the skill to this content:
[relevant plan section or full plan]
4. Return the skill's full output
The skill tells you what to do - follow it. Execute the skill completely."
Spawn ALL skill sub-agents in PARALLEL:
Each sub-agent reads its skill's SKILL.md, applies the skill to its assigned plan content, and returns the skill's full output.
Example spawns:
Task general-purpose: "Use the dhh-rails-style skill at ~/.claude/plugins/.../dhh-rails-style. Read SKILL.md and apply it to: [Rails sections of plan]"
Task general-purpose: "Use the frontend-design skill at ~/.claude/plugins/.../frontend-design. Read SKILL.md and apply it to: [UI sections of plan]"
Task general-purpose: "Use the agent-native-architecture skill at ~/.claude/plugins/.../agent-native-architecture. Read SKILL.md and apply it to: [agent/tool sections of plan]"
Task general-purpose: "Use the security-patterns skill at ~/.claude/skills/security-patterns. Read SKILL.md and apply it to: [full plan]"
No limit on skill sub-agents. Spawn one for every skill that could possibly be relevant.
LEARNINGS LOCATION - Check these exact folders:
docs/solutions/ <-- PRIMARY: Project-level learnings (created by /workflows:compound)
├── performance-issues/
│ └── *.md
├── debugging-patterns/
│ └── *.md
├── configuration-fixes/
│ └── *.md
├── integration-issues/
│ └── *.md
├── deployment-issues/
│ └── *.md
└── [other-categories]/
└── *.md
Step 1: Find ALL learning markdown files
Run these commands to get every learning file:
# PRIMARY LOCATION - Project learnings
find docs/solutions -name "*.md" -type f 2>/dev/null
# If docs/solutions doesn't exist, check alternate locations:
find .claude/docs -name "*.md" -type f 2>/dev/null
find ~/.claude/docs -name "*.md" -type f 2>/dev/null
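The fallback order above can be expressed as one small helper; a sketch (the function name is illustrative):

```shell
# Hypothetical helper: return the first learnings root that exists,
# following the primary-then-alternate order above.
find_learnings_root() {
  for d in "$@"; do
    [ -d "$d" ] && { printf '%s\n' "$d"; return 0; }
  done
  return 1
}

# Example: find_learnings_root docs/solutions .claude/docs ~/.claude/docs
```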
Step 2: Read frontmatter of each learning to filter
Each learning file has YAML frontmatter with metadata. Read the first ~20 lines of each file to get:
---
title: "N+1 Query Fix for Briefs"
category: performance-issues
tags: [activerecord, n-plus-one, includes, eager-loading]
module: Briefs
symptom: "Slow page load, multiple queries in logs"
root_cause: "Missing includes on association"
---
For each .md file, quickly scan its frontmatter:
# Read first 20 lines of each learning (frontmatter + summary)
# (find is used because '**' globbing requires shopt -s globstar in bash)
find docs/solutions -name "*.md" -type f -exec head -20 {} +
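For a quicker overview, the scan can be reduced to one `tags:` line per file; a sketch, assuming the frontmatter is the leading `---`-delimited block (the helper name is illustrative):

```shell
# Hypothetical helper: print each learning file with its 'tags:' line
# from the YAML frontmatter, for eyeballing relevance before spawning.
scan_learning_tags() {
  find "$1" -name "*.md" -type f 2>/dev/null | while read -r f; do
    tags=$(awk '/^---$/{n++; next} n==1 && /^tags:/{print; exit}' "$f")
    printf '%s\t%s\n' "$f" "${tags:-tags: (none)}"
  done
}

# Example: scan_learning_tags docs/solutions
```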
Step 3: Filter - only spawn sub-agents for LIKELY relevant learnings
Compare each learning's frontmatter against the plan:
- `tags:` - Do any tags match technologies/patterns in the plan?
- `category:` - Is this category relevant? (e.g., skip deployment-issues if the plan is UI-only)
- `module:` - Does the plan touch this module?
- `symptom:` / `root_cause:` - Could this problem occur with the plan?

SKIP learnings that are clearly not applicable:
- `database-migrations/` learnings
- `rails-specific/` learnings
- `authentication-issues/` learnings

SPAWN sub-agents for any learning that MIGHT apply.
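The tag comparison is mechanical enough to sketch as a small helper; a rough, illustrative version (case-insensitive substring match only, category and module checks would follow the same shape):

```shell
# Hypothetical filter: keep a learning when any of its frontmatter tags
# appears somewhere in the plan text (case-insensitive).
learning_matches_plan() {
  learning="$1"; plan="$2"
  # Turn a line like 'tags: [activerecord, caching]' into bare words.
  tags=$(awk '/^---$/{n++; next} n==1 && /^tags:/' "$learning" \
    | sed 's/^tags: *//; s/[][]//g; s/,/ /g')
  for t in $tags; do
    grep -qi -- "$t" "$plan" && return 0
  done
  return 1
}
```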
Step 4: Spawn sub-agents for filtered learnings
For each learning that passes the filter:
Task general-purpose: "
LEARNING FILE: [full path to .md file]
1. Read this learning file completely
2. Understand the problem it documents and how it was solved
Check if this learning applies to this plan:
---
[full plan content]
---
If relevant:
- Explain specifically how it applies
- Quote the key insight or solution
- Suggest where/how to incorporate it
If NOT relevant after deeper analysis:
- Say 'Not applicable: [reason]'
"
Example filtering:
# Found 15 learning files, plan is about "Rails API caching"
# SPAWN (likely relevant):
docs/solutions/performance-issues/n-plus-one-queries.md # tags: [activerecord] ✓
docs/solutions/performance-issues/redis-cache-stampede.md # tags: [caching, redis] ✓
docs/solutions/configuration-fixes/redis-connection-pool.md # tags: [redis] ✓
# SKIP (clearly not applicable):
docs/solutions/deployment-issues/heroku-memory-quota.md # not about caching
docs/solutions/frontend-issues/stimulus-race-condition.md # plan is API, not frontend
docs/solutions/authentication-issues/jwt-expiry.md # plan has no auth
Spawn sub-agents in PARALLEL for all filtered learnings.
These learnings are institutional knowledge - applying them prevents repeating past mistakes.
For each identified section, launch parallel research:
Task Explore: "Research best practices, patterns, and real-world examples for: [section topic].
Find:
- Industry standards and conventions
- Performance considerations
- Common pitfalls and how to avoid them
- Documentation and tutorials
Return concrete, actionable recommendations."
Also use Context7 MCP for framework documentation:
For any technologies/frameworks mentioned in the plan, query Context7:
mcp__plugin_compound-engineering_context7__resolve-library-id: Find library ID for [framework]
mcp__plugin_compound-engineering_context7__query-docs: Query documentation for specific patterns
Use WebSearch for current best practices:
Search for recent (2024-2025) articles, blog posts, and documentation on topics in the plan.
Step 1: Discover ALL available agents from ALL sources
# 1. Project-local agents (highest priority - project-specific)
find .claude/agents -name "*.md" 2>/dev/null
# 2. User's global agents (~/.claude/)
find ~/.claude/agents -name "*.md" 2>/dev/null
# 3. compound-engineering plugin agents (all subdirectories)
find ~/.claude/plugins/cache/*/compound-engineering/*/agents -name "*.md" 2>/dev/null
# 4. ALL other installed plugins - check every plugin for agents
find ~/.claude/plugins/cache -path "*/agents/*.md" 2>/dev/null
# 5. Check installed_plugins.json to find all plugin locations
cat ~/.claude/plugins/installed_plugins.json
# 6. For local plugins (isLocal: true), check their source directories
# Parse installed_plugins.json and find local plugin paths
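As with skills, the agent sweep above can be collapsed into one pass that also surfaces each agent's description line; a sketch (the helper name is illustrative, pass whichever agent roots exist on your install):

```shell
# Hypothetical helper: list every agent file under the given roots with
# its first line, to see at a glance what each agent reviews.
discover_agents() {
  find "$@" -name "*.md" -type f 2>/dev/null | sort -u | while read -r f; do
    printf '%s: %s\n' "$f" "$(head -n 1 "$f")"
  done
}

# Example: discover_agents .claude/agents ~/.claude/agents
```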
Important: Check EVERY source. Include agents from:
- `.claude/agents/`
- `~/.claude/agents/`

For the compound-engineering plugin specifically:
- `agents/review/*` (all reviewers)
- `agents/research/*` (all researchers)
- `agents/design/*` (design agents)
- `agents/docs/*` (documentation agents)
- `agents/workflow/*` (these are workflow orchestrators, not reviewers)

Step 2: For each discovered agent, read its description
Read the first few lines of each agent file to understand what it reviews/analyzes.
Step 3: Launch ALL agents in parallel
For EVERY agent discovered, launch a Task in parallel:
Task [agent-name]: "Review this plan using your expertise. Apply all your checks and patterns. Plan content: [full plan content]"
CRITICAL RULES: launch EVERY discovered agent, run them all in parallel, and do not skip any because it seems redundant; overlapping findings are deduplicated later.
Step 4: Also discover and run research agents
Research agents (like best-practices-researcher, framework-docs-researcher, git-history-analyzer, repo-research-analyst) should also be run for relevant plan sections.
Collect outputs from ALL sources (skill agents, learning checks, research tasks, and review agents).
For each agent's findings, extract the concrete recommendations, code examples, warnings, and references.
Deduplicate overlapping findings and prioritize them by impact on the plan.
Enhancement format for each section:
## [Original Section Title]
[Original content preserved]
### Research Insights
**Best Practices:**
- [Concrete recommendation 1]
- [Concrete recommendation 2]
**Performance Considerations:**
- [Optimization opportunity]
- [Benchmark or metric to target]
**Implementation Details:**
```[language]
// Concrete code example from research
```

**Edge Cases:**
- [Edge case and how to handle it]

**References:**
- [Link to documentation or article]
### 8. Add Enhancement Summary
At the top of the plan, add a summary section:
```markdown
## Enhancement Summary
**Deepened on:** [Date]
**Sections enhanced:** [Count]
**Research agents used:** [List]
### Key Improvements
1. [Major improvement 1]
2. [Major improvement 2]
3. [Major improvement 3]
### New Considerations Discovered
- [Important finding 1]
- [Important finding 2]
```
Write the enhanced plan: update the plan file in place, or create `plans/<original-name>-deepened.md` (the `-deepened` suffix) if the user prefers a new file.
Before finalizing, verify that all original content is preserved and that every section's Research Insights are concrete and actionable.
After writing the enhanced plan, use the AskUserQuestion tool to present these options:
Question: "Plan deepened at [plan_path]. What would you like to do next?"
Options:
- `/plan_review` - Get feedback from reviewers on the enhanced plan
- `/workflows:work` - Begin implementing this enhanced plan

Based on selection:
- Show changes → run `git diff [plan_path]` or show a before/after comparison
- `/plan_review` → call the /plan_review command with the plan file path
- `/workflows:work` → call the /workflows:work command with the plan file path

Before (from /workflows:plan):
## Technical Approach
Use React Query for data fetching with optimistic updates.
After (from /workflows:deepen-plan):
## Technical Approach
Use React Query for data fetching with optimistic updates.
### Research Insights
**Best Practices:**
- Configure `staleTime` and `cacheTime` based on data freshness requirements
- Use `queryKey` factories for consistent cache invalidation
- Implement error boundaries around query-dependent components
**Performance Considerations:**
- Enable `refetchOnWindowFocus: false` for stable data to reduce unnecessary requests
- Use `select` option to transform and memoize data at query level
- Consider `placeholderData` for instant perceived loading
**Implementation Details:**
```typescript
// Recommended query configuration
const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 5 * 60 * 1000, // 5 minutes
      retry: 2,
      refetchOnWindowFocus: false,
    },
  },
});
```

**Edge Cases:**
- Call `cancelQueries` on component unmount
- Use `persistQueryClient` to persist the cache across sessions

**References:**
NEVER CODE! Just research and enhance the plan.