This skill should be used when enhancing an existing plan with parallel research agents for each section. It discovers all available skills, agents, and learnings, then spawns parallel sub-agents to enrich each plan section with depth, best practices, and implementation details.
This skill uses the workspace's default tool permissions.
Note: The current year is 2026. Use this when searching for recent documentation and best practices.
This skill takes an existing plan (from the soleur:plan skill) and enhances each section with parallel research agents. Each major element gets its own dedicated research sub-agent to find best practices, common pitfalls, and concrete implementation details.
The result is a deeply grounded, production-ready plan with concrete implementation details.
<plan_path> #$ARGUMENTS </plan_path>
If the plan path above is empty:
Stop and ask the user for one. List available plans to help:

ls -la knowledge-base/project/plans/

Ask for the full path (e.g., knowledge-base/project/plans/2026-01-15-feat-my-feature-plan.md). Do not proceed until a valid plan file path is provided.
Read the plan file and extract:
Create a section manifest:
Section 1: [Title] - [Brief description of what to research]
Section 2: [Title] - [Brief description of what to research]
...
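The manifest step above can be sketched in shell. This is a hypothetical helper, not part of the skill itself: it assumes the plan uses `## ` markdown headings, and the sample plan created here is a stand-in for the real plan file path.

```shell
# Hypothetical sketch for building the section manifest: list the plan's
# top-level sections with their line numbers. The sample plan is a stand-in;
# point PLAN_PATH at the real plan file instead.
PLAN_PATH=$(mktemp)
printf '## Overview\n...\n## Technical Approach\n...\n' > "$PLAN_PATH"

# Emit one manifest line per "## " heading.
grep -n '^## ' "$PLAN_PATH" | while IFS=: read -r line heading; do
  echo "Section (line $line): ${heading#"## "}"
done
```

Each output line maps directly to a "Section N: [Title]" entry in the manifest.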
Step 1: Discover ALL available skills from ALL sources
# 1. Project-local skills (highest priority - project-specific)
ls .claude/skills/
# 2. User's global skills (~/.claude/)
ls ~/.claude/skills/
# 3. soleur plugin skills
ls ~/.claude/plugins/cache/*/soleur/*/skills/
# 4. ALL other installed plugins - check every plugin for skills
find ~/.claude/plugins/cache -type d -name "skills" 2>/dev/null
# 5. Also check installed_plugins.json for all plugin locations
cat ~/.claude/plugins/installed_plugins.json
Important: Check EVERY source. Don't assume soleur is the only plugin. Use skills from ANY installed plugin that's relevant.
Step 2: For each discovered skill, read its SKILL.md to understand what it does
# For each skill directory found, read its documentation
cat [skill-path]/SKILL.md
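To skim many skills quickly, the frontmatter alone is often enough. A hypothetical sketch, assuming each SKILL.md has YAML frontmatter delimited by `---` lines with `name:` and `description:` keys; the sample file stands in for a discovered skill path:

```shell
# Hypothetical sketch: extract name/description from a SKILL.md frontmatter
# so skills can be matched to plan content without reading every file in full.
skill_md=$(mktemp)
printf -- '---\nname: security-patterns\ndescription: Security review checklist\n---\nBody\n' > "$skill_md"

# Print only the frontmatter block, then keep the two matching keys.
sed -n '/^---$/,/^---$/p' "$skill_md" | grep -E '^(name|description):'
```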
Step 3: Match skills to plan content
For each skill discovered:
Step 4: Spawn a sub-agent for EVERY matched skill
CRITICAL: For EACH skill that matches, spawn a separate sub-agent and instruct it to USE that skill.
For each matched skill:
Task general-purpose: "You have the [skill-name] skill available at [skill-path].
YOUR JOB: Use this skill on the plan.
1. Read the skill: cat [skill-path]/SKILL.md
2. Follow the skill's instructions exactly
3. Apply the skill to this content:
[relevant plan section or full plan]
4. Return the skill's full output
The skill tells you what to do - follow it. Execute the skill completely."
Spawn ALL skill sub-agents in PARALLEL:
Each sub-agent:
Example spawns:
Task general-purpose: "Use the dhh-rails-style skill at ~/.claude/plugins/.../dhh-rails-style. Read SKILL.md and apply it to: [Rails sections of plan]"
Task general-purpose: "Use the frontend-design skill at ~/.claude/plugins/.../frontend-design. Read SKILL.md and apply it to: [UI sections of plan]"
Task general-purpose: "Use the agent-native-architecture skill at ~/.claude/plugins/.../agent-native-architecture. Read SKILL.md and apply it to: [agent/tool sections of plan]"
Task general-purpose: "Use the security-patterns skill at ~/.claude/skills/security-patterns. Read SKILL.md and apply it to: [full plan]"
No limit on skill sub-agents. Spawn one for every skill that could possibly be relevant.
LEARNINGS LOCATION - Check these exact folders:
knowledge-base/project/learnings/ <-- PRIMARY: Project-level learnings (created by soleur:compound)
├── performance-issues/
│ └── *.md
├── debugging-patterns/
│ └── *.md
├── configuration-fixes/
│ └── *.md
├── integration-issues/
│ └── *.md
├── deployment-issues/
│ └── *.md
└── [other-categories]/
└── *.md
Step 1: Find ALL learning markdown files
Run these commands to get every learning file:
# PRIMARY LOCATION - Project learnings
find knowledge-base/project/learnings -name "*.md" -type f 2>/dev/null
# If knowledge-base/project/learnings doesn't exist, check alternate locations:
find docs/solutions -name "*.md" -type f 2>/dev/null
find .claude/docs -name "*.md" -type f 2>/dev/null
find ~/.claude/docs -name "*.md" -type f 2>/dev/null
Step 2: Read frontmatter of each learning to filter
Each learning file has YAML frontmatter with metadata. Read the first ~20 lines of each file to get:
---
title: "N+1 Query Fix for Briefs"
category: performance-issues
tags: [activerecord, n-plus-one, includes, eager-loading]
module: Briefs
symptom: "Slow page load, multiple queries in logs"
root_cause: "Missing includes on association"
---
For each .md file, quickly scan its frontmatter:
# Read first 20 lines of each learning (frontmatter + summary)
head -20 knowledge-base/project/learnings/**/*.md
Step 3: Filter - only spawn sub-agents for LIKELY relevant learnings
Compare each learning's frontmatter against the plan:
- `tags:` - Do any tags match technologies/patterns in the plan?
- `category:` - Is this category relevant? (e.g., skip deployment-issues if the plan is UI-only)
- `module:` - Does the plan touch this module?
- `symptom:` / `root_cause:` - Could this problem occur with the plan?

SKIP learnings that are clearly not applicable, for example:
- database-migrations/ learnings
- rails-specific/ learnings
- authentication-issues/ learnings

SPAWN sub-agents for learnings that MIGHT apply:
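The frontmatter comparison in Step 3 can be sketched with grep and awk. This is a hypothetical filter, not the skill's required mechanism: it assumes the frontmatter sits between the first two `---` lines, and the keyword "activerecord" and sample learnings directory are illustrative stand-ins.

```shell
# Hypothetical filter sketch: flag learnings whose frontmatter mentions a
# technology from the plan. The temp dir stands in for
# knowledge-base/project/learnings/.
learnings=$(mktemp -d)
mkdir -p "$learnings/performance-issues"
printf -- '---\ntags: [activerecord, n-plus-one]\n---\nFix: use includes.\n' \
  > "$learnings/performance-issues/n-plus-one.md"

# Print frontmatter only (between the first two '---' lines), then match.
find "$learnings" -name '*.md' | while read -r f; do
  if awk '/^---$/{n++; next} n==1' "$f" | grep -qi 'activerecord'; then
    echo "candidate: $f"
  fi
done
```

Only files printed as `candidate:` get a sub-agent; the rest are skipped without spawning anything.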
Step 4: Spawn sub-agents for filtered learnings
For each learning that passes the filter:
Task general-purpose: "
LEARNING FILE: [full path to .md file]
1. Read this learning file completely
2. This learning documents a previously solved problem
Check if this learning applies to this plan:
---
[full plan content]
---
If relevant:
- Explain specifically how it applies
- Quote the key insight or solution
- Suggest where/how to incorporate it
If NOT relevant after deeper analysis:
- Say 'Not applicable: [reason]'
"
Spawn sub-agents in PARALLEL for all filtered learnings.
These learnings are institutional knowledge - applying them prevents repeating past mistakes.
For each identified section, launch parallel research:
Task Explore: "Research best practices, patterns, and real-world examples for: [section topic].
Find:
- Industry standards and conventions
- Performance considerations
- Common pitfalls and how to avoid them
- Documentation and tutorials
Return concrete, actionable recommendations."
Also use Context7 MCP for framework documentation:
For any technologies/frameworks mentioned in the plan, query Context7:
mcp__plugin_soleur_context7__resolve-library-id: Find library ID for [framework]
mcp__plugin_soleur_context7__query-docs: Query documentation for specific patterns
Verify API availability against installed SDK version: Context7 docs may reference APIs not yet available in the project's pinned dependency version. After recommending a specific API (e.g., getClaims()), check node_modules or Gemfile.lock to confirm the method exists in the installed version before including it in the plan.
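The version check above can be done with a simple grep of the installed package. A hypothetical sketch: the package directory and file layout are placeholders (adapt the path for npm's node_modules/ or Bundler's Gemfile.lock), and `getClaims` is just the example method from the paragraph above.

```shell
# Hypothetical sketch: confirm a recommended method exists in the installed
# package before writing it into the plan. The temp dir stands in for a real
# node_modules/<package> directory.
pkg_dir=$(mktemp -d)
printf 'export function getClaims() {}\n' > "$pkg_dir/index.js"

method="getClaims"
if grep -rq "$method" "$pkg_dir"; then
  echo "OK: $method exists in the installed version"
else
  echo "WARN: $method not found - verify before adding it to the plan"
fi
```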
Use WebSearch for current best practices:
Search for recent (2024-2026) articles, blog posts, and documentation on topics in the plan.
Step 1: Discover ALL available agents from ALL sources
# 1. Project-local agents (highest priority - project-specific)
find .claude/agents -name "*.md" 2>/dev/null
# 2. User's global agents (~/.claude/)
find ~/.claude/agents -name "*.md" 2>/dev/null
# 3. soleur plugin agents (all subdirectories)
find ~/.claude/plugins/cache/*/soleur/*/agents -name "*.md" 2>/dev/null
# 4. ALL other installed plugins - check every plugin for agents
find ~/.claude/plugins/cache -path "*/agents/*.md" 2>/dev/null
# 5. Check installed_plugins.json to find all plugin locations
cat ~/.claude/plugins/installed_plugins.json
# 6. For local plugins (isLocal: true), check their source directories
# Parse installed_plugins.json and find local plugin paths
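One possible way to parse the file, assuming jq is available and that installed_plugins.json maps plugin names to objects with `isLocal` and `path` keys - the real schema may differ, so inspect the file with `cat` first. The sample JSON stands in for ~/.claude/plugins/installed_plugins.json.

```shell
# Hypothetical sketch: list local plugins and their source directories.
plugins_json=$(mktemp)
cat > "$plugins_json" <<'EOF'
{
  "soleur": { "isLocal": false },
  "my-local-plugin": { "isLocal": true, "path": "/home/me/plugins/my-local-plugin" }
}
EOF

# Keep only entries marked isLocal, printing "name: path".
jq -r 'to_entries[] | select(.value.isLocal == true) | "\(.key): \(.value.path)"' "$plugins_json"
```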
Important: Check EVERY source. Include agents from:
- .claude/agents/
- ~/.claude/agents/

For the soleur plugin specifically:
- agents/engineering/review/* (all reviewers)
- agents/engineering/research/* (all researchers)
- agents/engineering/design/* (design agents)
- agents/engineering/workflow/* (workflow orchestrators, not reviewers)

Step 2: For each discovered agent, read its description
Read the first few lines of each agent file to understand what it reviews/analyzes.
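Skimming the agent files can be sketched as below. A hypothetical helper: the temp directory stands in for .claude/agents/ and the plugin agent directories discovered in Step 1.

```shell
# Hypothetical sketch: print the first lines of each agent file to see what
# it reviews or analyzes, without reading whole files.
agents=$(mktemp -d)
printf -- '---\nname: security-reviewer\ndescription: Reviews plans for security gaps\n---\n' \
  > "$agents/security-reviewer.md"

find "$agents" -name '*.md' | while read -r f; do
  echo "== $f"
  head -5 "$f"
done
```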
Step 3: Launch ALL agents in parallel
For EVERY agent discovered, launch a Task in parallel:
Task [agent-name]: "Review this plan using your expertise. Apply all your checks and patterns. Plan content: [full plan content]"
CRITICAL RULES: launch every discovered agent (do not skip any), and launch them all in parallel.
Step 4: Also discover and run research agents
Research agents (like best-practices-researcher, framework-docs-researcher, git-history-analyzer, repo-research-analyst) should also be run for relevant plan sections.
Collect outputs from ALL sources:
- Skill sub-agents
- Learning sub-agents (learnings created by soleur:compound)
- Research agents
- Reviewer agents

For each agent's findings, extract:
Deduplicate and prioritize:
Enhancement format for each section:
## [Original Section Title]
[Original content preserved]
### Research Insights
**Best Practices:**
- [Concrete recommendation 1]
- [Concrete recommendation 2]
**Performance Considerations:**
- [Optimization opportunity]
- [Benchmark or metric to target]
**Implementation Details:**
```[language]
// Concrete code example from research
```

**Edge Cases:**
- [Edge case and how to handle it]

**References:**
- [Link to documentation or article]
### 8. Add Enhancement Summary
At the top of the plan, add a summary section:
```markdown
## Enhancement Summary
**Deepened on:** [Date]
**Sections enhanced:** [Count]
**Research agents used:** [List]
### Key Improvements
1. [Major improvement 1]
2. [Major improvement 2]
3. [Major improvement 3]
### New Considerations Discovered
- [Important finding 1]
- [Important finding 2]
```
Write the enhanced plan:
Update the plan file in place (or, if the user prefers a separate file, append `-deepened` after `-plan`, e.g., 2026-01-15-feat-auth-plan-deepened.md).
Before finalizing, verify that every section received research insights and that the original plan content is preserved unchanged.
After writing the enhanced plan, use the AskUserQuestion tool to present these options:
Question: "Plan deepened at [plan_path]. What would you like to do next?"
Options:
- /plan_review - Get feedback from reviewers on the enhanced plan
- soleur:work - Begin implementing this enhanced plan

Based on selection:
- Show changes: git diff [plan_path] or show before/after
- /plan_review: Call the /plan_review command with the plan file path
- soleur:work: Use skill: soleur:work with the plan file path

Before (from soleur:plan):
## Technical Approach
Use React Query for data fetching with optimistic updates.
After (from /deepen-plan):
## Technical Approach
Use React Query for data fetching with optimistic updates.
### Research Insights
**Best Practices:**
- Configure `staleTime` and `cacheTime` based on data freshness requirements
- Use `queryKey` factories for consistent cache invalidation
- Implement error boundaries around query-dependent components
**Performance Considerations:**
- Enable `refetchOnWindowFocus: false` for stable data to reduce unnecessary requests
- Use `select` option to transform and memoize data at query level
- Consider `placeholderData` for instant perceived loading
**Implementation Details:**
```typescript
// Recommended query configuration
const queryClient = new QueryClient({
defaultOptions: {
queries: {
staleTime: 5 * 60 * 1000, // 5 minutes
retry: 2,
refetchOnWindowFocus: false,
},
},
});
```
**Edge Cases:**
- Call `cancelQueries` on component unmount to cancel in-flight requests
- Consider `persistQueryClient` to persist the cache across sessions

**References:**