Create optimized prompts for Claude-to-Claude pipelines with research, planning, and execution stages. Use when building prompts that produce outputs for other prompts to consume, or when running multi-stage workflows (research -> plan -> implement).
/plugin marketplace add glittercowboy/taches-cc-resources
/plugin install taches-cc-resources@taches-cc-resources
This skill inherits all available tools. When active, it can use any tool Claude has access to.
README.md
references/do-patterns.md
references/intelligence-rules.md
references/metadata-guidelines.md
references/plan-patterns.md
references/question-bank.md
references/refine-patterns.md
references/research-patterns.md
references/research-pitfalls.md
references/summary-template.md

<objective>
Every execution produces a SUMMARY.md for quick human scanning without reading full outputs.
Each prompt gets its own folder in .prompts/ with its output artifacts, enabling clear provenance and chain detection.
</objective>
<quick_start>
<workflow>
Each prompt lives in its own folder: .prompts/{number}-{topic}-{purpose}/
</workflow>
<folder_structure>
.prompts/
├── 001-auth-research/
│ ├── completed/
│ │ └── 001-auth-research.md # Prompt (archived after run)
│ ├── auth-research.md # Full output (XML for Claude)
│ └── SUMMARY.md # Executive summary (markdown for human)
├── 002-auth-plan/
│ ├── completed/
│ │ └── 002-auth-plan.md
│ ├── auth-plan.md
│ └── SUMMARY.md
├── 003-auth-implement/
│ ├── completed/
│ │ └── 003-auth-implement.md
│ └── SUMMARY.md # Do prompts create code elsewhere
├── 004-auth-research-refine/
│ ├── completed/
│ │ └── 004-auth-research-refine.md
│ ├── archive/
│ │ └── auth-research-v1.md # Previous version
│ └── SUMMARY.md
</folder_structure> </quick_start>
<context>
Prompts directory: !`[ -d ./.prompts ] && echo "exists" || echo "missing"`
Existing research/plans: !`find ./.prompts -name "*-research.md" -o -name "*-plan.md" 2>/dev/null | head -10`
Next prompt number: !`ls -d ./.prompts/*/ 2>/dev/null | wc -l | xargs -I {} expr {} + 1`
</context>
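The context line above counts existing folders to derive the next prompt number. A minimal sketch of the same computation, with the three-digit zero-padding assumed from the 001/002/003 folder names:

```bash
# Derive the next prompt number from the existing .prompts/ folders.
# The %03d padding is an assumption based on the 001-/002-/003- examples.
count=$(ls -d ./.prompts/*/ 2>/dev/null | wc -l)
next=$(printf "%03d" $((count + 1)))
echo "Next prompt folder prefix: ${next}"   # e.g. 004
```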
<automated_workflow>
<step_0_intake_gate>
<title>Adaptive Requirements Gathering</title>
<critical_first_action> BEFORE analyzing anything, check if context was provided.
IF no context provided (skill invoked without description): → IMMEDIATELY use AskUserQuestion with:
After selection, ask: "Describe what you want to accomplish" (they select "Other" to provide free text).
IF context was provided: → Check if purpose is inferable from keywords:
- implement, build, create, fix, add, refactor → Do
- plan, roadmap, approach, strategy, decide, phases → Plan
- research, understand, learn, gather, analyze, explore → Research
- refine, improve, deepen, expand, iterate, update → Refine

→ If unclear, ask the Purpose question above as first contextual question
→ If clear, proceed to adaptive_analysis with inferred purpose </critical_first_action>
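A minimal sketch of the keyword routing above, assuming the description is available as a plain string; `infer_purpose` is a hypothetical helper, not part of the skill:

```bash
# Hypothetical helper: route a free-text description to a purpose by keyword.
infer_purpose() {
  local desc
  desc=$(echo "$1" | tr '[:upper:]' '[:lower:]')
  case "$desc" in
    *implement*|*build*|*create*|*fix*|*add*|*refactor*)          echo "do" ;;
    *plan*|*roadmap*|*approach*|*strategy*|*decide*|*phases*)     echo "plan" ;;
    *research*|*understand*|*learn*|*gather*|*analyze*|*explore*) echo "research" ;;
    *refine*|*improve*|*deepen*|*expand*|*iterate*|*update*)      echo "refine" ;;
    *) echo "unclear" ;;   # fall back to asking the Purpose question
  esac
}

infer_purpose "research JWT auth options"   # -> research
```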
<adaptive_analysis> Extract and infer:
- Topic identifier (e.g., auth, stripe-payments). If topic identifier not obvious, ask:
For Refine purpose, also identify target output from .prompts/*/ to improve.
</adaptive_analysis>
<chain_detection>
Scan .prompts/*/ for existing *-research.md and *-plan.md files.
If found:
Match by topic keyword when possible (e.g., "auth plan" → suggest auth-research.md). </chain_detection>
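A sketch of that scan, assuming output files follow the {topic}-research.md / {topic}-plan.md naming shown in the folder structure:

```bash
# Find prior research/plan outputs whose names mention the requested topic.
topic="auth"
find ./.prompts \( -name "*-research.md" -o -name "*-plan.md" \) 2>/dev/null \
  | grep -i "$topic" \
  | while read -r match; do
      echo "Possible chain input: $match"
    done
```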
<contextual_questioning> Generate 2-4 questions using AskUserQuestion based on purpose and gaps.
Load questions from: references/question-bank.md
Route questions by purpose (Do / Plan / Research / Refine). </contextual_questioning>
<decision_gate> After receiving answers, present decision gate using AskUserQuestion:
Loop until "Proceed" selected. </decision_gate>
<finalization> After "Proceed" selected, state confirmation:
"Creating a {purpose} prompt for: {topic}
Folder: .prompts/{number}-{topic}-{purpose}/
References: {list any chained files}"
Then proceed to generation. </finalization> </step_0_intake_gate>
<step_1_generate>
<title>Generate Prompt</title>
Load purpose-specific patterns:
- Do → references/do-patterns.md
- Plan → references/plan-patterns.md
- Research → references/research-patterns.md
- Refine → references/refine-patterns.md
Load intelligence rules: references/intelligence-rules.md
<prompt_structure> All generated prompts include:
For Research and Plan prompts, output must include:
- <confidence> - How confident in findings
- <dependencies> - What's needed to proceed
- <open_questions> - What remains uncertain
- <assumptions> - What was assumed

All prompts must create SUMMARY.md (see references/summary-template.md).
</prompt_structure>
<file_creation>
- .prompts/{number}-{topic}-{purpose}/completed/ subfolder
- .prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md (prompt file)
- .prompts/{number}-{topic}-{purpose}/{topic}-{purpose}.md (output, written when the prompt runs)
</file_creation>
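A minimal scaffold of the paths above; the heredoc placeholder stands in for the generated prompt body:

```bash
# Scaffold the prompt folder using the documented layout.
number="002"; topic="auth"; purpose="plan"
dir=".prompts/${number}-${topic}-${purpose}"

mkdir -p "${dir}/completed"                              # archive destination
cat > "${dir}/${number}-${topic}-${purpose}.md" <<'EOF'
<!-- generated prompt body goes here -->
EOF
# ${dir}/${topic}-${purpose}.md and SUMMARY.md are written when the prompt runs.
```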
</step_1_generate>
<step_2_present>
<title>Present Decision Tree</title>
After saving prompt(s), present inline (not AskUserQuestion):
<single_prompt_presentation>
Prompt created: .prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md
What's next?
1. Run prompt now
2. Review/edit prompt first
3. Save for later
4. Other
Choose (1-4): _
</single_prompt_presentation>
<multi_prompt_presentation>
Prompts created:
- .prompts/001-auth-research/001-auth-research.md
- .prompts/002-auth-plan/002-auth-plan.md
- .prompts/003-auth-implement/003-auth-implement.md
Detected execution order: Sequential (002 references 001 output, 003 references 002 output)
What's next?
1. Run all prompts (sequential)
2. Review/edit prompts first
3. Save for later
4. Other
Choose (1-4): _
</multi_prompt_presentation> </step_2_present>
<step_3_execute>
<title>Execution Engine</title>
<execution_modes>
<single_prompt> Straightforward execution of one prompt:
- Write output to .prompts/{number}-{topic}-{purpose}/{topic}-{purpose}.md
- Archive the prompt file to the completed/ subfolder
</single_prompt>
<sequential_execution> For chained prompts where each depends on previous output.
<progress_reporting> Show progress during execution:
Executing 1/3: 001-auth-research... ✓
Executing 2/3: 002-auth-plan... ✓
Executing 3/3: 003-auth-implement... (running)
</progress_reporting> </sequential_execution>
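A sketch of the sequential chain, assuming each prompt is run headlessly with the claude CLI's -p/--print mode (an assumption; the skill normally executes prompts itself):

```bash
# Run chained prompts in order; stop at the first failure and archive each success.
set -e   # abort the chain as soon as a step fails
for dir in .prompts/001-auth-research .prompts/002-auth-plan .prompts/003-auth-implement; do
  name=$(basename "$dir")
  echo "Executing ${name}..."
  claude -p "$(cat "$dir/${name}.md")"                                  # assumption: headless run via claude -p
  mkdir -p "$dir/completed" && mv "$dir/${name}.md" "$dir/completed/"   # archive immediately
done
```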
<parallel_execution> For independent prompts with no dependencies.
<failure_handling> Unlike sequential execution, parallel execution continues even if some prompts fail (see <parallel_failure> below). </failure_handling>
</parallel_execution>
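A sketch of the parallel mode described above, under the same claude -p assumption as the sequential sketch; failures are collected rather than stopping the batch:

```bash
# Launch independent prompts in parallel and report results at the end.
entries=()
for dir in .prompts/001-api-research .prompts/002-db-research .prompts/003-ui-research; do
  name=$(basename "$dir")
  claude -p "$(cat "$dir/${name}.md")" &    # assumption: headless run via claude -p
  entries+=("$!:$dir")
done
for entry in "${entries[@]}"; do
  pid="${entry%%:*}"; dir="${entry#*:}"
  if wait "$pid"; then echo "✓ $(basename "$dir")"; else echo "✗ $(basename "$dir")"; fi
done
# Archive successful prompts only after all results are collected (see <archiving> below).
```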
<mixed_dependencies> For complex DAGs (e.g., two parallel research prompts → one plan). </mixed_dependencies>
</execution_modes>
<dependency_detection>
<automatic_detection> Scan prompt contents for @ references to determine dependencies:
- @.prompts/{number}-{topic}/ patterns
<inference_rules> If no explicit @ references found, infer from purpose:
Override with explicit references when present. </inference_rules> </automatic_detection>
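A sketch of the @-reference scan, assuming references appear literally as @.prompts/{number}-{topic}/ inside each prompt body:

```bash
# List explicit dependencies by grepping each prompt file for @.prompts/ references.
for prompt in .prompts/*/[0-9]*.md; do
  deps=$(grep -o '@\.prompts/[0-9]\{3\}-[a-z-]*' "$prompt" | sort -u | tr '\n' ' ')
  echo "$(basename "$prompt") depends on: ${deps:-none}"
done
```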
<missing_dependencies> If a prompt references output that doesn't exist:
- Check .prompts/*/ for an already-completed prompt that produced it
</missing_dependencies>
</dependency_detection>
<validation> Validate that Research and Plan outputs include <confidence>, <dependencies>, <open_questions>, and <assumptions>.
<validation_failure> If validation fails, report which required sections are missing and offer a retry (see <parallel_failure> below). </validation_failure>
</validation>
<failure_handling> <sequential_failure> Stop the chain immediately:
✗ Failed at 2/3: 002-auth-plan
Completed:
- 001-auth-research ✓ (archived)
Failed:
- 002-auth-plan: Output file not created
Not started:
- 003-auth-implement
What's next?
1. Retry 002-auth-plan
2. View error details
3. Stop here (keep completed work)
4. Other
</sequential_failure>
<parallel_failure> Continue others, report all results:
Parallel execution completed with errors:
✓ 001-api-research (archived)
✗ 002-db-research: Validation failed - missing <confidence> tag
✓ 003-ui-research (archived)
What's next?
1. Retry failed prompt (002)
2. View error details
3. Continue without 002
4. Other
</parallel_failure> </failure_handling>
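The parallel failure above flags a missing <confidence> tag; a minimal validation sketch for the four required sections in Research and Plan outputs:

```bash
# Check that a Research/Plan output contains the required reflection sections.
validate_output() {
  local file="$1" missing=()
  for tag in confidence dependencies open_questions assumptions; do
    grep -q "<${tag}>" "$file" || missing+=("$tag")
  done
  if [ "${#missing[@]}" -gt 0 ]; then
    echo "Validation failed - missing: ${missing[*]}"
    return 1
  fi
}

validate_output .prompts/002-db-research/db-research.md || echo "treat as failed, offer retry"
```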
<archiving>
<archive_timing>
- **Sequential**: Archive each prompt immediately after successful completion
  - Provides clear state if execution stops mid-chain
- **Parallel**: Archive all at end after collecting results
  - Keeps prompts available for potential retry
</archive_timing>
<archive_operation> Move prompt file to completed subfolder:
mv .prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md \
.prompts/{number}-{topic}-{purpose}/completed/
Output file stays in place (not moved). </archive_operation> </archiving>
<result_presentation> <single_result>
✓ Executed: 001-auth-research
✓ Created: .prompts/001-auth-research/SUMMARY.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# Auth Research Summary
**JWT with jose library and httpOnly cookies recommended**
## Key Findings
• jose outperforms jsonwebtoken with better TypeScript support
• httpOnly cookies required (localStorage is XSS vulnerable)
• Refresh rotation is OWASP standard
## Decisions Needed
None - ready for planning
## Blockers
None
## Next Step
Create auth-plan.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
What's next?
1. Create planning prompt (auth-plan)
2. View full research output
3. Done
4. Other
Display the actual SUMMARY.md content inline so user sees findings without opening files. </single_result>
<chain_result>
✓ Chain completed: auth workflow
Results:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
001-auth-research
**JWT with jose library and httpOnly cookies recommended**
Decisions: None • Blockers: None
002-auth-plan
**4-phase implementation: types → JWT core → refresh → tests**
Decisions: Approve 15-min token expiry • Blockers: None
003-auth-implement
**JWT middleware complete with 6 files created**
Decisions: Review before Phase 2 • Blockers: None
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
All prompts archived. Full summaries in .prompts/*/SUMMARY.md
What's next?
1. Review implementation
2. Run tests
3. Create new prompt chain
4. Other
For chains, show condensed one-liner from each SUMMARY.md with decisions/blockers flagged. </chain_result> </result_presentation>
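A sketch of building those condensed one-liners, assuming each SUMMARY.md leads with the bolded recommendation line shown in the examples:

```bash
# Pull the first bolded line of each SUMMARY.md as a one-line digest.
for summary in .prompts/*/SUMMARY.md; do
  headline=$(grep -m1 '^\*\*' "$summary")
  echo "$(basename "$(dirname "$summary")"): ${headline:-no summary line found}"
done
```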
<special_cases>
<re_running_completed> If user wants to re-run an already-completed prompt:
- Move the prompt file back out of the completed/ subfolder
- Back up the existing output to {output}.bak (see the backup sketch below)
</re_running_completed>
<output_conflicts> If output file already exists:
- Rename the existing file to {filename}.bak before writing the new output
</output_conflicts>
<commit_handling> After successful execution, do not commit automatically. Exception: if the user explicitly requests a commit, stage and commit the results. </commit_handling>
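A minimal backup sketch for the re-run and output-conflict rules above (paths follow the documented layout):

```bash
# Re-run an already-completed prompt: restore the prompt file and back up the old output.
dir=".prompts/001-auth-research"
prompt="001-auth-research.md"; output="auth-research.md"

[ -f "$dir/completed/$prompt" ] && mv "$dir/completed/$prompt" "$dir/"   # un-archive the prompt
[ -f "$dir/$output" ] && mv "$dir/$output" "$dir/$output.bak"            # keep the previous output
```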
<recursive_prompts> If a prompt's output includes instructions to create more prompts:
</automated_workflow>
<reference_guides>
Prompt patterns by purpose:
- references/do-patterns.md
- references/plan-patterns.md
- references/research-patterns.md
- references/refine-patterns.md

Shared templates:
- references/summary-template.md

Supporting references:
- references/question-bank.md
- references/intelligence-rules.md
- references/metadata-guidelines.md
- references/research-pitfalls.md
</reference_guides>
<success_criteria>
Prompt Creation:
- Prompt folder created in .prompts/ with correct naming

Execution (if user chooses to run):
- Output created and the prompt archived to the completed/ subfolder

Research Quality (for Research prompts):
- Output includes <confidence>, <dependencies>, <open_questions>, and <assumptions>
</success_criteria>