From compound-engineering
Craft a comprehensive prompt for a frontier reasoning model — clarify intent, gather context, assemble a non-prescriptive prompt
npx claudepluginhub mberto10/mberto-compound

Usage: /craft-prompt [--interview|--review <project>] <topic or question>

# Craft Prompt
Generate a high-quality prompt for a frontier reasoning model. The prompt will give the model rich context and space to think — not prescribed conclusions.
**Input:** $ARGUMENTS

---

## Critical Constraint: The Blind Model

The reasoning model you are crafting this prompt for has **ZERO access** to anything outside the prompt text. It cannot read files, access Linear, browse the web, or see conversation history. **Everything it needs to reason well must be inlined in the prompt.** If context is referenced but not included, the model will hallucinate or give generic advice.
This constraint drives every decision in this command.
Start with the real question — not logistics.
From $ARGUMENTS, extract the topic, the stuck point, and any references to prior investigation work (/investigate gap maps in ./compound-discoveries/*-gaps-*.md).

If the stuck point is unclear, ask ONE question before anything else:
"What would change if the reasoning model gave you a perfect answer? What decision would you make, or what direction would you take?"
Mode selection:

- --review <project> flag is present → review mode
- --interview flag is present → interview mode
- Otherwise → assemble mode

| Mode | Context Source | Best For |
|---|---|---|
| Assemble | Systems — files, APIs, web, investigation artifacts | Technical analysis, system comparison, post-investigation handoff |
| Interview | The user — via multi-round AskUserQuestion | Organizational design, strategy, personal workflows, domain expertise |
| Review | Compound-engineering knowledge base + git/Linear + short interview | Recurring project reviews — architecture critique, optimization proposals, test strategies |
If $ARGUMENTS is empty or too vague to proceed, ask the user to describe what they want the reasoning model to help with.
Before scoping, check if investigation work already exists.
Glob for:

- ./compound-discoveries/*-gaps-*.md — recent gap analysis artifacts
- ./compound-discoveries/*-reflection-*.md — recent reflection artifacts (from /investigate)

If artifacts exist: use them as the foundation for context gathering (see Step 3).
If no artifacts exist: Proceed to Step 2.
Ask 3-4 questions via AskUserQuestion. Lead with intent, not logistics. Use the prompt-craft skill's Phase 2 guidance.
Required decisions (get these answered, directly or via inference):
If investigation artifacts were found in Step 1b, skip questions you can already answer from the gap map. Focus remaining questions on what the reasoning model should specifically help with.
Maximum 4 questions in this phase. Use the user's initial input + investigation artifacts to pre-fill what you can infer.
When context lives in the user's head, run a multi-round interview before gathering any system context. Follow the prompt-craft skill's Phase 2b interview methodology.
If the user references existing documents (Linear docs, files, specs), read them FIRST. They are the baseline — the interview surfaces how reality differs.
Run 3-5 rounds of 3-4 AskUserQuestion calls. Each round builds on the previous:
Round 1: Reality check — Team composition, current state, organizational maturity. Compare against any reference documents: "The doc says X — is that how it actually works?"
Round 2: How work flows — Actual intake, pain points, what's working vs. broken. Follow the user's energy — if they highlight a tension, dig into it.
Round 3: Structural tensions — Specific role boundaries, decision confusion, authority gaps. Reference what surfaced in Round 2.
Round 4: Context and constraints — Industry, authority level, success criteria, external references.
Round 5+ (if needed): Follow remaining threads until you can fully describe the user's reality to the reasoning model.
Stop interviewing when you can answer the key questions without guessing.
Throughout the interview, accumulate the user's answers — especially free-text notes where they speak in their own words. These become the context section of the prompt. The user's lived description of their reality is more valuable than any document.
For recurring technical reviews of a project that uses compound-engineering.
The profile IS the existing knowledge artifacts — no separate file needed:
1. Glob: subsystems_knowledge/**/*.yaml → read ALL specs
2. Read: subsystems_knowledge/architecture.yaml (if exists)
3. Read: ORCHESTRATOR.md (if exists)
Include the full subsystem specs in the prompt — not summaries. Trim only large ASCII diagrams from architecture.overview if needed for space.
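The three gather steps above could be scripted roughly as follows. This is a sketch, not the command's actual implementation; the demo spec file is created only to make the example self-contained.

```shell
# Sketch: inline the static profile (all subsystem specs, plus architecture.yaml
# and ORCHESTRATOR.md when present). Demo content below is illustrative.
mkdir -p subsystems_knowledge/ingest
printf 'name: ingest\ndescription: pulls raw events\n' > subsystems_knowledge/ingest/spec.yaml

# 1. All subsystem specs, full text — not summaries
profile=$(find subsystems_knowledge -name '*.yaml' -exec cat {} +)

# 2-3. Optional cross-cutting files
for extra in subsystems_knowledge/architecture.yaml ORCHESTRATOR.md; do
  if [ -f "$extra" ]; then profile="$profile
$(cat "$extra")"; fi
done

printf '%s\n' "$profile"
rm -rf subsystems_knowledge   # cleanup for the demo
```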
Glob: scripts/reasoning-prompt-{project}-v*.md
If previous versions exist: read the latest and carry its findings forward into the ## Review History section.

If no previous versions exist: this is v1; note it as the initial review.
Run these in parallel:

```shell
# Recent changes (since last review, or last 30 commits)
git log --oneline -30   # or: git log --oneline --since={last_review_date}

# Change shape
git diff --stat {last_review_commit}..HEAD   # or just recent stats

# Current branch state
git status
```
Pull from Linear:
Optionally run tier0 tests from subsystem specs to capture current pass/fail state.
Unlike full interview mode, review interviews are short and focused on what's changed:
Round 1: What has changed since the last review, and where is the current bottleneck?
Round 2 (if needed): Follow up on the bottleneck — what have they tried, what's their current thinking.
Combine: review history + static profile + dynamic context + interview answers.
Write to scripts/reasoning-prompt-{project}-vN-{date}.md where N increments from the last version.
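The version increment can be computed mechanically. A minimal sketch, assuming the naming convention above; the project name is hypothetical:

```shell
# Sketch: find the highest existing version for a project, build the next filename.
project="demo-app"
last=$(ls scripts/reasoning-prompt-"${project}"-v*.md 2>/dev/null \
  | sed -E 's/.*-v([0-9]+).*/\1/' | sort -n | tail -n 1)
next=$(( ${last:-0} + 1 ))          # no previous versions → v1
out="scripts/reasoning-prompt-${project}-v${next}-$(date +%F).md"
echo "$out"
```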
The prompt structure for reviews:

```markdown
# Review: {project} — v{N}

## Your Role
[Open framing — set by interview focus or left broad]

## Review History
[Previous review findings and their status — compounds across versions]

## Architecture (Static Profile)

### Subsystem Map
[From subsystem specs — full specs inlined]

### Cross-Subsystem Invariants
[From architecture.yaml]

### Conventions, Decisions, Known Risks
[From ORCHESTRATOR.md]

## Current State (Dynamic)

### Recent Changes
[git log since last review]

### Milestone & Issues
[Linear state]

### Test State
[Current pass/fail if gathered]

## Current Focus
[From interview — what the user wants analyzed this review]

## Your Task
[Open-ended, shaped by interview focus but not prescriptive]
```
Based on clarified intent, determine which context sources to pull from. Remember: the reasoning model is completely blind — everything must be inlined.
In interview mode, this step is optional — only gather from systems if the interview surfaces specific artifacts that should be included.
In review mode, static and dynamic context is already gathered in Step 2c. This step is only needed if the interview surfaces additional sources (e.g., "also look at how project X does it" or "check the Langfuse scores").
If Step 1b found investigation artifacts, use them as the foundation:
Don't just reference these artifacts — distill them into the prompt. The reasoning model can't read files.
Based on the stuck point + investigation artifacts + scoping answers, identify what's still missing:
| What's Needed | Source | How to Gather |
|---|---|---|
| Project identity / product concept | Linear project description, README, PRODUCT.md | Linear MCP tools, Read |
| Subsystem architecture | subsystems_knowledge/**/*.yaml | Read full specs for relevant subsystems |
| Cross-subsystem constraints | architecture.yaml, ORCHESTRATOR.md | Read directly |
| Actual code (key interfaces, schemas) | Source files identified by starter_files or paths.owned | Read targeted excerpts — NOT entire files |
| Dependency context | Specs of subsystems that the focal subsystems depend on | Read at minimum: description, public_api, invariants |
| Project state | Linear milestones, open issues, blockers | Linear MCP tools |
| External references | Docs, blog posts, alternative approaches | WebSearch, WebFetch |
| Comparative systems | Other implementations of the same idea | Gather in parallel |
Use the Agent tool to parallelize independent research:
Each agent prompt should say: "This is a research task — return the raw content, don't summarize. I need accurate details for a reasoning model prompt. The reasoning model has NO access to the codebase — everything must be self-contained in the prompt."
Follow the prompt-craft skill's Phase 4 structure:
Set the reasoning model's perspective. Keep it open:
Organize by system/implementation, not by theme. For each:
If comparative: factual difference tables are context. Interpretive judgments are direction — omit them.
State what the user wants analyzed:
Minimal output scaffolding:
Before writing the file, run two verification passes from the prompt-craft skill:
Pass 1: Self-Containment (The Blind Model Check)
Read the entire assembled prompt and ask: "If a brilliant developer who has never seen this project reads this, can they give specific, actionable advice?"
If any check fails, fix it before proceeding. A prompt that fails self-containment will produce generic advice.
Pass 2: Anti-Prescriptive Check
If any check fails, revise before writing.
Write the prompt to scripts/reasoning-prompt-{slug}.md, where {slug} is a kebab-case topic descriptor.
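Deriving the kebab-case slug can be sketched as below; the topic string is illustrative, not part of the command:

```shell
# Sketch: turn a free-text topic into a kebab-case slug for the filename.
topic="Compare caching strategies for the API layer"
slug=$(printf '%s' "$topic" \
  | tr '[:upper:]' '[:lower:]' \
  | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//')
echo "scripts/reasoning-prompt-${slug}.md"
# prints: scripts/reasoning-prompt-compare-caching-strategies-for-the-api-layer.md
```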
Present a summary to the user:

```
PROMPT CRAFTED
===
File: scripts/reasoning-prompt-{slug}.md
Target: {reasoning model}
Context included:
- {source 1}: {what was included}
- {source 2}: {what was included}
Problem framed as: {one-line summary of the problem section}
Output type: {what the model is asked to produce}
Prompt length: ~{word count} words
Context/Task ratio: {percentage}
```
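The length and ratio figures can be computed rather than estimated. A sketch, assuming the prompt's task section starts at a "## Your Task" heading (the sample file content is illustrative):

```shell
# Sketch: word count and a rough context/task split for the summary block.
file=$(mktemp)
printf '## Context\nalpha beta gamma delta epsilon zeta\n## Your Task\nanalyze the trade-offs\n' > "$file"

total=$(wc -w < "$file")                          # whole prompt
task=$(sed -n '/^## Your Task/,$p' "$file" | wc -w)  # task section onward
echo "~${total} words, context share: $(( (total - task) * 100 / total ))%"
# prints: ~14 words, context share: 57%

rm -f "$file"
```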
Ask if the user wants to adjust anything before using it.