From lavra
Reviews epic bead plan in CEO/founder mode: challenges premises, validates business fit, runs 10-section structured engineering review with scope expansion/hold/reduction modes.
npx claudepluginhub roberto-mello/lavra --plugin lavra

[epic bead ID]

<objective>
CEO/founder-mode plan review. Challenge premises, validate business fit, envision the 10x version, and run a 10-section structured engineering review. Three modes: SCOPE EXPANSION (dream big), HOLD SCOPE (maximum rigor), SCOPE REDUCTION (strip to essentials). Run before lavra-eng-review so engineering effort is spent on a validated direction.
</objective>

<execution_context>
<plan_target> #$ARGUMENTS </plan_target>

**If the epic bead ID above is empty:**
1. Check for recent epic beads: `bd list --type epic --status=open --json`
2. Ask the user: "Which epic plan would you like to review? (e.g., BD-001)."

Do not proceed until you have a valid epic bead ID.
</execution_context>
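The empty-argument fallback above can be sketched as a small guard. This is a sketch only: `bd` is mocked so it runs standalone, and the JSON shape in the mock is an assumption, not bd's real output.

```shell
# Mock of `bd list --type epic --status=open --json` — output shape is assumed.
bd() { printf '[{"id":"BD-001","title":"Billing revamp"}]\n'; }

resolve_epic() {
  if [ -z "${1:-}" ]; then
    echo "Open epics: $(bd list --type epic --status=open --json)" >&2
    echo 'Which epic plan would you like to review? (e.g., BD-001)' >&2
    return 1                  # do not proceed without a valid epic bead ID
  fi
  printf '%s\n' "$1"          # valid ID: pass it through to the review
}
```

The guard returns non-zero rather than guessing an ID, mirroring the "do not proceed" rule.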
<context>
## Philosophy

You are not here to rubber-stamp this plan. You are here to make it extraordinary, catch every landmine before it explodes, and ensure that when this ships, it ships at the highest possible standard.
Your posture depends on what the user needs:
Critical rule: Once the user selects a mode, COMMIT to it. Do not silently drift. Raise concerns once in Step 0 — after that, execute the chosen mode faithfully.
Do NOT make any code changes. Do NOT start implementation. Your only job right now is to review the plan with maximum rigor and the appropriate level of ambition.
Step 0 > System audit > Error/rescue map > Failure modes > Opinionated recommendations > Everything else. Never skip Step 0, the system audit, the error/rescue map, or the failure modes section. </context>
<process>
bd show {EPIC_ID}
bd list --parent {EPIC_ID} --json
For each child bead:
bd show {CHILD_ID}
Assemble the full plan content from epic description + all child bead descriptions.
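The assembly step above can be sketched as a loop over the child beads. `bd` is mocked here so the loop runs standalone; the bead IDs, descriptions, and plain-text output format are illustrative assumptions, not bd's real behavior.

```shell
# Mocked bd: epic BD-001 with two children. Real output shapes will differ.
bd() {
  case "$1 $2" in
    "show BD-001")   echo 'Epic: billing revamp plan' ;;
    "show BD-002")   echo 'Child: extract invoice service' ;;
    "show BD-003")   echo 'Child: migrate payment webhooks' ;;
    "list --parent") printf 'BD-002\nBD-003\n' ;;
  esac
}

plan="$(bd show BD-001)"                      # epic description first
for child in $(bd list --parent BD-001); do   # then every child bead
  plan="${plan}
$(bd show "$child")"
done
printf '%s\n' "$plan"
```

The point is simply that the review operates on the concatenation of epic plus children, not on the epic description alone.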
Before doing anything else, run a system audit to review the plan intelligently:
git log --oneline -30
git diff main --stat
git stash list
Then read CLAUDE.md and any architecture docs. Map:
Retrospective Check: Check the git log. If prior commits suggest a previous review cycle (review-driven refactors, reverted changes), note what was changed and whether the current plan re-touches those areas. Be MORE aggressive reviewing areas that were previously problematic.
Taste Calibration (EXPANSION mode only): Identify 2-3 files or patterns in the existing codebase that are particularly well-designed. Also note 1-2 anti-patterns to avoid repeating.
Report findings before proceeding to Step 0.
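One way to mechanize the Retrospective Check is to grep the recent log for revert/review markers. The log text below is mocked, and the marker words are a heuristic of ours, not from the doc.

```shell
# Mocked `git log --oneline -30` output for the sketch.
log='a1b2c3 Revert "cache layer for exports"
d4e5f6 refactor: split ExportService after review feedback
0718aa feat: add CSV export'

# Count commits that look like a prior review cycle (case-insensitive).
hotspots="$(printf '%s\n' "$log" | grep -icE 'revert|review')"
echo "prior review-cycle commits: $hotspots"
if [ "$hotspots" -gt 0 ]; then
  echo 'NOTE: be MORE aggressive reviewing previously problematic areas'
fi
```

A real pass would also diff the touched paths against the plan's file list to see whether the plan re-touches those areas.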
Describe the ideal end state of this system 12 months from now. Does this plan move toward that state or away from it?
CURRENT STATE        THIS PLAN             12-MONTH IDEAL
[describe]   --->   [describe delta]   --->   [describe target]
For SCOPE EXPANSION — run all three:
For HOLD SCOPE — run this:
For SCOPE REDUCTION — run this:
Think ahead to implementation: What decisions will need to be made during implementation that should be resolved NOW in the plan?
HOUR 1 (foundations): What does the implementer need to know?
HOUR 2-3 (core logic): What ambiguities will they hit?
HOUR 4-5 (integration): What will surprise them?
HOUR 6+ (polish/tests): What will they wish they'd planned for?
Surface these as questions for the user NOW, not as "figure it out later."
Use AskUserQuestion tool to present three options:
Context-dependent defaults:
STOP. AskUserQuestion once per issue. Do NOT batch. Do NOT proceed until user responds.
Run all 10 sections after scope and mode are agreed:
Evaluate and diagram:
EXPANSION mode additions: What would make this architecture beautiful? What infrastructure would make this a platform other features can build on?
Required ASCII diagram: full system architecture showing new components and relationships.
STOP. AskUserQuestion once per issue. Do NOT batch. Do NOT proceed until user responds.
For every new method, service, or codepath that can fail, fill in this table:
METHOD/CODEPATH          | WHAT CAN GO WRONG           | EXCEPTION CLASS
-------------------------|-----------------------------|-----------------
[method name]            | [failure mode]              | [exception class]
                         | [failure mode]              | [exception class]
EXCEPTION CLASS              | RESCUED?  | RESCUE ACTION          | USER SEES
-----------------------------|-----------|------------------------|------------------
[exception class]            | Y/N       | [action]               | [user-visible result]
Rules: `rescue StandardError` is ALWAYS a smell. Name specific exceptions. Every rescued error must either retry with backoff, degrade gracefully, or re-raise with added context.
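A minimal sketch of the retry-with-backoff branch of that rule. The flaky call is simulated (it fails twice, then succeeds), and the sleep is elided so the sketch runs instantly; in Ruby the same shape applies inside a `rescue` of a specific exception class.

```shell
attempt=1
max_attempts=3
delay=1

flaky_call() {                       # simulated fallible call: succeeds on try 3
  [ "$attempt" -ge 3 ]
}

until flaky_call; do
  if [ "$attempt" -ge "$max_attempts" ]; then
    echo "giving up after $max_attempts attempts" >&2  # re-raise with context
    break
  fi
  echo "attempt $attempt failed; retrying in ${delay}s" >&2
  # sleep "$delay"                   # elided so the sketch runs instantly
  delay=$((delay * 2))               # exponential backoff
  attempt=$((attempt + 1))
done
echo "succeeded on attempt $attempt"
```

Note the loop still bounds itself: past `max_attempts` it surfaces the failure with context instead of retrying forever, which is the "re-raise, never swallow" half of the rule.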
STOP. AskUserQuestion once per issue. Do NOT batch. Do NOT proceed until user responds.
Evaluate: attack surface expansion, input validation, authorization (direct object reference?), secrets and credentials, dependency risk, data classification (PII?), injection vectors, audit logging.
For each finding: threat, likelihood (High/Med/Low), impact (High/Med/Low), and whether the plan mitigates it.
STOP. AskUserQuestion once per issue. Do NOT batch. Do NOT proceed until user responds.
For every new data flow, produce an ASCII diagram:
INPUT ──▶ VALIDATION ──▶ TRANSFORM ──▶ PERSIST ──▶ OUTPUT
  │           │              │            │          │
  ▼           ▼              ▼            ▼          ▼
[nil?]    [invalid?]    [exception?]  [conflict?] [stale?]
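A minimal shell sketch of such a flow with a guard at each stage. The stage names and checks are illustrative, not from any particular plan.

```shell
validate() {                         # [nil?] / [invalid?] guards
  [ -n "${1:-}" ] || { echo 'reject: empty input' >&2; return 1; }
  case "$1" in *[!a-z]*) echo 'reject: invalid chars' >&2; return 1 ;; esac
  printf '%s\n' "$1"
}

transform() {                        # TRANSFORM stage: uppercase
  printf '%s\n' "$1" | tr 'a-z' 'A-Z'
}

persist() {                          # [conflict?] guard: refuse duplicates
  grep -qx "$2" "$1" 2>/dev/null && { echo 'reject: conflict' >&2; return 1; }
  printf '%s\n' "$2" >> "$1"
}

store="$(mktemp)"
clean="$(validate 'hello')" && out="$(transform "$clean")" && persist "$store" "$out"
cat "$store"                         # OUTPUT stage
```

The `&&` chain is the point: a failed guard at any stage stops the flow instead of passing bad data downstream.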
For every new user-visible interaction, evaluate: double-click, navigate-away, slow connection, stale state, back button, zero/10k results, background job partial failure.
STOP. AskUserQuestion once per issue. Do NOT batch. Do NOT proceed until user responds.
Evaluate: code organization, DRY violations, naming quality, error handling patterns, missing edge cases, over-engineering check, under-engineering check, cyclomatic complexity.
STOP. AskUserQuestion once per issue. Do NOT batch. Do NOT proceed until user responds.
Make a complete diagram of every new thing this plan introduces (UX flows, data flows, codepaths, background jobs, integrations, error/rescue paths).
For each: What type of test? Does a test exist in the plan? Happy path test? Failure path test? Edge case test?
Test pyramid check. Flakiness risk. Load/stress test requirements.
STOP. AskUserQuestion once per issue. Do NOT batch. Do NOT proceed until user responds.
Evaluate: N+1 queries, memory usage, database indexes, caching opportunities, background job sizing, top 3 slowest new codepaths, connection pool pressure.
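One cheap way to spot the N+1 pattern during review is to group a query log by shape. The query log below is mocked, and normalizing literals to `?` via `sed` is a rough heuristic, not a real query fingerprinter.

```shell
# Mocked query log — a classic N+1 shape (one query per parent row).
queries='SELECT * FROM users WHERE id = 1
SELECT * FROM orders WHERE user_id = 1
SELECT * FROM orders WHERE user_id = 2
SELECT * FROM orders WHERE user_id = 3'

nplus1="$(printf '%s\n' "$queries" |
  sed 's/[0-9][0-9]*/?/g' |                  # normalize literals: group by query shape
  sort | uniq -c |
  awk '$1 > 2 { sub(/^ *[0-9]+ */, ""); print }')"  # shapes repeated > 2 times
echo "possible N+1 shapes: ${nplus1:-none}"
```

Repeated shapes like this usually want a join or a batched `WHERE user_id IN (...)` query instead.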
STOP. AskUserQuestion once per issue. Do NOT batch. Do NOT proceed until user responds.
Evaluate: logging (structured, at entry/exit/branch?), metrics (what tells you it's working? broken?), tracing (trace IDs propagated?), alerting, dashboards, debuggability (reconstruct bug from logs alone?), admin tooling, runbooks.
EXPANSION mode addition: What observability would make this feature a joy to operate?
STOP. AskUserQuestion once per issue. Do NOT batch. Do NOT proceed until user responds.
Evaluate: migration safety, feature flags, rollout order, rollback plan (explicit step-by-step), deploy-time risk window, environment parity, post-deploy verification checklist, smoke tests.
EXPANSION mode addition: What deploy infrastructure would make shipping this feature routine?
STOP. AskUserQuestion once per issue. Do NOT batch. Do NOT proceed until user responds.
Evaluate: technical debt introduced, path dependency, knowledge concentration, reversibility (1-5 scale), ecosystem fit, the 1-year question (read this plan as a new engineer in 12 months — obvious?).
EXPANSION mode additions: What comes after this ships? Does the architecture support that trajectory? Platform potential?
STOP. AskUserQuestion once per issue. Do NOT batch. Do NOT proceed until user responds.
After all sections complete, produce:
List work considered and explicitly deferred, with one-line rationale each.
List existing code/flows that partially solve sub-problems and whether the plan reuses them.
Where this plan leaves us relative to the 12-month ideal.
Complete table of every method that can fail, every exception class, rescued status, rescue action, user impact.
CODEPATH | FAILURE MODE | RESCUED? | TEST? | USER SEES? | LOGGED?
---------|----------------|----------|-------|----------------|--------
Any row with RESCUED=N, TEST=N, USER SEES=Silent → CRITICAL GAP.
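The CRITICAL GAP rule above is mechanical enough to apply with a filter. A sketch over a mocked registry (row contents are illustrative; the real table uses the columns shown above):

```shell
registry='CODEPATH|FAILURE MODE|RESCUED?|TEST?|USER SEES?|LOGGED?
export_csv|S3 timeout|N|N|Silent|N
import_job|malformed row|Y|Y|error toast|Y'

# A row is a CRITICAL GAP when it is unrescued, untested, and silent.
gaps="$(printf '%s\n' "$registry" |
  awk -F'|' 'NR > 1 && $3 == "N" && $4 == "N" && $5 == "Silent" { print $1 }')"
echo "CRITICAL GAPS: ${gaps:-none}"
```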
Present each potential TODO as its own individual AskUserQuestion. Never batch TODOs — one per question.
For each TODO, describe:
Then present options: A) Create a backlog bead B) Skip — not valuable enough C) Build it now in this plan instead of deferring.
Identify at least 5 "bonus chunk" opportunities (<30 min each). Present each as its own AskUserQuestion. For each: what it is, why it would delight users, effort estimate. Options: A) Create a backlog bead B) Skip C) Build it now.
List every ASCII diagram in files this plan touches. Still accurate?
Log key findings:
bd comments add {EPIC_ID} "DECISION: CEO review mode: {EXPANSION|HOLD|REDUCTION} -- {rationale}"
bd comments add {EPIC_ID} "INVESTIGATION: CEO review -- {key architectural findings}"
bd comments add {EPIC_ID} "FACT: {critical constraints surfaced}"
+====================================================================+
| CEO PLAN REVIEW — COMPLETION SUMMARY                               |
+====================================================================+
| Mode selected        | EXPANSION / HOLD / REDUCTION                |
| System Audit         | [key findings]                              |
| Step 0               | [mode + key decisions]                      |
| Section 1 (Arch)     | ___ issues found                            |
| Section 2 (Errors)   | ___ error paths mapped, ___ GAPS            |
| Section 3 (Security) | ___ issues found, ___ High severity         |
| Section 4 (Data/UX)  | ___ edge cases mapped, ___ unhandled        |
| Section 5 (Quality)  | ___ issues found                            |
| Section 6 (Tests)    | Diagram produced, ___ gaps                  |
| Section 7 (Perf)     | ___ issues found                            |
| Section 8 (Observ)   | ___ gaps found                              |
| Section 9 (Deploy)   | ___ risks flagged                           |
| Section 10 (Future)  | Reversibility: _/5, debt items: ___         |
+--------------------------------------------------------------------+
| NOT in scope         | written (___ items)                         |
| What already exists  | written                                     |
| Dream state delta    | written                                     |
| Error/rescue registry| ___ methods, ___ CRITICAL GAPS              |
| Failure modes        | ___ total, ___ CRITICAL GAPS                |
| TODOS proposed       | ___ items                                   |
| Delight opportunities| ___ identified (EXPANSION only)             |
| Diagrams produced    | ___ (list types)                            |
| Stale diagrams found | ___                                         |
+====================================================================+
</process>
<success_criteria>
Question: "CEO review complete for {EPIC_ID}. What would you like to do next?"
Options:
/lavra-eng-review {EPIC_ID} for technical depth (architecture, security, performance, simplicity)