From majestic-engineer
Interactively reviews implementation plans before coding, challenging scope creep, architecture, quality, tests, and performance with mandatory user checkpoints and opinionated recommendations.
```
npx claudepluginhub majesticlabs-dev/majestic-marketplace --plugin majestic-engineer
```

This skill is limited to using the following tools:
**Audience:** Engineers about to start implementation.
**Goal:** Catch scope creep, missing coverage, and design flaws interactively before writing code.
If running low on context or the user asks to compress, prioritize in this order: Step 0 > Test diagram > Opinionated recommendations > Everything else. Never skip Step 0 or the test diagram.
These guide all recommendations:
Before reviewing anything, answer:
Then call AskUserQuestion with three tracks:
If user does not select SCOPE REDUCTION, respect that decision fully. Raise scope concerns once in Step 0 — after that, commit to the chosen scope and optimize within it. Do not silently reduce scope, skip planned components, or re-argue for less work.
Evaluate:
STOP. Call AskUserQuestion with findings. Do NOT proceed until user responds.
Evaluate:
STOP. Call AskUserQuestion with findings. Do NOT proceed until user responds.
Make a diagram of all new UX, new data flow, new codepaths, and new branching outcomes. For each, note what is new. Then for each new item, verify a test exists.
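As a sketch, for a hypothetical CSV-export feature, such a diagram might look like the following (the feature, codepaths, and test names are all illustrative assumptions, not part of this skill):

```
New UX:        Export button on reports page       → test: ExportButtonTest (exists)
New data flow: report rows → CSV serializer → S3   → test: none (GAP)
New codepath:  GET /reports/:id/export             → test: ExportEndpointTest (exists)
New branching: empty report → 204, rows → 200      → test: only the 200 case covered (GAP)
```

Each row pairs one new item with its test status, so gaps are visible at a glance.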
STOP. Call AskUserQuestion with findings. Do NOT proceed until user responds.
Evaluate:
STOP. Call AskUserQuestion with findings. Do NOT proceed until user responds.
For every issue found:
AskUserQuestion format: A) ... B) ... C) ...

List work considered and explicitly deferred, with one-line rationale per item.
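As an illustration, a three-track question in this format might be shaped roughly like the following. This is a sketch of an AskUserQuestion input, and the labels and descriptions are invented for the example:

```json
{
  "questions": [
    {
      "question": "How should we scope this work?",
      "header": "Scope",
      "multiSelect": false,
      "options": [
        { "label": "A) Full plan", "description": "Implement the plan as written" },
        { "label": "B) Scope reduction", "description": "Cut deferrable pieces first" },
        { "label": "C) Minimal spike", "description": "Prove the risky part, defer the rest" }
      ]
    }
  ]
}
```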
List existing code/flows that partially solve sub-problems. Note whether the plan reuses them or unnecessarily rebuilds them.
Deferred work that would meaningfully improve the system. Each entry:
Ask user which deferred items to capture before writing. A TODO without context is worse than no TODO.
For each new codepath from the test diagram, list one realistic failure (timeout, nil reference, race condition, stale data) and whether:
If any failure mode has no test AND no error handling AND would be silent: flag as critical gap.
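A minimal sketch of this check for two hypothetical codepaths (the endpoints and failures are invented for illustration):

```
POST /export  → failure: S3 upload timeout
                test: none | handling: retry x3, then user-facing error | silent: no  → acceptable gap
GET /report   → failure: stale cache read after write
                test: none | handling: none                             | silent: yes → CRITICAL GAP
```

The second row meets all three conditions (no test, no handling, silent), so it is the one to flag.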
- Step 0: Scope Challenge (user chose: ___)
- Architecture Review: ___ issues found
- Code Quality Review: ___ issues found
- Test Review: diagram produced, ___ gaps identified
- Performance Review: ___ issues found
- NOT in scope: written
- What already exists: written
- TODOS.md updates: ___ items proposed
- Failure modes: ___ critical gaps flagged
Check git log for the branch. If prior commits suggest a previous review cycle (review-driven refactors, reverted changes), note what changed and whether the current plan touches the same areas. Be more aggressive reviewing previously problematic areas.
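As a sketch, the history check can be approximated by grepping recent commit subjects for review-cycle keywords. The branch name, commit messages, and keyword list below are illustrative assumptions:

```shell
# In a real repo you would pipe actual history: git log --oneline main..HEAD
# Here the log output is simulated so the example is self-contained.
printf '%s\n' \
  "a1b2c3d revert: undo caching layer" \
  "d4e5f6a refactor: extract validator after plan review" \
  "b7c8d9e feat: add export endpoint" |
grep -ciE 'revert|refactor|review'
# prints: 2
```

Any nonzero count suggests a prior review cycle touched this branch, so those areas deserve a harder look.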
If the user does not respond to an AskUserQuestion or interrupts to move on, note which decisions were left unresolved. At the end, list them as "Unresolved decisions that may bite you later"; never silently default to an option.