By saadshahd
Structure AI-assisted coding workflows using HOPE discipline: clarify vague intents into actionable specs, shape technical approaches via expert panels and agent teams, audit code with principles, debug root causes, run postmortems, and capture learnings for reuse.
npx claudepluginhub saadshahd/moo.md --plugin hope

Block a profile from all expert simulations.
List all blocked profiles.
Assesses team fitness, recommends structure, creates team after approval.
Run the complete HOPE pipeline — session setup, intent, shape, consult, bond.
Turn rough ideas into clear work orders before building
Assemble multiple experts for debate and consensus. Use for design decisions, architecture reviews, tradeoff discussions, spec clarification, stuck debugging, or code review.
Explicitly invoke a single expert for guidance. Use when you want a specific perspective on code, architecture, or design decisions.
Remove a profile from expert blocklist.
Assesses team fitness and composes agent teams. Use when "set up a team", "team for this", "should I use agents", "design a team", "how many agents", "agent team".
Simulates expert panels, compares documented positions across thought leaders, and synthesizes anonymous recommendations grouped by concern. Invoke when facing design tradeoffs, architecture decisions, repeated failure modes, or domain questions where multiple perspectives would reduce decision regret. Triggers on: expert names, "panel", "debate", "what would [X] say", "stuck on", style requests.
Turn rough ideas into clear work orders before planning or building. Use when request is vague like "add a button", "make it better", "fix the thing". Triggers on ambiguous or underspecified requests. Produces a brief with scope, acceptance criteria, and stop conditions.
Generate a project-level CLAUDE.md from stack detection and user-selected rule categories. Use when starting a new project, onboarding a repo, or when the user says "seed claude.md", "create project rules", "set up CLAUDE.md", "configure this project for me", or wants to establish coding conventions.
Resolves technical HOW decisions — architecture choices, technology selection, and design patterns — from a defined spec or intent. Distinct from hope:intent (which clarifies WHAT to build): shape starts when the goal is clear but the technical path is not. Use when: needing an implementation roadmap, choosing between architectural approaches, or resolving design trade-offs before coding. Triggers on: "shape this", "architecture for X", "how should I build", "system design", "technical approach", "design this", "which pattern", "implementation plan".
Why does this exist? Why introduce theatrical friction?
Because YOU, the human, end up being the world model. The LLM-powered agentic coder is just your hands, with tiny muscle memory.
Before moo:
"Build a settings page" → starts coding → wrong pattern → rebuilds → ships something nobody asked for
After moo:
"Build a settings page" → clarifies what matters → shapes the approach → builds with confidence
/plugin marketplace add saadshahd/moo.md
/plugin install hope@moo.md
You say: /hope:full make the homepage better — and moo responds:
[INTENT] Specific questions, not "what do you want?"
moo asks:
You answer:
Two more rounds — product details, tech stack, traffic split, social proof assets. Each question targets a specific gap.
[ECHO] 33 words. You confirm or edit before anything proceeds.
moo echoes back: Rewrite homepage copy and layout (Next.js/Tailwind) to clearly communicate the AI email marketing value prop, move signup above the fold, and add social proof — targeting 5% conversion, especially on mobile.
[ASSUMPTIONS] Surfaces decisions you never stated — forces you to choose.
Page length — you said no long-scroll. How many sections?
Signup mechanism — what fields?
Mobile target — mobile is at 1.2%. Separate target?
[CONSULT] Experts debate. Real tension, not rubber-stamping.
moo identifies the key architectural decision: should the hero section lead with the product or the outcome?
Krug (usability): "Don't make me think. Lead with what the product does — users scan, they don't read. Clever taglines fail."
Cialdini (persuasion): "Lead with the outcome — social proof and results pull harder than feature descriptions. Show the 50M emails number above the fold."
Tension: Krug wants clarity-first ("AI email marketing for e-commerce"). Cialdini wants proof-first ("50M+ emails sent for brands like yours"). Different advice, real tradeoff.
[SHAPE] Two concrete approaches with tradeoffs. You pick before code.
| | Approach A: Clarity-first | Approach B: Proof-first |
|---|---|---|
| Hero | "AI-powered email campaigns for e-commerce" + signup | "50M+ emails sent" + brand logos + signup |
| Strength | Instant comprehension | Instant credibility |
| Risk | Feels generic without differentiation | Confusing if visitor doesn't know the category |
| Best when | Product category is unfamiliar | Product category is obvious, trust is the gap |
You pick Approach A — visitors don't know the category yet. Proof moves to section 2.
| | |
|---|---|
| Objective | Rewrite homepage copy and layout. 5% desktop / 3.5% mobile conversion. |
| Non-goals | No visual redesign, no long-scroll, no chatbot, no CMS changes. |
| Acceptance | Hero = "result + how." Signup in first viewport. Social proof within 2 scrolls. Single CTA style. |
| Stop conditions | Conversion drops below 2%. Lighthouse below 90. Page exceeds 5 sections. |
From "make the homepage better" to a spec with stop conditions — before a line of code.
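A brief like the one above is useful precisely because its stop conditions are measurable. As a sketch, assuming field names that mirror the example (the class and thresholds are this sketch's own, not the plugin's):

```python
# Illustrative model of the work order above; thresholds come from
# its stop conditions, names are this sketch's own.
from dataclasses import dataclass, field

@dataclass
class WorkOrder:
    objective: str
    non_goals: list = field(default_factory=list)
    min_conversion_pct: float = 2.0   # stop if conversion drops below this
    min_lighthouse: int = 90          # stop if Lighthouse drops below this
    max_sections: int = 5             # stop if the page grows beyond this

    def should_stop(self, conversion_pct: float,
                    lighthouse: int, sections: int) -> bool:
        # Any breached threshold halts the work.
        return (conversion_pct < self.min_conversion_pct
                or lighthouse < self.min_lighthouse
                or sections > self.max_sections)

order = WorkOrder(
    objective="Rewrite homepage copy and layout",
    non_goals=["visual redesign", "long-scroll", "chatbot", "CMS changes"],
)
print(order.should_stop(conversion_pct=4.8, lighthouse=95, sections=4))  # False
```

Because each condition is a number, "are we done, stuck, or off the rails?" becomes a check rather than a debate.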