Validates product 'why' before building, diagnoses repos for PMF, audits user journeys by cloning projects, prioritizes features via ICE scores.
npx claudepluginhub victorchuyen/everything-claude-code-main

This skill uses the workspace's default tool permissions.
- Before starting any feature, validate the "why"
Use this skill to validate the "why" before building, run product diagnostics, and pressure-test product direction before the request becomes an implementation contract.
Like YC office hours but automated. Asks the hard questions:
1. Who is this for? (specific person, not "developers")
2. What's the pain? (quantify: how often, how bad, what do they do today?)
3. Why now? (what changed that makes this possible/necessary?)
4. What's the 10-star version? (if money/time were unlimited)
5. What's the MVP? (smallest thing that proves the thesis)
6. What's the anti-goal? (what are you explicitly NOT building?)
7. How do you know it's working? (metric, not vibes)
Output: a PRODUCT-BRIEF.md with answers, risks, and a go/no-go recommendation.
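The brief's shape can be sketched in code. This is a minimal sketch of how the seven answers, risks, and a recommendation might be assembled into PRODUCT-BRIEF.md; the `product_brief` helper and its section layout are hypothetical, not the skill's actual template.

```python
QUESTIONS = [
    "Who is this for?",
    "What's the pain?",
    "Why now?",
    "What's the 10-star version?",
    "What's the MVP?",
    "What's the anti-goal?",
    "How do you know it's working?",
]

def product_brief(answers: list[str], risks: list[str], recommendation: str) -> str:
    """Assemble the markdown for PRODUCT-BRIEF.md (hypothetical layout)."""
    lines = ["# PRODUCT-BRIEF", ""]
    # One section per hard question, paired with its answer
    for question, answer in zip(QUESTIONS, answers):
        lines += [f"## {question}", answer, ""]
    lines += ["## Risks"] + [f"- {risk}" for risk in risks]
    lines += ["", f"## Recommendation: {recommendation}"]
    return "\n".join(lines)
```

The point is that every question gets an explicit answer slot, so a vague or missing answer is visible in the output rather than silently skipped.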
Reviews your current project through a founder lens:
1. Read README, CLAUDE.md, package.json, recent commits
2. Infer: what is this trying to be?
3. Score: product-market fit signals (0-10)
- Usage growth trajectory
- Retention indicators (repeat contributors, return users)
- Revenue signals (pricing page, billing code, Stripe integration)
- Competitive moat (what's hard to copy?)
4. Identify: the one thing that would 10x this
5. Flag: things you're building that don't matter
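The composite 0-10 score can be sketched as a weighted sum over the four signals above. The weights here are illustrative assumptions; the skill's actual rubric is not published.

```python
# Hypothetical weights for the four PMF signals (must sum to 1.0)
SIGNAL_WEIGHTS = {
    "usage_growth": 0.35,
    "retention": 0.30,
    "revenue": 0.20,
    "moat": 0.15,
}

def pmf_score(signals: dict[str, float]) -> float:
    """Combine per-signal scores (each 0-10) into a weighted 0-10 composite.

    Missing signals score 0, so an absent pricing page or empty
    retention data drags the composite down rather than being ignored.
    """
    return sum(weight * signals.get(name, 0.0)
               for name, weight in SIGNAL_WEIGHTS.items())

score = pmf_score({"usage_growth": 7, "retention": 5, "revenue": 2, "moat": 4})
print(f"PMF score: {score:.2f} / 10")
```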
Maps the actual user experience:
1. Clone/install the product as a new user
2. Document every friction point (confusing steps, errors, missing docs)
3. Time each step
4. Compare to competitor onboarding
5. Score: time-to-value (how long until the user gets their first win?)
6. Recommend: top 3 fixes for onboarding
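The audit above boils down to a timed step log. This sketch shows one way to record it and derive time-to-value and the top fixes; the step names, timings, and `Step` structure are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    minutes: float
    friction: str = ""  # confusing steps, errors, missing docs

# Hypothetical log from installing the product as a brand-new user
journey = [
    Step("clone repo", 1),
    Step("install deps", 6, friction="peer-dependency warning, no guidance"),
    Step("configure API key", 12, friction="env var not documented in README"),
    Step("first successful run", 3),
]

# Time-to-value: total minutes until the user's first win
time_to_value = sum(step.minutes for step in journey)
frictions = [step for step in journey if step.friction]

print(f"time-to-value: {time_to_value:.0f} min, friction points: {len(frictions)}")
# Recommend fixes for the costliest friction steps first
for step in sorted(frictions, key=lambda s: s.minutes, reverse=True)[:3]:
    print(f"- fix '{step.name}': {step.friction}")
```

Sorting friction points by their time cost gives a natural ordering for the "top 3 fixes" recommendation.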
When you have 10 ideas and need to pick 2:
1. List all candidate features
2. Score each on: impact (1-5) × confidence (1-5) ÷ effort (1-5)
3. Rank by ICE score
4. Apply constraints: runway, team size, dependencies
5. Output: prioritized roadmap with rationale
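The ranking step is mechanical and can be sketched directly. The feature names and scores below are made up; only the impact × confidence ÷ effort formula comes from the description above.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    impact: int      # 1-5
    confidence: int  # 1-5
    effort: int      # 1-5

    @property
    def ice(self) -> float:
        # impact × confidence ÷ effort, as described above
        return self.impact * self.confidence / self.effort

features = [
    Feature("onboarding-wizard", impact=5, confidence=4, effort=2),
    Feature("dark-mode", impact=2, confidence=5, effort=1),
    Feature("realtime-sync", impact=5, confidence=2, effort=5),
]

# Rank by ICE score, highest first; constraints (runway, team size,
# dependencies) would be applied to this ranked list afterwards.
for feature in sorted(features, key=lambda f: f.ice, reverse=True):
    print(f"{feature.name}: {feature.ice:.1f}")
```

Note that dividing by effort means a cheap, modest feature can outrank an expensive, high-impact one, which is exactly the trade-off ICE is meant to surface.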
All modes output actionable docs, not essays. Every recommendation has a specific next step.
Pair with:
- /browser-qa to verify the user journey audit findings
- /design-system audit for visual polish assessment
- /canary-watch for post-launch monitoring