Validates product 'why' before building via diagnostics, founder reviews of codebases, user journey audits, and ICE-scored feature prioritization. Outputs briefs and go/no-go recs.
Install: npx claudepluginhub pcoulbourne/everything-claude-code
This skill uses the workspace's default tool permissions.
This lane owns product diagnosis, not implementation-ready specification writing.
If the user needs a durable PRD-to-SRS or capability-contract artifact, hand off to product-capability.
Like YC office hours but automated. Asks the hard questions:
1. Who is this for? (specific person, not "developers")
2. What's the pain? (quantify: how often, how bad, what do they do today?)
3. Why now? (what changed that makes this possible/necessary?)
4. What's the 10-star version? (if money/time were unlimited)
5. What's the MVP? (smallest thing that proves the thesis)
6. What's the anti-goal? (what are you explicitly NOT building?)
7. How do you know it's working? (metric, not vibes)
Output: a PRODUCT-BRIEF.md with answers, risks, and a go/no-go recommendation.
If the result is "yes, build this," the next lane is product-capability, not more founder-theater.
Reviews your current project through a founder lens:
1. Read README, CLAUDE.md, package.json, recent commits
2. Infer: what is this trying to be?
3. Score: product-market fit signals (0-10)
- Usage growth trajectory
- Retention indicators (repeat contributors, return users)
- Revenue signals (pricing page, billing code, Stripe integration)
- Competitive moat (what's hard to copy?)
4. Identify: the one thing that would 10x this
5. Flag: things you're building that don't matter
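The step-3 signal scoring could be aggregated roughly like this. The signal names follow the list above, but the scores and the simple-average rollup are made up for illustration:

```python
# Hypothetical 0-10 scores, one per product-market-fit signal
signals = {
    "usage_growth": 6,
    "retention": 4,
    "revenue": 2,
    "moat": 7,
}

assert all(0 <= v <= 10 for v in signals.values())

# Simple average rollup (an assumption; the skill may weight signals)
pmf = sum(signals.values()) / len(signals)
weakest = min(signals, key=signals.get)

print(f"PMF score: {pmf:.2f}/10")
print(f"weakest signal: {weakest}")
```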
Maps the actual user experience:
1. Clone/install the product as a new user
2. Document every friction point (confusing steps, errors, missing docs)
3. Time each step
4. Compare to competitor onboarding
5. Score: time-to-value (how long until the user gets their first win?)
6. Recommend: top 3 fixes for onboarding
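Steps 2, 3, and 5 above amount to a timing log summed into time-to-value. The step names, durations, and friction notes below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    minutes: float
    friction: str = ""  # empty string means the step was smooth

# Hypothetical onboarding log for a brand-new user
steps = [
    Step("clone repo", 1.0),
    Step("install deps", 6.5, "lockfile conflict, had to retry"),
    Step("configure API key", 4.0, "env var name undocumented"),
    Step("first successful run", 2.0),  # the "first win"
]

time_to_value = sum(s.minutes for s in steps)
friction_points = [s for s in steps if s.friction]

print(f"time-to-value: {time_to_value:.1f} min")
for s in friction_points:
    print(f"friction: {s.name} - {s.friction}")
```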
When you have 10 ideas and need to pick 2:
1. List all candidate features
2. Score each on: impact (1-5) × confidence (1-5) ÷ effort (1-5)
3. Rank by ICE score
4. Apply constraints: runway, team size, dependencies
5. Output: prioritized roadmap with rationale
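The step-2 math can be sketched as a small scoring helper. The candidate features and their scores here are hypothetical:

```python
def ice_score(impact: int, confidence: int, effort: int) -> float:
    """ICE = impact (1-5) x confidence (1-5) / effort (1-5)."""
    for name, v in (("impact", impact), ("confidence", confidence), ("effort", effort)):
        if not 1 <= v <= 5:
            raise ValueError(f"{name} must be 1-5, got {v}")
    return impact * confidence / effort

# Hypothetical candidates: (name, impact, confidence, effort)
candidates = [
    ("one-click onboarding", 5, 4, 2),
    ("dark mode", 2, 4, 1),
    ("plugin marketplace", 5, 2, 5),
]

# Step 3: rank by ICE score, highest first
ranked = sorted(candidates, key=lambda c: ice_score(*c[1:]), reverse=True)
for name, i, c, e in ranked:
    print(f"{ice_score(i, c, e):5.1f}  {name}")
```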
All modes output actionable docs, not essays. Every recommendation has a specific next step.
Pair with:
- /browser-qa to verify the user journey audit findings
- /design-system audit for visual polish assessment
- /canary-watch for post-launch monitoring
- product-capability when the product brief needs to become an implementation-ready capability plan