From ftitos-claude-code
Validate the "why" before building. Run product diagnostics and convert vague ideas into specs. Includes forcing questions, founder review, user journey audit, and feature prioritization.
`npx claudepluginhub nassimbf/ftitos-claude-code`

This skill uses the workspace's default tool permissions.
**Anti-sycophancy is mandatory here.** This skill exists to surface hard truths before you write code. Do not soften findings.
Run through these questions. Do not proceed to code until all have real answers.
Q1 -- Who exactly? Not "developers" or "teams." Name a specific type of person with a specific situation.
Q2 -- What's the pain, quantified? How often does this happen? What do they do today? What does it cost them?
Q3 -- Why now? What changed recently that makes this necessary or newly possible?
Q4 -- What's the 10-star version? Forget constraints. What would the perfect version look like?
Q5 -- What's the MVP? The smallest version that proves the thesis. Not the smallest feature -- the smallest thing that tests the core assumption.
Q6 -- What's the anti-goal? What are you explicitly NOT building?
Q7 -- How do you know it's working? One metric. A number you can check next week.
| Signal | What to look for |
|---|---|
| Payment evidence | Has anyone paid or committed to pay? |
| Usage proof | Do people use the current workaround repeatedly? |
| Urgency | Do they ask unprompted, or only when suggested? |
| Specificity | Can they name a specific instance where this cost them? |
Your signal count determines the output tier.
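Counting the signals is a simple yes/no tally over the four rows of the table. A sketch with hypothetical answers filled in:

```python
# Tally validation signals from the table; each row is a yes/no check.
# The boolean values here are hypothetical example answers.
signals = {
    "payment_evidence": False,  # has anyone paid or committed to pay?
    "usage_proof": True,        # repeated use of the current workaround
    "urgency": True,            # they ask unprompted, not only when suggested
    "specificity": False,       # can name a specific instance it cost them
}

count = sum(signals.values())
print(f"{count}/{len(signals)} signals present")
```

The resulting count (0–4) is what feeds the tier decision.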
Reviews your current project through a founder lens.
1. Clone/install as a brand new user (no prior context)
2. Document every friction point
3. Time each step
4. Compare to the closest competitor's onboarding
5. Score time-to-value
6. Recommend: top 3 fixes ranked by friction-removed per effort
ICE Scoring:
Score = Impact (1-5) x Confidence (1-5) / Effort (1-5)
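The formula ranks candidates directly: higher Impact and Confidence raise the score, higher Effort lowers it. A minimal sketch, where the feature names and ratings are hypothetical examples:

```python
# ICE scoring: Impact x Confidence / Effort, each rated 1-5.
# Feature names and ratings below are hypothetical examples.

def ice_score(impact: int, confidence: int, effort: int) -> float:
    """Higher score = higher priority. All inputs are 1-5 ratings."""
    for value in (impact, confidence, effort):
        if not 1 <= value <= 5:
            raise ValueError("ratings must be between 1 and 5")
    return impact * confidence / effort

features = [
    ("CSV export", 3, 4, 2),   # (name, impact, confidence, effort)
    ("SSO login", 5, 3, 5),
    ("Dark mode", 2, 5, 1),
]

# Sort candidates by score, highest first.
ranked = sorted(features, key=lambda f: ice_score(*f[1:]), reverse=True)
for name, i, c, e in ranked:
    print(f"{name}: {ice_score(i, c, e):.1f}")
```

Note the asymmetry this formula bakes in: a cheap, well-understood feature (low Effort, high Confidence) can outrank a flashier one, which is the intended bias.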
All modes output actionable docs, not essays. Every recommendation has a specific next step.