Use when writing user stories or acceptance criteria, defining product requirements, prioritising a feature backlog, creating a product roadmap, analysing product metrics, facilitating discovery for a new feature, evaluating trade-offs between product directions, or any task that requires deciding what to build and why.
```
npx claudepluginhub pranav8494/team-of-agents
```
This skill uses the workspace's default tool permissions.
Define the success metric before designing the solution. Shipping a feature is not success; the user doing something different as a result is success. Fall in love with the problem, not the solution.
Use this table to determine what to produce for each task type:
| User asks for | What to produce |
|---|---|
| User stories | Given/When/Then acceptance criteria + user story cards in the standard format with INVEST validation notes and sizing guidance |
| PRD | Lightweight PRD using the structure below: problem statement, success metric, 2–5 user stories with acceptance criteria, out-of-scope list, open questions, dependencies |
| Feature prioritisation | RICE-scored comparison table for candidate features + ranked recommendation with trade-offs explained and assumptions surfaced |
| Roadmap | Now/Next/Later outcome-based roadmap — outcome per horizon, not a date-locked feature list; flag assumptions and dependencies |
| Discovery facilitation | Opportunity Solution Tree mapping outcome → opportunities → solution options; JTBD framing for each opportunity; four-risk assessment for the leading option |
| Feature trade-off evaluation | Side-by-side comparison of options scored against value, usability, feasibility, and viability; recommendation with explicit trade-offs |
| Metrics / success definition | North Star metric + 2–3 input metrics that move it + leading/lagging indicator split + measurement plan (what to track, when, how to declare success or failure) |
| Competitive / market analysis | Gap analysis against defined criteria + positioning insights + implication for product direction |
Before committing to build anything, assess all four risks:
| Risk | Question | How to test before building |
|---|---|---|
| Value | Will users buy/use this? Does it solve a real problem? | User interviews, demand testing, concierge MVP |
| Usability | Can users figure out how to use it without help? | Usability testing on prototype; 5 participants reveal 85% of issues |
| Feasibility | Can engineering build it in a reasonable timeframe with acceptable trade-offs? | Engineering spike, proof of concept |
| Business Viability | Does it work for the business — legally, financially, operationally? | Legal review, margin analysis, compliance check |
A feature that passes only three of the four risks is not ready to build.
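Encoded as a check, the rule is strict conjunction: every risk must be retired, not merely most of them. A minimal Python sketch; the class and field names are illustrative, not part of any standard tooling:

```python
from dataclasses import dataclass

@dataclass
class FourRiskAssessment:
    value: bool        # users will buy/use it (interviews, demand tests)
    usability: bool    # users can figure it out (prototype testing)
    feasibility: bool  # engineering can build it (spike, proof of concept)
    viability: bool    # it works for the business (legal, margin, compliance)

    def ready_to_build(self) -> bool:
        # Passing three of four is not enough.
        return all((self.value, self.usability, self.feasibility, self.viability))
```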
Structure discovery around the job, not the feature:
When [situation], I want to [motivation], so I can [expected outcome].
Example: "When I receive a large payment, I want to see it reflected in my balance immediately, so I can make a purchase decision confidently."
JTBD exposes what users are actually hiring your product to do. Users' proposed solutions ("add a feature that does X") are often wrong; their underlying job is always real.
Use the Opportunity Solution Tree (OST) to connect outcomes to opportunities to solutions:
```
Desired Outcome (business metric)
├── Opportunity 1 (user need / pain point)
│   ├── Solution A
│   ├── Solution B
│   └── Assumption to test
└── Opportunity 2
    └── Solution C
```
This prevents jumping from metric to solution without understanding the underlying opportunity. Always map at least 2–3 opportunities before committing to one solution direction.
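If you track discovery artifacts in tooling, the tree maps naturally onto a small data structure. A minimal sketch in Python; the two-opportunity threshold comes from the guidance above, while the class and field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Solution:
    name: str
    assumptions_to_test: list[str] = field(default_factory=list)

@dataclass
class Opportunity:
    pain_point: str  # the user need backing this branch
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class OpportunitySolutionTree:
    desired_outcome: str  # the business metric you want to move
    opportunities: list[Opportunity] = field(default_factory=list)

    def ready_to_converge(self) -> bool:
        # Map at least 2-3 opportunities before picking a solution direction.
        return len(self.opportunities) >= 2
```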
RICE formula: (Reach × Impact × Confidence) ÷ Effort
| Factor | Definition | Scale |
|---|---|---|
| Reach | How many users affected per time period | Number of users/month |
| Impact | How much does it move the needle per user | 3=massive, 2=high, 1=medium, 0.5=low, 0.25=minimal |
| Confidence | How confident are you in the estimates | 100%=high, 80%=medium, 50%=low |
| Effort | Total team effort in person-months | Person-months |
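The arithmetic is simple enough to script when comparing many candidates. A minimal sketch in Python; the candidate features and their numbers are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    reach: float       # users affected per month
    impact: float      # 3=massive, 2=high, 1=medium, 0.5=low, 0.25=minimal
    confidence: float  # 1.0=high, 0.8=medium, 0.5=low
    effort: float      # total team effort in person-months

    @property
    def rice(self) -> float:
        return (self.reach * self.impact * self.confidence) / self.effort

candidates = [
    Candidate("Instant balance updates", reach=4000, impact=2, confidence=0.8, effort=3),
    Candidate("CSV statement export", reach=500, impact=1, confidence=1.0, effort=1),
]

for c in sorted(candidates, key=lambda c: c.rice, reverse=True):
    print(f"{c.name}: RICE = {c.rice:.0f}")
```

Note that Confidence enters as a multiplier at or below 1, so low-confidence estimates are discounted rather than excluded.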
RICE pitfalls to avoid: treating scores as precise measurements rather than rough comparisons, inflating Confidence to push a favoured feature up the ranking, mixing Reach time periods across candidates, and counting only engineering time in Effort.
User story format:
As a [specific user type],
I want to [accomplish a goal],
So that [I can achieve this outcome].
Acceptance criteria (Given/When/Then):
Given [precondition / context],
When [user action / event],
Then [expected result / system behaviour].
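A filled-in example, continuing the payment scenario from the JTBD section (the details are illustrative):
As a freelancer managing cash flow,
I want incoming payments reflected in my balance immediately,
So that I can make purchase decisions confidently.
Given I am viewing my account balance,
When a new payment is received,
Then the updated balance is displayed without a manual refresh.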
Good acceptance criteria are specific, testable, and describe observable behaviour rather than implementation details, so each one can be verified independently.
Lightweight PRD structure (the template referenced in the task table above):
## Problem Statement
[One paragraph: who has the problem, what the problem is, and evidence it exists]
## Success Metric
[What moves, by how much, in what timeframe]
## User Stories
[2–5 stories with acceptance criteria]
## Out of Scope
[Explicit list — prevents scope creep]
## Open Questions
[What we don't know yet that could change the approach]
## Dependencies
[Other teams, systems, or decisions this depends on]
A PRD is not a specification document. Engineers need enough to ask good questions — not a 40-page spec to follow blindly.
Ongoing discovery cadence:
| Activity | Frequency | Output |
|---|---|---|
| User interviews | Weekly during active discovery | Insight summary, updated opportunity map |
| Usability testing | Before and after major UI changes | Issues list with severity, design recommendations |
| Competitive review | Monthly or before strategy decisions | Gap analysis, positioning insights |
| Data review (analytics) | Weekly | Metric trends, anomalies, funnel drops |
| Stakeholder alignment | Bi-weekly | Shared understanding of priority, flagged blockers |
End every response with a confidence signal on its own line:
CONFIDENCE: [High|Medium|Low] — [one-line reason]
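For example (illustrative):
CONFIDENCE: Medium — ranking rests on reach estimates from a single month of analytics.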
If the task is outside this skill's scope or you lack the information needed to proceed, return this instead of a confidence signal:
BLOCKED: [reason] — [what information would unblock this]
Do not guess or produce low-quality output to avoid returning BLOCKED. A precise BLOCKED is more useful than a low-confidence guess.