Gate 9 (Full Track) / Gate 4 (Small Track): Delivery roadmap and timeline planning. Transforms tasks into realistic delivery schedule with critical path analysis, resource allocation, and delivery breakdown. MANDATORY gate for both workflows.
Creates realistic delivery roadmaps with critical path analysis, capacity planning, and period boundary scheduling.
Install via: `npx claudepluginhub lerianstudio/ring`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Every roadmap must be grounded in reality, not optimism.
Unrealistic timelines create:
Roadmaps answer: "When will working software be delivered to users?" Roadmaps never answer: "How fast could we go if everything goes perfectly?" (that's fantasy).
| Phase | Activities |
|---|---|
| 1. Input Gathering | Load tasks.md, ask user for start date + team composition + delivery cadence + period configuration + velocity multiplier |
| 2. Dependency Analysis | Build dependency graph, identify critical path, find parallelization opportunities |
| 3. Capacity Planning | Calculate team velocity (custom multiplier), allocate resources, identify bottlenecks |
| 4. Delivery Breakdown | Group tasks by cadence (sprint/cycle/continuous), calculate period boundaries, identify spill overs, map parallel streams |
| 5. Risk Analysis | Identify critical dependencies, flag high-risk milestones, define contingencies |
| 6. Gate Validation | Verify all tasks scheduled, critical path correct, resources realistic, dates achievable, period boundaries respected |
- Start/end dates (YYYY-MM-DD format)
- Team composition (N devs, roles)
- Velocity multiplier (custom or default)
- Critical path (longest dependency chain)
- Parallel streams (independent task groups)
- Delivery goals (what ships each period)
- Period boundaries (sprint/cycle start/end dates)
- Spill over identification (tasks crossing period boundaries)
- Resource allocation (who works on what)
- Risk milestones (high-impact dependencies)
- Contingency buffer (10-20% for unknowns)
- Definition of Done per delivery period
- Best-case scenarios ("if everything goes perfectly")
- Optimistic estimates ("assuming no blockers")
- Undefined capacity ("we'll figure it out")
- Missing dependencies ("tasks are independent")
- Unrealistic parallelization ("everyone works on everything")
- No buffer time ("ship on last day")
- Vague milestones ("feature mostly done")
- Assumed availability ("team always at 100%")
- Fixed cadence without asking the user
- Ignored period boundaries (tasks don't respect sprint/cycle limits)
| Excuse | Reality | Required Action |
|---|---|---|
| "Team composition doesn't matter, estimate anyway" | Capacity = reality. Without team size, timeline is fantasy. | STOP. Ask user for team composition. |
| "Dependencies are obvious, skip the graph" | Obvious to you ≠ validated. Hidden deps surface during execution. | MUST build dependency graph. Verify critical path. |
| "Parallel streams will emerge naturally" | Natural emergence = chaos. Define streams upfront. | MUST identify independent task groups explicitly. |
| "Default velocity multiplier is fine for everyone" | Teams vary. AI adoption varies. Experience varies. | MUST ask user: use default or custom velocity. |
| "Assume sprint cadence, everyone uses sprints" | Cadence = team culture. Scrum ≠ Kanban ≠ Shape Up. | MUST ask user for their delivery cadence. |
| "Period start date doesn't matter" | Period boundaries determine task allocation. Without start, can't calculate fit. | MUST ask period start date if sprint/cycle chosen. |
| "Tasks will fit naturally into periods" | Tasks don't respect arbitrary boundaries. Calculate fit explicitly. | MUST check if task fits period, identify spill overs. |
| "Buffer is pessimistic, ship dates tight" | Tight dates = guaranteed slippage. Buffer absorbs reality. | MUST add 10-20% contingency buffer. |
| "User knows priorities, skip asking" | Assumptions break. Priorities affect sequencing. | ASK user for priority if ambiguous. |
| "Critical path is longest task" | Critical path = longest dependency chain, not task. | MUST trace full dependency chains. |
| "Resource allocation will sort itself out" | Sorting out = thrashing. Allocate explicitly. | MUST assign tasks to roles upfront. |
| "Risk analysis is overkill for small features" | Small features have risks too. Identify them. | MUST flag high-risk dependencies. |
| "Delivery goals are just task lists" | Task lists ≠ goals. Goals define what ships. | MUST define measurable delivery outcomes. |
If you catch yourself doing any of these, STOP and use the AskUserQuestion tool to resolve the ambiguity with the user.
Use AskUserQuestion tool to gather these REQUIRED inputs:
Context:
Question: "What overhead for human validation and adjustments?"
Options:
- "1.2x - Minimal validation (20% overhead)"
- "1.5x - Standard validation (50% overhead)" ← RECOMMENDED
- "2.0x - Deep validation (100% overhead)"
- "2.5x - Heavy rework (150% overhead)"
- "Custom multiplier"
Multiplier accounts for:
Historical data: after completing tasks, track the actual multipliers observed and use them to calibrate future roadmaps:
| Category | Requirements |
|---|---|
| Input Completeness | Start date confirmed; team composition known; delivery cadence selected; period configuration set (if sprint/cycle); velocity multiplier validated (default or custom); all tasks loaded from tasks.md |
| Dependency Analysis | Dependency graph built; critical path identified; parallel streams defined; no circular dependencies |
| Capacity Planning | Velocity calculated (custom or default multiplier); resources allocated to tasks; bottlenecks identified; realistic capacity (70-80% utilization) |
| Delivery Breakdown | Periods match chosen cadence; period boundaries calculated (if sprint/cycle); tasks allocated to periods; spill overs identified; delivery goals measurable; parallel streams mapped; handoffs minimized |
| Risk Management | High-risk dependencies flagged; contingency buffer added (10-20%); mitigation strategies defined; spill over risks documented |
| Timeline Realism | No best-case assumptions; critical path validated; dates achievable with given capacity; period boundaries respected; user approved |
Gate Result: ✅ PASS → Ready for execution | ⚠️ CONDITIONAL (adjust capacity/dates) | ❌ FAIL (unrealistic, rework)
See shared-patterns/ai-agent-baseline.md for baseline definition.
- Baseline: AI Agent via ring:dev-cycle
- Capacity: 90% (hardcoded)
- Multiplier: user-defined (human validation overhead)
adjusted_hours = ai_estimate × multiplier
calendar_hours = adjusted_hours ÷ 0.90
calendar_days = calendar_hours ÷ 8 ÷ team_size
Where:
- ai_estimate = from tasks.md (AI-agent-hours)
- multiplier = human validation overhead (typically 1.2x - 2.5x)
- 0.90 = capacity utilization (90%)
- 8 = hours per working day
- team_size = number of developers
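The formula above can be sketched in Python (a minimal sketch; the function name and keyword defaults are illustrative, not part of the ring:dev-cycle API):

```python
# Sketch of the Gate 9 estimation formula. The 0.90 capacity and
# 8h/day constants come from this gate; everything else is illustrative.

def calendar_days(ai_estimate_hours: float, multiplier: float,
                  team_size: int, capacity: float = 0.90,
                  hours_per_day: float = 8.0) -> float:
    """Convert an AI-agent-hours estimate into calendar days."""
    adjusted_hours = ai_estimate_hours * multiplier   # human validation overhead
    calendar_hours = adjusted_hours / capacity        # 90% utilization
    return calendar_hours / hours_per_day / team_size

# Worked example (task T-001): 4.5h × 1.5 = 6.75h; ÷ 0.90 = 7.5h; ÷ 8 ≈ 0.94 days
print(round(calendar_days(4.5, 1.5, team_size=1), 2))  # → 0.94
print(round(calendar_days(4.5, 1.5, team_size=2), 2))  # → 0.47
```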
See shared-patterns/ai-agent-baseline.md for detailed capacity breakdown.
Summary: AI Agent has 90% capacity (10% overhead from API limits, context loading, tool execution).
User selects multiplier to account for:
Does NOT account for (already done by AI):
Task T-001: "User Management CRUD API"
Step 1: AI Estimation (Gate 7)
AI Estimate: 4.5 AI-agent-hours
Step 2: Apply Multiplier (Gate 9)
User selects: 1.5x (standard validation)
Adjusted Hours: 4.5h × 1.5 = 6.75h
Step 3: Apply Capacity (hardcoded)
Calendar Hours: 6.75h ÷ 0.90 = 7.5h
Step 4: Convert to Days
Calendar Days: 7.5h ÷ 8h/day = 0.94 developer-days
Step 5: Account for Team Size
With 1 dev: 0.94 ÷ 1 = 0.94 calendar days ≈ 1 day
With 2 devs: 0.94 ÷ 2 = 0.47 calendar days ≈ 0.5 day (4 hours)
Breakdown of 7.5h total:
For Sprints/Cycles:
Define period boundaries:
Sprint 1: Start date to (Start date + Duration − 1 day)
Sprint 2: (Sprint 1 end + 1 day) to (Sprint 1 end + Duration)
...
Check task fit:
Task T-001:
Start: 2026-03-01 (based on dependencies)
Duration: 10 calendar days
End: 2026-03-10
Sprint 1: 2026-03-01 to 2026-03-14 (2 weeks)
Fit check: T-001 end (2026-03-10) <= Sprint 1 end (2026-03-14)
Result: ✅ Fits completely in Sprint 1
Identify spill overs:
Task T-002:
Start: 2026-03-10 (depends on T-001)
Duration: 13 calendar days
End: 2026-03-22
Sprint 1: 2026-03-01 to 2026-03-14
Sprint 2: 2026-03-15 to 2026-03-28
Fit check: T-002 end (2026-03-22) > Sprint 1 end (2026-03-14)
Result: ⚠️ Spill over (starts Sprint 1, ends Sprint 2)
Allocation:
- Sprint 1: 5 days of work (2026-03-10 to 2026-03-14)
- Sprint 2: 8 days of work (2026-03-15 to 2026-03-22)
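The boundary and spillover calculations above can be sketched as follows. Date ranges are inclusive on both ends (matching the T-001 example, where 2026-03-01 through 2026-03-10 counts as 10 days); `sprint_windows` and `allocate` are hypothetical helper names, not part of the skill's API:

```python
# Sketch of period-boundary generation and spillover allocation.
# Dates are inclusive on both ends.
from datetime import date, timedelta

def sprint_windows(start: date, sprint_days: int, count: int):
    """Return (start, end) pairs for consecutive sprints, end inclusive."""
    return [(start + timedelta(days=i * sprint_days),
             start + timedelta(days=(i + 1) * sprint_days - 1))
            for i in range(count)]

def allocate(task_start: date, task_end: date, windows):
    """Split a task's inclusive date range across sprint windows."""
    parts = []
    for i, (s, e) in enumerate(windows, start=1):
        lo, hi = max(task_start, s), min(task_end, e)
        if lo <= hi:                      # task overlaps this sprint
            parts.append((f"Sprint {i}", (hi - lo).days + 1))
    return parts

sprints = sprint_windows(date(2026, 3, 1), sprint_days=14, count=2)
# T-001 fits Sprint 1 entirely; T-002 spills into Sprint 2
print(allocate(date(2026, 3, 1), date(2026, 3, 10), sprints))
# → [('Sprint 1', 10)]
print(allocate(date(2026, 3, 10), date(2026, 3, 22), sprints))
# → [('Sprint 1', 5), ('Sprint 2', 8)]
```

A task whose allocation spans more than one window is a spill over and must be tracked explicitly.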
For Continuous Delivery:
Definition: The longest chain of dependent tasks from start to finish.
How to Calculate:
Example:
Dependency Chain:
T-001 (Foundation, 2 weeks)
→ T-002 (API Layer, 1 week)
→ T-007 (Integration, 2 weeks)
Critical Path Duration: 2 + 1 + 2 = 5 weeks (minimum possible)
Parallel Stream (not on critical path):
T-005 (Frontend, 2 weeks) - can run parallel to T-001/T-002
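The critical-path computation can be sketched as a longest-path search over the dependency graph (durations in weeks, taken from the example above; a real implementation should first confirm the graph is acyclic, or the recursion will not terminate):

```python
# Sketch: critical path length = longest chain of dependent tasks.
# Durations (weeks) and the graph are from the example; blocks[t]
# lists the tasks that t blocks.
from functools import lru_cache

durations = {"T-001": 2, "T-002": 1, "T-007": 2, "T-005": 2}
blocks = {"T-001": ["T-002"], "T-002": ["T-007"], "T-007": [], "T-005": []}

@lru_cache(maxsize=None)
def longest_path(task: str) -> int:
    """Weeks from the start of `task` to the end of its longest chain."""
    return durations[task] + max(
        (longest_path(t) for t in blocks[task]), default=0)

start = max(durations, key=longest_path)
print(start, longest_path(start))  # → T-001 5
```

T-005 scores only 2 weeks, confirming it sits off the critical path and can run in parallel.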
Identify Independent Streams:
Constraints:
Example:
Stream A (Backend): T-001 → T-002 → T-007
Stream B (Frontend): T-005 → T-006 (runs parallel to A)
Stream C (Infra): T-009 (blocks both A and B, must run first)
With 3 devs: Run C first, then A and B in parallel (optimal)
With 2 devs: Must sequence B after A (sub-optimal)
With 1 dev: Fully sequential (slowest)
Output to: docs/pre-dev/{feature-name}/delivery-roadmap.md
# Delivery Roadmap: {Feature Name}
## Executive Summary
| Metric | Value |
|--------|-------|
| **Start Date** | YYYY-MM-DD |
| **End Date** | YYYY-MM-DD |
| **Total Duration** | X weeks |
| **Critical Path** | T-001 → T-003 → T-007 (Y weeks) |
| **Parallel Streams** | N streams identified |
| **Team Composition** | N developers (roles) |
| **Development Mode** | AI Agent via ring:dev-cycle |
| **Human Validation Multiplier** | Xx (e.g., 1.5x = 50% overhead for validation) |
| **Multiplier Source** | Default (1.5x) / Custom (user-validated) |
| **Capacity Utilization** | 90% (AI Agent standard) |
| **Formula** | ai_estimate × multiplier ÷ 0.90 |
| **Delivery Cadence** | Sprints (1-2w) / Cycles (1-3m) / Continuous |
| **Period Duration** | {X} weeks/months (if sprint/cycle) |
| **First Period Starts** | YYYY-MM-DD (if sprint/cycle) |
| **Contingency Buffer** | Z% (A days) |
| **Confidence Level** | High / Medium / Low |
If user chose "Sprints (1-2 weeks)":
## Sprint Breakdown
### Sprint 1: {Sprint Goal} (2026-03-01 to 2026-03-14)
**Deliverable:** {What ships to users}
| Task | Type | Effort | Start | End | Fits Sprint? | Dependencies | Assignee | Status |
|------|------|--------|-------|-----|--------------|--------------|----------|--------|
| T-001 | Foundation | L (13pts) | 03-01 | 03-10 | ✅ Complete | - | Backend | 🟢 Ready |
| T-002 | Feature | M (8pts) | 03-10 | 03-22 | ⚠️ Spill over | T-001 | Backend | ⏸️ Blocked |
| T-005 | Feature | M (8pts) | 03-01 | 03-05 | ✅ Complete | - | Frontend | 🟢 Ready |
**Sprint 1 Scope:**
- Complete tasks: T-001, T-005
- Partial tasks: T-002 (5 days in Sprint 1, 8 days in Sprint 2)
**Parallel Streams:**
- Stream A: T-001 (Backend, Dev 1)
- Stream B: T-005 (Frontend, Dev 2)
**Definition of Done:**
- [ ] T-001 and T-005 fully deployed to staging
- [ ] T-002 progressed 5/13 days (~38% complete)
- [ ] Code reviewed and merged
- [ ] Sprint demo shows T-001 + T-005 working
If user chose "Cycles (1-3 months)":
## Cycle Breakdown
### Cycle 1: {Cycle Goal} (2026-03-01 to 2026-04-30, 8 weeks)
**Deliverable:** {What ships to users}
| Task | Type | Effort | Start | End | Fits Cycle? | Dependencies | Assignee | Status |
|------|------|--------|-------|-----|-------------|--------------|----------|--------|
| T-001 | Foundation | L (13pts) | 03-01 | 03-15 | ✅ Complete | - | Backend | 🟢 Ready |
| T-002 | Feature | M (8pts) | 03-15 | 03-25 | ✅ Complete | T-001 | Backend | ⏸️ Blocked |
| T-005 | Feature | M (8pts) | 03-01 | 03-10 | ✅ Complete | - | Frontend | 🟢 Ready |
**Cycle 1 Scope:**
- Complete tasks: T-001, T-002, T-005, T-006, T-007
- All features integrated and deployed
**Parallel Streams:**
- Stream A: T-001 → T-002 → T-003 (Backend)
- Stream B: T-005 → T-006 (Frontend, parallel)
**Definition of Done:**
- [ ] All cycle tasks completed
- [ ] Integration testing passed
- [ ] Deployed to production
- [ ] User feedback collected
- [ ] Post-cycle retrospective completed
If user chose "Continuous (no fixed intervals)":
## Delivery Milestones
### Milestone 1: {Milestone Goal} (Week 2, 2026-03-10)
**Deliverable:** {What ships to users}
| Task | Type | Effort | Start | End | Dependencies | Assignee | Status |
|------|------|--------|-------|-----|--------------|----------|--------|
| T-001 | Foundation | L (13pts) | 03-01 | 03-10 | - | Backend | 🟢 Ready |
**Completion Criteria:**
- [ ] Task T-001 deployed to production
- [ ] Monitoring configured
- [ ] User acceptance confirmed
- [ ] No blockers for T-002
### Milestone 2: {Milestone Goal} (Week 4, 2026-03-25)
**Deliverable:** {What ships to users}
| Task | Type | Effort | Start | End | Dependencies | Assignee | Status |
|------|------|--------|-------|-----|--------------|----------|--------|
| T-002 | Feature | M (8pts) | 03-10 | 03-18 | T-001 | Backend | ⏸️ Blocked |
| T-005 | Feature | M (8pts) | 03-01 | 03-08 | - | Frontend | 🟢 Ready |
**Completion Criteria:**
- [ ] Tasks T-002 and T-005 deployed
- [ ] Integration verified
- [ ] No regressions detected
- [ ] Performance SLAs met
## Critical Path Analysis
**Path:** T-001 → T-002 → T-007
| Task | Duration | Cumulative | Slack | On Critical Path? |
|------|----------|------------|-------|-------------------|
| T-001 | 2 weeks | 2 weeks | 0 days | ✅ Yes |
| T-002 | 1.5 weeks | 3.5 weeks | 0 days | ✅ Yes |
| T-007 | 2 weeks | 5.5 weeks | 0 days | ✅ Yes |
| T-005 | 1 week | 1 week | 2 weeks | ❌ No (parallel) |
**Minimum Project Duration:** 5.5 weeks (critical path)
**With Parallelization:** 5.5 weeks (no impact, T-005 has slack)
**Risk:** Any delay on critical path tasks delays entire project
**Spill Over Impact (for Sprint/Cycle cadences):**
- T-002 spills from Sprint 1 to Sprint 2 (5 days + 8 days)
- This affects Sprint 1 velocity reporting (partial completion)
## Resource Allocation
| Role | Count | Utilization | Assigned Tasks |
|------|-------|-------------|----------------|
| Backend Engineer | 2 | 75% | T-001, T-002, T-003, T-007 |
| Frontend Engineer | 1 | 70% | T-005, T-006 |
| DevOps Engineer | 1 | 50% (part-time) | T-009 (infra setup) |
| QA Analyst | 1 | 60% (from Week 3) | Testing from Week 3 onwards |
**Bottlenecks:**
- Backend heavy: 4 tasks require backend skills (T-001, T-002, T-003, T-007)
- Frontend light: 2 tasks require frontend skills (T-005, T-006)
**Recommendations:**
- Consider cross-training if backend becomes bottleneck
- QA can start test planning during Week 1-2
- Frontend can assist with integration testing during T-007
## Risk Milestones
| Milestone | Date | Risk Level | Impact | Mitigation |
|-----------|------|------------|--------|------------|
| Database Foundation (T-001) | Week 2 | 🔴 HIGH | Blocks T-002, T-003, T-007 (entire backend) | Start T-001 immediately, daily progress checks |
| API Integration (T-007) | Week 5.5 | 🟡 MEDIUM | Blocks deployment, but frontend can continue | Buffer time added, fallback: mock API |
| Sprint 1 Spill Over (T-002) | Sprint 1 end | 🟡 MEDIUM | Affects Sprint 1 velocity, team morale | Communicate spill over upfront, adjust Sprint 2 capacity |
| External Dependency (T-009) | Week 1 | 🟡 MEDIUM | Blocks deployment setup | Contact vendor early, have backup provider |
**Contingency Plan:**
- If T-001 slips by >2 days → Escalate to stakeholders, consider adding resource
- If T-007 blocked → Deploy frontend with mock backend, continue integration in next period
- If spill overs accumulate → Re-plan delivery cadence (extend sprint/cycle duration)
## Timeline Visualization
```
Week 1-2:  [████████ T-001: Foundation ████████]
           [██ T-005: Frontend ██][T-006]
Week 3-4:  [████ T-002 (cont.) ████][████ T-003: Logic ████]
           [██ T-006: UI ██][── idle ──]
Week 5-6:  [████████ T-007: Integration ████████]
           [████ T-008: Polish ████][── idle ──]

Period Boundaries (if Sprint/Cycle):
  Sprint 1: Week 1-2 (ends 2026-03-14)
  Sprint 2: Week 3-4 (ends 2026-03-28)
  Sprint 3: Week 5-6 (ends 2026-04-11)

Legend:
  [████] = Active work
  [──]   = Idle/Buffer
  ⚠️     = Spill over (task crosses period boundary)
```
## Assumptions
1. **Team Availability:** All developers available full-time (no vacations, no split focus)
2. **Dependency Resolution:** All external dependencies (APIs, credentials) available on time
3. **Scope Stability:** No scope changes during execution (new requirements = new planning)
4. **Infrastructure Ready:** Development/staging environments available Day 1
5. **Capacity Utilization:** 90% (AI Agent via ring:dev-cycle, 10% overhead for API/technical)
6. **Multiplier Accuracy:** Custom multiplier ({X}x) validated against historical validation overhead OR using default multiplier (1.5x)
7. **Period Boundaries:** Sprint/Cycle boundaries do not shift (dates fixed)
8. **Baseline Execution:** All implementation via ring:dev-cycle (AI Agent with automated gates)
## Constraints
1. **Team Size:** N developers (cannot increase mid-project without re-planning)
2. **Fixed Scope:** All tasks from tasks.md must be completed (no cutting features)
3. **Quality Gates:** All ring:dev-cycle gates must pass (cannot skip review/testing)
4. **Critical Path:** Cannot compress critical path without adding resources or cutting scope
5. **Delivery Cadence:** {Sprint/Cycle/Continuous} rhythm cannot change mid-project
6. **Spill Over Management:** Tasks crossing period boundaries must be tracked explicitly
| Violation | Wrong | Correct |
|---|---|---|
| Optimistic Timelines | "5-week critical path, let's commit to 4 weeks" (no buffer) | "5-week critical path + 10% buffer = 5.5 weeks commitment" |
| Ignoring Dependencies | "All tasks 2 weeks, finish in 2 weeks" (assumes parallelization) | "Critical path 5 weeks (T-001→T-002→T-007), other tasks parallel" |
| 100% Capacity | "2 devs × 2 weeks × 5 days = 20 dev-days" (unrealistic) | "2 devs × 2 weeks × 5 days × 0.75 capacity = 15 dev-days" |
| Fixed Cadence Assumption | "Everyone works in 2-week sprints" | "Ask user: sprint/cycle/continuous delivery?" |
| Ignoring Period Boundaries | "Task T-002 starts in Sprint 1, wherever it ends is fine" | "T-002 starts Sprint 1, ends Sprint 2 → spill over, track explicitly" |
| Default Multiplier Always | "Use 1.5x always" | "Ask user: default (1.5x) or custom based on historical validation overhead?" |
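The buffer arithmetic from the "Optimistic Timelines" row above can be sketched as a trivial helper (the function name is illustrative):

```python
# Sketch of the contingency-buffer arithmetic: commitment duration =
# critical path plus a 10-20% buffer, per this gate.

def committed_weeks(critical_path_weeks: float, buffer: float = 0.10) -> float:
    """Commitment duration = critical path plus contingency buffer."""
    return critical_path_weeks * (1 + buffer)

print(round(committed_weeks(5), 1))        # → 5.5 (10% buffer)
print(round(committed_weeks(5, 0.20), 1))  # → 6.0 (20% buffer)
```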
| Factor | Points | Criteria |
|---|---|---|
| Dependency Clarity | 0-30 | All dependencies mapped: 30, Most clear: 20, Ambiguous: 10 |
| Capacity Realism | 0-25 | Realistic utilization (70-80%): 25, Optimistic (90%+): 10, Undefined: 0 |
| Critical Path Validated | 0-25 | Full dependency graph: 25, Partial: 15, Assumptions: 5 |
| Risk Mitigation | 0-20 | All high-risk flagged + mitigations: 20, Some identified: 10, None: 0 |
Total Score Interpretation:
Action Based on Score:
Output to: docs/pre-dev/{feature-name}/delivery-roadmap.md
After user approves roadmap:
Integration with dev-team:
- `/ring:worktree` to create isolated workspace
- `/ring:dev-cycle` to execute tasks with AI-assisted gates

| User Says | Your Response |
|---|---|
| "Can't we finish in 3 weeks instead of 5?" | "Critical path is 5 weeks (T-001→T-002→T-007). To compress:<br>1. Add more developers (parallelization)<br>2. Cut scope (remove tasks)<br>3. Reduce quality (skip gates - NOT RECOMMENDED)<br><br>Which option aligns with your goals?" |
| User Says | Your Response |
|---|---|
| "Just estimate with 'a team'" | "Cannot create realistic roadmap without team size. Timeline varies dramatically:<br>- 1 dev: 8 weeks sequential<br>- 2 devs: 5 weeks (some parallelization)<br>- 4 devs: 4 weeks (max parallelization)<br><br>How many developers will work on this?" |
| User Says | Your Response |
|---|---|
| "Just use the default, I don't have historical data" | "Understood. Using the default multiplier: 1.5x (50% overhead for human validation).<br><br>Note: This is a recommended starting point, not a guarantee. Your team's actual validation overhead may vary. Recommend tracking actuals and updating the multiplier after the first period." |
| User Says | Your Response |
|---|---|
| "T-005 should be on critical path, not T-001" | "Let me verify the dependency chain:<br><br>T-001 (Foundation) blocks:<br> → T-002 (API Layer)<br> → T-007 (Integration)<br><br>T-005 (Frontend) has no blockers, can run parallel.<br><br>Critical path = longest dependency chain. If T-005 should block something, please clarify the dependency." |
| User Says | Your Response |
|---|---|
| "I don't know, what do you recommend?" | "Cannot recommend without context. Cadence depends on:<br>- Team culture (Scrum/Kanban/Shape Up)<br>- Release process (deploy daily vs. every 2 weeks)<br>- Stakeholder expectations (regular demos vs. continuous feedback)<br><br>How does your team currently organize work?" |
| User Says | Your Response |
|---|---|
| "Too many spill overs, this looks bad" | "Spill overs are reality, not failure. They occur when:<br>- Task duration doesn't align with period length<br>- Dependencies cause mid-period starts<br><br>Options to reduce spill overs:<br>1. Adjust period duration (e.g., 1 week → 2 weeks)<br>2. Switch to continuous delivery (no fixed periods)<br>3. Accept spill overs and communicate transparently<br><br>Which approach fits your team culture?" |
If you created a roadmap without asking about team composition, delivery cadence, period configuration, or velocity multiplier, delete it and start over.
Roadmaps are not educated guesses. Roadmaps are calculated schedules based on:
"We'll figure it out as we go" is not a roadmap. It's hope.
Questions that must be answered before committing dates:
If any question is unanswered, STOP and ask the user.
Deliver realistic roadmaps. Respect team capacity. Respect period boundaries. Build trust through accuracy.
This skill is a delivery planning skill and does NOT require WebFetch of language-specific standards.
Purpose: Delivery Planning transforms tasks into realistic schedules. Technical standards are irrelevant at this stage—they apply during implementation via ring:dev-cycle.
However, MUST load tasks.md (Gate 7) to access AI estimates, dependencies, and scope definitions.
| Condition | Action | Severity |
|---|---|---|
| Tasks (Gate 7) not validated | STOP and complete Gate 7 first | CRITICAL |
| Team composition unknown | STOP and ask user for team size | CRITICAL |
| Start date not provided | STOP and ask user for start date | CRITICAL |
| Delivery cadence not selected | STOP and ask user for cadence preference | HIGH |
| Critical path forms circular dependency | STOP and resolve dependency cycle | HIGH |
| All questions not answered | STOP and gather missing inputs | HIGH |
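The circular-dependency condition above can be detected with Kahn's algorithm before any critical-path work begins (illustrative sketch; assumes every task appears as a key in the graph):

```python
# Sketch: detect a circular dependency before scheduling.
# blocks[t] lists the tasks that t blocks (its dependents); every
# task must appear as a key.
from collections import deque

def has_cycle(blocks):
    """Kahn's algorithm: True if the dependency graph has a cycle."""
    indegree = {t: 0 for t in blocks}
    for dependents in blocks.values():
        for d in dependents:
            indegree[d] += 1
    queue = deque(t for t, n in indegree.items() if n == 0)
    processed = 0
    while queue:
        t = queue.popleft()
        processed += 1
        for d in blocks[t]:
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    return processed < len(blocks)   # leftover tasks imply a cycle

print(has_cycle({"T-001": ["T-002"], "T-002": ["T-001"]}))  # → True
print(has_cycle({"T-001": ["T-002"], "T-002": []}))         # → False
```

If a cycle is found, STOP and resolve it with the user before building the roadmap.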
These requirements are NON-NEGOTIABLE:
| Severity | Definition | Example |
|---|---|---|
| CRITICAL | Cannot create roadmap | Tasks not validated, no team composition |
| HIGH | Roadmap missing essential elements | No buffer, no critical path, cadence assumed |
| MEDIUM | Roadmap incomplete but usable | Missing one risk milestone |
| LOW | Minor documentation gaps | Spill over impact not fully detailed |