From the agent-teams plugin
Analyzes Claude Code agent team session exports against best practices, delivering structured reports with verdicts, scorecards, recommendations, and improved prompt rewrites.
`npx claudepluginhub prince-khanna/claude-code-plugins --plugin agent-teams`

This skill uses the workspace's default tool permissions.
Skills in this plugin:

- Analyzes an agent team session export against the official Claude Code agent teams best practices and produces a structured report with a suitability verdict, scorecard, actionable recommendations, and an improved prompt rewrite.
- Creates isolated Git worktrees for feature branches with prioritized directory selection, gitignore safety checks, automatic project setup for Node/Python/Rust/Go, and baseline verification.
- Executes implementation plans in the current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance, then code quality.
- Dispatches parallel agents to independently tackle 2+ tasks, such as separate test failures or subsystems, without shared state or dependencies.
Analyze an agent team session export against the official Claude Code agent teams best practices. Produce a structured report with a suitability verdict, scorecard, actionable recommendations, and an improved prompt rewrite.
Core Principle: Every recommendation must cite a specific best practice from the official docs and quote specific evidence from the session. No vague assessments.
Prerequisite: This skill requires a markdown session export. Use the agent-teams:view-team-session skill to generate one from a session ID.
Do NOT use for:
Read the full markdown session export file. If too large, read in chunks.
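Outside the agent's own file-reading tools, "read in chunks" can be illustrated with a minimal Python sketch like the one below; the file name and chunk size are illustrative assumptions, not values defined by this skill.

```python
# Minimal sketch: stream a large markdown session export in fixed-size
# chunks rather than loading the whole file at once. The path and chunk
# size are illustrative assumptions, not values defined by this skill.
def read_in_chunks(path: str, chunk_chars: int = 50_000):
    with open(path, encoding="utf-8") as f:
        while True:
            chunk = f.read(chunk_chars)
            if not chunk:
                break
            yield chunk

for i, chunk in enumerate(read_in_chunks("team-session-export.md")):
    print(f"chunk {i}: {len(chunk):,} characters")
```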
Extract these elements:
Search for the latest agent teams best practices. Try these approaches in order:
- https://code.claude.com/docs/en/agent-teams
- https://code.claude.com/docs/en/costs, for the "Agent team token costs" and "Manage agent team costs" sections

These are the authoritative sources for all rubric evaluations. If both WebSearch and WebFetch fail, inform the user that analysis cannot proceed without the official docs.
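For illustration only, a direct fetch of those two pages in plain Python might look like the sketch below; the skill itself relies on WebSearch/WebFetch, and the timeout and error handling here are assumptions.

```python
# Illustrative fallback sketch: fetch the two official docs pages directly.
# The skill itself uses WebSearch/WebFetch; this only mirrors its rule that
# analysis stops when the official docs cannot be retrieved.
import urllib.request

DOC_URLS = [
    "https://code.claude.com/docs/en/agent-teams",
    "https://code.claude.com/docs/en/costs",
]

def fetch_official_docs() -> dict[str, str]:
    pages = {}
    for url in DOC_URLS:
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                pages[url] = resp.read().decode("utf-8", errors="replace")
        except OSError as err:
            raise RuntimeError(
                f"Analysis cannot proceed without the official docs: {err}"
            ) from err
    return pages
```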
Read the evaluation rubric from references/rubric.md (10 categories). For each category:
Save the report to `.claude/output/<team-name>-analysis.md`. Actionable content comes first, detailed evidence last.
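As a hedged sketch, one way the report path could be derived from a team name is shown below; the lowercase hyphenated slug rule is an assumption, since the skill only fixes the `.claude/output/<team-name>-analysis.md` pattern.

```python
# Hedged sketch: derive the report path from a team name. The lowercase
# hyphenated slug is an assumption; the skill only fixes the
# .claude/output/<team-name>-analysis.md pattern.
import re
from pathlib import Path

def report_path(team_name: str) -> Path:
    slug = re.sub(r"[^a-z0-9]+", "-", team_name.lower()).strip("-")
    return Path(".claude/output") / f"{slug}-analysis.md"

print(report_path("Checkout Refactor Team"))
# -> .claude/output/checkout-refactor-team-analysis.md
```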
Use this structure:
# Agent Team Session Analysis: <team-name>
**Session:** <session-id> | **Team:** <team-name>
**Duration:** <duration> | **Agents:** <comma-separated list>
**Analysis date:** <today's date>
---
## Suitability Verdict
[1-2 paragraphs assessing whether this task was a good fit for agent teams.
If not, explain what approach would have been better and why. Be direct.]
**Verdict:** Good fit / Marginal fit / Poor fit
---
## Summary
| Category | Rating |
|----------|--------|
| Suitability | [Strong/Partial/Gap] |
| Context Sharing | [Strong/Partial/Gap] |
| Task Sizing | [Strong/Partial/Gap] |
| Task Dependencies | [Strong/Partial/Gap/N/A] |
| Communication Quality | [Strong/Partial/Gap] |
| File Conflict Avoidance | [Strong/Partial/Gap/N/A] |
| Lead Orchestration | [Strong/Partial/Gap] |
| Model Selection & Cost Efficiency | [Strong/Partial/Gap] |
| Quality Gates | [Strong/Partial/Gap/N/A] |
| Cleanup | [Strong/Partial/Gap] |
---
## Top Recommendations
[3-5 recommendations ranked by impact. Each must be specific, actionable,
and reference what happened in the session.]
1. **[Category]** — [Recommendation with concrete example of what to change]
2. ...
---
## Improved Prompt
If the original prompt could be rewritten following all best practices,
here's what it would look like:
[Rewrite the user's original prompt incorporating all recommendations.
This should be ready to copy-paste for their next run.]
---
## Detailed Rubric Scorecard
### 1. Suitability — [Rating]
**Evidence:**
> [Quoted passages from the session export]
**Assessment:** [Explanation referencing specific best practice from the docs]
[...continue for all 10 categories...]
After writing the report, tell the user where it was saved and give a 2-3 sentence summary of the key findings.