# team-assemble
Analyzes complex tasks, scouts codebase, dynamically designs and assembles expert agent teams via TeamCreate API, executes with validation and user confirmation.
Install:

```shell
npx claudepluginhub team-attention/plugins-for-claude-natives --plugin team-assemble
```
Analyze a task, dynamically design the right expert agents, and orchestrate them as a team using Claude Code's agent teams feature.
Agent teams must be enabled. See references/enable-agent-teams.md for setup instructions.
Do NOT use for: single-file edits, simple questions, or purely sequential work.
Model tiers (see references/agents.md):

- opus — strategy/judgment (scouts, complex execution)
- sonnet — standard execution/validation (worker, qa, support)
- haiku — exploration/writing (researcher, editor)

references/agents.md provides reference examples, not a mandatory catalog.

Workflow:

```
Phase 1  → Phase 2  → Phase 3   → Phase 4 → Phase 5  → Phase 6
Task       Codebase   Integrate   Execute   Validate   Complete
Analysis   Scouts     & Confirm             ↕ FAIL →   & Cleanup
                                            support fix (max 3x)
```
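The tier guidance above can be captured as a simple lookup. A minimal Python sketch — role names beyond those listed in the text (e.g. architect) are hypothetical additions:

```python
# Role → model mapping following the tier guidance above.
MODEL_TIERS = {
    "scout": "opus",        # strategy/judgment
    "architect": "opus",    # hypothetical role, assumed opus-tier
    "worker": "sonnet",     # standard execution/validation
    "qa": "sonnet",
    "support": "sonnet",
    "researcher": "haiku",  # exploration/writing
    "editor": "haiku",
}

def model_for(role: str, default: str = "sonnet") -> str:
    """Pick a model tier for a role, defaulting to sonnet for new roles."""
    return MODEL_TIERS.get(role, default)
```

Scouts may design roles outside this table, so the lookup defaults to the middle tier rather than failing.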
## Phase 1: Task Analysis

Analyze the user's request and identify relevant areas of the codebase:
Get user approval via AskUserQuestion:
I've analyzed your task and identified the following areas of interest:
- [x] src/auth/ — Authentication module (needs refactoring)
- [x] tests/auth/ — Corresponding tests
- [ ] src/api/ — Not directly affected
I'll scout these areas to design an optimal team.
Options: "Looks good, proceed" / "I'd like to adjust the scope"
For straightforward tasks that don't need codebase exploration, skip Phase 2 and go directly to Phase 3 — design the team yourself using references/agents.md as a guide.
## Phase 2: Codebase Scouts

Launch scout agents in parallel to explore relevant areas of the codebase.
Agent type: general-purpose (constrained to read-only via prompt). Each scout reads the relevant codebase area and proposes agents for the task. Use references/agents.md for examples, but freely design new agents.

Prompt template: references/prompt-templates.md § Codebase Scout
## Scout Report: {area}
### Current State
- {file structure summary}
- {relevance to the task}
### Proposed Agents
| Agent | Role | Tasks | Reference Files |
|-------|------|-------|-----------------|
| {name} | {role} | {task} | {files} |
### Notes
- {area-specific constraints or patterns to follow}
## Phase 3: Integrate & Confirm

Merge scout reports into a final team composition:
Get final approval via AskUserQuestion:
Proposed team: {team-name}
| # | Agent | Role | Tasks | Dependencies |
|---|-------|------|-------|--------------|
| 1 | architect | System design | Design new auth flow | - |
| 2 | implementer | Code changes | Implement the design | #1 |
| 3 | test-writer | Test coverage | Write tests for changes | #2 |
| 4 | qa | Validation | PASS/FAIL against acceptance criteria | #2, #3 |
Acceptance criteria:
- [ ] AC-1: {measurable criterion}
- [ ] AC-2: {measurable criterion}
Options: "Looks good, execute" / "I'd like to adjust roles"
If the user selects "adjust roles", ask what specifically to change. After 2+ revision requests, switch to free-text input.
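The approval flow with its revision cutoff can be sketched as a small loop. Illustrative Python only — `responses` stands in for the user's successive AskUserQuestion answers:

```python
MAX_STRUCTURED_REVISIONS = 2  # after 2+ revision requests, use free text

def approval_gate(responses):
    """Walk through user answers until approval, or fall back to
    free-text input after repeated revision requests."""
    revisions = 0
    for choice in responses:
        if choice == "approve":
            return "approved", revisions
        revisions += 1  # user asked to adjust roles
        if revisions >= MAX_STRUCTURED_REVISIONS:
            return "free-text", revisions
    return "pending", revisions
```

For example, two "adjust" answers in a row trip the cutoff and switch the gate to free-text input.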
## Phase 4: Execute

```
TeamCreate(team_name: "{keyword}-team", description: "Task description")
```

`team_name` convention: core keyword + `-team`
Create TaskCreate entries for each agent, then set dependencies with TaskUpdate.
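The dependency wiring from the Phase 3 example team amounts to a topological ordering. An illustrative Python sketch — the real TaskCreate/TaskUpdate calls are not shown:

```python
from graphlib import TopologicalSorter

# Dependencies mirroring the Phase 3 example: each agent's task lists
# the tasks it waits on.
dependencies = {
    "architect": [],
    "implementer": ["architect"],
    "test-writer": ["implementer"],
    "qa": ["implementer", "test-writer"],
}

# static_order() yields each task only after all of its dependencies,
# i.e. the order the team must execute in.
execution_order = list(TopologicalSorter(dependencies).static_order())
print(execution_order)
# → ['architect', 'implementer', 'test-writer', 'qa']
```

Cyclic dependencies would raise `graphlib.CycleError`, which is a useful sanity check before creating the tasks.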
Launch each agent as subagent type "general-purpose" with permission mode "bypassPermissions", choosing the model tier per role (see references/agents.md). Detailed prompt structure: references/prompt-templates.md
## Phase 5: Validate

qa (sonnet) evaluates each acceptance criterion from Phase 3.
```
Agent(name: "qa", model: "sonnet", prompt: """
## Acceptance Criteria
- [ ] AC-1: {criterion}

## Validation Target
{Phase 4 execution results}

Evaluate each criterion with evidence-based PASS/FAIL judgment.
No PASS without evidence.

## Output Format
| # | Criterion | Verdict | Evidence |

Overall: PASS / FAIL
Include fix suggestions for any FAIL items.
""")
```
support (sonnet) fixes only FAIL items → qa re-validates:
## Validation Failed — Manual Intervention Needed
### Repeated Failures
- AC-{N}: {criterion} — {failure reason}
### Attempted Fixes
1. {attempt 1}
2. {attempt 2}
3. {attempt 3}
### Recommended Action
{what needs to be done manually}
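The verify → fix → re-verify cycle with its three-round cap can be sketched as a loop. Illustrative Python — `validate` and `fix` are hypothetical hooks standing in for the qa and support agents:

```python
MAX_FIX_ROUNDS = 3  # matches the "max 3x" rule above

def validation_loop(validate, fix):
    """Run qa, let support repair failures, re-run qa, and give up
    after MAX_FIX_ROUNDS fix attempts."""
    failures = validate()
    attempts = 0
    while failures and attempts < MAX_FIX_ROUNDS:
        fix(failures)          # support: fix only the FAIL items
        attempts += 1
        failures = validate()  # qa: re-validate
    status = "PASS" if not failures else "NEEDS MANUAL INTERVENTION"
    return status, attempts
```

If the second fix clears the last failure, the loop returns `("PASS", 2)`; if all three rounds fail, it escalates to the manual-intervention report above.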
## Phase 6: Complete & Cleanup

Report results to the user using this template:

## Team Results: {team-name}
### Acceptance Criteria
- [x] AC-1: {criterion} — PASS
### Per-Agent Results
- {agent}: {result summary}
### Deliverables
- {file paths or outputs}
### Validation History
- Validated {N} times, {M} fixes applied
Then shut down the team:

```
SendMessage(type: "shutdown_request", recipient: "{name}", content: "Work complete")
TeamDelete()
```
## Common Mistakes

| Mistake | Correct Approach |
|---|---|
| Creating team without user approval | Get AskUserQuestion approval in Phase 1 + Phase 3 |
| Executing without acceptance criteria | Always define criteria in Phase 3 |
| Running scouts for simple tasks | Skip Phase 2 for straightforward work |
| Skipping validation | Always run Phase 5 after execution |
| Ignoring model tiers | Use opus/sonnet/haiku based on role purpose |
| Only picking from fixed catalog | Scouts design freely; examples are reference only |
| Forgetting TeamDelete | Always shutdown_request → TeamDelete |
| Infinite FAIL loop | Max 3 verify/fix rounds, then report to user |
## References

- references/agents.md — Agent example bank with model tier guide
- references/prompt-templates.md — Scout + execution + QA prompt templates
- references/examples.md — Worked examples: feature dev, refactoring, research
- references/enable-agent-teams.md — How to enable agent teams in Claude Code