This skill should be used when the user asks to review a completed team's process, run an after-action review, or evaluate team effectiveness. Use this skill when the user says "after action review", "AAR", "process review", "team retrospective", "review team process", "how did the team work", or when prompted after a team completes its work. Distinct from `/evaluate-spawn`, which evaluates output quality; the AAR evaluates team process. Both can run in the same session.
From the agent-teams plugin. Install with `npx claudepluginhub nthplusio/functional-claude --plugin agent-teams`. This skill uses the workspace's default tool permissions.
Process evaluation for completed agent teams. Reviews team effectiveness using the military AAR format: four structured questions that surface what worked, what didn't, and specific improvements.
Follow the AAR protocol at ${CLAUDE_PLUGIN_ROOT}/shared/aar-protocol.md.
The protocol covers:
- `gh issue create` for plugin-scope improvements; project-scope improvements are noted in the AAR only
- Output written to `docs/retrospectives/[team-name]-aar.md`
- AAR evaluates process (team effectiveness); `/evaluate-spawn` evaluates output (spec quality prediction). See aar-protocol.md for the full comparison.
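As a minimal sketch of the plugin-scope step, the filing might look like the snippet below. The team name, issue title, and finding text are hypothetical, and the `gh` command is echoed rather than run; the real skill would execute it directly.

```shell
#!/bin/sh
# Hypothetical team name and finding; only the file-path convention
# (docs/retrospectives/[team-name]-aar.md) comes from the protocol.
TEAM="payments-team"
AAR_PATH="docs/retrospectives/${TEAM}-aar.md"
TITLE="AAR (${TEAM}): improve task handoff between teammates"

# Plugin-scope findings become GitHub issues; project-scope findings
# stay in the AAR markdown file only.
echo gh issue create --title "$TITLE" --body "Details in $AAR_PATH"
```

Project-scope improvements skip this step entirely and are recorded in the AAR file alone.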