Use when tracking team performance, analyzing agent strengths and weaknesses, collecting metrics, and generating performance reports. Trigger with performance review or metrics requests.
Install with:

`npx claudepluginhub emasoft/emasoft-plugins --plugin emasoft-chief-of-staff`

This skill uses the workspace's default tool permissions.
Performance tracking enables the Chief of Staff to understand how well the agent team is performing, identify individual strengths and weaknesses, and make data-driven decisions about team composition and task assignment. This skill teaches systematic approaches to collecting metrics, analyzing performance, and generating useful reports.
Before using this skill, ensure you can collect the following metric types:
| Metric Type | Output |
|---|---|
| Task completion | Completion rate, average time |
| Resource usage | Memory, CPU, API calls |
| Quality metrics | Error rate, rework rate |
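As a sketch, the metric types in the table can be combined into one record per completed task. The field names here are illustrative assumptions; the 110% on-time threshold follows the task-completion example in this document.

```python
from dataclasses import dataclass

@dataclass
class TaskMetrics:
    """One record per completed task, covering all three metric types."""
    # Task completion
    agent: str
    task_id: str
    duration_hours: float
    estimate_hours: float
    # Resource usage (extend as needed: memory, CPU, ...)
    api_calls: int = 0
    # Quality: number of review rounds; 1 means passed on the first attempt
    review_attempts: int = 1

    @property
    def on_time(self) -> bool:
        # "On time" = completed within 110% of the estimate
        return self.duration_hours <= 1.10 * self.estimate_hours

    @property
    def first_pass(self) -> bool:
        return self.review_attempts == 1
```

Derived values such as completion rate or average time then aggregate over a list of these records.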
Performance tracking is the systematic collection and analysis of metrics about agent behavior, output quality, and efficiency. Unlike traditional employee performance reviews, agent performance tracking happens continuously and focuses on measurable outcomes rather than subjective assessments.
Key characteristics:
- Quantifiable measures of agent output and behavior.
- Identification of what each agent does well and where it struggles.
- Communication of findings to relevant stakeholders.
**Procedure: Collect metrics**
When to use: Continuously during agent operation, at task completion, during periodic reviews.
Steps: Define metrics to track, capture data at relevant events, aggregate over time periods, validate data quality.
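The capture-and-validate steps can be sketched as an append-only event log. This is a minimal illustration, not a prescribed implementation; the log path, field names, and 110% threshold are assumptions.

```python
import json
from datetime import datetime

def record_completion(log_path, agent, task_id, assigned, completed,
                      estimate_hours, first_pass):
    """Capture one task-completion event as a JSON line in an append-only log."""
    duration = (completed - assigned).total_seconds() / 3600
    if duration < 0:
        # Validate at data entry: reject records that cannot be correct
        raise ValueError("completed before assigned")
    entry = {
        "agent": agent,
        "task": task_id,
        "duration_hours": round(duration, 2),
        "estimate_hours": estimate_hours,
        "on_time": duration <= 1.10 * estimate_hours,  # assumed threshold
        "first_pass": first_pass,
        "completed": completed.isoformat(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: record_completion("/tmp/metrics.jsonl", "helper-agent-generic", "TASK-042",
#          datetime(2025, 2, 1, 8, 0), datetime(2025, 2, 1, 12, 15), 4.0, True)
```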
**Procedure: Analyze strengths and weaknesses**
When to use: When making role assignments, after performance issues, during team planning.
Steps: Review collected metrics, identify patterns, compare against benchmarks, document findings.
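The pattern-identification and benchmark-comparison steps might look like this. The record field names are illustrative assumptions, and using the team-wide first-pass rate as the benchmark is one choice among many.

```python
def analyze(records):
    """Group completion records by agent and compare each to the team benchmark."""
    by_agent = {}
    for r in records:
        by_agent.setdefault(r["agent"], []).append(r)
    # Team-wide first-pass rate serves as the benchmark (an illustrative choice)
    team_first_pass = sum(r["first_pass"] for r in records) / len(records)
    findings = {}
    for agent, recs in by_agent.items():
        first_pass_rate = sum(r["first_pass"] for r in recs) / len(recs)
        avg_overrun = sum(r["duration_hours"] / r["estimate_hours"] for r in recs) / len(recs)
        findings[agent] = {
            "first_pass_rate": first_pass_rate,
            "avg_overrun": avg_overrun,  # >1.0 means the agent tends to run over estimate
            "above_benchmark": first_pass_rate >= team_first_pass,
        }
    return findings
```

Agents flagged `above_benchmark` are candidates for the strengths list; high `avg_overrun` feeds the estimation-accuracy findings.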
**Procedure: Generate reports**
When to use: On a regular schedule (daily/weekly), on request, after significant events.
Steps: Aggregate metrics, format for the audience, include analysis, provide recommendations.
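The aggregation and formatting steps can be sketched as a function that renders the team-overview portion of a report; the layout follows the weekly summary template in this document, and the record field names are assumptions.

```python
def weekly_summary(period, records):
    """Render the team-overview section of a weekly report from completion records."""
    agents = {r["agent"] for r in records}
    on_time = sum(r["on_time"] for r in records) / len(records)
    first_pass = sum(r["first_pass"] for r in records) / len(records)
    lines = [
        "# Weekly Performance Summary",
        f"Period: {period}",
        "## Team Overview",
        f"- Active Agents: {len(agents)}",
        f"- Tasks Completed: {len(records)}",
        f"- On-Time Rate: {on_time:.0%}",
        f"- First-Pass Quality: {first_pass:.0%}",
    ]
    return "\n".join(lines)
```

Top performers, areas for improvement, and recommendations would be appended from the analysis step before delivery.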
Copy these templates and fill them in as you work:
```markdown
# Task Completion Record
Agent: helper-agent-generic
Task: TASK-042 (Implement logout endpoint)
Assigned: 2025-02-01T08:00:00Z
Completed: 2025-02-01T12:15:00Z
Duration: 4.25 hours
Estimated: 4 hours
Quality: Passed review on first attempt
Blockers: None
Metrics:
- On-time: YES (within 110% of estimate)
- First-pass quality: YES
- Blocker-free: YES
```
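The YES/NO metrics at the bottom of the record can be derived mechanically from the raw fields; a minimal sketch, with the 110% threshold taken from the record format above:

```python
def derive_flags(duration_hours, estimate_hours, review_attempts, blockers):
    """Derive the record's YES/NO metrics from its raw fields."""
    return {
        "on_time": duration_hours <= 1.10 * estimate_hours,  # within 110% of estimate
        "first_pass_quality": review_attempts == 1,
        "blocker_free": len(blockers) == 0,
    }
```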
```markdown
# Agent Analysis: helper-agent-generic
## Strengths
1. **Code Review Speed**: Completes reviews 25% faster than average
2. **First-Pass Quality**: 90% of code passes review on first attempt
3. **Communication**: Clear, concise status updates
## Weaknesses
1. **Complex Algorithms**: Struggles with optimization tasks
2. **Documentation**: Often leaves docs incomplete
3. **Estimation**: Underestimates by average of 30%
## Recommendations
- Assign code review tasks (strength)
- Pair with senior agent for algorithm work (weakness mitigation)
- Require documentation checklist for all tasks
```
```markdown
# Weekly Performance Summary
Period: 2025-01-27 to 2025-02-02
## Team Overview
- Active Agents: 8
- Tasks Completed: 45
- On-Time Rate: 82%
- First-Pass Quality: 75%
## Top Performers
1. libs-svg-svgbbox: 12 tasks, 100% on-time
2. helper-agent-generic: 8 tasks, 88% first-pass
## Areas for Improvement
1. Documentation completion rate: 65% (target: 90%)
2. Estimation accuracy: +35% average overrun
## Recommendations
- Add documentation checkpoint to workflow
- Review estimation process with underperforming agents
```
Step-by-step runbooks for executing each performance tracking operation. Use these when performing the actual procedures described above.
- **Collect metrics**: systematically collect quantifiable metrics about agent behavior, output quality, and efficiency.
- **Analyze strengths and weaknesses**: identify agent strengths and weaknesses to enable better task assignment and team optimization.
- **Generate reports**: create formatted performance reports for stakeholders with actionable insights.
**Incomplete data**
Symptoms: Missing entries, gaps in timelines, inconsistent records.
Solution: Automate metric collection, add validation at data entry, backfill from logs where possible.
**Unfair comparisons**
Symptoms: An agent assigned harder tasks appears to underperform.
Solution: Normalize metrics by task complexity, compare similar task types, consider external factors.
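Normalizing by complexity can be sketched as grouping records into complexity bands and comparing each agent's overrun to its band's average. The `complexity` field is a hypothetical label, not something this skill defines.

```python
def normalized_overrun(records):
    """Compare agents within complexity bands so harder tasks don't skew results."""
    bands = {}
    for r in records:
        bands.setdefault(r["complexity"], []).append(r)
    per_agent = {}
    for recs in bands.values():
        # Average overrun ratio within this complexity band
        band_avg = sum(r["duration_hours"] / r["estimate_hours"] for r in recs) / len(recs)
        for r in recs:
            ratio = (r["duration_hours"] / r["estimate_hours"]) / band_avg
            per_agent.setdefault(r["agent"], []).append(ratio)  # 1.0 = band-typical
    return {agent: sum(v) / len(v) for agent, v in per_agent.items()}
```

An agent who only works hard tasks then scores near 1.0 instead of looking slow against the raw team average.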
**Reports without follow-through**
Symptoms: The same issues appear week after week with no action taken.
Solution: Include specific action items, assign owners, track action completion, escalate stalled items.
Version: 1.0 | Last Updated: 2025-02-01 | Target Audience: Emasoft Chief of Staff Agent | Difficulty Level: Intermediate