Collect agent usage metrics from git history and generate health reports. Use when measuring agent adoption, reviewing system health, or producing periodic dashboards. Implements 8 key metrics from agent-metrics.md.
Install via:

```shell
/plugin marketplace add rjmurillo/ai-agents
/plugin install project-toolkit@ai-agents
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
This utility collects and reports metrics on agent usage from git history. It implements the 8 key metrics defined in docs/agent-metrics.md for measuring agent system health, effectiveness, and adoption.
| Trigger Phrase | Operation |
|---|---|
| "collect agent metrics" | Run `collect_metrics.py` with the default 30-day window |
| "generate metrics dashboard" | Run with markdown output for reporting |
| "check agent adoption rate" | Run and highlight Metric 2 (agent coverage) |
| "weekly metrics report" | Run with a 7-day window, markdown output |
| "export metrics as JSON" | Run with JSON output for automation |
Use this skill when:
Use manual git log inspection instead when:
| Avoid | Why | Instead |
|---|---|---|
| Running without specifying a time window | The default 30 days may not match your intent | Use `--since` with an explicit day count |
| Comparing metrics across different time windows | Produces misleading trends | Normalize to the same window size |
| Ignoring zero agent coverage | Indicates broken detection patterns | Verify commit message conventions match the patterns |
| Manual commit counting | Error-prone; misses patterns | Use the script for consistent detection |
| Storing JSON output without markdown | Loses human-readable context | Generate both formats for archival |
After execution:
| Script | Requirements | Platform |
|---|---|---|
| `collect_metrics.py` | Python 3.8+ | Cross-platform |
```shell
# Basic usage (30 days, summary output)
python .claude/skills/metrics/collect_metrics.py

# Last 90 days as markdown
python .claude/skills/metrics/collect_metrics.py --since 90 --output markdown

# JSON output for automation
python .claude/skills/metrics/collect_metrics.py --output json
```
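Internally, a script like this typically parses `git log` output into structured commit records before computing metrics. The sketch below shows one plausible approach; the separator characters, record format, and function names are illustrative assumptions, not the actual implementation in `collect_metrics.py`.

```python
# Hedged sketch: parse `git log` output into commit records, roughly as a
# metrics collector might. Assumes the log was produced with a format like
#   git log --since=<N>.days --pretty=format:%H<US>%s<US>%b<RS>
# where <US> and <RS> are the ASCII unit/record separators (an assumption).
from dataclasses import dataclass
from typing import List

SEP = "\x1f"  # unit separator between fields; unlikely in commit messages
REC = "\x1e"  # record separator between commits


@dataclass
class Commit:
    sha: str
    subject: str
    body: str


def parse_git_log(raw: str) -> List[Commit]:
    """Split raw log output into Commit records, tolerating short records."""
    commits = []
    for record in raw.split(REC):
        record = record.strip()
        if not record:
            continue
        # Pad so records with a missing body still unpack cleanly.
        sha, subject, body = (record.split(SEP) + ["", ""])[:3]
        commits.append(Commit(sha=sha, subject=subject, body=body))
    return commits
```

With records parsed this way, each metric reduces to a pass over `Commit` objects rather than ad-hoc string scanning of raw log text.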
The utility collects the following metrics:
| Metric | Description | Target |
|---|---|---|
| Metric 1: Invocation Rate | Agent usage distribution | Proportional to task types |
| Metric 2: Agent Coverage | % of commits with agent involvement | 50% |
| Metric 4: Infrastructure Review | % of infrastructure changes with security review | 100% |
| Metric 5: Usage Distribution | Agent utilization patterns | Balanced distribution |
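As an illustration of how a metric like agent coverage (Metric 2) could be computed, here is a minimal sketch. The regex patterns are assumptions modeled on the detection conventions described below, not the script's actual `AGENT_PATTERNS` list.

```python
# Hedged sketch of Metric 2 (agent coverage): the percentage of commit
# messages that match any agent-detection pattern. Patterns here are
# illustrative assumptions based on the documented conventions.
import re
from typing import Iterable

AGENT_PATTERNS = [
    r"\bagent:\s*\w+",       # e.g. "agent: implementer"
    r"\[[\w]+-agent\]",      # e.g. "[security-agent]"
    r"Reviewed by:\s*\w+",   # e.g. "Reviewed by: security"
]


def agent_coverage(messages: Iterable[str]) -> float:
    """Return the % of commit messages with detectable agent involvement."""
    messages = list(messages)
    if not messages:
        return 0.0
    hits = sum(
        1
        for m in messages
        if any(re.search(p, m, re.IGNORECASE) for p in AGENT_PATTERNS)
    )
    return 100.0 * hits / len(messages)
```

A coverage value near zero against an active repository usually means the detection patterns no longer match the team's commit conventions, which is why the anti-patterns table above flags it.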
The utility detects agents in commit messages using these patterns:
- Agent names: `orchestrator`, `analyst`, `architect`, etc.
- `Reviewed by: security`
- `agent: implementer` or `[security-agent]`

Infrastructure commits are identified by these patterns:

- `.github/workflows/*.yml`
- `.githooks/*`
- `Dockerfile*`
- `*.tf`, `*.tfvars`
- `.env*`
- `.agents/*`

Conventional commit prefixes are classified:

- `feat:` - Feature
- `fix:` - Bug fix
- `docs:` - Documentation
- `ci:` - CI/CD
- `refactor:` - Refactoring

Output formats:

- Summary: Human-readable console output with key metrics highlighted.
- Markdown: Formatted markdown suitable for dashboards and reports. Can be saved directly to `.agents/metrics/` for archival.
- JSON: Structured data for programmatic consumption and CI integration.
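The infrastructure-file and commit-type classification described above can be sketched as follows. The pattern list and type map mirror the documented conventions, but the matching behavior of the real script may differ (for example, it may use regexes rather than glob matching, which is an assumption here).

```python
# Hedged sketch of infrastructure detection and conventional-commit
# classification, using fnmatch-style globs (an assumption about the
# real script's matching strategy).
from fnmatch import fnmatch

INFRASTRUCTURE_PATTERNS = [
    ".github/workflows/*.yml",
    ".githooks/*",
    "Dockerfile*",
    "*.tf",
    "*.tfvars",
    ".env*",
    ".agents/*",
]

COMMIT_TYPES = {
    "feat": "Feature",
    "fix": "Bug fix",
    "docs": "Documentation",
    "ci": "CI/CD",
    "refactor": "Refactoring",
}


def is_infrastructure(path: str) -> bool:
    """True if a changed file path matches any infrastructure pattern."""
    return any(fnmatch(path, pat) for pat in INFRASTRUCTURE_PATTERNS)


def classify(subject: str) -> str:
    """Map a conventional-commit subject (e.g. 'feat(api): ...') to a type."""
    prefix = subject.split(":", 1)[0].split("(", 1)[0].strip()
    return COMMIT_TYPES.get(prefix, "Other")
```

Combining `is_infrastructure` with a security-review check over the same commits yields Metric 4 (infrastructure review coverage).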
See .github/workflows/agent-metrics.yml for automated weekly metrics collection.
The workflow:
To generate a monthly dashboard report:
```shell
# Generate report
python .claude/skills/metrics/collect_metrics.py \
  --since 30 \
  --output markdown \
  > .agents/metrics/report-$(date +%Y-%m).md

# Review and commit
git add .agents/metrics/
git commit -m "docs(metrics): add monthly metrics report"
```
See `docs/agent-metrics.md` for the full metric definitions.

To extend detection, update the `AGENT_PATTERNS` / `$AgentPatterns` arrays to detect new agent references, and update the `INFRASTRUCTURE_PATTERNS` / `$InfrastructurePatterns` arrays for new infrastructure file types.
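Assuming `AGENT_PATTERNS` is a plain list of regex strings (an assumption; check the actual script), adding detection for a hypothetical new agent is a one-line append:

```python
# Hedged sketch: extending agent detection for a hypothetical "reviewer"
# agent. The list shape and existing pattern are assumptions about
# collect_metrics.py, not its verified contents.
import re

AGENT_PATTERNS = [
    r"\bagent:\s*\w+",  # existing style: "agent: implementer"
]

# Hypothetical new pattern for a "[reviewer-agent]" commit tag:
AGENT_PATTERNS.append(r"\[reviewer-agent\]")


def mentions_agent(message: str) -> bool:
    """True if the commit message matches any known agent pattern."""
    return any(re.search(p, message) for p in AGENT_PATTERNS)
```

After extending the patterns, re-run the script over a known window and confirm coverage changes as expected before relying on the new numbers.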