Copilot CLI sub-agent system for persona-based analysis. Use when piping large contexts to GitHub Copilot models for security audits, architecture reviews, QA analysis, or any specialized analysis requiring a fresh model context.
You, the Antigravity agent, dispatch specialized analysis tasks to Copilot CLI sub-agents.
> [!IMPORTANT]
> By default, all Copilot sub-agent orchestration uses the `gpt-5-mini` model for efficiency and accuracy. Use this model unless the user explicitly states a need for a larger model.
To ensure Copilot CLI behaves as a specialized persona rather than a generic responder, always embed the persona and source material directly into the prompt flag (`-p`).
```bash
copilot -p "$(cat agents/persona.md)
---SOURCE CODE---
$(cat target.py)
---INSTRUCTION---
Perform a full code review. Use severity levels: 🔴 CRITICAL, 🟡 MODERATE, 🟢 MINOR.
You are operating as an isolated sub-agent.
Do NOT use tools. Do NOT access filesystem." > review.md
```
**run_agent.py (Cross-Platform)**

For reusable sub-agent execution, use the provided Python orchestrator, which handles temp file assembly and prompt concatenation reliably across Windows, macOS, and Linux.
```bash
# Location: plugins/copilot-cli/scripts/run_agent.py
python3 ./scripts/run_agent.py <PERSONA_FILE> <INPUT_FILE> <OUTPUT_FILE> "<INSTRUCTION>"

# Example:
python3 ./scripts/run_agent.py agents/security-auditor.md target.py security.md \
  "Find vulnerabilities. Use severity levels: 🔴 CRITICAL, 🟡 MODERATE, 🟢 MINOR."
```
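The orchestrator's approach can be sketched as follows. This is an illustrative reconstruction, not the shipped script: the prompt layout mirrors the Core Pattern above, and the `build_prompt` / `run_agent` names are hypothetical.

```python
import pathlib
import subprocess
import sys


def build_prompt(persona: str, source: str, instruction: str) -> str:
    """Assemble the same layout as the shell Core Pattern:
    persona, then source, then instruction, plus isolation rules."""
    return (
        f"{persona}\n---SOURCE CODE---\n{source}\n---INSTRUCTION---\n"
        f"{instruction}\nYou are operating as an isolated sub-agent.\n"
        "Do NOT use tools. Do NOT access filesystem."
    )


def run_agent(persona_file: str, input_file: str,
              output_file: str, instruction: str) -> None:
    prompt = build_prompt(
        pathlib.Path(persona_file).read_text(encoding="utf-8"),
        pathlib.Path(input_file).read_text(encoding="utf-8"),
        instruction,
    )
    # Passing the prompt as a single argv element avoids shell-quoting issues
    # and behaves identically on Windows, macOS, and Linux.
    result = subprocess.run(["copilot", "-p", prompt],
                            capture_output=True, text=True, check=True)
    pathlib.Path(output_file).write_text(result.stdout, encoding="utf-8")


if __name__ == "__main__" and len(sys.argv) == 5:
    run_agent(sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4])
```

Because the prompt is built in-process, no shell expansion limits apply, which is why the script is the safer route for large contexts.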
Persona library (`agents/`):

| Persona | Use For |
|---|---|
| `security-auditor.md` | Red team, vulnerability scanning, threat modeling |
| `refactor-expert.md` | Optimizing code for readability, performance, and DRY |
| `architect-review.md` | Assessing system design, modularity, and complexity |
Using `cat code.py | copilot -p "review this"` is unreliable: the CLI often prioritizes the prompt flag and ignores the piped input. Always embed the code inside the command string as shown in the Core Pattern.

Large prompt expansions (e.g., `$(cat ...)` over 10 KB) can silently fail when the command is run in the background (`&`).
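The silent-failure risk above can be mitigated with a pre-dispatch size check that routes oversized prompts to the file-based orchestrator. A minimal sketch; the 10 KB threshold comes from the observation above, and the helper name is hypothetical:

```python
# Pre-dispatch size guard: route oversized prompts to run_agent.py
# instead of inline $(cat ...) expansion. The 10 KB threshold mirrors
# the failure point noted above; tune it for your environment.
PROMPT_BYTE_LIMIT = 10 * 1024


def should_use_file_orchestrator(prompt: str,
                                 limit: int = PROMPT_BYTE_LIMIT) -> bool:
    """True when the assembled prompt is too large for safe inline expansion."""
    return len(prompt.encode("utf-8")) >= limit
```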
For large contexts, prefer `run_agent.py`, and verify the output file was actually written (e.g., with `wc -l`). Always add these isolation instructions ("Do NOT use tools. Do NOT access filesystem.") to your dispatch prompt to prevent the sub-agent from attempting to use external tools. Strictly use `gpt-5-mini` as the default model.
To dramatically improve review results, add:
"Think step-by-step internally, but output only final results. Be strict and critical. Do not be polite."
Before initiating major orchestrations or long-running iterative loops (e.g., Triple-Loop), you MUST perform a zero-shot heartbeat check to verify the host CLI has end-to-end connectivity and correct model defaults.
```bash
# Heartbeat dispatch (persona and input are intentionally /dev/null):
python3 .agents/skills/copilot-cli-agent/scripts/run_agent.py \
  /dev/null /dev/null ./HEARTBEAT_MD.md \
  "HEARTBEAT CHECK: Respond with 'HEARTBEAT_OK' only."

# Verification logic:
[ -s ./HEARTBEAT_MD.md ] && grep -q "HEARTBEAT_OK" ./HEARTBEAT_MD.md && echo "HEARTBEAT_OK" || echo "HEARTBEAT_FAIL"
```
**Logging Requirement:** The result of this heartbeat (Success or Failure) MUST be explicitly written to the session log before proceeding. If it fails, halt execution and report the error details (e.g., 401 Unauthorized, 429 Rate Limit, or Network Error).
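The heartbeat-plus-logging requirement can be wrapped in one helper. A hedged sketch: the log path, message format, and function name are assumptions, not part of the skill.

```python
import pathlib
import subprocess
import sys


def heartbeat(script: str = "./scripts/run_agent.py",
              out: str = "./HEARTBEAT_MD.md",
              log: str = "./session.log") -> bool:
    """Dispatch the zero-shot heartbeat, then log HEARTBEAT_OK or FAIL."""
    proc = subprocess.run(
        [sys.executable, script, "/dev/null", "/dev/null", out,
         "HEARTBEAT CHECK: Respond with 'HEARTBEAT_OK' only."],
        capture_output=True, text=True,
    )
    out_path = pathlib.Path(out)
    # Same check as the shell verification logic: non-empty file + marker.
    ok = out_path.exists() and "HEARTBEAT_OK" in out_path.read_text(encoding="utf-8")
    with open(log, "a", encoding="utf-8") as fh:
        status = "HEARTBEAT_OK" if ok else f"HEARTBEAT_FAIL (exit {proc.returncode})"
        fh.write(f"heartbeat: {status}\n")
    return ok  # caller halts orchestration when False
```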
```bash
python3 ./scripts/run_agent.py agents/refactor-expert.md target.py output.md "Refactor this code."
```
Examine `output.md`. It should contain ONLY the refactored code and a brief 3-bullet summary.