From gaia-ops. Use when the user wants to create a brief or spec for a feature before planning.

Install: `npx claudepluginhub metraton/gaia --plugin gaia-ops`. This skill uses the workspace's default tool permissions.
Conversational brief creation. The orchestrator loads this inline to co-create a brief with the user before dispatching to gaia-planner.
## Size the work
| Size | Signal | Questions |
|---|---|---|
| S | Bug fix, config tweak, single-file | 0-1 |
| M | Feature, endpoint, integration | 2-3 |
| L | Project, multi-agent, cross-surface | 4-6 |
For S: skip the brief; tell the user to describe what they want and dispatch directly to the appropriate agent.
## Ask questions (M/L)

Target gaps, not completeness. One question per round via `AskUserQuestion`. Stop when each AC has a declared evidence type and every question above has an answer or an explicit "N/A".
## Write brief.md

Use the structure below. Write to `.claude/project-context/briefs/open_{feature-name}/brief.md`, where `{feature-name}` is a kebab-case slug.
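For illustration, a minimal sketch of the slug-and-path derivation, assuming Python (`brief_path` is a hypothetical helper, not part of the skill):

```python
import re
from pathlib import Path

def brief_path(feature_name: str) -> Path:
    # Kebab-case slug: lowercase, runs of non-alphanumerics become single hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", feature_name.lower()).strip("-")
    return Path(".claude/project-context/briefs") / f"open_{slug}" / "brief.md"

# brief_path("Login Button") -> .claude/project-context/briefs/open_login-button/brief.md
```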
Directory prefix convention:

- `open_` -- draft or ready, no work started yet (this skill always creates with `open_`)
- `in-progress_` -- work has begun
- `closed_` -- complete, verified, or done

Transitions between prefixes are done with `gaia plans rename`; this skill only ever creates with `open_`.
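Conceptually, a prefix transition is just a directory rename; a sketch of the effect, assuming Python (the real mechanism is `gaia plans rename`, whose exact arguments are not shown here):

```python
from pathlib import Path

def transition(brief_dir: Path, new_prefix: str) -> Path:
    # e.g. open_login-button -> in-progress_login-button
    rest = brief_dir.name.split("_", 1)[1]  # drop the current prefix
    target = brief_dir.with_name(new_prefix + rest)
    brief_dir.rename(target)
    return target
```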
The frontmatter is the executable source of truth (the orchestrator parses it with `yaml.safe_load`). The body's `## Acceptance Criteria` section mirrors it as a human summary.
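A minimal sketch of that parse, assuming Python with PyYAML (the splitting logic is illustrative, not the orchestrator's actual code):

```python
import yaml

def load_frontmatter(brief_text: str) -> dict:
    # brief.md opens with a '---'-delimited YAML block; split it off the body.
    _, frontmatter, _body = brief_text.split("---", 2)
    data = yaml.safe_load(frontmatter)
    # Every AC must declare an evidence type before the brief is dispatched.
    for ac in data["acceptance_criteria"]:
        assert ac["evidence"]["type"], f"{ac['id']} is missing an evidence type"
    return data
```

The full brief.md template: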
```markdown
---
status: draft
surface_type: ui | api | job | cli
acceptance_criteria:
  - id: AC-1
    description: "Login button visible on /login"
    evidence:
      type: url
      shape:
        method: GET
        url: http://localhost:3000/login
        expect:
          status: 200
          body_contains: "Sign in"
    artifact: evidence/AC-1.json
  - id: AC-2
    description: "pytest auth suite green"
    evidence:
      type: command
      shape:
        run: "pytest tests/auth/ -q"
        expect: "exit 0"
    artifact: evidence/AC-2.txt
---

# [Feature Name]

## Objective
[1-3 sentences: what problem, why now, who benefits]

## Context
[Project constraints relevant to this feature]

## Approach
[High-level strategy, not implementation details. 3-5 sentences max]

## Acceptance Criteria
Human-readable summary. Source of truth lives in frontmatter.

- AC-1: Login button visible on /login (evidence: url)
- AC-2: pytest auth suite green (evidence: command)

## Milestones (M/L features only)
- M1: [name] -- [what is shippable after this]
- M2: [name] -- [what is shippable after this]

## Out of Scope
[Explicit boundaries -- what this feature does NOT include]
```
Each acceptance criterion also declares an `artifact` path; the orchestrator persists the verification output there so the user can read it after completion.

## Evidence types

The shapes below are frontmatter fragments under `acceptance_criteria:`. The body's `## Acceptance Criteria` section mirrors them for human reading; the frontmatter is the executable source of truth.
| type | shape | valid surface |
|---|---|---|
| command | `run: "bash command"; expect: exit_code \| substring` | any |
| url | `method: GET\|POST; url; expect: {status, body_contains}` | ui, api |
| playwright | `url; steps: [...]; assert: "selector visible" \| screenshot` | ui |
| artifact | `path; kind: json\|log\|screenshot; assert: schema \| contains` | any |
| metric | `query; threshold: "p95 < 200ms"` | api, job |
Shape examples (frontmatter fragments):

```yaml
# command
evidence:
  type: command
  shape:
    run: "pytest tests/auth/ -q"
    expect: "exit 0"

# url
evidence:
  type: url
  shape:
    method: GET
    url: http://localhost:3000/health
    expect:
      status: 200
      body_contains: '"status":"ok"'

# playwright
evidence:
  type: playwright
  shape:
    url: http://localhost:3000/login
    steps:
      - fill: "#email with user@test.com"
      - click: "button[type=submit]"
    assert: "selector [data-testid=dashboard] visible"

# artifact
evidence:
  type: artifact
  shape:
    path: dist/build-report.json
    kind: json
    assert: ".summary.errors == 0"

# metric
evidence:
  type: metric
  shape:
    query: "curl -s http://localhost:3000/metrics | grep http_p95"
    threshold: "< 200"
```
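As a rough illustration of how a `command` shape becomes a pass/fail check, a sketch in Python (it mirrors the `expect: exit_code | substring` convention from the table above and is not the orchestrator's actual verifier):

```python
import subprocess

def verify_command(shape: dict, artifact_path: str) -> bool:
    result = subprocess.run(shape["run"], shell=True, capture_output=True, text=True)
    if shape["expect"].startswith("exit "):
        # "exit 0" and friends compare against the return code.
        passed = result.returncode == int(shape["expect"].split()[1])
    else:
        # Any other expect string is treated as a substring match on stdout.
        passed = shape["expect"] in result.stdout
    # Persist the raw output so the user can read it after completion.
    with open(artifact_path, "w") as f:
        f.write(result.stdout + result.stderr)
    return passed
```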
Present the full brief. Ask: "Does this capture what you want?" When confirmed, suggest dispatching to gaia-planner to create a plan.