From agentops
Explores codebases or topics via Grep/Bash searches, prior knowledge lookup with ao, and writes evidence-backed findings to .agents/research Markdown files.
```
npx claudepluginhub boshu2/agentops --plugin agentops
```
> **Quick Ref:** Deep codebase exploration with multi-angle analysis. Output: `.agents/research/*.md`
Bundled files:
- references/backend-background-tasks.md
- references/backend-claude-teams.md
- references/backend-codex-subagents.md
- references/backend-inline.md
- references/claude-code-latest-features.md
- references/context-discovery.md
- references/deep-research-mcp.md
- references/document-template.md
- references/failure-patterns.md
- references/iterative-retrieval.md
- references/ralph-loop-contract.md
- references/vibe-methodology.md
- schemas/findings.json
- scripts/validate.md
- scripts/validate.sh
YOU MUST EXECUTE THIS WORKFLOW. Do not just describe it.
CLI dependencies: ao (knowledge injection — optional). If ao is unavailable, skip prior knowledge search and proceed with direct codebase exploration.
| Flag | Default | Description |
|---|---|---|
| `--auto` | off | Skip the human approval gate. Used by `/rpi --auto` for a fully autonomous lifecycle. |
Given /research <topic> [--auto]:
mkdir -p .agents/research
First, search and inject existing knowledge (if ao available):
# Pull relevant prior knowledge for this topic
ao lookup --query "<topic>" --limit 5 2>/dev/null || \
ao search "<topic>" 2>/dev/null || \
echo "ao not available, skipping knowledge search"
Apply retrieved knowledge (mandatory when results returned):
If ao returns relevant learnings or patterns, do NOT just load them as passive context. For each returned item:
After applying, record each citation:
ao metrics cite "<learning-path>" --type applied 2>/dev/null || true
Also look for:
Search ALL local knowledge locations by content (not just filename):
Use Grep to search every knowledge directory for the topic. This catches learnings from /retro, brainstorms, and plans — not just research artifacts.
# Search all knowledge locations by content
for dir in research learnings knowledge patterns retros plans brainstorm; do
grep -r -l -i "<topic>" .agents/${dir}/ 2>/dev/null
done
# Search global patterns (cross-repo knowledge)
grep -r -l -i "<topic>" ~/.claude/patterns/ 2>/dev/null
If matches are found, read the relevant files with the Read tool before proceeding to exploration. Prior knowledge prevents redundant investigation.
Before launching the explore agent, detect which backend is available:
Detection order (first match wins):
- spawn_agent is available → log "Backend: codex-sub-agents"
- TeamCreate is available → log "Backend: claude-native-teams"
- skill tool is read-only (OpenCode) → log "Backend: opencode-subagents"
- Task is available → log "Backend: background-task-fallback"
- Otherwise → log "Backend: inline (no spawn available)"

Record the selected backend — it will be included in the research output document for traceability.
Read the matching backend reference for concrete tool call examples:
- skills/shared/references/claude-code-latest-features.md (or references/claude-code-latest-features.md)
- references/backend-codex-subagents.md
- references/backend-claude-teams.md
- references/backend-background-tasks.md
- references/backend-inline.md

Set reasoning effort to low for explore agents — research is breadth-first scanning, not deep reasoning. Use --from-pr <url> to scope research to a specific PR's changed files when investigating PR-related topics.

YOU MUST DISPATCH AN EXPLORATION AGENT NOW. Select the backend using capability detection:
- spawn_agent is available → Codex sub-agent
- TeamCreate is available → Claude native team (Explore agent)
- skill tool is read-only (OpenCode) → OpenCode subagent: task(subagent_type="explore", description="Research: <topic>", prompt="<explore prompt>")

Use this prompt for whichever backend is selected. The exploration uses iterative retrieval (see references/iterative-retrieval.md): start broad, score relevance, extract new search terms from high-relevance files, and repeat for up to 3 cycles.
Thoroughly investigate: <topic>
Use iterative retrieval: after each discovery tier, score results 0-1 for relevance.
From files scoring 0.5+, extract new search terms (function names, imports, config keys).
Use extracted terms in subsequent tiers. Max 3 refinement cycles.
Discovery tiers (execute in order, skip if source unavailable):
Tier 1 — Code-Map (fastest, authoritative):
Read docs/code-map/README.md → find <topic> category
Read docs/code-map/{feature}.md → get exact paths and function names
Skip if: no docs/code-map/ directory
Tier 2 — Semantic Search (conceptual matches):
mcp__smart-connections-work__lookup query="<topic>" limit=10
Skip if: MCP not connected
Tier 2.5 — Git History (recent changes and decision context):
git log --oneline -30 -- <topic-related-paths> # scoped to relevant paths, cap 30 lines
git log --all --oneline --grep="<topic>" -10 # cap 10 matches
git blame <key-file> | grep -i "<topic>" | head -20 # cap 20 lines
Skip if: not a git repo, no relevant history, or <topic> too broad (>100 matches)
NEVER: git log on full repo without -- path filter (same principle as Tier 3 scoping)
NOTE: This is git commit history, not session history. For session/handoff history, use /trace.
Tier 3 — Scoped Search (keyword precision):
Grep("<topic>", path="<specific-dir>/") # ALWAYS scope to a directory
Glob("<specific-dir>/**/*.py") # ALWAYS scope to a directory
NEVER: Grep("<topic>") or Glob("**/*.py") on full repo — causes context overload
Tier 4 — Source Code (verify from signposts):
Read files identified by Tiers 1-3 (including git history leads from Tier 2.5)
Use function/class names, not line numbers
Tier 5 — Prior Knowledge (may be stale):
Search ALL .agents/ knowledge dirs by content:
for dir in research learnings knowledge patterns retros plans brainstorm; do
grep -r -l -i "<topic>" .agents/${dir}/ 2>/dev/null
done
Read matched files. Cross-check findings against current source.
Tier 6 — External Docs (last resort):
WebSearch for external APIs or standards
Only when Tiers 1-5 are insufficient
Return a detailed report with:
- Key files found (with paths)
- How the system works
- Important patterns or conventions
- Any issues or concerns
Cite specific file:line references for all claims.
If your runtime supports spawning parallel subagents, spawn one or more research agents with the exploration prompt. Each agent explores independently and writes findings to .agents/research/.
If no multi-agent capability is available, perform the exploration inline in the current session using file reading, grep, and glob tools directly.
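When exploring inline, one retrieval cycle from the prompt above can be sketched as follows. This is a minimal sketch: the sample file, the match-count scoring, and the 2-match cutoff standing in for a relevance score of 0.5+ are all illustrative assumptions, not part of the skill contract.

```shell
# Minimal sketch of one iterative-retrieval cycle: scoped search, naive
# relevance scoring by match count, then term extraction for the next cycle.
topic="cache"
dir="src"
mkdir -p "$dir"
printf 'import redis\ncache_ttl = 300\ncache_get()\n' > "$dir/cache.py"  # sample file

for f in $(grep -r -l -i -s "$topic" "$dir"); do
  hits=$(grep -c -i "$topic" "$f")
  if [ "$hits" -ge 2 ]; then  # hypothetical proxy for relevance >= 0.5
    # Extract new search terms (here: imported module names) for the next cycle.
    grep -h '^import ' "$f" | awk '{print $2}'
  fi
done
```

Here the sample file scores 2 matches, so the sketch emits `redis` as a candidate term for the next cycle.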
For thorough research, perform quality validation:
Auto mode enforcement: When --auto is set, quality validation is mandatory. If depth rating < 2 for any critical area (Step 4b), emit WARN and log to .agents/research/quality-warning.md. In interactive mode, this step remains optional.
Check: Did we look everywhere we should? Any unexplored areas?
Check: Do we UNDERSTAND the critical parts? HOW and WHY, not just WHAT?
Check: What DON'T we know that we SHOULD know?
Check: What assumptions are we building on? Are they verified?
After the Explore agent and validation swarm return, write findings to:
.agents/research/YYYY-MM-DD-<topic-slug>.md
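A small shell sketch for deriving that path (the slugging rules below are an assumption for illustration, not part of the skill contract):

```shell
# Derive .agents/research/YYYY-MM-DD-<topic-slug>.md from a topic string:
# lowercase, collapse non-alphanumerics to dashes, trim edge dashes.
topic="Authentication System"
slug=$(printf '%s' "$topic" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-*//;s/-*$//')
out=".agents/research/$(date +%F)-${slug}.md"
echo "$out"
```

For the topic above this yields a path ending in `-authentication-system.md`, dated with today's `date +%F`.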
Use this format:
---
id: research-YYYY-MM-DD-<topic-slug>
type: research
date: YYYY-MM-DD
---
# Research: <Topic>
**Backend:** <codex-sub-agents | claude-native-teams | opencode-subagents | background-task-fallback | inline>
**Scope:** <what was investigated>
## Summary
<2-3 sentence overview>
## Key Files
| File | Purpose |
|------|---------|
| path/to/file.py | Description |
## Findings
<detailed findings with file:line citations>
## Recommendations
<next steps or actions>
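The bundled scripts/validate.sh is not reproduced here; a hypothetical minimal check that the artifact declares the required frontmatter fields might look like:

```shell
# Hypothetical validation: ensure the research artifact declares id, type, date.
doc=".agents/research/2026-02-13-example.md"
mkdir -p .agents/research
cat > "$doc" <<'EOF'
---
id: research-2026-02-13-example
type: research
date: 2026-02-13
---
# Research: Example
EOF

for field in id type date; do
  grep -q "^${field}:" "$doc" || { echo "missing frontmatter field: $field" >&2; exit 1; }
done
echo "frontmatter ok"
```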
After the research artifact is written, identify any reusable findings that should influence future work.
Persist only reusable findings, not transient observations, to .agents/findings/registry.jsonl using the finding-registry contract:
Each registry entry records:
- Provenance: source.repo, source.session, source.file, source.skill
- Content: dedup_key, pattern, detection_question, checklist_item, applicable_when, and confidence
- Lifecycle: status, superseded_by, ttl_days, hit_count, last_cited

Deduplicate entries by dedup_key.

After the registry update, if hooks/finding-compiler.sh exists, run:
bash hooks/finding-compiler.sh --quiet 2>/dev/null || true
This refreshes promoted findings and compiled prevention outputs in the same session.
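As an illustration only (the authoritative schema lives in schemas/findings.json; every field value below is hypothetical), appending one finding to the registry could look like:

```shell
# Append one hypothetical finding as a single JSON line to the registry.
mkdir -p .agents/findings
cat >> .agents/findings/registry.jsonl <<'EOF'
{"dedup_key":"auth-middleware-order","pattern":"Auth middleware must be registered before route handlers","detection_question":"Is auth registered before routes?","checklist_item":"Check middleware registration order","applicable_when":"adding routes","confidence":0.7,"status":"active","source":{"repo":"example-repo","session":"s-001","file":".agents/research/2026-02-13-auth.md","skill":"research"}}
EOF
```

One JSON object per line keeps the file valid JSONL, so downstream tools can stream it line by line.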
Skip this step if --auto flag is set. In auto mode, proceed directly to Step 7.
USE AskUserQuestion tool:
Tool: AskUserQuestion
Parameters:
questions:
- question: "Research complete. Approve to proceed to planning?"
header: "Gate 1"
options:
- label: "Approve"
description: "Research is sufficient, proceed to /plan"
- label: "Revise"
description: "Need deeper research on specific areas"
- label: "Abandon"
description: "Stop this line of investigation"
multiSelect: false
Wait for approval before reporting completion.
Tell the user:
Tell the user:
- Run /plan to create the implementation plan
- Findings include file:line citations
- The location of the .agents/research/ artifact
User says: /research "authentication system"
What happens:
Output: .agents/research/2026-02-13-authentication-system.md

Result: Detailed report identifying auth middleware location, session handling, and token validation patterns.
User says: /research "cache implementation"
What happens:
Output: .agents/research/2026-02-13-cache-implementation.md

Result: Summary of cache strategy, TTL settings, and eviction policies with file references.
User says: /research "payment processing flow"
What happens:
Result: End-to-end payment flow diagram with file paths and critical decision points.
| Problem | Cause | Solution |
|---|---|---|
| Research too shallow | Default exploration depth insufficient for the topic | Re-run with broader scope or specify additional search areas |
| Research output too large | Exploration covered too many tangential areas | Narrow the goal to a specific question rather than a broad topic |
| Missing file references | Codebase has changed since last exploration or files are in unexpected locations | Use Glob to verify file locations before citing them. Always use absolute paths |
| Auto mode skips important areas | Automated exploration prioritizes breadth over depth | Remove --auto flag to enable human approval gate for guided exploration |
| Explore agent times out | Topic too broad for single exploration pass | Split into smaller focused topics (e.g., "auth flow" vs "entire auth system") |
| No backend available for spawning | Running in environment without Task or TeamCreate support | Research runs inline — still functional but slower |