Use for open-ended exploration and investigation of unfamiliar codebases, systems, or problems. Complements systematic-debugging (which is for known errors). Use investigate when: "how does X work", "understand this codebase", "map this system". Triggers: "investigate", "explore", "understand how", "map the codebase".
# Session bootstrap: create the session dir, read config, note the branch, start the timer
mkdir -p ~/.omni-skills/sessions
_PROACTIVE=$(~/.claude/skills/superomni/bin/config get proactive 2>/dev/null || echo "true")  # default: proactive on
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
_TEL_START=$(date +%s)  # epoch seconds; paired with _TEL_END for duration telemetry
echo "Branch: $_BRANCH | PROACTIVE: $_PROACTIVE"
When this skill is active, NEVER use Claude Code's built-in EnterPlanMode tool.
Use the superomni pipeline skills (brainstorm, writing-plans, executing-plans) instead.
If PROACTIVE is false: do NOT proactively suggest skills. Only run skills the
user explicitly invokes. If you would have auto-invoked a skill, instead say:
"I think [skill-name] might help here. Want me to run it?" and wait.
Report status at the end of every skill session using one of the statuses in the table below:
Pipeline stage order: THINK → PLAN → REVIEW → BUILD → VERIFY → SHIP → REFLECT
REVIEW is the only human gate. All other stages auto-advance on DONE.
| Status | At REVIEW stage | At all other stages |
|---|---|---|
| DONE | STOP: present review summary, wait for user input (Y / N / revision notes) | Auto-advance: print `[STAGE] DONE → advancing to [NEXT-STAGE]` and immediately invoke the next skill |
| DONE_WITH_CONCERNS | STOP: present concerns, wait for user decision | STOP: present concerns, wait for user decision |
| BLOCKED / NEEDS_CONTEXT | STOP: present blocker, wait for user | STOP: present blocker, wait for user |
When auto-advancing, print: `[STAGE] DONE → advancing to [NEXT-STAGE] ([skill-name])`

When the user sends a follow-up message after a completed session, before doing anything else, check for existing artifacts under docs/superomni/:
ls docs/superomni/specs/spec-*.md docs/superomni/plans/plan-*.md docs/superomni/ .superomni/ 2>/dev/null | head -20
git log --oneline -3 2>/dev/null
To find the latest spec or plan:
_LATEST_SPEC=$(ls docs/superomni/specs/spec-*.md 2>/dev/null | sort | tail -1)
_LATEST_PLAN=$(ls docs/superomni/plans/plan-*.md 2>/dev/null | sort | tail -1)
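Note that `sort | tail -1` only picks the newest file if spec/plan names sort chronologically (e.g. a date or zero-padded counter in the filename); otherwise use `ls -t`. A quick sanity check (a sketch, nothing superomni-specific):

echo "Latest spec: ${_LATEST_SPEC:-none} | Latest plan: ${_LATEST_PLAN:-none}"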
Resume at the matching stage (see the workflow skill for the stage → skill mapping) and announce:
"Continuing in superomni mode: picking up at [stage] using [skill-name]."
See using-skills/SKILL.md for details.

When asking the user a question, match the confirmation requirement to the complexity of the response:
| Question type | Confirmation rule |
|---|---|
| Single-choice: user picks one option (A/B/C, 1/2/3, Yes/No) | The user's selection IS the confirmation. Do NOT ask "Are you sure?" or require a second submission. |
| Free-text input: user types a value and presses Enter | The submitted text IS the confirmation. No secondary prompt needed. |
| Multi-choice: user selects multiple items from a list | After the user lists their selections, ask once: "Confirm these selections? (Y to proceed)" before acting. |
| Complex / open-ended discussion: back-and-forth clarification | Collect all input, then present a summary and ask: "Ready to proceed with the above? (Y/N)" before acting. |
Rule: never add a redundant confirmation layer on top of a single-choice or text-input answer.
Custom Input Option Rule: Whenever you present a predefined list of choices (A/B/C, numbered options, etc.), always append a final "Other" option that lets the user describe their own idea:
[last letter/number + 1]) Other – describe your own idea: ___________
When the user selects "Other" and provides their custom text, treat that text as the chosen option and proceed exactly as you would for any other selection. If the custom text is ambiguous, ask one clarifying question before proceeding.
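For example (the options here are hypothetical), a two-option question renders as:

Which subsystem should we investigate first?
  A) The API layer
  B) The job queue
  C) Other – describe your own idea: ___________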
Load context progressively – only what is needed for the current phase:
| Phase | Load these | Defer these |
|---|---|---|
| Planning | Latest docs/superomni/specs/spec-*.md, constraints, prior decisions | Full codebase, test files |
| Implementation | Latest docs/superomni/plans/plan-*.md, relevant source files | Unrelated modules, docs |
| Review/Debug | diff, failing test output, minimal repro | Full history, specs |
If context pressure is high: summarize prior phases into 3-5 bullet points, then discard raw content.
All skill artifacts are written to docs/superomni/ (relative to project root).
See the Document Output Convention in CLAUDE.md for the full directory map.
Agent failures are harness signals, not reasons to retry the same approach:
Use the harness-engineering skill to update the harness before retrying. It is always OK to stop and say "this is too hard for me." Escalation is expected, not penalized.
After completing any skill session, run a 3-question self-check before writing the final status:
If any answer is NO, address it before reporting DONE. If it cannot be addressed, report DONE_WITH_CONCERNS and name the gap.
For a full performance evaluation spanning the entire sprint, use the self-improvement skill.
_TEL_END=$(date +%s)
_TEL_DUR=$(( _TEL_END - _TEL_START ))  # session duration in seconds
~/.claude/skills/superomni/bin/analytics-log "SKILL_NAME" "$_TEL_DUR" "OUTCOME" 2>/dev/null || true  # logging must never fail the session
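For this skill, a completed session would log the following, with the two placeholders filled in:

~/.claude/skills/superomni/bin/analytics-log "investigate" "$_TEL_DUR" "DONE" 2>/dev/null || true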
Nothing is sent to external servers. Data is stored only in ~/.omni-skills/analytics/.
Goal: Build a shared, accurate mental model of an unfamiliar system, codebase, or problem space.
Distinction from systematic-debugging:
- investigate: you don't have a specific error; you're building understanding
- systematic-debugging: you have a specific error and need to find the root cause

Start by understanding the big picture before any details.
# Project structure overview
find . -type f -name "*.md" | head -20 # documentation
ls -la # top-level files
cat README.md 2>/dev/null | head -50 # quick overview
cat package.json 2>/dev/null # or Makefile, go.mod, etc.
# Technology stack
ls -la *.json *.yaml *.toml *.rb *.py *.go 2>/dev/null | head -20
# Size estimate (excluding vendored/build dirs)
find . -type f | grep -vE "\.git/|node_modules/|dist/|build/" | wc -l
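If cloc happens to be installed (an assumption; it is not required by this skill), it gives a richer per-language breakdown in one shot:

# Optional: language breakdown via cloc, skipping vendored/build dirs
cloc . --exclude-dir=node_modules,dist,build 2>/dev/null || echo "cloc not installed"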
Output: "System overview: ..." ā 3-5 sentence description of what this system is.
Find where the system starts and where users/callers interact with it:
# Find main entry points (file list only; patterns are heuristics)
grep -rl "main()\|app.listen\|server.listen\|createApp\|bootstrap" . \
  --include="*.js" --include="*.ts" --include="*.py" --include="*.go" \
  | head -10
# Find API routes
grep -rl "router\.\|app\.get\|app\.post\|@route\|@app\.route" . \
  --include="*.js" --include="*.ts" --include="*.py" | head -10
# Find CLI commands (by CLI framework name)
grep -rl "commander\|yargs\|argparse\|click\|cobra" . \
  --include="*.js" --include="*.ts" --include="*.py" --include="*.go" \
  | head -10
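For Node projects, the manifest also declares entry points worth checking (a sketch; substitute the equivalent manifest for your stack, e.g. setup.py or go.mod):

# Declared entry points and scripts in package.json, if present
grep -E '"(main|module|bin|types|scripts)"' package.json 2>/dev/null | head -10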
Pick ONE representative flow (the most common user action) and trace it end to end:
# Trace a specific function through the codebase
FUNCTION_NAME="handleRequest"
grep -rn "${FUNCTION_NAME}" . --include="*.js" --include="*.ts" -A 3 | head -30
Map the key components and their responsibilities:
SYSTEM MAP
────────────────────────────────────────
Entry points: [list]
Core modules: [list with 1-line descriptions]
Data stores: [DBs, caches, files]
External deps: [APIs, services, libraries]
Test coverage: [rough %]
────────────────────────────────────────
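For the "Test coverage" row, a crude file-count proxy is enough at this stage (a sketch, not real coverage; run the project's coverage tool for actual numbers):

# Rough proxy: count test files vs. source files, skipping vendored code
TESTS=$(find . \( -name "*.test.*" -o -name "*.spec.*" -o -name "test_*.py" \) -not -path "*/node_modules/*" | wc -l)
SRCS=$(find . \( -name "*.js" -o -name "*.ts" -o -name "*.py" \) -not -path "*/node_modules/*" | wc -l)
echo "Test files: $TESTS of $SRCS source files"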
# Most-changed files (hotspots)
git log --oneline | wc -l # commit count
git log --pretty=format: --name-only | grep -v '^$' | sort | uniq -c | sort -rn | head -20
# Largest files (complexity hotspots)
find . -name "*.js" -o -name "*.ts" -o -name "*.py" | \
xargs wc -l 2>/dev/null | sort -rn | head -10
# Files with the most TODOs (grep -c prints a per-file count)
grep -rc "TODO\|FIXME\|HACK\|XXX" . \
  --include="*.js" --include="*.ts" --include="*.py" | \
  grep -v ":0$" | sort -t: -k2 -rn | head -10
Write a brief investigation summary:
INVESTIGATION REPORT
────────────────────────────────────────
Subject: [what was investigated]
Time spent: [~N minutes]
Overview:
[3-5 sentences describing the system]
Key findings:
- [finding 1]
- [finding 2]
- [finding 3]
Hotspots/risks:
- [file/area]: [why it's risky]
Unknown/unclear:
- [thing that wasn't resolved]
Recommended next steps:
1. [action based on findings]
2. [action based on findings]
Status: DONE | NEEDS_CONTEXT
────────────────────────────────────────
# Count lines of code by file type (cat first so wc totals one stream, not per-xargs-batch)
find . -name "*.js" -not -path "*/node_modules/*" -print0 | xargs -0 cat 2>/dev/null | wc -l  # JavaScript
find . -name "*.py" -not -path "*/node_modules/*" -print0 | xargs -0 cat 2>/dev/null | wc -l  # Python
# Find all configuration files
find . -name "*.env*" -o -name "*.config.*" -o -name "*.yaml" -o -name "*.toml" | \
grep -v "node_modules\|.git" | head -20
# Find database schema
find . -name "*.sql" -o -name "schema.*" -o -name "*migration*" | \
grep -v "node_modules\|.git" | head -10
# Understand test coverage
find . -name "*.test.*" -o -name "*.spec.*" -o -name "test_*.py" | \
grep -v "node_modules" | wc -l