From cocosearch
Explores codebases to answer questions about how code works, trace execution flows, or research topics via semantic search. Offers autonomous mode for structured subagent output and interactive mode with narrative checkpoints.
Install:

npx claudepluginhub violetcranberry/coco-search --plugin cocosearch

This skill uses the workspace's default tool permissions.
A unified exploration skill with two modes:
| Skill | Goal | Best for |
|---|---|---|
| cocosearch-explore | Answer a question about the codebase | "How does X work?", "Go figure out X", subagent research |
| cocosearch-onboarding | Broad codebase understanding | First time in a codebase |
| cocosearch-debugging | Find root cause of a bug | Error-driven investigation |
Use autonomous mode when: you are invoked by a subagent, running in plan mode, or the caller needs structured findings with no back-and-forth.
Use interactive mode when: a user is present and wants a narrative walkthrough with checkpoints and "go deeper" offers.
Resolve the index before starting:

1. Check cocosearch.yaml for an `indexName` field -- if found, use it.
2. Otherwise, call `list_indexes()` and match the current project's directory name against the available indexes. The MCP tools auto-derive index names from directory paths (e.g., `my-project/` -> `my_project`), so a match is likely even when cocosearch.yaml is missing, provided the repo was indexed.
3. Call `list_indexes()` to confirm the project is indexed.
4. Call `index_stats(index_name="<resolved-name>")` to check freshness.
5. For cross-project setups (cocosearch.yaml has `linkedIndexes`): check the `warnings` array from `index_stats()` for entries starting with "Linked index".

Autonomous mode runs to completion without user interaction and returns structured findings.
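The directory-to-index-name auto-derivation can be sketched roughly as follows (a guess at the rule based only on the `my-project/` -> `my_project` example; `derive_index_name` is a hypothetical helper, not part of the CocoSearch API):

```python
import re
from pathlib import Path

def derive_index_name(project_dir: str) -> str:
    """Hypothetical sketch: take the directory's base name and replace
    runs of non-alphanumeric characters with underscores, lowercased
    (e.g. my-project/ -> my_project)."""
    base = Path(project_dir).name
    return re.sub(r"[^0-9a-zA-Z]+", "_", base).lower()

print(derive_index_name("/home/user/my-project"))  # my_project
```

If the derived name does not appear in `list_indexes()`, fall back to asking rather than guessing.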
Cast a wide net to locate where the concept lives.
Semantic search for the concept:
search_code(
query="<question rephrased as a natural description>",
use_hybrid_search=True,
smart_context=True,
limit=10
)
Cross-project search: If `linkedIndexes` is configured in `cocosearch.yaml`, searches automatically expand to linked indexes. For ad-hoc multi-project exploration, pass `index_names=["project1", "project2"]`.
Symbol search if the question mentions specific identifiers:
search_code(
query="<identifier>",
symbol_name="<identifier>*",
use_hybrid_search=True,
smart_context=True,
limit=5
)
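What `use_hybrid_search=True` does internally is not documented here, but hybrid engines commonly fuse a semantic ranking and a keyword ranking into one list. A minimal sketch of one standard technique, reciprocal rank fusion, assuming nothing about CocoSearch's actual implementation:

```python
def rrf_merge(semantic: list[str], keyword: list[str], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: score each hit by 1/(k + rank) in every
    list it appears in, then sort by the summed score."""
    scores: dict[str, float] = {}
    for results in (semantic, keyword):
        for rank, doc in enumerate(results, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=lambda d: scores[d], reverse=True)

# A file ranked in both lists beats a file ranked high in only one.
print(rrf_merge(["a.py", "b.py"], ["b.py", "c.py"]))  # ['b.py', 'a.py', 'c.py']
```

The practical takeaway: phrasing the query naturally (for the semantic side) and including literal identifiers (for the keyword side) both contribute to the final ranking.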
After Phase 1, assess what the results cover and what is still missing. If Phase 1 fully answers the question (rare), skip to Output.
Fill gaps identified in Phase 1. Choose searches based on what's missing.
Trace a specific function or class:
search_code(
query="<function-name>",
symbol_name="<function-name>",
symbol_type="function",
use_hybrid_search=True,
smart_context=True
)
Find related components not yet discovered:
search_code(
query="<aspect of question not covered by Phase 1>",
use_hybrid_search=True,
smart_context=True,
limit=5
)
Find callers or consumers of a key function:
search_code(
query="<function-name> call invoke use",
use_hybrid_search=True,
smart_context=True,
limit=5
)
Trace dependencies for a key file (if dependency index exists):
get_file_dependencies(file="<file-path>", depth=2)
get_file_impact(file="<file-path>", depth=2)
Dependency tools provide instant, complete file-level dependency data. Use them to map how modules connect without needing multiple search hops.
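To make the `depth` parameter concrete, here is a small breadth-first sketch of what a depth-limited dependency query covers (illustrative only; `deps_to_depth` and the adjacency-list graph shape are assumptions, not the tool's real data model):

```python
from collections import deque

def deps_to_depth(graph: dict[str, list[str]], start: str, depth: int) -> list[str]:
    """Breadth-first walk of a file-level dependency graph: everything
    reachable from start within the given number of hops."""
    seen, frontier, out = {start}, deque([(start, 0)]), []
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue  # do not expand past the depth limit
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                out.append(dep)
                frontier.append((dep, d + 1))
    return out

graph = {"a.py": ["b.py"], "b.py": ["c.py"], "c.py": ["d.py"]}
print(deps_to_depth(graph, "a.py", 2))  # ['b.py', 'c.py'] -- d.py is 3 hops away
```

So `depth=2` returns direct dependencies plus their dependencies, which is usually enough to map a module boundary in one call.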
Search budget: 3-5 total searches across Phases 1-2. If you need more than 7, split the question.
For the 2-3 most important findings, ensure you have full function/class bodies via smart_context=True. If earlier searches already returned sufficient context, this phase may be a no-op.
Return findings in this exact structure. This is what consuming agents expect.
## Findings
**Question:** <original question, verbatim>
**Status:** COMPLETED | PARTIAL | FAILED
**Index:** <index-name> (last indexed: <date or "unknown">)
<if stale>**Warning:** Index is <N> days old -- findings may not reflect recent changes.</if>
### Summary
<2-4 sentences directly answering the question. Be specific -- reference files, functions, patterns. No filler.>
### Key Files
| File | Role | Key Symbols |
|------|------|-------------|
| `src/module/file.py` | <what this file does for the question> | `func_a`, `ClassB` |
### Code References
**<descriptive title>** (`file:line`)
<1-2 sentence explanation of why this code matters>
\```python
<relevant code snippet from smart_context>
\```
### Connections
- <bullet showing how piece A connects to piece B>
- <bullet showing data flow or dependency>
### Gaps
<what couldn't be determined -- omit this section entirely if there are no gaps>
Status definitions:
- COMPLETED -- the question was fully answered.
- PARTIAL -- a useful partial answer; remaining unknowns are listed under Gaps.
- FAILED -- the question could not be answered (e.g., no relevant code found or the index is missing).
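As an illustration of why the exact structure matters, a consuming agent might extract the status with a check like this (`parse_status` is hypothetical consumer-side code; nothing in the skill mandates it):

```python
import re

def parse_status(findings_md: str) -> str:
    """Pull the Status field out of the structured findings block
    produced by autonomous mode; UNKNOWN if the field is absent."""
    m = re.search(r"\*\*Status:\*\*\s*(COMPLETED|PARTIAL|FAILED)", findings_md)
    return m.group(1) if m else "UNKNOWN"

report = "## Findings\n**Question:** How does X work?\n**Status:** PARTIAL\n"
print(parse_status(report))  # PARTIAL
```

Deviating from the template (renaming headings, dropping the Status line) breaks this kind of downstream parsing, which is why the structure is fixed.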
Invoked by a subagent via Task tool:
Task(
subagent_type="general-purpose",
prompt="Use the cocosearch-explore skill to answer: How does the config precedence system resolve conflicts? Return the structured findings.",
description="Explore config precedence"
)
Invoked in plan mode: Use this skill to understand the area you'll be modifying before proposing changes.
Step-by-step narrative exploration with user checkpoints and "go deeper" offers.
Identify what the user wants to understand. Different question types need different strategies:
Flow questions -- "How does X flow through the system?"
Logic questions -- "How does X decide/determine Y?"
Subsystem questions -- "How does the X subsystem work?"
Integration questions -- "How do X and Y interact?"
Confirm understanding: "You want to understand [rephrased question]. Let me trace through the codebase."
Cast a wide net with semantic and symbol searches.
Semantic search for the concept:
search_code(
query="<user's concept described naturally>",
use_hybrid_search=True,
smart_context=True,
limit=10
)
Symbol search for key identifiers:
search_code(
query="<identifier>",
symbol_name="<identifier>*",
use_hybrid_search=True,
smart_context=True,
limit=5
)
Synthesize entry points: from the Phase 1 results, identify the files and functions where the concept enters the system.
Branch: choose a tracing strategy based on the question type identified earlier.
Starting from entry points, trace how the concept works. Adapt strategy to question type:
For flow questions: Follow the data from input to output, one hop at a time. Build the chain: A() -> B() -> C() -> result. Use get_file_dependencies(file, depth=2) to quickly map how files connect.
For logic questions: Find the core decision function, examine branching logic (if/else, match, strategy patterns), trace each branch.
For subsystem questions: Map public API surface first (breadth-first), then drill into each function (depth-first). Use get_file_impact(file, depth=2) to see what depends on a key file.
For integration questions: Find component A's outbound interface, component B's inbound interface, then the glue where they connect. Dependency tools can reveal cross-module connections instantly.
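For intuition, `get_file_impact`-style reverse lookups are just dependency edges read backwards. A toy sketch under an assumed adjacency-list graph shape (`impacted_by` is illustrative, not the tool's API):

```python
def impacted_by(graph: dict[str, list[str]], target: str) -> list[str]:
    """Depth-1 impact: every file whose dependency list contains target,
    i.e. the files that would be affected if target changed."""
    return [f for f, deps in graph.items() if target in deps]

graph = {"api.py": ["auth.py"], "cli.py": ["auth.py"], "auth.py": ["db.py"]}
print(impacted_by(graph, "auth.py"))  # ['api.py', 'cli.py']
```

This is why impact queries are useful for integration questions: a shared target with multiple impacted files is often exactly the glue point between two components.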
Present a clear, structured narrative -- not raw search results.
Structure:
One-sentence summary: "Here's how [concept] works: [summary]."
Step-by-step walkthrough: for each step, say what happens (with a file:line reference) and show the relevant code (from smart_context).
Key design decisions: notable patterns, trade-offs, or architectural choices.
Keep explanations narrative, not listy. Connect the dots between code locations. Explain why, not just what.
After presenting the explanation, offer focused follow-ups.
Always ask: "Want me to go deeper into any of these steps, or explore a related area?"
Common follow-ups:
- Go deeper into a specific function: re-run search_code with smart_context=True and that function's symbol_name.
- Find tests for the concept: search_code(query="test <concept>", symbol_name="test_*<concept>*", symbol_type="function")

For common search tips (hybrid search, smart_context, symbol filtering), see skills/README.md.
Autonomous-mode specific:
Interactive-mode specific: