Guides debugging of errors, unexpected behavior, and code flow tracing using CocoSearch semantic/symbol search with index/dependency health checks.
npx claudepluginhub violetcranberry/coco-search --plugin cocosearch

This skill uses the workspace's default tool permissions.
1. **Resolve index name** (use the resolved name for all operations):
   - Check cocosearch.yaml for an indexName field -- if found, use it.
   - If cocosearch.yaml is missing, call list_indexes() and match the current project's directory name against available indexes. The MCP tools auto-derive index names from directory paths (e.g., my-project/ -> my_project), so a match is likely if the repo was indexed without a config file.

2. **Check index health**:
   - list_indexes() to confirm the project is indexed
   - index_stats(index_name="<resolved-name>") to check freshness

3. **Check dependency freshness** -- call get_file_dependencies on any known file from the error context:

   get_file_dependencies(file="<file-from-error>", depth=1)
   - If warnings contains entries with type deps_outdated or deps_branch_drift, warn: "Dependency data is outdated -- call chain tracing may be incomplete. Want me to re-extract dependencies first?"
   - If warnings contains entries with type deps_not_extracted, note: "No dependency data found. I'll use search-based call tracing instead. Dependency data can improve tracing accuracy -- want me to extract dependencies?"

4. **Check linked index health** (if cocosearch.yaml has linkedIndexes): scan the warnings array from index_stats() for entries starting with "Linked index".

Parse what the user is reporting. Different inputs require different extraction:
- If error message or exception: extract the error type (ValueError, TypeError, NullPointerException, etc.) and any identifiers it mentions
- If unexpected behavior: capture what the user expected versus what actually happened
- If user provides stack trace: pull out the file paths, line numbers, and function names from the innermost frames
Store extracted information:
Present back to user: "I see the error is <error-type> in <function-name> when <operation>. Let me search for where this originates."
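To make the extraction step concrete, a Python traceback could be parsed along these lines (the regexes and the stored fields are illustrative, not part of CocoSearch):

```python
import re

TRACE = '''Traceback (most recent call last):
  File "src/auth/validator.py", line 45, in validate_token
    return User.from_dict(decoded)
KeyError: 'user_id'
'''

def parse_traceback(text: str) -> dict:
    """Pull the innermost frame and the error type from a Python traceback."""
    frames = re.findall(r'File "([^"]+)", line (\d+), in (\w+)', text)
    error = re.match(r'^(\w+):', text.splitlines()[-1])
    file, line, func = frames[-1]  # innermost frame is listed last
    return {
        "file": file,
        "line": int(line),
        "function": func,
        "error_type": error.group(1) if error else None,
    }

info = parse_traceback(TRACE)
```

The extracted `file`, `line`, and `function` feed directly into the symbol searches below.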
This is the critical discovery phase. Run both semantic and symbol searches simultaneously to find the strongest leads.
Semantic search for symptom:
search_code(
query="<user's symptom description>",
use_hybrid_search=True,
smart_context=True,
limit=10
)
Cross-project search: If linkedIndexes is configured in cocosearch.yaml, searches automatically expand to linked indexes. For bugs spanning shared libraries, pass index_names=["app", "shared-lib"] to trace across boundaries.
Symbol search for each identifier: For each identifier extracted from the symptom (function names, class names, error types):
search_code(
query="<identifier>",
symbol_name="<identifier>*",
use_hybrid_search=True,
smart_context=True,
limit=5
)
Synthesize results:
Present findings: "Based on the symptom, I found these strong leads:
- <file-path> contains <function-name> (appears in both semantic and symbol searches)
- <other-file> has related code but lower confidence

The issue likely originates in <strongest-candidate>. Want me to trace how code flows through this area?"
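As a sketch of the synthesis step, one simple approach is to score files higher when they surface in both the semantic and symbol result sets (CocoSearch's actual relevance scoring is not specified here; this helper is illustrative):

```python
def rank_leads(semantic_hits: list[str], symbol_hits: list[str]) -> list[tuple[str, int]]:
    """Score each file: 2 if it appears in both result sets, 1 if in only one."""
    scores: dict[str, int] = {}
    for hits in (semantic_hits, symbol_hits):
        for path in hits:
            scores[path] = scores.get(path, 0) + 1
    # Highest score first; files found by both search modes rank on top.
    return sorted(scores.items(), key=lambda kv: -kv[1])

leads = rank_leads(
    ["src/auth/validator.py", "src/api/auth.py"],
    ["src/auth/validator.py"],
)
```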
Branch based on findings:
Add language="python" (or the relevant language) to searches when the language is known.

Start shallow, go deeper on request. Default to ONE HOP first.
If the project has a dependency index, use the dependency MCP tools first — they provide instant, complete dependency data:
# What does this file depend on?
get_file_dependencies(file="<file-path>", depth=1)
# What depends on this file? (impact analysis)
get_file_impact(file="<file-path>", depth=2)
This immediately shows callers and callees at the file level. If the bug is in an imported dependency or caused by an upstream caller, the dependency tree reveals it directly.
If dependency tools return useful data: Use it as the primary trace and supplement with search below for symbol-level detail.
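Conceptually, get_file_impact walks a reverse-dependency graph to a bounded depth. A minimal sketch over a hypothetical dependency map (the map and helper are illustrative, not CocoSearch internals):

```python
from collections import deque

# Hypothetical reverse-dependency map: file -> files that import it.
REVERSE_DEPS = {
    "src/auth/validator.py": ["src/api/auth.py"],
    "src/api/auth.py": ["src/app.py"],
}

def file_impact(file: str, depth: int) -> set[str]:
    """Collect files affected by a change to `file`, up to `depth` hops away."""
    seen: set[str] = set()
    queue = deque([(file, 0)])
    while queue:
        current, d = queue.popleft()
        if d == depth:
            continue  # depth budget exhausted along this path
        for dependent in REVERSE_DEPS.get(current, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append((dependent, d + 1))
    return seen
```

With depth=2, a change to validator.py is flagged as reaching app.py through auth.py.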
One-hop trace:
search_code(
query="<function-name>",
symbol_name="<function-name>",
symbol_type="function",
smart_context=True
)
search_code(
query="calls <function-name>",
use_hybrid_search=True,
limit=10
)
search_code(
query="<called-function-name>",
symbol_name="<called-function-name>",
smart_context=True
)
Present one-hop view:
"Function <function-name> at <file>:<line>:
Called by: <caller-A>, <caller-B>, <caller-C>
Calls: <callee-D>, <callee-E>

Here's the function body:
[full function code from smart_context]
Checkpoint with user: "This is one level deep. Want me to trace deeper into any of these callers or callees?"
If user wants deeper trace:
Trace strategies:
Stop conditions:
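A depth-limited, cycle-aware trace helper illustrates typical stop conditions (a sketch; the depth cap and visited set are assumptions, since the specific conditions are not spelled out here):

```python
def trace_calls(call_graph: dict[str, list[str]], start: str, max_depth: int = 3) -> list[str]:
    """Depth-first call trace that stops on the depth limit or on a revisit (cycle)."""
    order: list[str] = []
    visited: set[str] = set()

    def walk(fn: str, depth: int) -> None:
        if fn in visited or depth > max_depth:
            return  # stop: already traced, or too deep
        visited.add(fn)
        order.append(fn)
        for callee in call_graph.get(fn, []):
            walk(callee, depth + 1)

    walk(start, 0)
    return order

# b calls a again -- the visited set prevents an infinite loop.
graph = {"a": ["b"], "b": ["c", "a"], "c": []}
```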
Present the root cause clearly:
What's wrong: show the problematic code (retrieved with smart_context=True)
Where it is: the exact file and line
Why it causes the symptom: connect the defect to the behavior the user reported
Example root cause presentation:
"Root cause found in src/auth/validator.py:45 in function validate_token:
def validate_token(token: str) -> User:
decoded = jwt.decode(token, verify=False) # <-- Problem here
return User.from_dict(decoded)
The issue: verify=False disables signature verification, allowing any malformed JWT to pass. This causes the KeyError you saw because the fake token doesn't have required fields.
This explains your symptom: when an attacker sends a crafted JWT, it's accepted without validation, then fails when trying to extract user data."
Ask about fix suggestions (do NOT auto-suggest): "Want me to suggest a fix based on how this is handled elsewhere in the codebase?"
If user wants fix suggestions:
search_code(
query="JWT token validation with signature verification",
use_hybrid_search=True,
language="python"
)
search_code(
query="jwt.decode verify signature",
use_hybrid_search=True
)
src/api/auth.py:23:
def validate_api_token(token: str) -> User:
try:
decoded = jwt.decode(token, SECRET_KEY, algorithms=['HS256'])
return User.from_dict(decoded)
except jwt.InvalidTokenError as e:
raise AuthenticationError(f"Invalid token: {e}")
Suggested fix for validator.py:45:
Pass SECRET_KEY and enable signature verification, as validate_api_token does. Want me to show the exact code change?"
If user doesn't want fixes: "Root cause identified. Let me know if you need anything else!"
Pattern 1: Symbol type filtering for specific searches
When debugging object-oriented code:
# Find all classes related to authentication
search_code(query="authentication", symbol_type="class")
# Find all methods that handle errors
search_code(query="error handler", symbol_type=["method", "function"])
Pattern 2: Language filtering for polyglot codebases
When error is language-specific:
# Python-specific async issue
search_code(query="async await deadlock", language="python")
# TypeScript type error
search_code(query="type mismatch interface", language="typescript")
Pattern 3: Symbol name wildcards for related functions
When tracing naming conventions:
# Find all handler functions
search_code(query="request processing", symbol_name="*Handler")
# Find all validator methods
search_code(query="validation", symbol_name="validate*")
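Assuming the * patterns follow glob-style semantics, the wildcard matching above behaves like Python's fnmatchcase (the symbol list is illustrative):

```python
from fnmatch import fnmatchcase

symbols = ["RequestHandler", "AuthHandler", "validate_token", "validate_email", "parse_config"]

# symbol_name="*Handler" -> suffix match
handlers = [s for s in symbols if fnmatchcase(s, "*Handler")]
# symbol_name="validate*" -> prefix match
validators = [s for s in symbols if fnmatchcase(s, "validate*")]
```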
Pattern 4: Context expansion for full understanding
When you need complete function bodies:
# Get full function context (default with smart_context=True)
search_code(query="database transaction", smart_context=True)
# Get fixed context lines
search_code(query="error handling", context_before=10, context_after=10)
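The fixed-context variant amounts to a bounded slice around the match line; a minimal sketch (the helper name is illustrative, not a CocoSearch API):

```python
def fixed_context(lines: list[str], match_idx: int, before: int = 10, after: int = 10) -> list[str]:
    """Return the lines surrounding a match, clamped at the start of the file."""
    start = max(0, match_idx - before)
    return lines[start: match_idx + after + 1]

lines = [f"line {i}" for i in range(100)]
window = fixed_context(lines, match_idx=50)  # 10 before + match + 10 after
```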
Pattern 5: Pipeline analysis for search debugging
When search results are unexpected, use analyze_query to see the full pipeline breakdown:
# See why a query returns specific results
analyze_query(query="getUserById")
Returns stage-by-stage diagnostics: identifier detection, hybrid mode decision, vector/keyword search results, RRF fusion breakdown (both/semantic-only/keyword-only counts), definition boost effects, and per-stage timings.
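The RRF fusion stage can be sketched as standard reciprocal rank fusion; k=60 is the conventional constant, and CocoSearch's actual value is an assumption:

```python
def rrf_fuse(vector_results: list[str], keyword_results: list[str], k: int = 60) -> list[str]:
    """Fuse two ranked lists: score(doc) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = {}
    for results in (vector_results, keyword_results):
        for rank, doc in enumerate(results, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# b.py appears in both lists, so it outranks the single-list hits.
fused = rrf_fuse(["a.py", "b.py"], ["b.py", "c.py"])
```

This also shows why the diagnostics report both/semantic-only/keyword-only counts: a document's fused score depends directly on how many lists it appears in.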
For installation instructions, see skills/README.md.