Build deep, structured understanding of a codebase area or system topic before acting on it. Three complementary lenses: (1) Surface mapping — enumerate product and internal surfaces a topic touches, trace connections and blast radius across the system. (2) Pattern inspection — discover conventions, shared abstractions, and the "grain" of a code area by reading siblings. (3) System tracing — follow call chains, map transitive dependencies, identify cross-boundary contracts and implicit coupling. Use before spec work, implementation, review, debugging, or any task that needs understanding of what a feature touches and how the code works. Subsumes find-similar. Triggers: explore, inspect, discover, what does this touch, map the surfaces, world model, surface areas, what's affected, what patterns exist here, how does this area do X, before implementing, check conventions, find similar, trace the flow, blast radius, what would break, dependency tree, codebase inspection, feature scope, what would this change.
From the eng plugin (npx claudepluginhub inkeep/team-skills --plugin eng). This skill uses the workspace's default tool permissions.
References: references/output-formats.md, references/surface-categories.md
Explore builds structured understanding through three complementary lenses:
Surface mapping is wide (system-level). Pattern inspection is horizontal (what do peers look like?). System tracing is vertical (what does this flow through?). An exploration can use any combination of lenses, calibrated to purpose.
Explore is factual, not prescriptive. It reports what exists — surfaces, conventions, flows, dependencies. It does not evaluate whether patterns are good, recommend changes, or propose architecture.
Output defaults to conversation. Saving to a file is fine when it serves a consumer (e.g., feeding an implementation prompt, anchoring a spec). But default to ephemeral — codebases change and saved output gets stale.
Commonly invoked from /spec and /research. Determine which lenses to activate based on purpose:
| Context | Primary lens(es) | Depth |
|---|---|---|
| Pre-spec: understanding what a feature touches | Surface mapping + Tracing | Broad — L1-L3, sometimes L4 |
| Pre-implementation: what conventions to follow | Pattern inspection | Focused — L1-L3, L2 is critical |
| Reviewing PR or evaluating feedback | Pattern inspection | Targeted — L1-L2, just enough to answer |
| Writing or modifying tests | Pattern inspection | Focused — L1-L2 in test directories |
| Debugging / tracing a failure | System tracing | Focused — call chains through the failing path |
| "Find similar" / "do we do X elsewhere?" | Pattern inspection | Varies — start L1, expand as needed |
| "What connects to X" / "blast radius" | System tracing | Deep — entry point to system boundaries |
| "What surfaces does this touch?" / impact analysis | Surface mapping | Exhaustive — both product and internal |
| Changing a core primitive or shared abstraction | Surface mapping + Tracing | Deep — map all consumers and fan-out |
| Full world model for a topic | All three | Broad — full enumeration + code verification |
When the purpose is not stated, infer from context using this table.
Before starting, create tasks to track progress through the phases:
Mark each task in_progress when starting and completed when finished.
Determine what to investigate and why.
Input: A target (files, directories, domain, module, feature, topic) and optionally a purpose and consumer.
Consumer: /spec, /ship, /docs, /debug, or the user directly — this determines emphasis and output format.

Before any original investigation, find what the repo already knows.
Scan locations (check all that exist):
- .agents/skills/, .claude/agents/ — skills with surface area catalogs, system maps, dependency graphs
- AGENTS.md, CLAUDE.md — may reference architecture docs or conventions

Record: What was found and what it covers (reference by path — don't duplicate catalog content). What gaps remain. How current the knowledge appears (check dates, verify a few claims against code).
When nothing is found: Fall back entirely to original investigation. Note the absence — the repo would benefit from surface area documentation.
Active when surface mapping lens is selected.
Load: references/surface-categories.md
Identify which customer-facing and internal surfaces the topic touches. Start from repo-level catalogs if available; enumerate from scratch for gaps. Use the canonical category tables as enumeration scaffolding — they prevent missing surfaces.
For each product surface touched:
For each internal surface touched:
Both maps, always. Every topic touches product AND internal surfaces. Never produce one map without the other.
Don't just list surfaces — trace why each is affected. A surface is touched because something upstream in the dependency chain connects it to the topic. Name that connection.
Scope control for investigation depth:
After identifying surfaces, map how they connect for this topic. This turns two flat lists into a dependency graph.
Build the connection map: Surface → depends on → Surface. This is the skeleton of the world model brief's connection graph.
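The connection map can be held as a small adjacency structure that also answers "why is this surface touched?" by walking the chain. A minimal Python sketch; the surface names are hypothetical examples, not from any real repo:

```python
from collections import defaultdict

# Surface -> depends on -> Surface, as an adjacency map.
connections: dict[str, set[str]] = defaultdict(set)

def connect(surface: str, depends_on: str) -> None:
    """Record that `surface` is touched because it depends on `depends_on`."""
    connections[surface].add(depends_on)

# Hypothetical example: product surfaces traced back to the internal
# surfaces that explain why they are affected.
connect("REST endpoint POST /conversations", "conversations DB table")
connect("SDK method client.conversations.create", "REST endpoint POST /conversations")
connect("conversations DB table", "migration history")

def why_touched(surface: str) -> list[str]:
    """Walk the dependency chain that explains why a surface is affected."""
    chain: list[str] = []
    frontier = [surface]
    seen: set[str] = set()
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        chain.append(s)
        frontier.extend(connections.get(s, ()))
    return chain
```

The same structure serves directly as the skeleton of the world model brief's connection graph.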
Active for pattern inspection and system tracing lenses.
Start narrow. Expand only when needed. Stop when patterns emerge or the flow is mapped.
Level 1 — Direct search. Find the specific files, functions, or types mentioned or implied. Exact names, type names, import/export statements, known synonyms. Stop if: found clear matches.
Level 2 — Sibling discovery. Find files that serve the same role. Same directory, same naming pattern (*.handler.ts, *.route.ts, use*.ts), parallel directories. Read 3-5 sibling files. This is the minimum to distinguish patterns (consistent across files) from one-offs. One file is an anecdote; three files showing the same thing is a convention.
Use git log --oneline <path> to identify which convention is newer (active migration) vs. older (legacy). The consumer decides which to follow. Flag patterns observed only in other areas ("seen in <other domain>, not confirmed locally"). Stop if: patterns are emerging (or confirmed absent).
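Sibling discovery by naming pattern can be sketched mechanically. A minimal Python sketch, assuming TypeScript-style suffix conventions like *.handler.ts; the three-file threshold mirrors the anecdote-vs-convention rule above:

```python
from pathlib import Path
from collections import defaultdict

def sibling_groups(directory: str) -> dict[str, list[Path]]:
    """Group files in one directory by naming-pattern suffix
    (e.g. *.handler.ts, *.route.ts) to find role siblings."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for f in Path(directory).iterdir():
        if not f.is_file():
            continue
        parts = f.name.split(".")
        # "user.handler.ts" -> suffix key "handler.ts"; plain "README" -> "README"
        key = ".".join(parts[1:]) if len(parts) > 1 else f.name
        groups[key].append(f)
    # A convention needs at least three sibling files; fewer is an anecdote.
    return {k: sorted(v) for k, v in groups.items() if len(v) >= 3}
```

Each surviving group is a candidate convention; read 3-5 files from it before drawing conclusions.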
Level 3 — Reference tracing. Follow the dependency graph. What does this area import? (shared types, utilities, helpers, services.) What imports this area? (consumers, dependents.) Where are the shared abstractions defined? This reveals the shared vocabulary — the types and helpers the codebase expects you to use.
External import escalation (mandatory — do not silently stop at repo boundaries): When tracing imports that reference a package outside the current repo, always check whether organization companions or community packages exist. This applies whether the external dependency is OSS or a closed product — companies frequently open-source client SDKs, templates, adapters, and provider packages around a proprietary core.
Specifically: (1) Check the organization's package registry (npm @org/*, GitHub org repos) for companion packages. A missing feature in repo A may live in companion package B. (2) Search for community ecosystem: "[project] integrations", "[project] adapter", "awesome [project]". Third-party wrappers and adapters bridging between ecosystems frequently implement capabilities the core project doesn't. (3) Check locally (~/.claude/oss-repos/, node_modules/) for already-available packages. Flag all findings — do not assume the primary repo contains all functionality. For the full ecosystem discovery protocol when this exploration is part of a research workflow, see source-code-research.md.
Stop if: you understand the shared abstractions and import conventions.
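Level 3's "what does this area import?" question can be approximated with a regex scan. A rough Python sketch, splitting relative (in-repo) from external (package) imports; it only matches static `import ... from '...'` lines, not require() or dynamic import(), so treat it as an illustration, not a parser:

```python
import re
from pathlib import Path
from collections import Counter

IMPORT_RE = re.compile(r"""^\s*import\s+.*?\bfrom\s+['"]([^'"]+)['"]""", re.MULTILINE)

def import_sources(root: str) -> tuple[Counter, Counter]:
    """Count import sources under `root`: relative imports reveal in-repo
    shared abstractions, external imports reveal package dependencies
    worth escalating on (companions, adapters, ecosystem packages)."""
    relative: Counter = Counter()
    external: Counter = Counter()
    for f in Path(root).rglob("*.ts"):
        for source in IMPORT_RE.findall(f.read_text()):
            (relative if source.startswith(".") else external)[source] += 1
    return relative, external
```

High-count relative sources are the shared vocabulary; external sources starting with an org scope (e.g. @org/) are candidates for the companion-package check above.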
Level 4 — Conceptual expansion. Broaden to find analogous patterns in other domains. Same concept different domain, same pattern shape different names, cross-domain analogues. Use sparingly. Only go here when L1-L3 didn't reveal clear patterns or you need to confirm whether a pattern is area-local or repo-wide.
Supplementary signals: Git history (git log --oneline <path>, git log --diff-filter=M <path>) — files that change together are related; recent changes reveal active patterns vs. legacy. Git blame (git log --follow <file>) — helps distinguish intentional conventions from accidental one-offs.
When the goal is flow, dependencies, or blast radius rather than pattern discovery.
Think in graphs, not chains. A codebase is a dependency graph — changes propagate non-linearly. A change to a shared type fans out to every consumer, and each consumer's dependents in turn. Notice where the graph fans out — amplifier nodes where blast radius grows exponentially.
Start from an entry point — a specific function, route, handler, type, or module.
Trace forward (downstream): What does this call? Where does data go? What transformations happen? What side effects are triggered? (DB writes, events, external calls.)
Trace backward (upstream): What calls this? Where does input data originate? What triggers this code path? (API routes, event handlers, cron jobs.)
Map cross-boundary transitions: Where does control cross package or domain lines? What are the contracts at each boundary? (types, APIs, events, DB schemas.) Which transitions are tight coupling (shared types, direct imports) vs. loose coupling (API contracts, events)?
Trace implicit coupling (highest-risk — invisible in the import graph): shared database tables, event names and payload shapes, environment variables, convention-based string keys, shared filesystem paths.
Identify surface area touched: Product surfaces (API endpoints, SDK methods, UI components, CLI commands, docs, error messages) and internal surfaces (database tables, auth/permissions, telemetry spans, build/CI, configuration). If the repo provides surface area inventories, use them as the authoritative map — they're far more reliable than ad hoc enumeration.
Stop conditions: stable, versioned boundaries (published APIs, versioned packages, contracts enforced by types or tests).

Keep tracing through leaky boundaries: shared types across domains, internal utilities without versioning, implicit contracts (convention-based, not enforced).
Depth control: Calibrate to the question. "What does this touch?" → one level. "Blast radius?" → transitive to stable boundaries. "Full flow?" → entry to system boundary, both directions.
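The fan-out idea above can be sketched as a breadth-first walk over a reverse dependency map. A minimal Python sketch; the module names and the fan-out threshold of 3 are illustrative assumptions:

```python
from collections import deque

def blast_radius(dependents: dict[str, list[str]], changed: str) -> tuple[set[str], list[str]]:
    """Transitive closure of `changed` over a module -> direct-dependents map,
    plus the amplifier nodes (fan-out >= 3) where blast radius grows fastest."""
    affected: set[str] = set()
    queue = deque([changed])
    while queue:
        mod = queue.popleft()
        for dep in dependents.get(mod, ()):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    amplifiers = [m for m in {changed, *affected} if len(dependents.get(m, ())) >= 3]
    return affected, amplifiers
```

Amplifier nodes deserve the deepest tracing: a change reaching one of them multiplies its consumer set.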
Active when pattern inspection lens is selected. Runs after search has found the area.
Synthesize what the sibling files and reference tracing revealed about the area's conventions.
Patterns to look for (include only categories where you found clear patterns):
Shared vocabulary: Types, utilities, helpers, and abstractions this area uses — things to build on, not duplicate. Format each as: name — what it does — where it's defined.
When patterns diverge: Report both and indicate which appears to be the active convention based on recency. Do not flatten the divergence — the consumer needs to know the area is mid-migration.
When searching for specific patterns (especially "find similar" queries), classify what kind of similarity:
| Type | Search strategy | Example |
|---|---|---|
| Lexical — same names, keywords | Grep for exact terms | "Where else do we call formatDate?" |
| Structural — same code shape, different names | Read siblings, look for repeating structure | "Where else do we have retry logic?" |
| Analogous — same role, different domain | Check parallel directories | "Equivalent handler in run/ vs. manage/?" |
| Conceptual — same purpose, possibly different approach | Level 4 expansion | "How do we handle validation elsewhere?" |
For each match: location, similarity type, confidence (HIGH/MEDIUM/LOW), why similar.
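The per-match fields can be captured in a small record type so every result carries the same provenance. A minimal Python sketch; the field names and line format are illustrative, not a required schema:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Match:
    """One 'find similar' result: where it is, which kind of similarity,
    and how confident the classification is."""
    location: str  # file:line or symbol
    similarity: Literal["lexical", "structural", "analogous", "conceptual"]
    confidence: Literal["HIGH", "MEDIUM", "LOW"]
    why: str

    def report_line(self) -> str:
        return f"{self.location} [{self.similarity}/{self.confidence}] - {self.why}"
```

One line per match keeps the brief scannable and forces a confidence call on every claim.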
Load: references/output-formats.md
After investigating, synthesize findings into a coherent brief. Choose the format based on which lenses were active:
Confidence provenance — label every finding:
Gap discipline — for every surface or area you couldn't verify: name it, state what you checked, state what investigation would resolve it.
Typically 15-50 lines, not a full report. Calibrate to what the consumer needs.
Save vs. inline: Default to inline. Save to a file when the output will be consumed by a downstream skill (e.g., /spec needs the world model, /implement needs the pattern brief) or when the user explicitly asks. Saved briefs are snapshots — they go stale as the codebase changes.
Good exploration: calibrates lenses and depth to the purpose and consumer, labels every finding with confidence and provenance, names gaps explicitly, and reports what exists without prescribing.

Bad exploration: evaluates or recommends instead of reporting, treats one file as a convention, dumps exhaustive detail instead of a calibrated brief, saves output no downstream consumer needs.