From rpikit
Explores codebases through collaborative dialogue: clarifies research needs via questions, then systematically locates and analyzes files for architecture, patterns, and data flows.
`npx claudepluginhub bostonaholic/rpikit --plugin rpikit`

This skill uses the workspace's default tool permissions.
Research topic: **$ARGUMENTS**
Help turn research requests into thorough codebase understanding through natural collaborative dialogue.
Start by understanding what the user needs to learn, then ask questions one at a time to refine the scope. Once you understand what you're researching, explore the codebase systematically, presenting findings in digestible sections and validating as you go.
Ask questions BEFORE exploring code.
Do not touch the codebase until the problem is understood. Resist the urge to immediately search for files or read code.
Your first action must be asking a clarifying question.
Ask questions one at a time using the AskUserQuestion tool, focusing on understanding what the user needs to learn and why.
When you believe you understand, confirm:
Summarize your understanding and ask if it's accurate before proceeding. If anything needs clarification, ask follow-up questions.
Only proceed after confirming understanding with the user.
Use the file-finder agent to locate files relevant to the research objective:
Task tool with subagent_type: "file-finder"
Prompt: "Find files related to [topic from interrogation]. Goal: [user's stated purpose]"
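A filled-in invocation might look like this sketch (the topic and goal here are illustrative placeholders, not real values):

```
Task(
  subagent_type: "file-finder",
  prompt: "Find files related to session token refresh.
           Goal: understand how auth tokens are renewed
           before writing a spec for a refresh endpoint."
)
```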
The file-finder will return a structured report listing relevant files grouped by category (core, supporting, test, config).
Use TaskCreate to track exploration based on the file-finder report. Create one task per file category (core, supporting, test, config) and update via TaskUpdate as you examine each.
- Examine core files first
- Trace the relevant data flows
- Review supporting files
- Identify technical constraints
After identifying relevant files, use the LSP tool for deeper structural understanding:
- goToDefinition — trace how functions and types connect across files
- findReferences — understand where symbols are used throughout the codebase
- documentSymbol — get a structured overview of a file's exports and structure
- incomingCalls / outgoingCalls — map call hierarchies to understand data flow

If LSP is unavailable (no configured language server), skip this step and rely on Grep-based content search.
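As a sketch of how these operations chain together during a trace (file and symbol names are hypothetical, and the exact call syntax depends on the workspace's LSP tool):

```
documentSymbol(src/api/orders.ts)     -> exports: createOrder, getOrder
goToDefinition(createOrder call)      -> OrderService.create
outgoingCalls(OrderService.create)    -> OrderRepository.insert, EventBus.emit
findReferences(OrderService.create)   -> routes/orders.ts, jobs/import.ts
```

Each result narrows the next query, so the trace stays anchored to symbols the codebase actually uses rather than text matches.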
For single-page lookups (e.g., checking a library's API docs or a specific GitHub issue), use WebFetch directly instead of spawning a web-researcher agent. Reserve the web-researcher for multi-source research requiring synthesis.
If codebase exploration reveals external factors that need broader investigation, use the web-researcher agent:
Task tool with subagent_type: "web-researcher"
Prompt: "[specific research question about external topic]"
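For example, a web-researcher invocation might look like this sketch (the question is illustrative):

```
Task(
  subagent_type: "web-researcher",
  prompt: "How does the ORM used in this codebase handle connection
           pooling, and are there known issues with pool exhaustion
           under high concurrency?"
)
```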
Use web research for external questions the codebase cannot answer, such as library behavior, upstream issues, or ecosystem conventions. The web-researcher returns findings with source citations and confidence assessments.
Present findings incrementally:
Create research document at: docs/plans/YYYY-MM-DD-<topic>-research.md
(Use today's date in YYYY-MM-DD format)
# Research: <Topic> (YYYY-MM-DD)
## Problem Statement
[What the user wants to accomplish]
## Requirements
[Key requirements gathered during interrogation]
## Findings
### Relevant Files
| File | Purpose | Key Lines |
| --------------- | ----------- | --------- |
| path/to/file.ts | Description | 42-87 |
### Existing Patterns
[Patterns discovered that inform implementation]
### Dependencies
[External and internal dependencies]
### External Research
[Findings from web research, if conducted - include sources]
### Technical Constraints
[Limitations discovered during exploration]
## Open Questions
[Questions that remain unanswered]
## Recommendations
[Initial thoughts on approach]
Ask what the user wants to do next (e.g., proceed to spec writing, dig deeper into a specific finding, or stop here).
- **Funnel questions** - start broad, then narrow based on answers.
- **Assumption surfacing** - make assumptions explicit: "I'm assuming this needs to work with the existing auth system. Is that correct?"
- **Trade-off questions** - when multiple approaches exist: "There's a trade-off: Option A is faster to build but less flexible. Option B is more flexible but more complex. Which matters more here?"
- **Clarification through examples** - when requirements are vague: "Can you give me an example of what you'd expect to happen?"
| Wrong | Right |
|---|---|
| Reading files immediately | Ask questions first |
| Multiple questions in one message | One question, wait, then next |
| "I understand, let me look" | "Let me confirm: [summary]. Accurate?" |
| "How should we handle this?" | "Should we A) do X, B) do Y, or C) something else?" |
| "I'll add a new AuthService" | "The codebase uses repository pattern. Auth is here." |