From agent-almanac
Investigates unfamiliar codebases, problems, or systems using adapted Coordinate Remote Viewing: clears assumptions via a cooldown, moves from raw observation to analysis in deliberate stages, and manages analytical overlays for unbiased insights.
npx claudepluginhub pjt222/agent-almanac

This skill uses the workspace's default tool permissions.
---
Approach an unknown codebase, problem, or system using the Coordinate Remote Viewing (CRV) protocol adapted for AI investigation: gathering raw observations before forming conclusions, managing premature labeling (Analytical Overlay, or AOL), and building understanding through staged data collection.
Cooldown (meditate): Transition from assumption-heavy mode into receptive observation. This step is non-negotiable.
Expected: A list of declared preconceptions and a conscious shift from "I think I know what this is" to "I will observe what this actually is." The stance is alert and receptive, not jumping to conclusions.
On failure: If assumptions keep reasserting ("but it really IS a React app..."), extend the cooldown. Write the assumption on a "parking lot" list and continue. Do not begin data gathering while actively attached to a specific hypothesis — it will color everything you observe.
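As a sketch of the parking-lot discipline (Python stands in for the protocol's note-taking; the class and example assumptions are illustrative, not part of the protocol), an explicit list that assumptions are written to before observation begins:

```python
from dataclasses import dataclass, field

@dataclass
class ParkingLot:
    """Declared preconceptions: visible, parked, and out of the way."""
    assumptions: list[str] = field(default_factory=list)

    def declare(self, assumption: str) -> None:
        # Writing the assumption down is what releases it from active use.
        self.assumptions.append(assumption)
        print(f"PARKED: {assumption}")

lot = ParkingLot()
lot.declare("This is probably a React app")       # example preconception
lot.declare("Tests probably live in __tests__/")  # example preconception
# Begin data gathering only once the list has stopped growing.
```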
Make initial contact with the target through the most minimal observation possible.
Glob to see only the top-level structure (e.g., * or path/*); do not read any files yet.
Expected: A handful of raw, low-level observations about the target's surface characteristics. No names, no labels, no architectural patterns; just shapes, sizes, and textures.
On failure: If you immediately categorize the project ("oh, this is a Next.js app"), declare it as AOL (Step 6), extract the raw descriptors underneath the label ("JavaScript files, nested pages directory, package.json present"), and continue with those raw observations.
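A minimal sketch of first contact, assuming local filesystem access (Python's pathlib is used here in place of Glob; the details are illustrative): list top-level entries with their sizes, without opening any of them.

```python
from pathlib import Path

# First contact: top-level entries only. Nothing is opened or read,
# so no file content can trigger a premature label.
for entry in sorted(Path(".").iterdir()):
    kind = "dir " if entry.is_dir() else "file"
    print(f"{kind}  {entry.stat().st_size:>10}  {entry.name}")
```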
Systematically collect raw data about the target without interpretation.
Stage II Data Channels for Codebase Investigation:
┌──────────────────┬────────────────────────────────────────────────────┐
│ Channel │ What to Observe │
├──────────────────┼────────────────────────────────────────────────────┤
│ File patterns │ Extensions, naming conventions, file sizes │
│ │ (NOT frameworks — just patterns) │
├──────────────────┼────────────────────────────────────────────────────┤
│ Directory shape │ Depth, breadth, nesting patterns, symmetry │
├──────────────────┼────────────────────────────────────────────────────┤
│ Configuration │ What config files exist? How many? What formats? │
├──────────────────┼────────────────────────────────────────────────────┤
│ Dependencies │ Lock files present? How large? How many entries? │
├──────────────────┼────────────────────────────────────────────────────┤
│ Documentation │ README present? How long? Other docs? Comments? │
├──────────────────┼────────────────────────────────────────────────────┤
│ Test presence │ Test directories? Test files? Ratio to source? │
├──────────────────┼────────────────────────────────────────────────────┤
│ History signals │ Presence of .git/, CHANGELOG/RELEASE_NOTES, │
│ │ lockfile timestamps (via Glob/Read if accessible) │
├──────────────────┼────────────────────────────────────────────────────┤
│ Energy/activity │ Which areas changed recently? Which are dormant? │
└──────────────────┴────────────────────────────────────────────────────┘
Glob, Grep, and light Read operations.
Expected: A list of raw observations that feel discovered rather than assumed. Some will be significant, some noise. The data should be low-level descriptions, not high-level categorizations.
On failure: If every observation turns into a categorization, you have slipped into analysis. Stop, return to the ideogram step, and re-contact the target with fresh eyes. If one channel dominates (all file observations, nothing about history), deliberately shift to underused channels.
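A sketch of Stage II collection across several of the channels above (Python stands in for Glob/Grep; the lockfile names and the "test" heuristic are illustrative choices, not prescribed by the protocol):

```python
from collections import Counter
from pathlib import Path

root = Path(".")
files = [p for p in root.rglob("*") if p.is_file() and ".git" not in p.parts]

# File patterns channel: extensions and counts, not frameworks.
extensions = Counter(p.suffix or "<none>" for p in files)

# Directory shape channel: maximum nesting depth.
max_depth = max((len(p.relative_to(root).parts) for p in files), default=0)

# Dependencies channel: which lock files exist, and how large are they?
lockfiles = {name: (root / name).stat().st_size
             for name in ("package-lock.json", "poetry.lock", "Cargo.lock")
             if (root / name).is_file()}

# Test presence channel: a rough ratio of test-looking files to all files.
tests = [p for p in files if "test" in p.name.lower() or "tests" in p.parts]

print("extensions:", extensions.most_common(5))
print("max depth: ", max_depth)
print("lockfiles: ", lockfiles)
print(f"test ratio: {len(tests)}/{len(files)}")
```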
Move from raw observations to spatial and structural understanding.
Expected: A rough structural map with relationship annotations. The target's general scope (large/small, simple/complex, monolithic/modular) becomes clearer. The "feeling" of the codebase is captured.
On failure: If the map feels like pure guesswork, simplify: note only the connections you can verify (actual import statements, actual config references). If no structural patterns emerge, return to Stage II and collect more raw data — dimensional understanding requires a foundation of observations.
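To keep the map honest, record only edges you can point at in the source. A minimal sketch, assuming a Python codebase (the regex and file filter are illustrative): one verifiable edge per actual import statement.

```python
import re
from pathlib import Path

# Each edge is (importing file, imported module); nothing is inferred.
IMPORT_RE = re.compile(r"^\s*(?:from|import)\s+([\w.]+)", re.MULTILINE)

edges = []
for path in Path(".").rglob("*.py"):
    try:
        text = path.read_text(encoding="utf-8", errors="ignore")
    except OSError:
        continue
    edges.extend((str(path), module) for module in IMPORT_RE.findall(text))

for src, module in edges[:20]:
    print(f"{src} -> {module}")
```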
In classic CRV, Stage IV focuses on deeper analytical structure; for codebase investigation, that work is intentionally merged into the earlier dimensional/structural stages above, so this adapted protocol proceeds to Stage V for directed questioning.
Now, and only now, bring specific questions to the investigation.
Grep and Read: targeted, not exploratory.
Expected: Specific answers to directed questions, grounded in the raw and structural data already collected. Confidence levels are honest.
On failure: If directed questions produce only AOL (you are answering from assumption rather than evidence), return to earlier stages. The CRV protocol is sequential for a reason — skipping the observation stages and jumping to questions produces unreliable answers.
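A sketch of a directed Stage V query: the question, the search pattern, and the confidence rule below are all invented for illustration; only the shape (targeted search, evidence listed, confidence stated) is the point.

```python
import re
from pathlib import Path

question = "Where is configuration loaded?"                    # hypothetical
pattern = re.compile(r"load_config|ConfigParser|os\.environ")  # hypothetical

hits = []
for path in Path(".").rglob("*.py"):
    lines = path.read_text(encoding="utf-8", errors="ignore").splitlines()
    for lineno, line in enumerate(lines, 1):
        if pattern.search(line):
            hits.append((path, lineno, line.strip()))

print(question)
for path, lineno, line in hits[:10]:
    print(f"  {path}:{lineno}: {line}")
print("confidence:", "grounded in evidence" if hits else "low: no direct evidence")
```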
AOL is the primary source of error in investigation. It occurs when the analytical mind prematurely labels the target. Manage it throughout the entire session.
AOL Types in Codebase Investigation:
┌──────────────────┬─────────────────────────────────────────────────┐
│ Type │ Description and Response │
├──────────────────┼─────────────────────────────────────────────────┤
│ AOL (labeling) │ "This is a Django app" — Declare: "AOL: Django"│
│ │ Extract raw descriptors: "Python files, urls.py,│
│ │ migrations directory, settings module." │
├──────────────────┼─────────────────────────────────────────────────┤
│ AOL Drive │ The label becomes insistent: "This HAS to be │
│ │ Django." Declare "AOL Drive" and pause. What │
│ │ evidence contradicts the label? Look for it. │
├──────────────────┼─────────────────────────────────────────────────┤
│ AOL Signal │ The label may contain valid information. After │
│ │ declaring, extract: "Django" → "URL routing, │
│ │ ORM pattern, middleware chain." These raw │
│ │ descriptors are valid data even if "Django" is │
│ │ wrong. │
├──────────────────┼─────────────────────────────────────────────────┤
│ AOL Peacocking │ An elaborate narrative: "This was built by a │
│ │ team that was migrating from Java and..." This │
│ │ is imagination, not signal. Declare "AOL/P" and │
│ │ return to raw observation. │
└──────────────────┴─────────────────────────────────────────────────┘
The discipline is not avoiding AOL — it is recognizing and declaring it so it does not contaminate the investigation. Every investigation produces AOL. Skill is in how fast you catch it.
Expected: AOL is recognized within moments of arising, declared explicitly, and the investigation continues with raw descriptors rather than labels.
On failure: If AOL has taken over (you realize you have been reasoning from a label for several steps), call an "AOL Break." Return to Stage II and collect new raw observations that test the label. A heavily contaminated investigation should be noted as such in the review.
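One way to make declarations cheap enough to do immediately is to log them as data. A minimal sketch (the type and function names are hypothetical): the label is recorded, and only the raw descriptors feed back into the investigation.

```python
from dataclasses import dataclass

@dataclass
class AOLDeclaration:
    kind: str               # "AOL", "AOL Drive", "AOL Signal", or "AOL/P"
    label: str              # the premature label, e.g. "Django"
    descriptors: list[str]  # raw data extracted from underneath the label

session_log: list[AOLDeclaration] = []

def declare_aol(kind: str, label: str, descriptors: list[str]) -> None:
    # Record the label rather than suppressing it; continue with descriptors.
    session_log.append(AOLDeclaration(kind, label, descriptors))
    print(f"{kind}: {label!r} -> keep {descriptors}")

declare_aol("AOL", "Django",
            ["Python files", "urls.py", "migrations directory", "settings module"])
```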
End the investigation formally and synthesize findings.
Expected: A grounded understanding of the target built up from raw observations rather than assumed from pattern matching. The synthesis is more accurate than a quick categorization would have been, and the confidence levels are honest.
On failure: If the synthesis feels thin, the earlier stages may not have collected enough data. But do not dismiss partial findings — a description of "73 TypeScript files, deeply nested component structure, active git history, thin test coverage" is more useful than a wrong label. Accurate description is the goal, not identification.
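A sketch of what a terminated session might report, built on the example findings above (the entries and confidence labels are illustrative): raw descriptions plus honest confidence, not a single identification.

```python
# Termination: accurate description with per-finding confidence.
findings = [
    ("73 TypeScript files, deeply nested component structure", "high"),
    ("active git history, recent changes concentrated in one area", "medium"),
    ("thin test coverage relative to source file count", "high"),
    ("AOL declared once; raw descriptors kept, label dropped", "high"),
]

print("Session summary")
for description, confidence in findings:
    print(f"  [{confidence:>6}] {description}")
```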
Related skills:
remote-viewing-guidance: the human-guidance variant where the AI acts as CRV monitor/tasker.
meditate: the mental stillness and assumption-clearing developed in meditation directly improves investigation quality.
heal: when investigation reveals the AI's own reasoning biases, self-healing addresses the root cause.