From agent-almanac
Conducts structured knowledge acquisition for unfamiliar codebases, frameworks, domains, or topics via survey, model-building, exploration, verification, and consolidation.
npx claudepluginhub pjt222/agent-almanac

This skill uses the workspace's default tool permissions.
---
Conduct a structured knowledge acquisition session — surveying unfamiliar territory, building initial models, testing them through deliberate exploration, integrating findings into coherent understanding, and consolidating for durable retrieval.
Use this skill when:
- remote-viewing surfaces intuitive leads that need systematic validation
- the user asks to teach a topic — the AI must first understand it deeply enough to explain it

Step 1 (Survey): Before attempting to understand anything, map the landscape to identify what exists.
Learning Modality Selection:
┌──────────────────┬──────────────────────────┬──────────────────────────┐
│ Territory Type │ Primary Modality │ Tool Pattern │
├──────────────────┼──────────────────────────┼──────────────────────────┤
│ Codebase │ Structural mapping — │ Glob for file tree, │
│ │ find entry points, core │ Grep for exports/imports,│
│ │ modules, boundaries │ Read for key files │
├──────────────────┼──────────────────────────┼──────────────────────────┤
│ API / Library │ Interface mapping — │ WebFetch for docs, │
│ │ find public surface, │ Read for examples, │
│ │ types, configuration │ Grep for usage patterns │
├──────────────────┼──────────────────────────┼──────────────────────────┤
│ Domain concept │ Ontology mapping — │ WebSearch for overviews, │
│ │ find core terms, │ WebFetch for definitions,│
│ │ relationships, debates │ Read for local notes │
├──────────────────┼──────────────────────────┼──────────────────────────┤
│ User's context │ Conversational mapping │ Read conversation, │
│ │ — find stated goals, │ Read MEMORY.md, │
│ │ preferences, constraints │ Read CLAUDE.md │
└──────────────────┴──────────────────────────┴──────────────────────────┘
Expected: A skeletal map of the territory with 5-15 landmarks identified. A sense of which areas are clear from the surface and which require deeper investigation. No understanding yet — just a map.
On failure: If the territory is too large to survey, narrow scope immediately. Ask: "What is the minimum I need to understand to serve the user's purpose?" If the territory has no clear entry point, start from the output (what does this system produce?) and trace backward.
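The codebase row of the modality table can be sketched as a small survey pass. This is an illustrative Python sketch, not the skill's actual tooling: the function name, the entry-point filenames, and the "most-imported module" heuristic are all assumptions standing in for the Glob/Grep/Read pattern.

```python
from collections import Counter
from pathlib import Path

def survey_codebase(root: str, max_landmarks: int = 15) -> list[Path]:
    """Skeletal map of a codebase: likely entry points and core modules.

    Heuristic sketch: conventional entry-point names plus the local
    modules that other files import most often serve as landmarks.
    """
    root_path = Path(root)
    landmarks: list[Path] = []

    # Conventional entry points first: surface-level landmarks.
    for name in ("main.py", "app.py", "cli.py", "__main__.py", "setup.py"):
        landmarks.extend(root_path.rglob(name))

    # Core modules next: files whose names other files import most often.
    counts: Counter = Counter()
    sources = list(root_path.rglob("*.py"))
    stems = {p.stem: p for p in sources}
    for src in sources:
        text = src.read_text(errors="ignore")
        for stem, path in stems.items():
            if path != src and f"import {stem}" in text:
                counts[path] += 1
    landmarks.extend(p for p, _ in counts.most_common())

    # Deduplicate while preserving order; cap at a manageable map size.
    seen: set[Path] = set()
    unique = [p for p in landmarks if not (p in seen or seen.add(p))]
    return unique[:max_landmarks]
```

The cap mirrors the "5-15 landmarks" expectation: a survey that returns hundreds of files is a file listing, not a map.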
Step 2 (Hypothesize): From the survey, construct initial hypotheses about how the system works.
Expected: Concrete, falsifiable hypotheses — not vague impressions. Each has a test that would confirm or refute it. The hypotheses collectively cover the most important aspects of the territory.
On failure: If no hypotheses form, the survey was too shallow — return to Step 1 and read 2-3 landmarks in depth. If all hypotheses feel equally uncertain, start with the simplest one (Occam's razor) and build from there.
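The "concrete, falsifiable" requirement can be made explicit as a data shape: every hypothesis carries its own test. A minimal sketch, with claim length as a deliberately crude stand-in for Occam's razor (a real simplicity measure is out of scope here):

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A concrete, falsifiable claim about the territory."""
    claim: str                 # specific statement about how the system works
    test: str                  # the probe that would confirm or refute it
    status: str = "untested"   # "untested" | "confirmed" | "refuted"

def simplest_first(hypotheses: list[Hypothesis]) -> list[Hypothesis]:
    # When all hypotheses feel equally uncertain, probe the simplest one
    # first. Claim length is an illustrative proxy for simplicity only.
    return sorted(hypotheses, key=lambda h: len(h.claim))
```

A hypothesis without a `test` field filled in is exactly the "vague impression" the step warns against.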
Step 3 (Probe): Systematically test each hypothesis through targeted investigation.
Expected: At least one hypothesis tested to conclusion. The mental model is beginning to take shape — some parts confirmed, some revised. Surprises are noted as particularly valuable data.
On failure: If probes consistently produce ambiguous results, the hypotheses may be testing the wrong things. Step back and ask: "What would someone who understands this system consider the most important fact?" Probe for that instead.
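The probe step is a control loop: drive each hypothesis to a verdict and keep the refutations, since those revise the model most. A sketch under assumed representations (hypotheses as plain dicts, a caller-supplied `run_probe` returning True, False, or None for ambiguous):

```python
def probe(hypotheses: list[dict], run_probe) -> list[dict]:
    """Drive each hypothesis to a conclusion; collect surprises as data.

    Each hypothesis is a dict with 'claim' and 'status' keys. run_probe
    returns True (confirmed), False (refuted), or None (ambiguous).
    """
    surprises = []
    for h in hypotheses:
        verdict = run_probe(h)
        if verdict is None:
            # Persistent ambiguity suggests the hypothesis tests the
            # wrong thing, per the failure guidance above.
            h["status"] = "ambiguous"
            continue
        h["status"] = "confirmed" if verdict else "refuted"
        if not verdict:
            surprises.append(h)   # refutations are particularly valuable
    return surprises
```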
Step 4 (Integrate): Synthesize findings into a coherent model that connects the pieces.
Expected: A coherent mental model that explains the territory's structure and predicts its behavior. The model should be expressible in 3-5 sentences and should make specific claims, not vague generalizations.
On failure: If the pieces do not integrate into a coherent model, there may be a fundamental misunderstanding in one of the earlier hypotheses. Identify the piece that does not fit and re-test it. Alternatively, the territory may genuinely be incoherent (poorly designed systems exist) — note this as a finding rather than forcing coherence.
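The 3-to-5-sentence constraint is the enforceable part of this step, and it can be sketched mechanically. The synthesis itself is judgment, not code; this illustrative helper only enforces the size bounds that keep a model honest:

```python
def integrate(confirmed_findings: list[str], max_sentences: int = 5) -> str:
    """Compress confirmed findings into a 3-to-5-sentence mental model.

    Illustrative only: a real session synthesizes prose rather than
    concatenating findings. The bounds encode the step's expectation.
    """
    if len(confirmed_findings) < 3:
        raise ValueError("too few confirmed findings to integrate; re-probe")
    # Cap at max_sentences so the model stays specific, not sprawling.
    return " ".join(confirmed_findings[:max_sentences])
```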
Step 5 (Verify): Test the mental model by making predictions and checking them.
Expected: The mental model survives at least 2 of 3 prediction tests. Where it breaks, the failure is understood and the model is corrected. The model now has both confirmed strengths and known limitations.
On failure: If most predictions fail, the mental model has a fundamental flaw. This is actually valuable information — it means the territory works differently than expected. Return to Step 2 with the new evidence and rebuild the hypotheses from scratch. The second attempt will be much faster because the wrong models have been eliminated.
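The 2-of-3 survival threshold can be sketched as a prediction tally. This assumes the model can be expressed as a callable and the tests as (input, expected) pairs; failures are kept rather than discarded, since each one localizes a known limitation:

```python
def verify(model_predict, cases: list[tuple], min_pass: int = 2) -> dict:
    """Check the mental model against concrete prediction tests.

    cases is a list of (input, expected) pairs; the model survives if
    at least min_pass of them hold.
    """
    failures = [(x, want, model_predict(x))
                for x, want in cases
                if model_predict(x) != want]
    passed = len(cases) - len(failures)
    return {
        "survives": passed >= min_pass,
        "passed": passed,
        "failures": failures,   # the model's known limitations
    }
```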
Step 6 (Consolidate): Capture the learning in a form that supports future retrieval and application.
Expected: A concise, retrievable summary that captures the essential understanding. Future references to this topic can start from this summary rather than re-learning from scratch.
On failure: If the learning resists summarization, it may not yet be fully integrated — return to Step 4. If the learning seems too obvious to be worth storing, consider that what feels obvious now may not feel obvious in a fresh context. Store the non-obvious parts.
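Consolidation can be as simple as writing the model and its known limitations to a note file. The filename scheme and note layout below are assumptions, not a format the skill prescribes:

```python
from datetime import date
from pathlib import Path

def consolidate(topic: str, model: str, caveats: list[str],
                notes_dir: str) -> Path:
    """Store the learning as a retrievable markdown note.

    Future sessions on the same topic start from this file instead of
    re-learning from scratch.
    """
    note = Path(notes_dir) / f"{topic.lower().replace(' ', '-')}.md"
    lines = [f"# {topic} ({date.today()})", "", model, "",
             "## Known limitations"]
    lines += [f"- {c}" for c in caveats]   # the non-obvious parts
    note.write_text("\n".join(lines) + "\n")
    return note
```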
Related skills:
- learn-guidance — the human-guidance variant for coaching a person through structured learning
- teach — knowledge transfer calibrated to a learner; builds on the model constructed here
- remote-viewing — intuitive exploration that surfaces leads for systematic learning to validate
- meditate — clearing prior context noise before entering a new learning territory
- observe — sustained neutral pattern recognition that feeds learning with raw data