This skill should be used when the user's learning goal has been classified by the Domain Assessor and needs deep investigation. Performs skill deconstruction into components, dependency graph construction, frequency/impact analysis, transfer pathway identification, failure point cataloging, and expert panel discovery. Uses web search extensively to ground decomposition in real expert perspectives. Output is a Skill Research Dossier conforming to skill-dossier.schema.json.
npx claudepluginhub netrxn/learn-anything --plugin learn-anything

This skill uses the workspace's default tool permissions.
Act as the research engine of a meta-learning system. Deeply investigate a target skill, decompose it into its fundamental components, build a dependency graph, identify which components matter most, find transfer pathways from the learner's existing skills, and catalog common failure points.
All state files live in learn-anything/<skill-slug>/. Read learn-anything/active-skill.json to find the active skill slug.
Before starting, read:
- learn-anything/<skill-slug>/domain-assessment.json — The skill classification and learner profile
- schemas/skill-dossier.schema.json — The required output format
- references/expert-interview-protocol.md — The Ferriss interview questions and deconstruction techniques

Before proceeding, verify all required upstream state files exist and contain expected fields:
- domain-assessment.json exists and contains skill_classification.target_skill and learner_profile
- active-skill.json exists and contains an active field

If any required file is missing or its required fields are absent, report the issue to the user rather than proceeding with partial data.
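A minimal sketch of this upstream-state check, assuming Python and the file layout described above (function name and error messages are illustrative):

```python
import json
from pathlib import Path

def load_upstream_state(root: Path = Path("learn-anything")) -> tuple[str, dict]:
    """Verify required upstream state files and fields before research begins."""
    active_path = root / "active-skill.json"
    if not active_path.exists():
        raise FileNotFoundError("active-skill.json is missing; run the Domain Assessor first")
    active = json.loads(active_path.read_text())
    slug = active.get("active")
    if not slug:
        raise ValueError("active-skill.json has no 'active' field")

    assessment_path = root / slug / "domain-assessment.json"
    if not assessment_path.exists():
        raise FileNotFoundError(f"{assessment_path} is missing")
    assessment = json.loads(assessment_path.read_text())
    if "target_skill" not in assessment.get("skill_classification", {}):
        raise ValueError("domain-assessment.json lacks skill_classification.target_skill")
    if "learner_profile" not in assessment:
        raise ValueError("domain-assessment.json lacks learner_profile")
    return slug, assessment
```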
Use web search to survey the territory. Search for:
Spend 3-6 web searches here. Look for structural information, not just content — how do experts organize this domain?
Assess field velocity during landscape mapping:
- HIGH_FRESHNESS_RISK or VERY_HIGH_FRESHNESS_RISK

Record the assessment in the dossier output as freshness_assessment.
For technical/product skills, search for official documentation, creator blogs, tutorials from the tool's authors, and recent conference talks. These are higher-signal than third-party articles for rapidly evolving tools. Prioritize these over general "how to learn X" articles.
Read references/expert-interview-protocol.md before proceeding. This step is mandatory — execute ALL six Ferriss questions using the search strategies documented there.
For each of the 6 Ferriss questions:
If a question produces zero findings after two search attempts, note it explicitly with confidence: LOW and record what was searched.
During interview synthesis, identify masters of the field — people who have driven the state of the art. For each, note:
Store these in the expert_panel array in the dossier output. The Curriculum Architect will present these to the learner as potential instructor personas.
Search for: "[field] greatest teachers", "[field] best instructors", "[field] pioneers", "[field] thought leaders". If the domain is too niche for recognizable teaching personas, note this — the downstream skill will fall back to asking the learner directly.
Before proceeding to component identification:
- research_sources will include at least 3 entries with type: "expert_interview"

This checkpoint exists because the Ferriss interview protocol is frequently skipped, leading to decompositions that reflect LLM training data rather than real expert perspectives.
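For illustration only, one plausible research_sources entry; the type value comes from the checkpoint above, while every other key is an assumption rather than the schema's actual shape:

```python
# Hypothetical entry; consult schemas/skill-dossier.schema.json for the real field names.
expert_interview_source = {
    "type": "expert_interview",                      # required by the checkpoint above
    "question": "<one of the six Ferriss questions>",
    "queries_tried": ["<field> unlikely experts"],   # record what was searched, per the fallback rule
    "finding": "Summary of what the searches surfaced, or a note that nothing was found",
    "confidence": "LOW",                             # LOW when two search attempts produce nothing
}
```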
This is the most critical step. Use a multi-pass approach to avoid blind spots:
Pass 1 — Top-down: Start from the skill as a whole. What are the major sub-domains? Break each sub-domain into components. Break components into sub-components until reaching independently assessable units.
Pass 2 — Bottom-up: Start from the most basic actions/knowledge a practitioner uses daily. What are the atomic units? Group them upward into logical clusters.
Pass 3 — Reconcile: Compare the two decompositions. What did top-down miss that bottom-up caught? What logical groupings from top-down don't appear in bottom-up? Merge into a unified component inventory.
For each component, classify:
- vertex-[skill]-[component-name] (lowercase, hyphenated)
- Concept (abstract knowledge unit) or Skill (assessable ability)
- recurrent (needs automation through drill) or non_recurrent (needs schema building through varied practice)
- high, medium, or low
- If low, what specifically needs validation?
- intrinsic (inherent complexity of the component itself), extraneous (complexity from how it's presented — should be minimized), or germane (productive complexity that builds schema). Most components are intrinsic. Flag any that are primarily germane (learning-to-learn skills) or that risk extraneous load if poorly taught.

Confabulation check: For EVERY component, provide a specific real-world example. If no example comes to mind, flag the component as potentially confabulated and mark confidence as low.

A concrete example of one classified component appears after the MECE check below.
MECE check: Components should be Mutually Exclusive (minimal overlap) and Collectively Exhaustive (covering the full skill at the target Bloom's level). Explicitly verify both.
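A hedged sketch of one classified component, using a hypothetical chess skill; the id pattern, type, recurrence, confidence, and cognitive-load values follow the classification list above, while the key names themselves are assumptions (the authoritative shape is schemas/skill-dossier.schema.json):

```python
# Illustrative only; field names are not taken from the schema.
component = {
    "id": "vertex-chess-pawn-structure",   # vertex-[skill]-[component-name], lowercase, hyphenated
    "type": "Concept",                     # Concept (abstract knowledge unit) or Skill (assessable ability)
    "recurrence": "non_recurrent",         # recurrent (drill to automation) vs non_recurrent (varied practice)
    "confidence": "medium",                # high, medium, or low
    "cognitive_load": "intrinsic",         # intrinsic, extraneous, or germane
    "real_world_example": "Deciding whether a pawn trade leaves an exploitable weak square",  # confabulation check
}
```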
Build the prerequisite and relationship structure:
PREREQUISITE edges (directed, from prereq to dependent):
- hard: Must learn A before B — B is incomprehensible without A
- soft: A helps with B but isn't strictly required

RELATED edges (undirected, similarity):
REINFORCEMENT edges (directed, practicing A strengthens B):
- Connect components where practice in one genuinely reinforces another
- Different from prerequisites — A and B may not have a learning dependency, but practicing A improves B
cluster_id: Assign each vertex to a semantic cluster. Group components that naturally belong together (e.g., "foundations", "intermediate-techniques", "advanced-applications"). Use short kebab-case identifiers (e.g., core-mechanics, strategy-layer, tooling). Clusters inform the dashboard visualization and help the Curriculum Architect design task classes.
Graph quality checks:
Aim for 15-40 components for a typical skill. Fewer than 15 suggests under-decomposition. More than 50 suggests over-decomposition (merge sub-components).
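A minimal sketch of the graph structure and quality checks, assuming Python with the networkx library; vertex ids, cluster names, and edge attributes are illustrative:

```python
import networkx as nx

g = nx.DiGraph()
g.add_node("vertex-chess-board-vision", cluster_id="core-mechanics")
g.add_node("vertex-chess-tactics", cluster_id="core-mechanics")
g.add_node("vertex-chess-endgame-technique", cluster_id="strategy-layer")

# PREREQUISITE edges run from prerequisite to dependent; strength is hard or soft.
g.add_edge("vertex-chess-board-vision", "vertex-chess-tactics", kind="PREREQUISITE", strength="hard")
g.add_edge("vertex-chess-tactics", "vertex-chess-endgame-technique", kind="PREREQUISITE", strength="soft")

# Quality check: the prerequisite graph must have no cycles.
assert nx.is_directed_acyclic_graph(g), "Prerequisite graph contains a cycle"

# Quality check: component count should fall in the expected range.
n = g.number_of_nodes()
if n < 15:
    print(f"{n} components: likely under-decomposed")
elif n > 50:
    print(f"{n} components: likely over-decomposed; merge sub-components")
```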
Estimate the Pareto distribution for this domain:
Identify gateway nodes — components with the highest betweenness centrality in the prerequisite graph. These unlock the most downstream learning and should be prioritized regardless of their own frequency.
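Gateway nodes can be estimated mechanically from the same graph; a sketch using networkx betweenness centrality (reusing the DiGraph g from the previous sketch):

```python
# Nodes that sit on many shortest prerequisite paths unlock the most downstream learning.
centrality = nx.betweenness_centrality(g)
gateway_candidates = sorted(centrality, key=centrality.get, reverse=True)[:5]
print("Gateway node candidates:", gateway_candidates)
```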
Read the learner's related experience from the domain assessment. For each related skill:
From the expert interviews and landscape mapping, catalog:
Write the complete Skill Research Dossier as JSON conforming to schemas/skill-dossier.schema.json. Verify every required field is present. Save to learn-anything/<skill-slug>/skill-dossier.json.
Before writing the output file, verify:
- schemas/skill-dossier.schema.json — all required fields present and correctly typed

If validation fails, fix the issue before writing. Do not write invalid JSON to the state file.
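A hedged sketch of the validate-then-write step, assuming Python with the jsonschema package; the helper name is illustrative:

```python
import json
from pathlib import Path
from jsonschema import validate, ValidationError  # pip install jsonschema

def write_dossier(dossier: dict, slug: str) -> None:
    """Validate the dossier against the schema and only then persist it."""
    schema = json.loads(Path("schemas/skill-dossier.schema.json").read_text())
    try:
        validate(instance=dossier, schema=schema)
    except ValidationError as err:
        raise SystemExit(f"Dossier failed schema validation: {err.message}")
    out_path = Path("learn-anything") / slug / "skill-dossier.json"
    out_path.write_text(json.dumps(dossier, indent=2))
```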
Present a conversational summary to the learner covering:
When invoked for a curriculum update (not initial research), follow a modified process:
- Read the existing skill-dossier.json first (this path is reached via the /update command)
- Update freshness_assessment with the current date and findings

Preserve all existing vertex IDs — changing IDs would break knowledge graph references.
After writing skill-dossier.json, the Learner Calibrator takes over. It reads the dependency graph and transfer pathways to design a diagnostic assessment. Summarize for the learner: the major component clusters found, key transfer pathways from their experience, and that next comes a conversational assessment of what they already know.