Interactive code learning through skeleton-based exercises. Use when a user wants to deeply learn a codebase, library, or programming concept by building it themselves. Triggers: "learn this codebase", "study this project", "teach me how this works", "I want to understand this library", "code dojo", "learn by building", "help me learn", "interactive coding exercise". Creates step-by-step exercises where the user fills in TODO implementations verified by tests or type checks.
Turn any codebase or concept into hands-on exercises where the learner writes the code.
The learner must write the code, not read it. Claude provides skeleton files with `// TODO` blanks at each point where core logic belongs.

Analyze the target codebase to identify teachable concepts, using the Explore agent to understand its structure and key ideas.
Produce a learning path: ordered list of concepts, smallest-first.
For each step in the learning path, create two files:
Skeleton file (mini/stepN.ts or the appropriate extension): compiles as-is, with `// TODO: ...` at each location the learner must implement.
Test file (mini/stepN.test.ts or appropriate): tests that pass only once those TODOs are implemented.
1. Present the step: explain what concept this covers and why
2. Learner edits the skeleton
3. Run tests (or type check), show results
4. If failures: point to the failing test name and the relevant concept, not the fix
5. If all pass: give brief feedback on their approach, then present next step
Hints must teach the concept, not give the answer.
Bad hint (too direct):
// Hint: T extends string ? true : false
Good hint (teaches the tool):
// Key concept: conditional types use the same structure as ternary operators.
// A extends B ? C : D
// "If A is assignable to B, then C, else D"
Bad hint (gives the code):
// Hint: return Object.keys(pattern).every(k => k in value && matchPattern(...))
Good hint (points to the building blocks):
// You need to check every key in the pattern.
// Useful tools: Object.keys(), the `in` operator, Array.every()
Rules for hints: teach the concept, never the answer; name the tools, not the code.

In skeletons, use `return false`, `return undefined as any`, or `type X = unknown` as placeholders so files still compile.

When the learner submits their implementation, run the verification loop above.
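A sketch of what those placeholder styles look like in practice (function and type names here are hypothetical):

```typescript
// Boolean predicate: an inert placeholder that compiles cleanly
function matches(pattern: object, value: object): boolean {
  // TODO: check every key of `pattern` against `value`
  return false;
}

// Non-boolean return type: `undefined as any` satisfies the signature
// but fails any real test loudly
function parse(input: string): { key: string; value: string } {
  // TODO: split on "=" and build the pair
  return undefined as any;
}

// Type-level placeholder the learner replaces with a real conditional type
type IsString<T> = unknown; // TODO
```

Each placeholder keeps the file compiling while making it unambiguous that the logic is still missing.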
Place all exercise files in a mini/ directory within the project:
project/
  mini/
    step1.ts        # Skeleton
    step1.test.ts   # Tests
    step2.ts
    step2.test.ts
    ...
The skeleton/test format adapts to the project language:
- TypeScript: `tsc --noEmit` for type exercises
- Go: `go test` with table-driven tests
- Rust: `cargo test` with `#[test]` functions

For type-level exercises (TypeScript), use compile-time assertions:
type Expect<T extends true> = T;
type Equal<A, B> = (<T>() => T extends A ? 1 : 2) extends (<T>() => T extends B ? 1 : 2) ? true : false;
// Test: type _t = Expect<Equal<MyType<Input>, Expected>>;
Verify with tsc --noEmit --strict instead of Jest.
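A complete type-level step might look like the following sketch, assuming a hypothetical `IsString` exercise:

```typescript
// Skeleton handed to the learner:
//   type IsString<T> = unknown; // TODO: conditional type

// A completed implementation:
type IsString<T> = T extends string ? true : false;

// Compile-time assertions: these lines fail `tsc --noEmit --strict`
// if IsString is wrong, and cost nothing at runtime.
type Expect<T extends true> = T;
type Equal<A, B> =
  (<T>() => T extends A ? 1 : 2) extends (<T>() => T extends B ? 1 : 2)
    ? true
    : false;

type _t1 = Expect<Equal<IsString<"hello">, true>>;
type _t2 = Expect<Equal<IsString<42>, false>>;
```

A wrong implementation makes `Expect<...>` reject its argument, so the type checker itself acts as the test runner.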