Generates counterfactual framings ('what if this component were absent?') that reveal a technical element's function by describing what breaks without it. For intuition-building in analogy sets.
Install: `npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills`

This skill uses the workspace's default tool permissions.
- [Workflow](#workflow)
Related skills: One of the 5 archetypes in generate-analogy-set. Can also be invoked standalone when the writer specifically wants the counterfactual angle without the full 5-set.
## Workflow

For topic T:
- [ ] Step 1: Identify the component / mechanism whose function the writer wants to illuminate
- [ ] Step 2: Propose the subtraction — what if this component were absent?
- [ ] Step 3: Describe the concrete system that results (what you'd have instead)
- [ ] Step 4: Describe what breaks — performance, correctness, expressivity, efficiency
- [ ] Step 5: Return the counterfactual framing statement (one way to capture the pieces is sketched below)
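If you want the five steps as a concrete artifact rather than loose prose, they map onto a small record. A minimal Python sketch; the class and field names here are hypothetical illustrations, not part of the skill's actual interface:

```python
from dataclasses import dataclass


@dataclass
class CounterfactualFraming:
    """One possible shape for the workflow's output (all names hypothetical)."""
    component: str         # Step 1: the mechanism under examination
    subtraction: str       # Step 2: the "what if it were absent?" move
    resulting_system: str  # Step 3: the concrete system you'd have instead
    breakdowns: list[str]  # Step 4: what fails (performance, correctness, ...)

    def statement(self) -> str:
        # Step 5: assemble the framing statement from the pieces above
        return (
            f"Remove {self.component} and you have {self.resulting_system} -- "
            f"{'; '.join(self.breakdowns)}. That absence is what "
            f"{self.component} is 'doing.'"
        )
```

The attention example below fills exactly these slots.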
Three sub-archetypes; pick whichever fits best:
Topic: Attention (in Transformers).
Ablation: "Remove attention from a transformer and you have a stack of residual MLPs per token — no information ever flows between token positions within a layer. The model can still transform each token independently, but 'context' is gone. That absence is what attention is 'doing.'"
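To make the ablation testable, here is a toy numpy sketch of a transformer block with and without single-head attention (shapes, the per-token MLP, and the omission of layer norms are all illustrative assumptions, not any specific model's architecture). In the ablated block, output row i depends only on input row i, so "context is gone" becomes a checkable property:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def block_with_attention(x, Wq, Wk, Wv, W1, W2):
    # Single-head self-attention: every position can read every other position.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v
    x = x + attn                           # residual: cross-token mixing happens here
    return x + np.maximum(x @ W1, 0) @ W2  # per-token MLP (layer norms omitted)

def block_without_attention(x, W1, W2):
    # Ablated block: a stack of residual MLPs applied to each token independently.
    # Row i of the output is a function of row i of the input alone.
    return x + np.maximum(x @ W1, 0) @ W2
```

Perturb token 0 of x: every row of the attention block's output can change, but only row 0 of the ablated block's output does.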
Substitution: "Replace attention with fixed convolutions (the RNN/CNN alternative). You get locality — each token sees its neighbors — but the model can't arbitrarily connect token 1 to token 500. Attention's gift is not the operation, it's the arbitrary-range addressing."
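A sketch of the substitution, assuming a fixed causal window of width w with mean-pooling as a stand-in for any fixed convolution (both are placeholder choices): position t mixes only positions t-w+1..t, so token 500 can reach token 1 only by stacking many layers, whereas attention's content-computed weights address the whole sequence in one step:

```python
import numpy as np

def local_mixing(x, w=3):
    """Fixed causal 'convolution': position t sees only positions t-w+1..t.

    The mixing weights (a plain mean here) are content-independent and the
    range is hard-limited to w -- unlike attention, whose weights are computed
    from the tokens and whose addressing range is the whole sequence.
    """
    T, d = x.shape
    out = np.zeros_like(x)
    for t in range(T):
        lo = max(0, t - w + 1)
        out[t] = x[lo:t + 1].mean(axis=0)
    return out

# Receptive field grows only linearly with depth: after L such layers a token
# sees at most L * (w - 1) + 1 positions back. With w=3, connecting token 500
# to token 1 takes ~250 stacked layers; attention does it in one.
```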
Inversion: "Invert attention: instead of softmax-weighted averaging, what if the model picked exactly one token to copy from? That's hard-attention, and it turns out to be worse for training — soft interpolation makes the loss surface navigable. Attention is soft by gradient-descent necessity, not by design choice."
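The inversion as a two-line contrast, assuming a single query with toy scores and values (all numbers illustrative): soft attention interpolates, hard attention copies. The copy is piecewise constant in the scores, so its gradient with respect to them is zero almost everywhere, which is the "loss surface" point:

```python
import numpy as np

scores = np.array([1.0, 3.0, 2.0])                  # toy attention logits for one query
values = np.array([[1., 0.], [0., 1.], [1., 1.]])   # one value vector per token

# Soft attention: a softmax-weighted average. The output moves smoothly as the
# scores move, so gradients flow back through the weights.
w = np.exp(scores - scores.max())
w /= w.sum()
soft = w @ values

# Hard attention: copy exactly one value row. The output is piecewise constant
# in the scores (zero gradient almost everywhere), so plain gradient descent
# gets no signal for improving the selection.
hard = values[np.argmax(scores)]
```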
Pick one (usually ablation for a 5-framing set). The others can each become a standalone post later.