From thinking-frameworks-skills
Computes a 0-10 intuition-density score for seed text using 8 signals: analogy presence, worked examples, counterfactuals, reframes, biology-to-AI transfers, questions, hedges, and math-to-metaphor handoffs. Outputs the score and the triggered signals for pipeline frontmatter enrichment.
Install: `npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills`

This skill uses the workspace's default tool permissions.
- [The 8 signals](#the-8-signals)
Related skills: called by `ingest-inbox-item` (step 3). Reads `shared-context/style-guide.md` (for the em-dash reframe pattern). The score enters the seed's frontmatter as `intuition_density.score`; the triggered signals are recorded as `intuition_density.signals`.
## The 8 signals

Each signal is detected by a concrete pattern (not "LLM vibes"). The score is a weighted sum, clamped to [0, 10].
| Signal | Detection rule | Weight |
|---|---|---|
| `analogy_present` | explicit "like", "think of it as", "is secretly", "is to X what Y is to Z", em-dash reframe | 2 |
| `concrete_worked_example` | numbered or named instance with specific numbers / entities (names a system, shows arithmetic) | 2 |
| `counterfactual_offered` | "if it were X instead", "unlike Y", "this is why Z doesn't work" | 1 |
| `reframe_against_default` | "not X — Y", "people say X but", em-dash reframe | 1 |
| `biology_to_ai` | biology vocabulary (antibody, neuron, immune, evolution, synapse, crypt, DNA) in an AI context | 1 |
| `question_posed` | interrogative sentence that drives the piece forward | 1 |
| `hedge_calibrated` | "I do not know", "I am not sure", explicit uncertainty with scope | 1 |
| `math_to_metaphor_handoff` | equation or formal statement followed by prose restatement | 1 |
Max weight sum: 10. Min: 0.
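As a sketch, the table can be captured as a plain weight map (signal names are taken from the table above; the skill's actual internal representation is not specified here):

```python
# Weights for the 8 intuition-density signals (from the table above).
WEIGHTS = {
    "analogy_present": 2,
    "concrete_worked_example": 2,
    "counterfactual_offered": 1,
    "reframe_against_default": 1,
    "biology_to_ai": 1,
    "question_posed": 1,
    "hedge_calibrated": 1,
    "math_to_metaphor_handoff": 1,
}

# Sanity check: the maximum possible raw sum is exactly 10.
assert sum(WEIGHTS.values()) == 10
```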
Score one seed body:
- [ ] Step 1: Run each of the 8 detection patterns over body + title
- [ ] Step 2: For each signal that fires, record in signals list
- [ ] Step 3: Sum weights; clamp to [0, 10]
- [ ] Step 4: Return `{score: int, signals: [str]}`
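The four steps can be sketched in Python as follows. This is a minimal sketch, not the skill's implementation: the detector functions are hypothetical placeholders supplied by the caller, and any signal without a detector simply never fires.

```python
# Weights from the signal table; max possible raw sum is 10.
WEIGHTS = {
    "analogy_present": 2, "concrete_worked_example": 2,
    "counterfactual_offered": 1, "reframe_against_default": 1,
    "biology_to_ai": 1, "question_posed": 1,
    "hedge_calibrated": 1, "math_to_metaphor_handoff": 1,
}

def score_seed(title, body, detectors):
    """Run each detector over title + body, record fired signals,
    sum their weights, and clamp the total to [0, 10]."""
    text = f"{title}\n{body}"
    signals = [
        name for name in WEIGHTS
        if detectors.get(name, lambda t: False)(text)
    ]
    score = max(0, min(10, sum(WEIGHTS[s] for s in signals)))
    return {"score": score, "signals": signals}
```

For example, with a single toy detector that fires on the word "like", `score_seed("t", "dropout is like bagging", {"analogy_present": lambda t: "like" in t})` returns `{"score": 2, "signals": ["analogy_present"]}`.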
- `analogy_present`: regex for the markers above, AND the analogy must map source → target (a simile that names only one side doesn't count).
- `concrete_worked_example`: presence of numbers + a named entity. "3B params", "$11.35", "fifty queries" — yes. "a model" — no.
- `counterfactual_offered`: look for the 3 phrase patterns above, OR an explicit if-not construction.
- `reframe_against_default`: em-dash reframe pattern (X — actually Y) or explicit "not X / rather Y".
- `biology_to_ai`: biology vocabulary present AND the essay is in an AI/ML context (inferred from body topic tags if available).
- `question_posed`: ?-terminated sentence that is not rhetorical filler. Excludes questions in a quoted Q&A format.
- `hedge_calibrated`: "I do not know" (full sentence, not "I don't know what to order for dinner"), "I am not claiming", "I can't prove", specific-scope hedges ("on n=1", "in the three teams I've tested").
- `math_to_metaphor_handoff`: inline math/equation/formal statement (LaTeX or prose-math) followed within 2 sentences by a prose restatement.

Input body (dropout-as-ensemble):
had a thought while running — dropout is secretly an ensemble method. each forward pass is a different sub-network. so at test time when you turn dropout off and scale, you're averaging predictions across exponentially many thinned networks. this is why it generalizes. not "regularization" in the L2 sense. more like bagging. reminds me of how the immune system doesn't pick one antibody — it runs a population and lets the best ones dominate. dropout is antibody diversity for weights.
Detection run:
- `analogy_present` — fires ("dropout is antibody diversity for weights"). +2
- `concrete_worked_example` — fires ("each forward pass is a different sub-network", specific mechanism). +2
- `counterfactual_offered` — fires (not "regularization" in the L2 sense). +1
- `reframe_against_default` — fires ("more like bagging", reframe against "regularization"). +1
- `biology_to_ai` — fires (immune system / antibody). +1
- `question_posed` — no.
- `hedge_calibrated` — no.
- `math_to_metaphor_handoff` — no.

Output: `{score: 7, signals: [analogy_present, concrete_worked_example, counterfactual_offered, reframe_against_default, biology_to_ai]}`.
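A quick check of the arithmetic for this run, summing only the fired signals' weights and clamping the same way:

```python
# Weights of the five signals that fired on the dropout seed.
fired = {
    "analogy_present": 2,
    "concrete_worked_example": 2,
    "counterfactual_offered": 1,
    "reframe_against_default": 1,
    "biology_to_ai": 1,
}

# Sum and clamp to [0, 10]: 2 + 2 + 1 + 1 + 1 = 7.
score = max(0, min(10, sum(fired.values())))
assert score == 7
```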
Returns `{score, signals}`. `low_commentary` is set to `true` for scores above 3 (capped by the caller).
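For illustration, one detector might look like the regex sketch below. The marker list comes from the `analogy_present` row of the table; the pattern itself is an assumption, and this simplified version skips the required source → target mapping check (so it over-fires on one-sided similes):

```python
import re

# Markers from the analogy_present detection rule. NOTE: simplified
# sketch — it does not verify that the analogy maps a source to a
# target, which the real rule requires.
ANALOGY_MARKERS = re.compile(
    r"\blike\b|think of it as|is secretly|is to \w+ what \w+ is to",
    re.IGNORECASE,
)

def analogy_present(text):
    """Return True if any analogy marker appears in the text."""
    return bool(ANALOGY_MARKERS.search(text))
```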