From thinking-frameworks-skills
Scans Substacker drafts for 10 AI-generated slop signatures like meta-framing openers, zombie nouns, prompt residue, and buzzword stuffing. Use when drafts feel generic after voice-check.
```sh
npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills
```

This skill uses the workspace's default tool permissions.
- [The 10 signatures](#the-10-signatures)
Related skills: Called by the Editor voice pass. Consumes hedge-cluster count from hedge-detector (S8). Emits the "Slop signatures" subsection.
The signature list is fixed: each of the ten is reported as either clean or flagged, with the offending span quoted.
| # | Signature | Detection |
|---|---|---|
| S1 | Meta-framing opener | First paragraph contains In this post, This article, We will explore, Let's dive into, Today we'll look at |
| S2 | List-carrying-argument | Any bulleted list where the argument collapses if bullets are removed. Test: does the prose still stand without the list? |
| S3 | Zombie nouns (Sword) | >3 nominalizations per 100 words (suffixes: -ation, -ity, -ment, -ence on abstract nouns) |
| S4 | Generic examples | "a company" / "a model" / "a user" with no specific name, scale, dataset |
| S5 | No first-person | Zero I, my, we-as-me in a >800-word reflective essay |
| S6 | Prompt residue | Let's break this down, To summarize, In conclusion, Key takeaways, Let me explain |
| S7 | Outline-shaped paragraphs | >60% of paragraphs follow same syntactic shape: topic → 3 supporting sentences → transition |
| S8 | Hedge cluster | ≥2 epistemic-weakness hedges within 50 words (from hedge-detector) |
| S9 | Buzzword stuffing | ≥3 terms from {game-changer, paradigm shift, under the hood, delve, unpack, dive into} in a single draft |
| S10 | Flattened uncertainty | Any small-N caveat that appears in corpus/drafts/notes/ but was removed in the submitted draft (requires notes; else skip this signature) |
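The phrase-based signatures (S1, S6, S9) reduce to substring checks. A minimal sketch, using the phrase lists from the table above; the function and variable names are illustrative, not part of the skill's interface:

```python
# Phrase lists copied from the signature table (S1, S6, S9).
META_FRAMING = ["in this post", "this article", "we will explore",
                "let's dive into", "today we'll look at"]            # S1
PROMPT_RESIDUE = ["let's break this down", "to summarize", "in conclusion",
                  "key takeaways", "let me explain"]                 # S6
BUZZWORDS = ["game-changer", "paradigm shift", "under the hood",
             "delve", "unpack", "dive into"]                         # S9

def scan_phrases(draft: str) -> dict:
    """Return the matched phrases per signature for the phrase-based rules."""
    lower = draft.lower()
    first_para = lower.split("\n\n")[0]
    buzz_hits = [b for b in BUZZWORDS if b in lower]
    return {
        # S1 only inspects the opening paragraph.
        "S1": [p for p in META_FRAMING if p in first_para],
        # S6 matches anywhere in the draft.
        "S6": [p for p in PROMPT_RESIDUE if p in lower],
        # S9 flags only when three or more distinct buzzwords appear.
        "S9": buzz_hits if len(buzz_hits) >= 3 else [],
    }
```

An empty list for a signature means it scans clean; a non-empty list carries the offending spans for the report.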
Slop scan draft D:
- [ ] Step 1: Run each signature's detection rule
- [ ] Step 2: Mark each signature as clean | flagged (with the offending quote)
- [ ] Step 3: Tally Tier-1 signatures: S1, S2, S6 (generic framing + prompt residue)
- [ ] Step 4: Tally Tier-2 signatures: S3, S4, S5, S7, S9
- [ ] Step 5: Emit the "Slop signatures" subsection with each signature labeled clean/flagged
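The checklist above can be sketched as a driver loop. The detector callables and the output layout here are placeholders, not the skill's actual emit format:

```python
TIER1 = {"S1", "S2", "S6"}

def emit_subsection(draft: str, detectors: dict) -> str:
    """detectors maps a signature id to a callable that returns the
    offending quote, or None when the signature scans clean."""
    lines = ["### Slop signatures"]
    tier1_hits = 0
    for sig_id in sorted(detectors, key=lambda s: int(s[1:])):
        quote = detectors[sig_id](draft)
        if quote:
            lines.append(f"- {sig_id}: flagged ({quote!r})")
            if sig_id in TIER1:
                tier1_hits += 1
        else:
            lines.append(f"- {sig_id}: clean")
    lines.append(f"Tier-1 slop violations: {tier1_hits}")
    return "\n".join(lines)
```

Signatures are sorted numerically so the report always reads S1 through S10 in order.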
Count suffix hits (-ation, -ity, -ment, -ence, -ness, -ance) on abstract nouns per 100 words. >3 = flag. Example: "provides analysis of" → nominalized; "analyzes" → active.
Flag an example (S4) if it uses only generic pronouns or nouns without a specific anchor such as a name, scale, or dataset.
Parse paragraphs and count those matching the S7 shape (topic sentence → three supporting sentences → transition). If more than 60% of paragraphs follow this shape, the draft reads like an expanded AI-generated outline.
Draft fragment:

> In this post, we'll explore why RAG beats fine-tuning.
>
> First, let's define RAG. It's a technique where models retrieve documents before generating. A company might use RAG for their customer service chatbot.
>
> Second, fine-tuning involves training. A team might fine-tune to adapt style.
>
> Third, RAG has benefits. Fine-tuning has drawbacks. It could be argued that hybrid works.
>
> To summarize, both approaches have merit.
Detections:
- S1 flagged: "In this post, we'll explore" (meta-framing opener)
- S2 flagged: the First/Second/Third enumeration carries the argument
- S4 flagged: "A company might use RAG", "A team might fine-tune" (no specific name, scale, or dataset)
- S6 flagged: "To summarize" (prompt residue)
- S7 flagged: every paragraph follows the topic → support → transition shape

Output: 5 signatures flagged (S1, S2, S4, S6, S7). Tier-1: S1, S2, S6 = 3 tier-1 slop violations.
Hedge clusters flow from hedge-detector into S8 as an input, not as a separate scan here; the scanner consumes hedge-detector's cluster count directly.
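For reference, the S8 rule hedge-detector applies (≥2 hedges within 50 words) can be sketched as a positional check. The hedge list below is illustrative; the real list lives in hedge-detector:

```python
import re

# Illustrative hedge list; the canonical list belongs to hedge-detector.
HEDGES = {"perhaps", "arguably", "somewhat", "possibly", "seemingly"}

def hedge_clusters(text: str, window: int = 50) -> int:
    """Count pairs of consecutive hedges closer than `window` words apart."""
    words = re.findall(r"[a-zA-Z'-]+", text.lower())
    positions = [i for i, w in enumerate(words) if w in HEDGES]
    return sum(1 for a, b in zip(positions, positions[1:]) if b - a < window)
```

Any nonzero count from hedge-detector flags S8.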