From humanize
Identify and remove AI writing patterns to make text sound natural and human-written. Use when humanizing commit messages, PR descriptions, review comments, docs, changelogs, or release notes. Also for de-slopping text that sounds robotic, has AI vibes, or reads like ChatGPT output.
npx claudepluginhub smykla-skalski/sai --plugin humanize

This skill is limited to using the following tools:
<!-- justify: CF-side-effect Edit/Write are used on user-provided files, not infrastructure - safe to auto-invoke -->
Remove AI writing patterns from text and replace them with natural, human-sounding alternatives. Uses two complementary sources:
Designed for prose text: commit messages, PR descriptions, docs, changelogs, blog posts, review comments. Not for code, structured data (JSON/YAML), or text where AI patterns are intentional.
Parse from $ARGUMENTS:
| Flag | Default | Purpose |
|---|---|---|
| (positional) | — | File path to humanize. Prompt user if omitted |
| --score-only | off | Report detected patterns without rewriting |
| --dry-run | off | Output to chat instead of editing in-place |
Default: edit the file in-place, fixing all detected patterns regardless of severity (including faint ones). Use --dry-run to preview changes without modifying the file.
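The flag handling above can be sketched in a few lines. This is a minimal illustration of the parsing rules in the table, not the skill's actual implementation; the function name and return shape are assumptions.

```python
import shlex

def parse_arguments(arguments: str) -> dict:
    """Split an $ARGUMENTS string into a file path and flag settings.

    Mirrors the flag table: one optional positional file path,
    plus the boolean flags --score-only and --dry-run.
    """
    tokens = shlex.split(arguments)
    flags = {t for t in tokens if t.startswith("--")}
    positional = [t for t in tokens if not t.startswith("--")]
    return {
        # None means no path was given; the skill prompts the user.
        "path": positional[0] if positional else None,
        "score_only": "--score-only" in flags,
        "dry_run": "--dry-run" in flags,
    }
```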
The skill detects 24 AI writing patterns organized into five categories.
Read references/patterns.md for full pattern descriptions with words-to-watch lists and before/after examples (used in Phase 2 scan and Phase 5 verification).
Read references/elements-of-style.md for composition principles used during the Phase 4 rewrite (active voice, concrete language, omitting needless words, sentence variety, emphasis placement).
Parse $ARGUMENTS for the file path and flags.

Spawn a general-purpose agent to isolate the 280+ line pattern catalog from the main context. The agent reads the full catalog, scans the input, and returns only compact results.
Use TaskCreate to spawn the agent with this prompt:
You are a pattern-detection agent. Your job: scan the provided text against
every pattern in the catalog and return ONLY the hits.
CATALOG: Read the file at {absolute path to references/patterns.md}
TEXT TO SCAN:
<input>
{paste the full input text here}
</input>
INSTRUCTIONS (ultrathink — systematically check every pattern, do not skip any):
1. Read the catalog file in full.
2. Check the input text against all 24 AI writing patterns.
3. For each match, record: pattern ID, pattern name, quoted offending text,
severity (faint | clear | glaring).
4. Return ONLY a fenced code block with one JSON array of hit objects:
[{"id": "P01", "name": "...", "quote": "...", "severity": "..."}]
5. If no patterns detected, return an empty array: []
6. Do NOT rewrite anything. Do NOT add commentary outside the code block.
Set the agent description to "humanize: pattern scan".
Poll the agent with TaskGet until it completes.
Parse the JSON array from the agent's output. This is the hit list - store it for Phase 4.
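Extracting the hit list from the agent's reply could look like the sketch below. It assumes the agent followed its instructions and returned exactly one fenced code block containing a flat JSON array; the function name is hypothetical.

```python
import json
import re

def parse_hit_list(agent_output: str) -> list:
    """Pull the JSON array of pattern hits out of the agent's reply.

    Looks for a single fenced code block (optionally tagged ```json)
    and parses its contents. Nested arrays inside quotes would defeat
    this regex, but the agreed hit format is a flat array of objects.
    """
    match = re.search(r"```(?:json)?\s*(\[.*?\])\s*```",
                      agent_output, re.DOTALL)
    if match is None:
        raise ValueError("no fenced JSON array found in agent output")
    hits = json.loads(match.group(1))
    for hit in hits:
        # Guard against a malformed severity slipping into Phase 4.
        assert hit["severity"] in {"faint", "clear", "glaring"}
    return hits
```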
If --score-only, skip to Phase 6 (Report) using the hit list directly.
Read references/voice-guide.md in full before starting this phase.
Read references/elements-of-style.md in full before starting this phase.
This phase requires ultrathink. Reason through competing constraints (pattern removal, voice injection, composition principles, meaning preservation) before rewriting each section.
Use the hit list from the Phase 2 agent to fix every detected pattern regardless of severity. Even faint tells get fixed. Apply fixes in this order:
Preserve the original meaning. Never add information the source text does not contain because invented facts undermine trust even when the prose sounds better. Keep technical accuracy intact - style improvements that sacrifice correctness make the text worse, not better.
Re-read the hit list from Phase 2 and the composition principles from references/elements-of-style.md before checking. This anchors verification against the same criteria used during the rewrite.
Check the rewritten text:
If any check fails, revise the affected sections and re-verify.
Output a pattern report:
| Column | Content |
|---|---|
| # | Sequential number |
| Pattern | Pattern name from the catalog |
| Instance | Quoted offending text from the original |
| Fix | What replaced it (or "removed" if stripped) |
Include a summary line: patterns detected count, category count, and overall severity (Minor, Moderate, Heavy).
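One way to derive the overall severity label is to take the worst per-hit severity. The mapping below is an assumption for illustration; the skill does not specify how Minor/Moderate/Heavy follow from faint/clear/glaring counts.

```python
def overall_severity(hits: list) -> str:
    """Roll per-hit severities up into one label for the summary line.

    Assumed mapping: any glaring hit -> Heavy, else any clear hit
    -> Moderate, else Minor (including an empty hit list).
    """
    severities = {h["severity"] for h in hits}
    if "glaring" in severities:
        return "Heavy"
    if "clear" in severities:
        return "Moderate"
    return "Minor"
```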
If --score-only, stop here.
If --dry-run, output the rewritten text to chat.

Output:
The framework speeds up common tasks like scaffolding and test generation. The team built it after noticing developers spent 40% of sprint time on boilerplate.
Patterns fixed: AI vocabulary (Additionally, groundbreaking, enhance), significance inflation (testament, commitment to fostering), copula avoidance (serves as), superficial -ing (showcasing), promotional language (rapidly evolving landscape)
</example>
<example>
Score-only mode (no rewriting):
/humanize docs/architecture.md --score-only
</example>
<example>
Dry-run preview:
/humanize CHANGELOG.md --dry-run
</example>