From vibesubin
Writes AI-optimized docs (README, CLAUDE.md, AGENTS.md, commit messages, PR descriptions) using tables, checklists, absolute paths, and invariants for cold project starts.
Install: `npx claudepluginhub subinium/vibesubin --plugin vibesubin`
Documentation in this pack has a different primary audience than usual. Most docs are written for a human who will skim them once. These are written for an **AI assistant in a future session** — same operator, same model, but zero context from this conversation.
AI reads tables faster than prose. It follows explicit file paths better than vague references. It needs "always do X, never do Y" stated declaratively, not implied. It re-reads the same file every session, so the payoff for writing once is large.
This skill produces docs in that style. Humans benefit too — the structure is also friendlier for cold skimmers.
Every doc this skill produces is built from a fixed set of sections. The section tree below is the vocabulary. Different doc types use different subsets.
```text
Doc
├── Header
│   ├── Title (one line, noun phrase)
│   └── One-sentence description (what this is, in plain terms)
│
├── Orientation (always first)
│   ├── Why this exists        # one paragraph, not three
│   ├── Target audience        # who reads this
│   └── Status                 # early/stable/deprecated, last updated
│
├── Repo map (for README / CLAUDE.md)
│   └── Directory tree with one-line comment per file
│       (absolute paths, in backticks, no wildcards)
│
├── Invariants (for CLAUDE.md / AGENTS.md)
│   ├── Never-do list          # hard rules, reasons stated
│   ├── Always-do list         # mandatory rituals, verification command
│   └── Trade-offs             # decisions already made, don't re-litigate
│
├── How-to (for README)
│   ├── Quick start            # 5 minutes from clone to running
│   ├── Environment variables  # every var, with example value
│   ├── Common tasks           # top 5 things the operator does weekly
│   └── Troubleshooting        # top 5 errors the operator hits
│
├── Architecture (for README / ARCHITECTURE.md)
│   ├── Text diagram           # ASCII art, not Mermaid (AI parses text better)
│   ├── Data flow              # request → handler → service → DB
│   └── Key modules            # what each major module does and owns
│
├── Change log (for CHANGELOG.md)
│   └── Per-version entries with breaking / features / fixes / security
│
├── Conventions (for CONTRIBUTING.md)
│   ├── Language               # of code, of docs, of commits
│   ├── Branch strategy        # one-paragraph summary
│   ├── Commit prefix          # feat / fix / chore / refactor / docs / test / ci / perf
│   └── Review process         # if any
│
└── Pointers (for all docs)
    └── Links to deeper docs, with one-line "what you'll find there"
```
A README for a small project uses: Header + Orientation + Repo map + How-to + Pointers. A CLAUDE.md uses: Header + Invariants + Repo map + Pointers. A CONTRIBUTING.md uses: Header + Conventions + Pointers.
Every README, CLAUDE.md, commit message, and PR body this skill produces is a potential leak surface. LLM-generated docs tend to be chatty and accidentally embed details that should not live in a file the operator will push to a public remote. Apply this table before emitting any artifact.
| Category | Always safe | Redact unless obviously public | Never write |
|---|---|---|---|
| File paths | Repo-relative (src/auth/session.ts) | — | Absolute paths with user/hostname (/Users/alice/..., C:\Users\..., /home/bob/...) |
| Hostnames / URLs | Public docs URLs, the project's own domain, localhost | Staging domains, internal IPs behind a company VPN | Literal production database hostnames, internal office network IPs |
| Credentials | Placeholder tokens (__REPLACE_ME__, <YOUR_TOKEN>) | — | Real tokens, passwords, API keys, SSH private keys |
| User identities | Anonymized labels (operator, admin, reviewer) | First names only when the project is tiny and they're already public | Full real names, emails, phone numbers, GitHub handles of private collaborators |
| Customer data | Schema names, column descriptions | Sample rows with synthetic data | Real rows, real user IDs, real PII, dumps from production |
| Infrastructure | Public service names (GitHub Actions, Fly.io) | Internal service nicknames | Internal IPs, on-call phone numbers, pager schedules |
| Business context | Public product name, public features | High-level roadmap | Unannounced features, unreleased pricing, confidential partnerships |
Rule of thumb: if an artifact will be committed to a public repository, assume a competitor and a reporter are reading it. Redact accordingly.
When in doubt, emit a placeholder and tell the operator: "I've written <YOUR_DB_HOSTNAME> where the real hostname would go — replace it before you push, or leave it if the repo is private and the hostname is already public."
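Part of the redaction table can be mechanized before a push. The sketch below is illustrative, not exhaustive: the patterns cover machine-local paths and two common token shapes, and a real scan would need project-specific patterns added.

```shell
#!/bin/sh
# Sketch of a pre-push leak scan. Patterns are illustrative, not exhaustive;
# extend them per project. grep exits 0 when it finds a match, so a hit
# means "review these lines before pushing".
leak_scan() {
  grep -nE '/Users/|/home/|C:\\Users|sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}' "$1"
}

tmpfile=$(mktemp)
printf 'DB_HOST=<YOUR_DB_HOSTNAME>\nlog at /Users/alice/project/out.log\n' > "$tmpfile"
if leak_scan "$tmpfile"; then
  echo "possible leak: review the lines above" >&2
fi
rm -f "$tmpfile"
```

Running this over every artifact before it is written to disk catches the "absolute path with a username" class of leaks automatically; the judgment calls in the middle column of the table still need a human or the operator prompt.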
Every sentence written into a document must be either a verifiable fact or an opinion that is clearly labeled as one. LLM-generated docs drift toward enthusiastic framing ("fast", "best-in-class", "production-ready") because LLMs were trained on marketing copy and the enthusiastic version is the higher-probability next token. That drift is a bug in this pack.
The rules below are enforced on every artifact this skill produces. A violation is not a style preference; it's a failure to ship.
No unbacked adjectives. "Fast", "lightweight", "robust", "production-ready", "scalable", "best-in-class", "seamless", "intuitive", "blazing", "modern" are marketing words. Delete them or back them immediately with a measurement, a link, or a date. A library is not "fast"; it's "3.2× faster than the baseline on `<benchmark>`". A framework is not "production-ready"; it's "running in production at `<company>` since `<date>`".
No superlatives without comparison. "The best", "the most", "the only", "the simplest", "the cleanest" — if the comparison is not explicitly in the text, delete the superlative. "Simple" with a 3-line code example is fine; "the simplest possible" with nothing to compare against is marketing.
No marketing metaphors. "Battle-tested" → "running in production since YYYY-MM" or delete. "Zero-config" → explicit list of defaults. "Plug and play" → concrete install steps. "Out of the box" → name the box.
No weasel hedging either. The opposite failure mode is prose that commits to nothing: "may work", "can be useful for", "might help", "generally well-suited", "depending on your needs", "in some cases". Replace with a concrete yes/no and the condition, or delete.
Every capability claim has a verification command. If the doc says "this passes type check", the exact command that verifies it is on the next line. If the doc says "works on Python 3.11+", the CI matrix or a python_requires entry backs it up. If neither exists, drop the claim.
Numbers are specific or absent. "About 500 lines" is fine. "A few hundred lines" is not. "Roughly 10,000 users" is fine if you cite the source; "many users" is not.
Status flags are load-bearing. "early", "stable", "deprecated", "experimental" go in the header of every doc and they commit the writer to a maturity claim. Never default to "stable" to sound confident — if you're not sure, "early" is the honest answer.
No self-congratulation. The doc does not compliment the project it describes. No "cleanly written", no "elegant API", no "beautiful code". Those are the reader's call, not the writer's.
This rule applies to the artifact the skill emits, not to how the operator ends up writing their own commits later. When the operator asks for a README and the repo genuinely is fast, the doc cites the benchmark; when the operator asks for a README and there is no benchmark, the doc does not say "fast" at all.
This pack is published as open source. Do a final sanity pass on every artifact before you show it to the operator or write it to disk.
Checklist:
- … (`pnpm-lock.yaml`, `uv.lock`, `Cargo.toml`, `Gemfile`, `composer.json`, etc.)

If any of the six fail, fix the artifact before returning it. Open-source docs get screenshotted and quoted out of context; assume zero forgiveness.
- … `src/auth/session.ts`, not "in the auth module." Always relative to the repo root, never absolute. Never write `/Users/<name>/...`, `/home/<name>/...`, `C:\Users\...`, or any path that leaks a local machine — those paths are different on every machine and betray internal directory structure. If you need to reference a user's home, use `~/` in prose and an environment variable (`$HOME`) in commands.
- … "`.env`" not "we try to avoid committing secrets."

### README.md

````markdown
# <Project name>

<One-sentence description>

> Status: <early | stable | deprecated>. Last updated: <YYYY-MM-DD>.

## What this is

<One paragraph. Why it exists, who uses it, what problem it solves.>

## Quick start

```bash
<exact commands — clone, install, run>
```

## Environment variables

| Name | Required? | Example | What it's for |
|---|---|---|---|
| `EXAMPLE_VAR` | yes | `hello` | <what it's for> |

## Common tasks

- `<command>` — <what it does>
- `<command>` — <what it does>

## Troubleshooting

| Symptom | Cause | Fix |
|---|---|---|
| <error message> | <cause> | <fix> |

## More docs

- [`CLAUDE.md`](./CLAUDE.md) — AI-facing invariants and conventions
- [`ARCHITECTURE.md`](./ARCHITECTURE.md) — data flow and module map (if applicable)
- [`CONTRIBUTING.md`](./CONTRIBUTING.md) — how to contribute
````

Full template with placeholders in `templates/README.template.md`.
### CLAUDE.md / AGENTS.md (for AI maintainers)
Two names, one purpose. Claude Code uses `CLAUDE.md`, some other agent frameworks use `AGENTS.md`. The content is identical; only the filename changes. Default to `CLAUDE.md` unless the operator says otherwise.
```markdown
# <Project name> — AI operator guide
Read this file first in every new session. It encodes the invariants.
## 🛑 Never do
1. <Hard rule>. Reason: <one sentence>.
2. <Hard rule>. Reason: <one sentence>.
## ✅ Always do
1. <Ritual>. Verification: `<command>`.
2. <Ritual>. Verification: `<command>`.
## 📋 Change type → file matrix
| What you want to change | Files to touch (in order) | Verification |
|---|---|---|
| Add a route | `routes/` → `handlers/` → test | `<command>` |
| Migrate schema | `migrations/` → `models/` → test | `<command>` |
| Add env var | `.env.example` → `config/` → README §env | `<command>` |
## 🔒 Invariants
| Invariant | Violation symptom |
|---|---|
| <Rule> | <What breaks if you violate it> |
## 🗺️ Repo map
<tree with one-line comments>
## 🚀 Deployment
<How the project deploys. One paragraph. Link to deeper docs.>
## 🎭 Recently decided (don't re-argue)
- <Decision>. Reason: <why>. Date: <when>.
## More
- [`README.md`](./README.md) — overview and quick start
```

Full template in `templates/CLAUDE.template.md`. Same content copies to `AGENTS.md` verbatim for non-Claude agent frameworks — just rename the file.
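The "identical content, two filenames" rule can be honored mechanically. A minimal sketch, assuming `CLAUDE.md` is the source of truth and the command runs from the repo root:

```shell
#!/bin/sh
# Regenerate AGENTS.md as a byte-for-byte copy of CLAUDE.md so the two
# filenames never drift. Assumes CLAUDE.md is the canonical file.
if [ -f CLAUDE.md ]; then
  cp CLAUDE.md AGENTS.md
  cmp -s CLAUDE.md AGENTS.md && echo "AGENTS.md in sync with CLAUDE.md"
fi
```

A pre-commit hook or CI step can run the same `cmp` check and fail when the copies diverge.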
### Commit messages

Format: conventional commits with a Korean or English body, an explicit "why," and an explicit "how verified."

```markdown
<type>(<scope>): <subject in imperative, lowercase, no period>

<Body paragraph 1 — the motivation. Why did this change need to happen?
What problem does it fix or what capability does it add? Be specific.>

<Body paragraph 2 — the approach. Which files and which functions were
touched. Any non-obvious decisions go here. Anyone reading git blame in
six months should understand why this commit looks the way it does.>

## Verification
- <command 1 run, result>
- <command 2 run, result>

## Risks
- <One risk the reviewer should know about. Omit section if no risk.>
```
Type is one of: feat, fix, refactor, chore, docs, test, ci, perf, style, build. Use the one that matches the dominant change; don't invent new types.
Scope is the module or feature affected, in kebab-case.
Subject is under 72 characters, imperative (add, fix, move, not added/adds).
Full template in `templates/commit.template.md`.
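The format above can be checked mechanically before committing. In this sketch the subject, scope, and body text are hypothetical, and the regex only validates the `type(scope): subject` shape plus the 72-character limit:

```shell
#!/bin/sh
# Write a commit message in the conventional format, lint the subject line,
# then (inside a real repo) hand the file to `git commit -F`.
msg=$(mktemp)
cat > "$msg" <<'EOF'
fix(session-store): expire idle sessions after 30 minutes

Sessions never expired because the TTL was dropped on refresh, so a stolen
cookie stayed valid indefinitely.

Applied the TTL in the refresh path and added a periodic sweep job.

## Verification
- session test suite run, passing
EOF

subject=$(head -1 "$msg")
# Lint: known type, kebab-case scope, lowercase subject, no trailing period,
# at most 72 characters.
echo "$subject" | grep -qE '^(feat|fix|refactor|chore|docs|test|ci|perf|style|build)\([a-z0-9-]+\): [a-z].*[^.]$' \
  && [ "${#subject}" -le 72 ] \
  && echo "subject OK: $subject"
# git commit -F "$msg"   # run inside the repo once the lint passes
rm -f "$msg"
```

Wiring the same grep into a `commit-msg` hook enforces the convention for every commit, not just the ones this skill writes.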
### PR descriptions

```markdown
## Summary
<1–3 sentences. What this PR does and why.>

## Changes
- <file or section> — <what changed, one line>
- <file or section> — <what changed, one line>

## Verification
| Check | Result |
|---|---|
| Typecheck | ✅ `<command>` |
| Lint | ✅ `<command>` |
| Tests | ✅ N/M passing (`<command>`) |
| Smoke | ✅ <local run check> |
| Call-site closure (if refactor) | ✅ <count before/after> |

## Risks
<Things the reviewer should specifically watch for. If none, say "none."
Be honest — reviewers appreciate flagged risks more than surprised risks.>

## For the reviewer / next AI session
<What to read first. Anything counter-intuitive. Any "I tried X, it didn't
work, so I used Y" notes.>

## After merge
- [ ] <action the reviewer or maintainer should take>
- [ ] <action, if any>
```

Full template in `templates/pr.template.md`.
… `HISTORY.md`, if anywhere. … `refactor-verify`'s references.

Use this skill when:

Don't use this skill when:
When invoked from /vibesubin (the umbrella skill's parallel sweep), this skill runs in read-only audit mode. Do not write, edit, or create any documentation files. Do not touch README.md, CLAUDE.md, AGENTS.md, commit messages, or PR bodies.
Instead, produce a findings-only report:
- … `/write-for-ai` will rewrite or fill in the gaps when invoked directly.

The operator reviews the sweep report and, if they want the fixes applied, invokes `/write-for-ai` directly — which then runs the full write/verify procedure.
How to tell: the task context from the umbrella will include a `sweep=read-only` marker or an explicit "produce findings only, do not edit" instruction. Obey it. If the operator invokes this skill by name, the full procedure applies and editing is expected.
When the task context contains the `tone=harsh` marker (usually set by the `/vibesubin` harsh umbrella invocation, but it can also come from direct requests like "don't sugarcoat", "brutal review", "매운 맛" ("spicy", i.e. harsh), or "厳しめ" ("strict")), switch output rules on both the doc audit and the direct-invocation report:
- … "there is no `CLAUDE.md`, so every new AI session starts blind", "the README still documents `npm install` but the repo is on pnpm", "the env-var table is 40% stale — `STRIPE_KEY` and `SENTRY_DSN` are missing". No preamble.
- … "`README.md:14` claims production-ready with no production example — either cite a user, drop the claim, or downgrade the status to `early`."

Harsh mode does not invent missing docs, exaggerate staleness, or become rude. Every harsh statement must cite the same file, line, or git-log evidence the balanced version would cite. The change is framing, not substance.
- … `refactor-verify` (grep the old doc's concrete terms, verify they appear in the new doc or are deliberately dropped)
- … `setup-ci` to make sure commands in the README match the workflow file
- `.env`, `.gitignore`, or anything in the env-var table → coordinate with `manage-secrets-env` for the canonical defaults and the lifecycle wording
- … `project-conventions`

The AI-friendly doc principles, section-by-section README structure, and commit/PR conventions are inlined in this SKILL.md rather than split into reference files. One readable file beats six fragmented ones.
Working templates in `templates/`:

- `templates/README.template.md` — full starter README with every section labeled
- `templates/CLAUDE.template.md` — AI-facing invariants doc. Same content works as `AGENTS.md` for non-Claude agent frameworks — copy and rename.
- `templates/commit.template.md` — conventional commit body with verification notes
- `templates/pr.template.md` — PR description with goal / changes / verification / risks