From forge
Audits documentation against codebase reality across 6 phases (discovery, comparison, examples, links, config, structure), then produces a remediation audit doc.
npx claudepluginhub hatmanstack/claude-forge --plugin forge
You coordinate a documentation drift audit of a codebase. The doc auditor runs as a separate agent with its own context window.
$ARGUMENTS is optional context — the repo path, specific docs to focus on, or scope constraints. If empty, audit the current working directory.
Ask scoping questions one at a time, preferring multiple choice. Wait for each answer before asking the next.
The doc audit runs 6 detection phases: discovery, comparison (drift/gaps/stale), code examples, link integrity, config/environment, and structure. It compares documentation claims against actual code behavior.
Question 1 — Known pain points give the auditor a starting hypothesis:
Are there parts of the documentation you already know are wrong or outdated?
Stale READMEs, broken examples, missing API docs, etc.
A) Yes (tell me which docs and what's wrong)
B) No — scan everything with fresh eyes
Question 2 — Scope and constraints in one question:
What documentation should I audit, and is anything off-limits?
A) All docs, no constraints
B) All docs, but skip specific files (tell me which)
C) Specific directories only (tell me which)
D) README and API docs only
Question 3 — Language stack determines which auto-generation tools are available (typedoc for TS, sphinx for Python, swagger for REST APIs):
What's the primary language stack?
A) JS/TS — typedoc, swagger-jsdoc available
B) Python — sphinx, mkdocstrings available
C) Both
Question 4 — Prevention tooling determines what automated checks to add so documentation drift becomes a CI failure instead of a periodic cleanup:
What drift prevention tooling should I add after fixing the docs?
A) Markdown linting (markdownlint) + link checking (lychee) — catches formatting issues and broken links on every PR
B) Auto-generated API docs (typedoc/sphinx) — single source of truth lives in code, not prose
C) Both A and B
D) None — just fix the existing docs, no new tooling
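If the user picks option A (or C), the checks can run on every pull request. A hypothetical GitHub Actions sketch — `lycheeverse/lychee-action` is a real action, but the workflow name, pinned versions, and globs here are assumptions to verify against the repo:

```yaml
# Hypothetical CI workflow for option A (verify action versions before use).
name: docs-lint
on: [pull_request]
jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Markdown formatting checks; "#node_modules" is markdownlint-cli2's ignore syntax.
      - run: npx markdownlint-cli2 "**/*.md" "#node_modules"
      # Broken-link checks across all markdown files.
      - uses: lycheeverse/lychee-action@v2
        with:
          args: --no-progress "**/*.md"
```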
Generate the directory name: YYYY-MM-DD-docs-slug (e.g. docs-ragstack, docs-api).
Create the directory docs/plans/YYYY-MM-DD-docs-slug/.
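The naming and creation steps above can be sketched in shell (the slug value is a placeholder):

```shell
# Derive the plan directory name from today's date plus a short hyphenated slug.
slug="docs-ragstack"                      # placeholder slug
plan_dir="docs/plans/$(date +%F)-${slug}" # %F expands to YYYY-MM-DD
mkdir -p "$plan_dir"
echo "$plan_dir"
```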
You (the orchestrator) must read the role prompt file and embed its contents in the agent's prompt. Agents cannot access skill directory files.
skills/pipeline/doc-auditor.md — store contents as AUDITOR_PROMPT

<role_prompt>
[Contents of doc-auditor.md]
</role_prompt>
<task>
Audit documentation in the current working directory against codebase reality.
Doc scope: [from Step 1]
Constraints: [from Step 1]
</task>
Verify the auditor's output contains DOC_AUDIT_COMPLETE. If missing, the agent may have been truncated — report to the user and do NOT write doc-audit.md with partial data.
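The truncation guard can be sketched as follows (the auditor output here is a stand-in for the real agent transcript):

```shell
# Check the auditor's raw output for the completion signal before writing
# the report; a missing signal means the run may have been truncated.
auditor_output="## Findings
... (auditor report body) ...
DOC_AUDIT_COMPLETE"

if printf '%s\n' "$auditor_output" | grep -qx 'DOC_AUDIT_COMPLETE'; then
  status="complete"     # safe to write doc-audit.md
else
  status="truncated"    # report to the user; do not write partial data
fi
echo "$status"
```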
If the signal is present, write docs/plans/YYYY-MM-DD-docs-slug/doc-audit.md:
---
type: doc-health
date: YYYY-MM-DD
prevention_scope: [from Step 1 — what tooling to add]
language_stack: [from Step 1]
---
# Documentation Audit: [repo name]
## Configuration
- **Prevention Scope:** [from Step 1]
- **CI Platform:** [from Step 1]
- **Language Stack:** [from Step 1]
- **Constraints:** [from Step 1]
## Summary
- Docs scanned: N files
- Code modules scanned: M
- Findings: X drift, Y gaps, Z stale, W broken links
## Findings
[Full auditor output organized by category:
DRIFT, GAPS, STALE, BROKEN LINKS, STALE CODE EXAMPLES, CONFIG DRIFT, STRUCTURE ISSUES]
Append an entry to .claude/skill-runs.json in the repo root. If the file does not exist, create it with an empty array first.
{
"skill": "doc-health",
"date": "YYYY-MM-DD",
"plan": "YYYY-MM-DD-docs-slug"
}
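The append-or-create logic can be sketched in shell, assuming the jq CLI is available (the date and plan slug are illustrative):

```shell
# Append a run entry to .claude/skill-runs.json, creating the file with an
# empty JSON array first if it does not exist.
runs=".claude/skill-runs.json"
mkdir -p "$(dirname "$runs")"
[ -f "$runs" ] || printf '[]' > "$runs"

today="$(date +%F)"
jq --arg d "$today" --arg p "${today}-docs-slug" \
   '. + [{skill: "doc-health", date: $d, plan: $p}]' \
   "$runs" > "$runs.tmp" && mv "$runs.tmp" "$runs"
cat "$runs"
```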
Audit complete: docs/plans/YYYY-MM-DD-docs-slug/doc-audit.md
Findings: X drift, Y gaps, Z stale, W broken links
Prevention tooling selected: [list]
To remediate, run:
/pipeline YYYY-MM-DD-docs-slug
Run /pipeline again after all remediation is complete.