Documentation Auditor
Assess documentation quality across the codebase: freshness, coverage, accuracy against actual code, and maintainability of doc infrastructure.
Guiding Principle
"Stale documentation is worse than no documentation — it actively misleads."
Procedure
Step 1 — Documentation Inventory
- Locate all documentation: README files, /docs directories, inline JSDoc/docstrings, ADRs, wikis.
- Classify by type: setup guides, API docs, architecture docs, runbooks, ADRs, tutorials.
- Measure doc volume: total files, word count, last modified dates.
- Identify auto-generated docs (Typedoc, Javadoc, Sphinx) vs. hand-written.
- Map documentation gaps: which modules have zero docs [FACT].
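The inventory pass above can be sketched as a small script. The file extensions and the README heuristic are assumptions; adjust them per project:

```python
import os
import time

# Extensions treated as documentation -- an assumption, tune per repo.
DOC_EXTENSIONS = {".md", ".rst", ".adoc", ".txt"}

def inventory_docs(root):
    """Walk the tree and record path, word count, and last-modified
    date for every documentation file found."""
    records = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            if ext not in DOC_EXTENSIONS and name.upper() != "README":
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as fh:
                words = len(fh.read().split())
            records.append({
                "path": os.path.relpath(path, root),
                "words": words,
                "modified": time.strftime(
                    "%Y-%m-%d", time.localtime(os.path.getmtime(path))),
            })
    return records
```

Sorting the result by `modified` gives a first freshness signal before any git analysis.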
Step 2 — Freshness Analysis
- Compare doc modification dates against code modification dates in the same module.
- Flag docs not updated in >6 months while code changed significantly [INFERENCE].
- Check for broken internal links and references.
- Verify code examples in docs still compile/run.
- Identify TODO/FIXME markers in documentation.
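Once last-commit timestamps have been pulled per path (e.g. via `git log -1 --format=%ct -- <path>`), the staleness test above reduces to a pure comparison. This is a sketch, with the 6-month threshold taken from the step above:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=183)  # ~6 months, per the audit threshold

def is_stale(doc_last_commit: datetime, code_last_commit: datetime) -> bool:
    """A doc is stale when the code it covers changed more recently
    AND the doc itself has not been touched within the threshold."""
    return (code_last_commit > doc_last_commit
            and datetime.now() - doc_last_commit > STALE_AFTER)
```

Keeping the rule pure makes it easy to test and to tighten the threshold per doc category (runbooks may warrant a shorter window).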
Step 3 — Accuracy Verification
- Cross-reference setup instructions against actual project configuration.
- Verify API documentation matches implemented endpoints [FACT].
- Check architecture diagrams against current code structure.
- Validate environment variable documentation against the actual .env.example.
- Test quickstart instructions for completeness.
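One way to mechanize the `.env.example` check: parse variable names from the example file and diff them against what the docs mention. The parsing rules here are a simplifying assumption (real env files can be messier, and the ALL_CAPS heuristic for doc mentions will over-match):

```python
import re

def env_vars(env_text):
    """Extract variable names from .env-style text, skipping comments."""
    names = set()
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        match = re.match(r"([A-Za-z_][A-Za-z0-9_]*)=", line)
        if match:
            names.add(match.group(1))
    return names

def env_doc_gaps(env_text, doc_text):
    """Return (undocumented, phantom): vars declared in .env.example but
    missing from the docs, and ALL_CAPS names the docs mention that are
    not declared (candidates for removed/renamed variables)."""
    declared = env_vars(env_text)
    mentioned = set(re.findall(r"\b[A-Z][A-Z0-9_]{2,}\b", doc_text))
    return sorted(declared - mentioned), sorted(mentioned - declared)
```

Both output lists feed directly into the Step 4 backlog: undocumented variables block onboarding, phantoms mislead it.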
Step 4 — Quality Score
- Score each doc category: freshness (25%), accuracy (30%), coverage (25%), discoverability (20%).
- Identify the most critical doc gaps by developer impact.
- Recommend doc-as-code tooling improvements.
- Produce a prioritized documentation backlog.
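Applying the stated weights is straightforward; the 0-100 sub-scores are whatever rubric the auditor settles on, this function only combines them:

```python
# Weights from Step 4: freshness 25%, accuracy 30%,
# coverage 25%, discoverability 20%.
WEIGHTS = {"freshness": 0.25, "accuracy": 0.30,
           "coverage": 0.25, "discoverability": 0.20}

def quality_score(scores):
    """Combine per-dimension scores (0-100) into one weighted total.
    Raises if a dimension is missing so gaps are not silently zeroed."""
    missing = WEIGHTS.keys() - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    return round(sum(scores[k] * w for k, w in WEIGHTS.items()), 1)
```

For example, `quality_score({"freshness": 80, "accuracy": 60, "coverage": 70, "discoverability": 50})` yields 65.5.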
Quality Criteria
- Freshness based on git history comparison, not assumptions [FACT]
- Accuracy verified by cross-referencing code, not just reading prose
- Coverage measured per module, not just at project level
- Broken links detected and listed
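The "broken links detected and listed" criterion can be met with a small check over markdown link targets. This sketch assumes relative file links only; external URLs are skipped and in-page anchors are ignored, which is a deliberate simplification:

```python
import os
import re

# Capture the link target up to a ')' , '#' (anchor) or '?' (query).
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#?]+)[^)]*\)")

def broken_links(md_text, doc_dir):
    """Return relative link targets in md_text that do not exist on disk."""
    broken = []
    for target in LINK_RE.findall(md_text):
        if target.startswith(("http://", "https://", "mailto:")):
            continue  # external links need an HTTP check, out of scope here
        if not os.path.exists(os.path.join(doc_dir, target)):
            broken.append(target)
    return broken
```

Run per doc file with `doc_dir` set to that file's directory so relative paths resolve the way a reader's markdown viewer would.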
Anti-Patterns
- Only checking if README exists without reading its content
- Treating auto-generated API docs as sufficient documentation
- Ignoring runbooks and operational docs (they matter most during incidents)
- Measuring documentation by volume instead of utility