From nickcrew-claude-ctx-plugin
Assesses documentation quality across readability, consistency, audience fit, and prose clarity. Produces scored reviews with actionable findings before releases or doc reviews.
npx claudepluginhub nickcrew/claude-cortex

This skill uses the workspace's default tool permissions.
Assess whether documentation is well-written, consistent, and appropriate for its audience. The output is a scored review with specific findings — not rewrites.
| Resource | Purpose | Load when |
|---|---|---|
| `references/quality-dimensions.md` | Scoring rubrics for each dimension | Always (Phase 1) |
| `references/style-checklist.md` | Concrete style rules for common issues | Phase 2 (review pass) |
Phase 1: Scope → Identify docs to review and their intended audience
Phase 2: Review → Score each doc across quality dimensions
Phase 3: Synthesize → Aggregate findings, identify patterns
Phase 4: Report → Produce the scored quality review
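The four phases form a simple pipeline. The sketch below is purely illustrative — none of these function names, scores, or thresholds come from the skill itself; it only shows how scoping, scoring, synthesis, and reporting feed into each other.

```python
# Illustrative sketch of the four-phase flow. Function names and the
# placeholder scores are hypothetical, not part of the skill.
def scope(paths: list[str]) -> list[str]:
    """Phase 1: decide which docs are in scope (here: markdown files only)."""
    return [p for p in paths if p.endswith(".md")]

def score_doc(path: str) -> dict[str, int]:
    """Phase 2: score one doc across the five dimensions (placeholder values)."""
    return {"readability": 4, "consistency": 3, "audience_fit": 4,
            "structure": 4, "actionability": 3}

def synthesize(scores: dict[str, dict[str, int]]) -> list[str]:
    """Phase 3: flag dimensions that score low (<=3) in most docs."""
    dims = ["readability", "consistency", "audience_fit", "structure", "actionability"]
    return [d for d in dims
            if sum(1 for s in scores.values() if s[d] <= 3) > len(scores) / 2]

def report(scores, systemic):
    """Phase 4: assemble the final review (here: a minimal dict)."""
    return {"per_doc": scores, "systemic": systemic}

docs = scope(["README.md", "docs/guide.md", "assets/logo.png"])
per_doc = {d: score_doc(d) for d in docs}
result = report(per_doc, synthesize(per_doc))
```

A real run would replace the placeholder scores with rubric-based judgments, but the data flow — per-doc scores in, systemic patterns and a report out — stays the same.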
Before reviewing, establish context:
Read `references/quality-dimensions.md` to calibrate scoring.

The audience and doc type determine which dimensions matter most. A reference page has different quality criteria than a tutorial.
Score each document across five dimensions. Read the full document before scoring.
### Readability

How easily can the target audience read and understand this?
| Score | Criteria |
|---|---|
| 5 | Clear, concise prose. Short paragraphs. Active voice. Appropriate vocabulary for audience |
| 4 | Generally clear. Minor instances of passive voice, long sentences, or unnecessary jargon |
| 3 | Readable but effortful. Multiple long paragraphs, some jargon without definition, occasional ambiguity |
| 2 | Difficult. Dense prose, heavy jargon, passive constructions, unclear antecedents |
| 1 | Impenetrable. Wall of text, undefined terms, ambiguous instructions, no structure |
What to check:
### Consistency

Does this doc use the same terms, formatting, and conventions throughout — and match the rest of the doc set?
| Score | Criteria |
|---|---|
| 5 | Consistent terminology, formatting, heading style, code block conventions, and tone throughout |
| 4 | Minor inconsistencies (e.g., "config" vs "configuration" in different sections) |
| 3 | Noticeable inconsistencies across sections but each section is internally consistent |
| 2 | Frequent inconsistencies in terminology, formatting, or conventions |
| 1 | No discernible consistency — reads like it was written by different people at different times |
What to check:
- Heading levels (`##` vs `###`) and capitalization style

### Audience Fit

Is the content calibrated to the right level for its intended readers?
| Score | Criteria |
|---|---|
| 5 | Perfectly pitched. Prerequisites stated. Appropriate depth. No unexplained leaps |
| 4 | Mostly well-calibrated. Occasional assumption of knowledge not established |
| 3 | Uneven. Some sections too basic, others too advanced. Prerequisites unclear |
| 2 | Significant mismatch. Beginner docs assume expert knowledge, or expert docs over-explain basics |
| 1 | Wrong audience entirely. Content pitched at a different reader than intended |
What to check:
### Structure

Can a reader find what they need without reading linearly?
| Score | Criteria |
|---|---|
| 5 | Logical heading hierarchy. Scannable sections. Tables for reference data. Clear entry points |
| 4 | Good structure. Minor issues with heading granularity or section ordering |
| 3 | Adequate structure but some sections are too long or headers don't reflect content |
| 2 | Poor structure. Key information buried in prose. Headings misleading or inconsistent |
| 1 | No useful structure. Single long section with no headings, or headings that don't help navigation |
What to check:
### Actionability

Can the reader do something after reading? (Weighted differently by doc type.)
| Score | Criteria |
|---|---|
| 5 | Clear next steps. Commands are copy-pasteable. Examples are complete and runnable |
| 4 | Mostly actionable. Minor gaps in examples or steps |
| 3 | Partially actionable. Some instructions unclear or missing context |
| 2 | Weakly actionable. Reader knows about the topic but not how to apply it |
| 1 | Not actionable. Pure description with no path to action |
What to check:
| Dimension | Reference | Tutorial | Guide | Explanation | README |
|---|---|---|---|---|---|
| Readability | 1.0 | 1.2 | 1.2 | 1.3 | 1.2 |
| Consistency | 1.2 | 0.8 | 1.0 | 0.8 | 1.0 |
| Audience Fit | 0.8 | 1.3 | 1.2 | 1.2 | 1.3 |
| Structure | 1.3 | 1.0 | 1.0 | 0.8 | 1.0 |
| Actionability | 1.0 | 1.5 | 1.3 | 0.5 | 1.0 |
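The weighted total can be computed mechanically from the table above. The sketch below is illustrative, not part of the skill: it assumes each doc type's weights are normalized to sum to 5, so a perfect document scores 25/25 for every type (the raw weights above sum to slightly different values per type), and it reuses the A-F thresholds from the report template.

```python
# Weights from the table above, keyed by doc type.
WEIGHTS = {
    "reference":   {"readability": 1.0, "consistency": 1.2, "audience_fit": 0.8, "structure": 1.3, "actionability": 1.0},
    "tutorial":    {"readability": 1.2, "consistency": 0.8, "audience_fit": 1.3, "structure": 1.0, "actionability": 1.5},
    "guide":       {"readability": 1.2, "consistency": 1.0, "audience_fit": 1.2, "structure": 1.0, "actionability": 1.3},
    "explanation": {"readability": 1.3, "consistency": 0.8, "audience_fit": 1.2, "structure": 0.8, "actionability": 0.5},
    "readme":      {"readability": 1.2, "consistency": 1.0, "audience_fit": 1.3, "structure": 1.0, "actionability": 1.0},
}

def weighted_total(scores: dict[str, int], doc_type: str) -> float:
    """Combine per-dimension 1-5 scores into a weighted total out of 25.

    Normalizing the weights to sum to 5 is an assumption made here so
    every doc type shares the same 25-point ceiling.
    """
    weights = WEIGHTS[doc_type]
    scale = 5 / sum(weights.values())
    return round(sum(scores[d] * weights[d] * scale for d in weights), 1)

def grade(total: float) -> str:
    """Map a weighted total to the letter grades used in the report."""
    if total >= 22: return "A"
    if total >= 18: return "B"
    if total >= 14: return "C"
    if total >= 10: return "D"
    return "F"
```

For example, a guide scoring 4 on every dimension lands at 20.0 — a B — regardless of how the weights tilt, while uneven scores shift the total toward whichever dimensions the doc type emphasizes.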
After scoring individual docs, look for patterns:
Systemic issues are more valuable to fix than individual ones — fixing the pattern fixes many docs at once.
# Documentation Quality Review
**Review date:** YYYY-MM-DD
**Scope:** [files or sections reviewed]
**Target audience:** [who these docs serve]
**Doc type:** [reference / tutorial / guide / explanation / mixed]
---
## Summary
[2-3 sentences: overall quality assessment]
Overall scores (weighted):
| Dimension | Raw Score | Weight | Weighted |
|-----------|-----------|--------|----------|
| Readability | N/5 | Nx | N |
| Consistency | N/5 | Nx | N |
| Audience Fit | N/5 | Nx | N |
| Structure | N/5 | Nx | N |
| Actionability | N/5 | Nx | N |
| **Total** | | | **N / 25** |
Quality grade: [A (22-25) / B (18-21) / C (14-17) / D (10-13) / F (<10)]
---
## Findings
### Critical (must fix before publish)
| # | File | Dimension | Issue | Fix |
|---|------|-----------|-------|-----|
| 1 | [path:line] | [dimension] | [specific problem] | [specific fix] |
### Major (should fix)
| # | File | Dimension | Issue | Fix |
|---|------|-----------|-------|-----|
### Minor (nice to fix)
| # | File | Dimension | Issue | Suggestion |
|---|------|-----------|-------|------------|
---
## Systemic Issues
### [Issue pattern name]
**Affected docs:** [list]
**Dimension:** [which]
**Pattern:** [what's happening across docs]
**Recommended fix:** [one-time fix that addresses all instances]
---
## Per-Document Scores
| Document | Read. | Cons. | Aud. | Struct. | Action. | Weighted Total |
|----------|-------|-------|------|---------|---------|----------------|
| [path] | N | N | N | N | N | N |
---
## Strengths
[What's working well — cite specific examples worth emulating]
- `doc-maintenance` → Structural health (links, orphans, folders)
- `doc-claim-validator` → Semantic accuracy (do claims match code?)
- `doc-completeness-audit` → Topic coverage (is everything documented?)
- `doc-quality-review` → Prose quality (is it well-written?)
- `doc-architecture-review` → Information architecture (is it findable?)
Do not review docs in `docs/archive/` — they are historical.

- `references/quality-dimensions.md` — Detailed scoring rubrics with examples for each score level
- `references/style-checklist.md` — Concrete style rules for the most common quality issues