From cqa-tools
Assesses editorial quality of AsciiDoc docs per CQA P13-P14, Q1-Q5, Q18, Q20: grammar, content type, scannability, readability, no fluff, Red Hat style, tone.
Install with:

npx claudepluginhub redhat-documentation/redhat-docs-agent-tools --plugin cqa-tools

This skill is limited to using the following tools:
* Reviews documentation for content quality including logical flow, user journey alignment, scannability, conciseness, fluff removal, and customer focus. Use for organization checks, peer reviews, or tightening verbose content.
* Assesses documentation quality across readability, consistency, audience fit, and prose clarity. Produces scored reviews with actionable findings before releases or doc reviews.
* Applies Strunk's Elements of Style rules for clear, concise prose in documentation, commit messages, error messages, UI text, reports, and explanations.
| # | Parameter | Level |
|---|---|---|
| P13 | Grammatically correct American English | Required |
| P14 | Correct content type matches actual content | Required |
| Q1 | Scannable: sentences <= 22 words avg, paragraphs 2-3 sentences | Required |
| Q2 | Clearly written and understandable | Important |
| Q3 | Simple words (no "utilize", "leverage", "in order to") | Important |
| Q4 | Readability score (11-12th grade level) | Important |
| Q5 | No fluff ("This section describes...", "as mentioned") | Important |
| Q18 | Content follows Red Hat style guide | Required |
| Q20 | Appropriate conversational tone (2nd person, professional) | Important |
Some repos use modules/ instead of topics/ for content files. All topics/ references in this skill apply equally to modules/. The automation scripts accept --scan-dirs to override the default scan directories.
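For example, to scan modules/ instead of the defaults (the exact flag syntax is an assumption; check each script's help output):

python3 ${CLAUDE_PLUGIN_ROOT}/skills/cqa-assess/scripts/check-simple-words.py "$DOCS_REPO" --scan-dirs modules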
Verify that the :_mod-docs-content-type: attribute matches the actual content. For example, a file typed as a procedure must contain an ordered list of steps.

| Metric | Target | Hard limit |
|---|---|---|
| Sentence length | <= 22 words average | Flag sentences > 30 words |
| Paragraph length | 2-3 sentences | Flag paragraphs > 4 sentences |
| Lists | Use bulleted lists for 3+ items | Flag inline enumerations |
Only check actual prose text in topics/ and assemblies/ files:
[role="_abstract"])* or . )include::)----, ...., ++++ delimiters):attr: lines), directives (include::, image::, ifdef::).Example, .Procedure, .Prerequisites)|, |=== delimiters)term:: entries) — the term itself is not a sentence[source,yaml], [role="_abstract"], [id="..."])// ...)= , == )+ on its own line)* xref:...[], * link:...[]){prod-short}, {orch-name}) count as the number of words they resolve to\command``) count as 1 word regardless of content* , . , .. ) are not wordsRead each prose paragraph in every file in topics/ and assemblies/. For each paragraph:
Count a sentence boundary at a period followed by an uppercase letter, a question mark, or an exclamation point. Each list item is an independent unit — do not concatenate consecutive list items into a single "paragraph."
When a sentence exceeds 30 words, split it using these patterns:
| Pattern | Split point | Example |
|---|---|---|
| "..., so that..." | Split at ", so that" | "Configure X. This allows Y." |
| "..., as..." (causal) | Split at ", as" | "X happens. The reason is Y." |
| "..., which..." (non-restrictive) | Split at ", which" | "X does Y. It also does Z." |
| "... to ... to ..." (chained infinitives) | Split after first purpose | "Do X. This enables Y." |
| "..., or ..." (alternative actions) | Split at ", or" | "Do X. Alternatively, do Y." |
| Inline enumeration | Convert to bulleted list | The supported values are:\n* X\n* Y\n* Z |
| Abstract with WHAT + WHY | Split into two sentences | "Do X to achieve Y." → "Do X. This achieves Y." |
A paragraph is a block of consecutive prose lines separated by blank lines. For each paragraph, count its sentences and flag any paragraph longer than 4 sentences.
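Taken together, these rules reduce to a short script. The following is a minimal sketch under simplified assumptions (a few line-prefix tests stand in for the full exclusion list, bullets in one block are joined rather than treated as independent units, and the sentence splitter is naive), not the plugin's actual implementation:

```python
import re
from pathlib import Path

# Crude stand-ins for the exclusion rules above.
SKIP_PREFIXES = ("//", ":", "=", "include::", "image::", "ifdef::", "|", "+", "[")

def prose_lines(paragraph: str):
    for line in paragraph.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith(SKIP_PREFIXES):
            continue
        if re.match(r"^\.\w", stripped):     # block titles like .Procedure
            continue
        # List markers are not words; drop them before counting.
        yield re.sub(r"^(\*+|\.+)\s+", "", stripped)

def word_count(sentence: str) -> int:
    # Inline code counts as 1 word regardless of content.
    return len(re.sub(r"`[^`]+`", "CODE", sentence).split())

def check_file(path: Path, max_words: int = 30, max_sentences: int = 4) -> None:
    text = path.read_text(encoding="utf-8")
    # Remove delimited code blocks before splitting into paragraphs.
    text = re.sub(r"^----\n.*?^----$", "", text, flags=re.S | re.M)
    for para in re.split(r"\n\s*\n", text):
        lines = list(prose_lines(para))
        if not lines:
            continue
        # Naive boundary: ., ?, or ! followed by space and an uppercase letter.
        sentences = re.split(r"(?<=[.?!])\s+(?=[A-Z])", " ".join(lines))
        if len(sentences) > max_sentences:
            print(f"{path}: paragraph with {len(sentences)} sentences")
        for s in sentences:
            if word_count(s) > max_words:
                print(f"{path}: {word_count(s)}-word sentence: {s[:60]}...")

for f in Path("topics").rglob("*.adoc"):
    check_file(f)
```

A real checker also splits consecutive bullets into separate units, as the false-positive table below requires.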
These patterns look like long sentences/paragraphs but are structured content:
| Pattern | Why it's not a scannability issue |
|---|---|
| Definition list entries (term:: description) | Renders as formatted key-value pairs, not prose |
| Consecutive bullet items (* item1\n* item2\n* item3) | Each item is independent; they are not one paragraph |
| Procedure sub-steps (.. step1\n.. step2) | Ordered sub-steps render as a nested list |
| CSV-like metric tables | Renders as structured data |
| Code block annotations with backtick-heavy content | Technical identifiers inflate word count |
| Link-heavy sentences (URLs inside link:...[text]) | URLs inflate raw character/word count |
Check for inline enumerations that would be more scannable as bulleted lists.
Verify that complex procedures and architectural concepts have supporting diagrams.
| Score | Criteria |
|---|---|
| 4 | 0 prose sentences > 30 words, overall avg <= 22 words/sentence, no paragraphs > 4 sentences, lists used for enumerations, graphics where needed |
| 3 | 1-5 sentences > 30 words (borderline cases like 31-33 words), avg <= 22, minor paragraph length issues |
| 2 | Multiple sentences > 30 words, avg > 22 in several files, long paragraphs common |
| 1 | Scannability not assessed or widespread issues |
Minimalism focuses documentation on readers' needs through five principles:
For each file in topics/ and assemblies/, check the main title (= ) and subsection headings (== ):
| Metric | Target | Flag |
|---|---|---|
| Word count | 3-11 words (resolved) | Flag titles under 3 words or over 11 words |
| Character count | 50-80 characters (resolved) | Titles under 50 chars acceptable if clear. Flag titles over 80 chars |
Attribute resolution for word counting:
* {prod-short} = 3 words, {prod} = 5 words, {ocp} = 3 words
* {orch-name} = 1 word, {devworkspace} = 2 words

When fixing long titles, use shorter attribute forms ({prod-short} instead of {prod}, {orch-name} instead of {ocp}) to reduce word and character count while preserving meaning.
Acceptable exceptions: Single Kubernetes resource names as subsection headings in reference/concept files (e.g., == DevWorkspaceTemplate) are acceptable when the parent section provides context. Two-word titles like "Server components" or "Creating workspaces" are acceptable if clear and descriptive.
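As an illustration, a minimal title-length check with attribute resolution might look like this; the word counts come from the list above, while the scanning details are assumptions rather than the bundled scripts' behavior:

```python
import re
from pathlib import Path

# Resolved word counts for attributes, from the list above.
ATTR_WORDS = {"prod-short": 3, "prod": 5, "ocp": 3, "orch-name": 1, "devworkspace": 2}

def resolved_word_count(title: str) -> int:
    count = 0
    for token in title.split():
        m = re.fullmatch(r"\{([\w-]+)\}", token)
        # Unknown attributes default to 1 word.
        count += ATTR_WORDS.get(m.group(1), 1) if m else 1
    return count

for path in Path("topics").rglob("*.adoc"):
    for line in path.read_text(encoding="utf-8").splitlines():
        m = re.match(r"^(=+)\s+(\S.*)$", line)
        if m:
            words = resolved_word_count(m.group(2))
            if not 3 <= words <= 11:
                print(f"{path}: {words}-word title: {m.group(2)}")
```

The character-count check works the same way on the resolved title text.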
Check that procedures include .Verification sections where meaningful. Do not fold verification steps into the .Procedure block — use a separate .Verification section.

| Score | Criteria |
|---|---|
| 4 | Content understandable on first read, all minimalism principles applied, titles 3-11 words, correct pronoun usage, verification sections where meaningful |
| 3 | Minor clarity issues (1-3 ambiguous sentences), a few titles outside range, minor pronoun issues |
| 2 | Multiple clarity issues, minimalism principles not consistently applied, many short/long titles |
| 1 | Content frequently unclear, minimalism not applied |
Flag and replace complex words with simpler alternatives, for example "utilize" → "use" and "in order to" → "to":
python3 ${CLAUDE_PLUGIN_ROOT}/skills/cqa-assess/scripts/check-simple-words.py "$DOCS_REPO"
Scans prose in topics/ and assemblies/ for 14 patterns: 10 complex words and 4 phrasal verbs ("make sure", "set up", "find out", "carry out"). Excludes code blocks, comments, attributes, and table content. Reports each violation with file, line, matched word, replacement, and context. Exits 0 (pass) or 1 (issues found).
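The script's core idea, a word-to-replacement map applied to prose lines, can be sketched like this (the patterns shown are examples from this guide, not the script's full list of 14):

```python
import re
from pathlib import Path

# Example patterns only; the real script checks 14 (10 complex words, 4 phrasal verbs).
REPLACEMENTS = {
    r"\butilize\b": "use",
    r"\bleverage\b": "use",
    r"\bin order to\b": "to",
    r"\bmake sure\b": "ensure",
}

found = False
for path in Path("topics").rglob("*.adoc"):
    in_code = False
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        if line.strip() == "----":           # toggle on code block delimiters
            in_code = not in_code
            continue
        if in_code or line.lstrip().startswith(("//", ":", "|")):
            continue                          # skip code, comments, attributes, tables
        for pattern, replacement in REPLACEMENTS.items():
            if re.search(pattern, line, re.IGNORECASE):
                print(f"{path}:{lineno}: use '{replacement}': {line.strip()}")
                found = True
raise SystemExit(1 if found else 0)
```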
The readability score is computed using the Flesch-Kincaid formula:
FK Grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
| Level | Grade | Meaning |
|---|---|---|
| Ideal | <=10 | 9th-10th grade, accessible to non-native English speakers |
| Minimum | <=12 | 11th-12th grade, Red Hat customer content average |
| Advanced | >12 | Exceeds the minimum standard; review for simplification |
python3 ${CLAUDE_PLUGIN_ROOT}/skills/cqa-assess/scripts/check-readability.py "$DOCS_REPO"
Computes Flesch-Kincaid Grade Level for prose in topics/ and assemblies/. Resolves AsciiDoc attributes to their actual text for accurate syllable counting. Reports overall grade, per-file grades, and grade distribution. Exits 0 (overall <=12) or 1 (overall >12).
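As a minimal illustration of the formula above, this sketch computes the grade for a single passage with a naive vowel-group syllable counter (real implementations, including the bundled script, count syllables more carefully):

```python
import re

def syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels, minimum 1.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syll = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syll / len(words)) - 15.59

print(round(fk_grade("Configure OAuth to allow users to interact with Git repositories."), 1))
```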
| Score | Criteria |
|---|---|
| 4 | Overall FK grade <=10 (ideal range), no complex words (Q3), avg words/sentence <=22 |
| 3 | Overall FK grade 10-12, minor issues in individual files due to jargon density |
| 2 | Overall FK grade >12, multiple files with high grades from genuinely complex prose |
| 1 | Readability not assessed or widespread complexity issues |
Flag and rewrite fluff phrases such as "This section describes..." and "as mentioned":
python3 ${CLAUDE_PLUGIN_ROOT}/skills/cqa-assess/scripts/check-fluff.py "$DOCS_REPO"
Scans prose in topics/, assemblies/, and snippets/ for 11 fluff patterns. Excludes code blocks, comments, attributes, and table content. Reports each violation with file, line, matched text, fix guidance, and context. Exits 0 (pass) or 1 (issues found).
| Score | Criteria |
|---|---|
| 4 | 0 fluff patterns found, no self-referential abstracts, no unnecessary introductions |
| 3 | 1-3 minor fluff patterns (borderline cases like "as described in" with xref) |
| 2 | Multiple fluff patterns, self-referential abstracts common |
| 1 | Fluff not assessed or widespread issues |
Check these Red Hat style rules:

* Future tense avoidance
* Active voice
* Anthropomorphism
* Possessives of brand/product names
* Phrasal verbs
* Parallelism in lists
python3 ${CLAUDE_PLUGIN_ROOT}/skills/cqa-assess/scripts/check-simple-words.py "$DOCS_REPO"
The simple words script checks for phrasal verbs ("make sure", "set up", "find out", "carry out") alongside complex words (14 patterns total). For future tense, passive voice, and anthropomorphism, use grep-based searches or the cqa-tools:cqa-editorial skill methodology (contextual LLM analysis required to distinguish valid from invalid uses).
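For example, a rough first pass for future tense and passive voice candidates might look like this; every hit still needs the contextual review described above, because some uses of "will" and some passive constructions are legitimate:

```python
import re
from pathlib import Path

# Candidate patterns only; context decides whether a hit is a real violation.
PATTERNS = {
    "future tense": re.compile(r"\bwill\s+\w+", re.IGNORECASE),
    "passive voice": re.compile(r"\b(is|are|was|were|be|been|being)\s+\w+ed\b", re.IGNORECASE),
}

for path in Path("topics").rglob("*.adoc"):
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                print(f"{path}:{lineno}: possible {label}: {line.strip()}")
```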
Red Hat product documentation is enterprise documentation for experienced administrators and developers. Per IBM Style, this typically falls under "less conversational" — professional, direct, second-person, no contractions. Determine the appropriate level based on the product's target audience.
| Level | Audience | Example |
|---|---|---|
| Most conversational | Marketing, "try and buy" | "Build your dream app." |
| Fairly conversational | New users, getting started | "In minutes, you can set dates and dive in." |
| Less conversational | Experienced users (typical RH product docs) | "Configure OAuth to allow users to interact with Git repositories." |
| Least conversational | API docs, expert audience | "The SObject rows resource retrieves field values." |
Be aware of words spelled the same but with different meanings. Avoid using homographs close together in a sentence. Common homographs in technical docs: application, attribute, block, coordinates, number, object, project.
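A crude way to surface candidate homograph collisions is to flag any sentence that uses one of the listed words more than once; whether the two uses actually carry different meanings still requires human judgment:

```python
import re

HOMOGRAPHS = {"application", "attribute", "block", "coordinates", "number", "object", "project"}

def homograph_collisions(text: str):
    for sentence in re.split(r"(?<=[.?!])\s+", text):
        words = re.findall(r"[a-z]+", sentence.lower())
        for w in HOMOGRAPHS:
            if words.count(w) > 1:
                yield w, sentence

for word, sentence in homograph_collisions(
    "The project settings let you project usage data for each object."
):
    print(f"'{word}' appears twice: {sentence}")
```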
| Score | Criteria |
|---|---|
| 4 | 0 contractions, 0 first person, 0 informal words in prose, consistent 2nd person, no exclamations/questions for effect, appropriate for global audience |
| 3 | 1-3 informal words or minor tone inconsistencies |
| 2 | Multiple contractions, first person usage, or informal language patterns |
| 1 | Tone not assessed or widespread informality |
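A few of the tone checks lend themselves to simple pattern matching; this sketch flags contractions, first person, and trailing exclamations, with the caveat that the contraction pattern also matches possessives and hits need manual review:

```python
import re

CHECKS = {
    "contraction": re.compile(r"\b\w+'(s|t|re|ve|ll|d)\b", re.IGNORECASE),  # 's also matches possessives
    "first person": re.compile(r"\b(I|we|our|us|let's)\b", re.IGNORECASE),
    "exclamation": re.compile(r"!\s*$"),
}

def tone_flags(line: str):
    return [label for label, pattern in CHECKS.items() if pattern.search(line)]

print(tone_flags("We'll configure the server now!"))  # all three checks fire
```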
See scoring-guide.md.