From jtbd-tools
Runs end-to-end JTBD workflow for AsciiDoc repos with master.adoc: analyze JTBD records, generate TOC, compare structures, produce consolidation reports. Batch processing supported.
```shell
npx claudepluginhub redhat-documentation/redhat-docs-agent-tools --plugin jtbd-tools
```
End-to-end JTBD analysis workflow for AsciiDoc documentation repositories that use `master.adoc` entry points (e.g., RHOAI, RHEL AI, Satellite). Runs all 4 steps in sequence:
```shell
# Single book
/jtbd-workflow-adoc path/to/master.adoc --variant self-managed

# With domain-specific research personas
/jtbd-workflow-adoc path/to/master.adoc --variant self-managed --research-file ~/my-project/research.yaml

# Custom output
/jtbd-workflow-adoc path/to/master.adoc --variant self-managed --output analysis/my-project/my-book/

# Batch
/jtbd-workflow-adoc --docs-file docs.txt --batch --batch-size 5
```
Requirements and options:

- a `master.adoc` entry point
- `--variant` for `ifdef` resolution (e.g., `self-managed`, `cloud-service`)
- `--docs-file` for batch mode
- `asciidoctor-reducer` installed: `gem install asciidoctor-reducer`
If Ruby/gem is not available:

```shell
brew install ruby   # macOS
```
This step is identical to the /jtbd-analyze-adoc skill. Full details are in that skill; the key steps are summarized here.
Verify asciidoctor-reducer is available:
```shell
which asciidoctor-reducer
```
If not found, tell the user to install it and stop.
Determine the document name from the path: if the file is `master.adoc`, use the parent directory name. Run the reducer to flatten the assembly/book:

```shell
asciidoctor-reducer <path-to-master.adoc> -o <output-dir>/<doc>-reduced.adoc
```
If `--variant` is specified, set the attribute:

```shell
asciidoctor-reducer -a <variant> <path-to-master.adoc> -o <output-dir>/<doc>-<variant>-reduced.adoc
```
Verify the reduction: check that the output has no remaining `include::` directives (except inside code blocks).
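A minimal sketch of that check, treating `----` as the AsciiDoc listing-block delimiter (the function name is illustrative):

```python
def unresolved_includes(text: str) -> list[int]:
    """Return line numbers of include:: directives outside listing blocks."""
    hits, in_listing = [], False
    for n, line in enumerate(text.splitlines(), 1):
        if line.strip() == "----":        # AsciiDoc listing-block delimiter
            in_listing = not in_listing
            continue
        if not in_listing and line.lstrip().startswith("include::"):
            hits.append(n)
    return hits

reduced = "= Title\n\n----\ninclude::example.adoc[]\n----\ninclude::missed.adoc[]\n"
print(unresolved_includes(reduced))  # only the include outside the listing block: [6]
```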
Parse the `include::` directives to build the full include tree and save it to `<output-dir>/<doc>-include-graph.json`. Classify each module using the Red Hat modular docs naming conventions:
- `con-*.adoc` = CONCEPT (explanatory content)
- `proc-*.adoc` = PROCEDURE (step-by-step instructions)
- `ref-*.adoc` = REFERENCE (tables, lists, specifications)
- `snip-*.adoc` = SNIPPET (reusable fragments)
- `assembly-*.adoc` = ASSEMBLY (collection of modules)

AsciiDoc heading levels:

```
= Level 1 (Document/Book Title)
== Level 2 (Chapter)
=== Level 3 (Section)
==== Level 4 (Subsection)
```
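The prefix-based classification can be sketched as follows (the function name and the `UNKNOWN` fallback are illustrative, not part of the convention):

```python
from pathlib import Path

# Red Hat modular-docs naming conventions, as listed above
MODULE_TYPES = {
    "con-": "CONCEPT",
    "proc-": "PROCEDURE",
    "ref-": "REFERENCE",
    "snip-": "SNIPPET",
    "assembly-": "ASSEMBLY",
}

def classify_module(path: str) -> str:
    """Map a module filename to its type via its prefix."""
    name = Path(path).name
    for prefix, mtype in MODULE_TYPES.items():
        if name.startswith(prefix):
            return mtype
    return "UNKNOWN"

print(classify_module("upstream-modules/proc-deploying-models.adoc"))  # PROCEDURE
```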
| Reduced File Size | Strategy |
|---|---|
| < 500 lines | Single pass |
| >= 500 lines | Chunked subagent processing |
If `--research-file` is provided, apply the research overlay (see the research file format below). If it is not provided, use generic persona detection from methodology.md.
Read the entire reduced file, apply the methodology from methodology.md (plus the research overlay if provided), generate all records, and write the JSONL.
For files at or above the chunking threshold, split at chapter boundaries (`==` headings) and dispatch each chunk to a subagent: `Task(subagent_type="general-purpose", prompt=chunk_analysis_prompt)`.
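The chapter split can be sketched as follows (a simplification; anything before the first `==` heading becomes its own preamble chunk):

```python
def split_at_chapters(text: str) -> list[str]:
    """Split a reduced AsciiDoc file into chunks at chapter (==) headings."""
    chunks, current = [], []
    for line in text.splitlines():
        # a chapter heading starts exactly with "== " (not "===")
        if line.startswith("== ") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

doc = "= Book\n\n== One\ntext\n\n== Two\nmore\n"
print(len(split_at_chapters(doc)))  # 3: preamble + two chapters
```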
Apply, in order:

- methodology.md (read the reference file)
- schema.md (read the reference file)
- the research overlay (if `--research-file` was provided)

Match each record's section to the include graph, and update evidence and notes with the source module paths and types.
Example enriched record:

```json
{
  "evidence": "deploying-models-reduced.adoc -> Section 'Deploying models...', lines 45-120 [module: upstream-modules/proc-deploying-models.adoc, type: PROCEDURE]",
  "notes": "Source module: upstream-modules/proc-deploying-models.adoc (PROCEDURE). Assembly: assemblies/deploying-models.adoc"
}
```
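The enrichment could look like the following sketch; the include-graph shape (section title mapped to module path and type) is an assumption for illustration:

```python
def enrich_record(record: dict, include_graph: dict) -> dict:
    """Append source-module provenance to a record's notes field."""
    info = include_graph.get(record.get("section", ""))
    if info:
        note = f"Source module: {info['module']} ({info['type']})"
        # keep any existing notes, dropping empty strings
        record["notes"] = "; ".join(filter(None, [record.get("notes"), note]))
    return record

graph = {"Deploying models": {"module": "upstream-modules/proc-deploying-models.adoc",
                              "type": "PROCEDURE"}}
rec = enrich_record({"section": "Deploying models", "notes": ""}, graph)
print(rec["notes"])
```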
Save to `<output-dir>/`:

- `<doc>-jtbd.jsonl` — JTBD records (one JSON object per line)
- `<doc>-jtbd.csv` — CSV version (array fields joined with `;`)
- `<doc>-include-graph.json` — Include graph with module types
- `<doc>-<variant>-reduced.adoc` (or `<doc>-reduced.adoc` if no variant)

If `--output` is not specified, the default is `analysis/<book-name>-adoc/<doc>/`, where `<book-name>` is derived from the repository or parent directory name.

Read toc-guidelines.md and example-toc.md for the complete formatting rules.
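The JSONL-to-CSV conversion with `;`-joined array fields can be sketched as follows (the field names in the sample record are illustrative):

```python
import csv
import io
import json

def jsonl_to_csv(jsonl: str) -> str:
    """Convert JTBD JSONL records to CSV, joining array fields with ';'."""
    rows = [json.loads(line) for line in jsonl.splitlines() if line.strip()]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0]))
    writer.writeheader()
    for row in rows:
        writer.writerow({k: ";".join(v) if isinstance(v, list) else v
                         for k, v in row.items()})
    return out.getvalue()

sample = '{"job": "Deploy a model", "personas": ["mlops", "data-scientist"]}'
print(jsonl_to_csv(sample))
```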
The TOC step reads `<doc>-jtbd.jsonl` from the output directory plus toc-guidelines.md, and writes `<output-dir>/<doc>-toc-new_taxonomy.md`. Use the `-> Lines X-Y: Section Title` format with AsciiDoc heading markers (`=`/`==`/`===`, not `#`/`##`/`###`); map `= Title` to the document title and `== Chapter` to chapters.

Read comparison-guide.md for the complete formatting rules.
The comparison step uses the reduced `.adoc` file as the "source document" (headings `=`, `==`, `===`) and `<doc>-jtbd.jsonl` for the proposed structure, follows comparison-guide.md, and writes `<output-dir>/<doc>-comparison.md`. `= Title` headings map to the document/book title, `== Chapter` headings map to chapters, and `=== Section` headings map to sections (not `#` headings).

Read consolidation-guide.md for the complete formatting rules.
The consolidation step reads:

- `<doc>-jtbd.jsonl` — JTBD records
- `<doc>-toc-new_taxonomy.md` — TOC (for the proposed structure)
- `<doc>-comparison.md` — Comparison (for current structure context)
- `<doc>-*-reduced.adoc` — Source document (reduced)
- consolidation-guide.md

and writes `<output-dir>/<doc>-consolidation-report.md` using `.adoc` headings.

Batch mode:

```shell
/jtbd-workflow-adoc --docs-file docs.txt --batch --batch-size 5
```
docs.txt lists paths to `master.adoc` files, one per line:

```
~/Documents/RHAI_DOCS/deploying-models/master.adoc
~/Documents/RHAI_DOCS/creating-a-workbench/master.adoc
~/Documents/RHAI_DOCS/working-on-projects/master.adoc
```
You can also include `--variant` and `--research` flags, which apply to all documents:

```shell
/jtbd-workflow-adoc --docs-file docs.txt --variant self-managed --research redhat-ai --batch --batch-size 5
```
`--batch-size N` controls how many documents to process per run (default 5, max 10). A summary table is produced:

| # | Document | Records | Main Jobs | Status |
|---|----------|---------|-----------|--------|
| 1 | deploying-models | 52 | 14 | Done |
| 2 | creating-a-workbench | 28 | 8 | Done |
| 3 | working-on-projects | 35 | 11 | Done |
For processing more than 10 documents, use the Python batch-runner script:
```shell
python3 plugins/jtbd-workflow-adoc/scripts/batch-runner.py \
  --docs-file docs.txt \
  --variant self-managed \
  --research redhat-ai \
  --batch-size 5
```
This splits the items into groups and invokes `claude` for each group, handling failures and providing resume capability.
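The grouping the script performs can be sketched as follows (failure handling and resume are omitted; the function name is illustrative):

```python
def batch_groups(paths: list[str], size: int = 5) -> list[list[str]]:
    """Split document paths into fixed-size groups for sequential runs."""
    return [paths[i:i + size] for i in range(0, len(paths), size)]

docs = [f"book-{n}/master.adoc" for n in range(12)]
print([len(g) for g in batch_groups(docs, 5)])  # [5, 5, 2]
```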
For each document processed, the following files are produced:
| File | Step | Description |
|---|---|---|
| `<doc>-jtbd.jsonl` | 1 | JTBD records |
| `<doc>-jtbd.csv` | 1 | CSV version of the records |
| `<doc>-*-reduced.adoc` | 1 | Reduced (flattened) AsciiDoc |
| `<doc>-include-graph.json` | 1 | Include graph with module types |
| `<doc>-toc-new_taxonomy.md` | 2 | JTBD-oriented TOC |
| `<doc>-comparison.md` | 3 | Current vs. proposed comparison |
| `<doc>-consolidation-report.md` | 4 | Stakeholder consolidation report |
The `--research-file` flag lets you provide a YAML file with domain-specific personas, schema extensions, and canonical jobs. Without it, the skill uses generic persona detection from the documentation content.
name: "My Project"
version: "1.0"
description: "Research overlay for My Project documentation"
# Domain-specific personas (override generic role detection)
personas:
- id: sysadmin
name: "Sam the Systems Administrator"
role: "Manages hosts, patching, and content lifecycle"
archetype: "THE OPERATOR" # optional
loop: "outer" # inner | outer | cross-cutting (optional)
key_skills: # optional
- "Host management"
- "Content views"
- "Patching"
pain_points: # optional
- "Complex content management workflows"
- "Slow patching cycles across large fleets"
key_quote: "I need to patch 500 hosts and I can't afford downtime." # optional
- id: deveng
name: "Dana the Developer"
role: "Builds and deploys applications on the platform"
archetype: "THE BUILDER"
loop: "inner"
key_skills:
- "Application development"
- "CI/CD pipelines"
pain_points:
- "Disconnected from operational environment constraints"
# Additional fields added to every JTBD record
schema_extensions:
- field: "compliance_framework"
type: "enum"
values: ["STIG", "CIS", "PCI-DSS", "HIPAA", "none"]
description: "Applicable compliance framework for this job"
- field: "operational_impact"
type: "enum"
values: ["high", "medium", "low"]
description: "Impact on production operations if this job fails"
- field: "teams_involved"
type: "array"
description: "Teams that collaborate on this job"
# Canonical jobs from research for matching/validation
canonical_jobs:
setup:
- "Register and provision hosts"
- "Configure content sources"
operations:
- "Patch hosts across environments"
- "Monitor compliance status"
lifecycle:
- "Promote content across environments"
# Jobs flagged as strategic priorities
strategic_priorities:
- "Patch hosts across environments"
- "Monitor compliance status"
# Pain point patterns to detect in documentation
pain_point_patterns:
- pattern: "manual"
maps_to: "Automation opportunity"
- pattern: "drift"
maps_to: "Compliance monitoring gap"
- pattern: "complex"
maps_to: "Simplification opportunity"
When `--research-file` is provided, the skill:

- uses the defined `personas` in place of generic role detection
- adds each `schema_extensions` field to every JTBD record
- matches extracted jobs against `canonical_jobs`
- sets `strategic_priority: true` on records matching `strategic_priorities`
- captures detected `pain_point_patterns` in the `pain_points` field

| Section | Required | Purpose |
|---|---|---|
| `name`, `version` | Yes | Config identity |
| `description` | No | Human-readable description |
| `personas` | No | Domain-specific persona definitions |
| `schema_extensions` | No | Additional fields for JTBD records |
| `canonical_jobs` | No | Reference jobs from research for alignment |
| `strategic_priorities` | No | Jobs to flag as high-priority |
| `pain_point_patterns` | No | Text patterns to detect and capture |
All sections except `name` and `version` are optional. You can provide just personas, just schema extensions, or any combination.
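Pain-point pattern detection can be sketched as a case-insensitive substring match (an assumption; the skill's actual matching rules may differ):

```python
def detect_pain_points(text: str, patterns: list[dict]) -> list[str]:
    """Map substring hits in documentation text to pain-point labels."""
    lowered = text.lower()
    return [p["maps_to"] for p in patterns if p["pattern"] in lowered]

patterns = [
    {"pattern": "manual", "maps_to": "Automation opportunity"},
    {"pattern": "drift", "maps_to": "Compliance monitoring gap"},
]
hits = detect_pain_points("You must manually re-run the sync.", patterns)
print(hits)  # ['Automation opportunity']
```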
This skill references shared methodology and guideline files in the reference/ directory:
- methodology.md — JTBD extraction rules (from /jtbd-analyze-adoc)
- schema.md — Record schema (from /jtbd-analyze-adoc)
- toc-guidelines.md — TOC formatting rules (from /jtbd-toc)
- example-toc.md — Example TOC output (from /jtbd-toc)
- comparison-guide.md — Comparison rules (from /jtbd-compare)
- consolidation-guide.md — Consolidation rules (from /jtbd-consolidate)