From jtbd-tools
Extracts Jobs-To-Be-Done records from technical markdown documentation using specified methodology. Handles large files via chunked subagent processing for user goal analysis.
```shell
npx claudepluginhub redhat-documentation/redhat-docs-agent-tools --plugin jtbd-tools
```

This skill uses the workspace's default tool permissions.
Extract Jobs-To-Be-Done records from technical documentation using the methodology defined in [methodology.md](../../reference/methodology.md).
```shell
# Analyze a single document
/jtbd-analyze docs_raw/rhoai/creating-a-workbench.md

# With research config overlay
/jtbd-analyze docs_raw/rhoai/creating-a-workbench.md --research redhat-ai

# With custom output directory (for A/B testing)
/jtbd-analyze docs_raw/rhoai/creating-a-workbench.md --output analysis/rhoai/creating-a-workbench-skill/
```
Records are written to the `--output` path if specified, or to the default `analysis/<project>/<doc>/<doc>-jtbd.jsonl`.

| Document Size | Strategy |
|---|---|
| < 500 lines | Single pass processing |
| >= 500 lines | Chunked subagent processing |
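The size-based dispatch in the table can be sketched as follows; the 500-line threshold comes from the table above, while the function and strategy names are purely illustrative:

```python
def choose_strategy(doc_text: str, threshold: int = 500) -> str:
    """Pick a processing strategy based on the document's line count."""
    line_count = len(doc_text.splitlines())
    if line_count < threshold:
        return "single_pass"
    return "chunked_subagents"
```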
For documents under 500 lines, the skill analyzes the document in a single pass.

For documents of 500 or more lines, the skill splits the document into chunks and spawns a subagent for each:

```
Task(subagent_type="general-purpose", prompt=chunk_analysis_prompt)
```
The complete JTBD extraction methodology is in methodology.md. Key points:
> When [situation], I want to [motivation], so I can [expected outcome]
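Filling the template in is mechanical; this tiny helper is illustrative only and not part of the skill:

```python
def job_statement(situation: str, motivation: str, outcome: str) -> str:
    """Render the 'When / I want to / so I can' JTBD template."""
    return f"When {situation}, I want to {motivation}, so I can {outcome}"
```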
| Level | Description | Count per Guide |
|---|---|---|
| main_job | Stable, outcome-focused goals | ~10-15 |
| user_story | Persona-specific implementation paths | 2-7 per main job |
| procedure | Step-by-step instructions (reference only) | Skip or note in evidence |
| Stage | Verbs | Examples |
|---|---|---|
| Define | understand, choose, select | Choosing architecture approach |
| Locate | find, access, discover | Finding available resources |
| Prepare | set up, configure, install | Setting up environment |
| Confirm | verify, validate, check | Verifying deployment readiness |
| Execute | deploy, run, train, build | Deploying a model |
| Monitor | monitor, track, observe | Tracking performance metrics |
| Modify | optimize, adjust, tune | Tuning resource allocation |
| Conclude | clean up, remove, archive | Decommissioning resources |
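One way to apply the stage table is a simple verb lookup. The mapping below transcribes the table; the lookup itself is only a sketch, since real classification needs more context than a single verb match:

```python
from typing import Optional

# Verb-to-stage mapping transcribed from the job map table above.
STAGE_VERBS = {
    "Define": ["understand", "choose", "select"],
    "Locate": ["find", "access", "discover"],
    "Prepare": ["set up", "configure", "install"],
    "Confirm": ["verify", "validate", "check"],
    "Execute": ["deploy", "run", "train", "build"],
    "Monitor": ["monitor", "track", "observe"],
    "Modify": ["optimize", "adjust", "tune"],
    "Conclude": ["clean up", "remove", "archive"],
}

def guess_stage(statement: str) -> Optional[str]:
    """Return the first job-map stage whose verbs appear in the statement."""
    text = statement.lower()
    for stage, verbs in STAGE_VERBS.items():
        if any(verb in text for verb in verbs):
            return stage
    return None
```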
Before classifying a record as main_job, apply the "Why vs How" ladder: repeatedly ask why the user performs the activity. If the answer is another task, you are still at the "how" level; when the answer is a stable outcome the user cares about, you have reached a main job.
Red flags (likely tasks, NOT main_jobs):
Green flags (likely main_jobs):
Each record follows the JTBDRecord schema (see schema.md):
```json
{
  "doc": "creating-a-workbench.md",
  "section": "Chapter 2: Configuring workbenches",
  "job_statement": "When setting up a development environment, I want to configure workbench resources, so I can ensure adequate compute for my experiments.",
  "job_type": "core",
  "persona": "Data scientist",
  "job_map_stage": "Prepare",
  "granularity": "main_job",
  "parent_job": null,
  "prerequisites": ["Create a project", "Have cluster admin approval"],
  "related_jobs": ["Configure data connections", "Set up notebook images"],
  "desired_outcomes": [
    "Minimize time to get a working environment",
    "Reduce likelihood of resource contention"
  ],
  "evidence": "creating-a-workbench.md -> Chapter 2, lines 45-120",
  "notes": "Main job covering workbench configuration options"
}
```
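A record can be sanity-checked against its required fields. The field list below is inferred from the example record above, so treat it as an assumption rather than the authoritative schema.md:

```python
# Field names inferred from the example record; see schema.md for the real schema.
REQUIRED_FIELDS = {
    "doc", "section", "job_statement", "job_type", "persona",
    "job_map_stage", "granularity", "parent_job", "prerequisites",
    "related_jobs", "desired_outcomes", "evidence", "notes",
}

def missing_fields(record: dict) -> set:
    """Report schema fields absent from a JTBD record."""
    return REQUIRED_FIELDS - record.keys()
```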
By default, records are saved to:

```
analysis/<project>/<doc>/<doc>-jtbd.jsonl
analysis/<project>/<doc>/<doc>-jtbd.csv
```

Example:

```
analysis/rhoai/creating-a-workbench/creating-a-workbench-jtbd.jsonl
analysis/rhoai/creating-a-workbench/creating-a-workbench-jtbd.csv
```

When `--output` is specified, records are saved to:

```
<output-path>/<doc>-jtbd.jsonl
<output-path>/<doc>-jtbd.csv
```

Example with `--output analysis/rhoai/creating-a-workbench-skill/`:

```
analysis/rhoai/creating-a-workbench-skill/creating-a-workbench-jtbd.jsonl
analysis/rhoai/creating-a-workbench-skill/creating-a-workbench-jtbd.csv
```
After writing the JSONL file, the skill converts it to CSV format for spreadsheet viewing. The CSV is written to the same output directory as the JSONL file.
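The JSONL-to-CSV step can be approximated with the standard library. The list-field handling here (joining with "; ") is a guess at the skill's behavior, not its actual implementation:

```python
import csv
import json
from pathlib import Path

def jsonl_to_csv(jsonl_path: str) -> str:
    """Convert a JSONL record file to a CSV alongside it; return the CSV path."""
    lines = Path(jsonl_path).read_text().splitlines()
    records = [json.loads(line) for line in lines if line.strip()]
    csv_path = str(Path(jsonl_path).with_suffix(".csv"))
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(records[0]))
        writer.writeheader()
        for rec in records:
            # Flatten list-valued fields (prerequisites, desired_outcomes, ...)
            writer.writerow(
                {k: "; ".join(v) if isinstance(v, list) else v for k, v in rec.items()}
            )
    return csv_path
```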
When spawning subagents for chunked processing, each chunk prompt includes:
- methodology.md

After all chunks complete, the extracted records from each chunk are merged into the final output.
When --research <config-name> is specified:
The research config is read from `jtbd/research/<config-name>.yaml` and overlays additional fields on each record:

- `loop`: Inner/outer loop classification
- `genai_phase`: Development vs production
- `strategic_priority`: Is this a priority job from research?
- `pain_points`: User-reported friction points
- `teams_involved`: Cross-team collaboration

```shell
# Step 1: Analyze document
/jtbd-analyze docs_raw/rhoai/creating-a-workbench.md

# Output shows:
# - Document size and processing strategy
# - Progress as sections are analyzed
# - Summary of extracted records
# - Path to JSONL output

# Step 2: Review results
cat analysis/rhoai/creating-a-workbench/creating-a-workbench-jtbd.jsonl

# Step 3: Generate TOC from records
/jtbd-toc analysis/rhoai/creating-a-workbench/
```
Before completing analysis, verify:
- Every user_story record has `parent_job` set