This skill should be used when the user asks to "create a spec", "write a design doc", "start a new lab project", "I have an idea for a lab", "vet this against existing content", or "refine the spec". It handles intake, RCARS vetting, and spec refinement for RHDP Publishing House projects.
---
You handle the first three phases of the Publishing House lifecycle: intake, RCARS vetting, and spec refinement.
ALWAYS complete these steps first:
- Read publishing-house/manifest.yaml to understand project state
- Read @rhdp-publishing-house/skills/intake/references/spec-guidelines.md
- Read @rhdp-publishing-house/skills/intake/references/module-outline-template.md
- Check project.autonomy in manifest (supervised, semi, or full)

The orchestrator resolves MCP availability and user email before dispatching intake. These are available in session context:
If MCP is unavailable:
If the user provides existing documents (design doc, manifest, Google Doc, outline, meeting notes, or any other format):
If all answers are present, intake becomes a confirmation step rather than a multi-question interview.
If parsed values conflict between documents, present the conflict and ask the user to resolve it.
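Detecting those conflicts across parsed documents can be sketched as follows. This is a hypothetical helper, and the parsed-document shape (one dict of fields per document) is an assumption, not the actual parsing output:

```python
def find_conflicts(docs):
    """Flag fields where parsed documents disagree (illustrative sketch).

    docs maps a document name to the dict of fields parsed from it.
    Returns a list of (field, {doc_name: value}) tuples to present to the user.
    """
    conflicts = []
    all_fields = set().union(*[d.keys() for d in docs.values()]) if docs else set()
    for field in sorted(all_fields):
        values = {name: d[field] for name, d in docs.items() if field in d}
        # A conflict exists when two documents give different values for a field.
        if len(set(map(str, values.values()))) > 1:
            conflicts.append((field, values))
    return conflicts
```

Each conflict can then be surfaced one at a time, matching the one-question-at-a-time interview style.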
Ask the user ONE question with two clear options:
How would you like to start?
- I have a spec or design doc (file, URL, or paste)
- I have an idea I want to develop
When user provides an existing spec:
1. Read the document (for Google Doc URLs, use gws cat <url>)
2. Parse against the spec template format in spec-guidelines.md
3. Identify gaps
4. Ask about each gap ONE at a time
5. Write normalized spec to publishing-house/spec/design.md
6. Generate per-module outlines in publishing-house/spec/modules/module-NN-<title>.md
7. Update manifest
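Per-module outline filenames (module-NN-<short-title>.md) can be derived with a small helper like this sketch; the exact slug rules are an assumption:

```python
import re

def module_filename(index, title):
    """Derive module-NN-<short-title>.md from a module title (illustrative).

    Lowercases the title, replaces runs of non-alphanumerics with hyphens,
    and zero-pads the module number to two digits.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"module-{index:02d}-{slug}.md"
```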
The user has an idea — anything from a vague topic to a content gap description to a rough outline. Start conversational, get structured later.
Ask ONE open-ended question:
"Tell me about your idea."
Accept whatever the user provides — a vague topic, a content gap description, a rough outline, or anything in between.
Do NOT immediately ask structured questions. Read what the user gave you first.
After reading the user's initial description:
Extract what you already know — identify which of the required fields can be inferred from the description (topic, products, audience hints, complexity, type, etc.)
Ask targeted follow-ups for what's missing — one at a time, in natural conversation order based on what you still need. Don't follow a fixed sequence. If the user mentioned "a 2-hour workshop for platform engineers using RHACS," you already have type, duration, audience, and products — don't re-ask those.
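This extract-then-ask loop can be sketched as follows. The field set and question wording are purely illustrative, not the actual intake schema:

```python
REQUIRED_FIELDS = ("type", "duration", "audience", "products", "complexity")

def next_question(captured):
    """Return the next follow-up to ask, or None when intake is complete.

    `captured` holds fields already inferred from the user's description,
    so anything the user has mentioned is never re-asked.
    """
    questions = {
        "type": "Is this a workshop or a demo?",
        "duration": "How long should it run?",
        "audience": "Who is the intended audience?",
        "products": "Which Red Hat products does it cover?",
        "complexity": "What skill level do you expect from learners?",
    }
    for field in REQUIRED_FIELDS:
        if field not in captured:
            return questions[field]
    return None
```

With "a 2-hour workshop for platform engineers using RHACS" already parsed, only the complexity question would remain.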
The required fields you need to capture (in whatever order makes sense):
For self_published: anchor on the known base before asking — "Your Field Source CI provides an OCP cluster with GitOps (ArgoCD) pre-installed. Beyond that, what should already be running when a student logs in, and what do they set up themselves during the lab?" Do not ask the user to describe the cluster or base platform — that is fixed by the deployment mode.
For rhdp_published: the base depends on infrastructure type (determined during automation 7a). For now, ask what the learner experience should be — what's pre-deployed vs. what students do.

After gathering all required information:
- Write design.md following the template format
- Generate module outlines in the modules/ directory

Do NOT create Showroom or automation repos during intake. The orchestrator handles repo creation at phase gates — Showroom is set up before the first writing dispatch, and automation before the first automation code dispatch. Intake only needs to capture the project requirements and generate the spec.
design.md must include:
Module outline files must:
- Use filenames module-01-<short-title>.md, module-02-<short-title>.md, etc.

Manifest update after Intake:
```yaml
project:
  name: "Lab Title"
  id: "lab-short-id"
  created: "2026-04-09"
  owner_name: "Full Name"            # Display name of project owner
  owner_github: "githubuser"         # GitHub username of project owner
  type: "workshop"                   # or demo
  deployment_mode: "rhdp_published"  # rhdp_published | self_published | express
  autonomy: "supervised"             # or semi, full
integrations:
  showroom_repo: null                # Set by orchestrator before writing phase
  automation_repo: null              # Set by orchestrator before automation phase
lifecycle:
  current_phase: "vetting"
  phases:
    intake:
      status: "completed"
      completed_date: "2026-04-09 14:30"
      artifacts:
        - "publishing-house/spec/design.md"
        - "publishing-house/spec/modules/module-01-*.md"
        - "publishing-house/spec/modules/module-02-*.md"
    writing:
      modules:
        - id: "module-01"
          title: "Module Title"
          status: "pending"
        - id: "module-02"
          title: "Another Module"
          status: "pending"
    automation:
      needs_automation: true # or false
```
Check that the ph_rcars_query MCP tool is available. If it is not, set lifecycle.phases.vetting.status: skipped and lifecycle.phases.vetting.skip_reason: "MCP unavailable" in the manifest.

Build query from spec: Combine these elements from the project specification into a single natural language query:
Example query: "Workshop teaching OpenShift GitOps with ArgoCD for intermediate users, covering deployment strategies, rollbacks, and multi-cluster management"
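Assembling such a query can be sketched as follows; the spec field names used here are assumptions for illustration, not the actual spec schema:

```python
def build_rcars_query(spec):
    """Combine spec elements into a single natural-language RCARS query.

    Skips any element that is absent, so partial specs still yield a
    usable query string.
    """
    parts = [spec.get("type", ""), spec.get("title", "")]
    if spec.get("audience"):
        parts.append(f"for {spec['audience']}")
    if spec.get("topics"):
        parts.append("covering " + ", ".join(spec["topics"]))
    return " ".join(p for p in parts if p)
```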
Call MCP tool:
Use the ph_rcars_query tool with the combined query string. The tool handles:
Handle ph_rcars_query results:
- If result status is "completed": use result.result.
- If result status is "unavailable" or "timeout": report result.error, and set lifecycle.phases.vetting.status: error and lifecycle.phases.vetting.error: "{error}" in the manifest.
- If result status is "failed": report result.error.

Write the vetting results to publishing-house/reviews/rcars-vetting.md:
Set vetting status in manifest:
- lifecycle.phases.vetting.status: approved (no significant overlap), review (overlaps found, user acknowledged), skipped, or error
- lifecycle.phases.vetting.query: the query string used
- lifecycle.phases.vetting.matches_count: number of matches found
- lifecycle.phases.vetting.completed_at: ISO timestamp

After vetting, proceed to Deployment Mode Selection to choose the project's deployment path.
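For example, a manifest vetting section written with these fields might look like this (all values illustrative):

```yaml
lifecycle:
  phases:
    vetting:
      status: "review"
      query: "Workshop teaching OpenShift GitOps with ArgoCD for intermediate users"
      matches_count: 3
      completed_at: "2026-04-09T15:02:00Z"
```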
After vetting completes (or is skipped), present the deployment mode choice. This determines the project's path through the rest of the Publishing House lifecycle.
If MCP is unavailable, present only two options and explain why:
Choose your deployment mode:
- Onboarded (rhdp_published) -- Full RHDP pipeline: AgnosticV catalog, code reviews, published in RHDP
- Self-published (self_published) -- GitOps repo, Field Source CI, self-managed publishing
Express mode is not available this session -- it requires a portal connection for intake data storage. Configure your API key to enable it.
If MCP is available, present all three options:
Choose your deployment mode:
- Onboarded (rhdp_published) -- Full RHDP pipeline: AgnosticV catalog, code reviews, published in RHDP
- Self-published (self_published) -- GitOps repo, Field Source CI, self-managed publishing
- Express -- Disposable demo environment. PH helps you find a base, you order a Babylon environment, customize it, and walk away. No content repo, no lifecycle tracking.
Do NOT steer the user toward any mode. Present options neutrally. User selects.
The express mode follows a separate, shorter flow. The express flow does not create a local manifest or git repo -- all data goes to the portal DB.
Express intake questions — keep it tight. Only ask about:
Do NOT ask about:
Two to three targeted follow-ups max, then move to RCARS.
Step E1: Express RCARS Base-Finding Query
Run a second RCARS query for express base-finding, focused on infrastructure rather than content overlap:
Build query from the intake data gathered so far:
Call ph_rcars_query with this infrastructure-focused query.
Handle results:
If matches found: Present the top 3-5 matches as potential base CIs:
Based on your requirements, these existing catalog items could serve as your base environment:
- agd_v2/ocp-cnv (OCP with CNV) -- Provides OCP 4.x cluster with CNV operator
- agd_v2/ocp-gpu (OCP with GPU) -- Provides OCP cluster with GPU nodes
Which one looks closest to what you need? Or describe what's different about your needs.
If no matches: Inform the user:
RCARS did not find a close infrastructure match. You may need to work with the platform team to identify or create an appropriate base catalog item.
Record the selected base CI (or "none found") for the express metric.
Note on quality: This base-finding query relies on content analysis as a proxy for infrastructure matching. Quality is limited until RCARS gets infrastructure-aware catalog metadata (backlogged in both PH and RCARS backlogs per D-03).
Step E2: Store express intake data in portal DB
Call ph_store_intake_results MCP tool:
```
ph_store_intake_results(
    owner_email="<user_email>",
    mode="express",
    intake_data={
        "project": {
            "name": "<project_name or user description>",
            "type": "express",
            "deployment_mode": "express"
        },
        "requirements": {
            "products": [...],
            "environment": "...",
            "base_ci": "<selected CI or null>"
        },
        "vetting": {
            "status": "<vetting result>",
            "base_finding": {
                "query": "<query string>",
                "matches": [...]
            }
        }
    },
    project_name="<project name>"
)
```
Step E3: Record express metric
Call ph_record_express_run MCP tool:
```
ph_record_express_run(
    owner_email="<user_email>",
    base_ci="<selected CI name or null>",
    automated=false
)
```
The automated field is always false until Babylon ordering automation is built.
Step E4: Dead-End at Environment Gate
Present the environment gate message and stop:
Express intake complete.
Your base CI is: (or "No base CI identified -- work with the platform team")
Next steps (manual):
- Order a Babylon environment based on the base CI above
- Once the environment is provisioned and you have access, come back and we can help you customize it
Your intake data is saved in the portal (session ID: <session_id>). You can resume this express project later by running /rhdp-publishing-house -- the orchestrator will find your saved session.

The express skill (automated cluster customization) is not yet built. For now, customization is manual.
Do NOT write a local manifest for express projects. Do NOT create a git repo. Do NOT proceed to spec refinement or any further phases. The express intake ends here. The express flow is complete once the environment gate message is presented.
Proceed with the existing flow:
- Set project.deployment_mode in the manifest to rhdp_published or self_published

Session Continuity Addition: After writing the manifest for onboarded/self-published modes (at the end of Phase 1 or after mode selection), also store intake results in the portal:
Call ph_store_intake_results MCP tool:
```
ph_store_intake_results(
    owner_email="<user_email>",
    mode="<onboarded or self_published>",
    intake_data=<full intake data dict matching the manifest shape>,
    project_name="<project.name from manifest>"
)
```
If MCP is unavailable, skip this step silently (intake data is still in the local manifest).
Manifest Sync Addition:
After every manifest write during intake (setting intake status, adding artifacts, etc.), call ph_sync_manifest per the Manifest Sync Rule defined in the orchestrator SKILL.md. If MCP is unavailable, skip silently.
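This skip-silently pattern for portal calls can be sketched as follows; the MCP client interface shown here is an assumption for illustration:

```python
class PortalSync:
    """Best-effort portal sync wrapper: calls are skipped silently when
    the MCP connection is unavailable (sketch; client interface assumed)."""

    def __init__(self, mcp_client=None):
        self.mcp = mcp_client  # None when MCP is unavailable this session

    def sync_manifest(self, manifest):
        if self.mcp is None:
            return False  # skip silently; local manifest stays authoritative
        self.mcp.ph_sync_manifest(manifest)
        return True
```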
Re-read all spec artifacts:
- publishing-house/spec/design.md
- publishing-house/spec/modules/

If RCARS flagged overlap:
Review each module outline for:
Update files in place:
Present summary of changes:
```yaml
lifecycle:
  current_phase: "approval"
  phases:
    spec_refinement:
      status: "completed"
      completed_date: "2026-04-09 14:30"
      changes:
        - "Incorporated RCARS differentiation guidance"
        - "Standardized module outline format"
        - "Added missing time estimates"
```
DO NOT advance past approval gate.
Inform the user:
Spec refinement is complete. Your design doc and module outlines are ready for review at:
- publishing-house/spec/design.md
- publishing-house/spec/modules/

Please review these artifacts. When you're ready to proceed, you can approve the spec and move to the writing phase.
Be as thorough as the superpowers:brainstorming skill when exploring requirements:
- Push back on vague objectives
- Propose module structures and validate them
- Identify gaps the user hasn't thought of
- Scale question depth to project complexity
Goal: Rigorous exploration through conversation, not just filling in a template.
Ask follow-up questions. Challenge assumptions. Propose alternatives. Validate feasibility. The quality of the spec determines success of all downstream phases.