This skill should be used when the user asks to "create the catalog", "build automation", "write the AgnosticV config", "set up the lab environment", "create ansible roles", "automate the deployment", or "write the environment automation". It wraps agnosticv:catalog-builder, agnosticv:validator, and code-review:code-review for RHDP Publishing House projects.
---
You handle lifecycle phase 7: capturing automation requirements (7a), creating the AgnosticV catalog configuration (7b), and developing environment automation code (7c). You wrap existing agnosticv and code-review skills with Publishing House context. E2E checks are deferred to a future phase.
See @rhdp-publishing-house/skills/automation/references/automation-patterns.md for automation patterns. See @rhdp-publishing-house/skills/automation/references/ansible-automation-guide.md for Ansible collection structure. See @rhdp-publishing-house/skills/automation/references/gitops-automation-guide.md for GitOps (Helm + ArgoCD) patterns. See @rhdp-publishing-house/skills/automation/references/automation-manifest-format.md for the automation manifest format.
Read these inputs first:

- `publishing-house/manifest.yaml` to understand project state. Verify `lifecycle.phases.automation.needs_automation` is true. If it is false or null, ask: "Automation was marked as not needed during intake. Would you like to change this and proceed, or skip automation?"
- `project.autonomy` for behavior mode.
- `publishing-house/spec/design.md` for infrastructure requirements.
- `project.deployment_mode` from the manifest — this determines automation approach options and whether AgnosticV catalog creation is needed.

Check the manifest's `lifecycle.phases.automation.substeps`:

- `requirements` is pending → start with 7a (Automation Requirements)
- `requirements` is completed and `catalog_item` is pending → start with 7b (Catalog Item)
- `catalog_item` is completed (or skipped) and `automation_code` is pending → start with 7c (Automation Code)
- `automation_code` is completed and `testing` is pending → present 7d (Testing gate)
- All substeps completed (or testing skipped) → inform the user automation is done

Check what the user requested:
Phase 7a generates and reviews the automation manifest — the reviewable contract between content and automation. Capturing all requirements first means we know exactly what to build before committing to any catalog or code structure. Phase 7b then creates the AgnosticV catalog informed by those requirements, and Phase 7c writes the automation code from the approved manifest.
See @rhdp-publishing-house/skills/automation/references/automation-manifest-format.md for the full manifest format and field reference.
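A minimal sketch of what such a manifest can look like, using the field names summarized later in this skill (approach, infrastructure, operators, applications, rbac, seed_data, broken_resources, provision_data). The authoritative schema is in automation-manifest-format.md; every value below is hypothetical:

```yaml
# Illustrative sketch only. See automation-manifest-format.md for the real schema.
approach: gitops                 # ansible | gitops | both
infrastructure:
  topology: shared-cluster       # per-user-cluster | shared-cluster
  multi_user: true
operators:
  - name: openshift-pipelines-operator   # hypothetical
applications:
  - name: backend-api                    # hypothetical
rbac:
  - namespace: "user{n}-dev"
    role: admin
seed_data:
  - type: configmap
    name: lab-seed
broken_resources:
  - name: payment-deployment   # intentional misconfiguration for a troubleshooting exercise
provision_data:
  - key: showroom_url
    source_module: module-01   # hypothetical
```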
Check if publishing-house/spec/automation-manifest.yaml already exists and has content.
If the user provided a manifest:
"Found an automation manifest. Let me validate the format and review the requirements."
Validate the YAML structure against the manifest format. Present a summary and proceed to Step 7a-3 (approval).
If no manifest exists: Proceed to Step 7a-2 (generate one).
Read context to extract automation requirements. Priority order: drafted content first, outlines as fallback.
From content files (PRIMARY — read these first if they exist in content/):
- `lifecycle.phases.writing.modules[*].content_path` points to each module's content file.
- Commands the learner runs themselves (e.g., "oc apply -f deployment.yaml") → the learner does this (do NOT automate).
- Intentionally broken resources for troubleshooting exercises (`broken_resources`).
- `{attribute}` placeholders — these indicate provision data needs (`source_module`).

From module outlines (FALLBACK — use if content files don't exist yet):
- Outlines live in `publishing-house/spec/modules/`.

From the design spec (`publishing-house/spec/design.md`):
- The `infrastructure` section
- `operators`
- `infrastructure.multi_user`

Determine approach:
Check project.deployment_mode:
If self_published:
Approach is GitOps only. Inform the user:
"Self-published projects use the Field Source CI with a GitOps repo. Your automation approach is GitOps (Helm + ArgoCD)."
Ask ONE question to determine the deployment topology:
"Does each student need their own cluster (for admin-level work like installing operators), or will students share a cluster with per-user namespaces?
- Per-user cluster — each student gets their own OCP cluster. Use when the lab requires cluster-admin or would conflict across users (operator installs, cluster-level config). One user provisioned per deployment.
- Shared cluster — one cluster, N namespaces. Use when the lab is namespace-scoped and students don't interfere.
The `num_users` parameter controls how many copies of tenant resources are deployed."
Record as infrastructure.topology: per-user-cluster | shared-cluster and infrastructure.multi_user: false | true.
Set approach: gitops in the automation manifest.
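Recorded in the manifest, the two decisions above might look like this (values illustrative):

```yaml
approach: gitops
infrastructure:
  topology: shared-cluster   # or per-user-cluster
  multi_user: true           # false for per-user-cluster
```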
If rhdp_published:
Ask the user which approach to use:

- `approach: gitops`
- `approach: ansible`

Write the manifest to `publishing-house/spec/automation-manifest.yaml`.
Present the manifest to the user for review. This is always a gate, regardless of autonomy level — automation scope must be explicitly approved.
"Here's the automation manifest — what needs to be pre-configured for the lab:
- Approach: [ansible/gitops/both]
- Infrastructure: [type], [multi_user], [users]
- Operators: [count] — [names]
- Applications: [count] — [names]
- RBAC: [count entries]
- Seed data: [count entries]
- Broken resources: [count] (for troubleshooting exercises)
- Provision data: [count keys]
Full manifest:
`publishing-house/spec/automation-manifest.yaml`

Review and approve, or edit the manifest and tell me when ready."
Wait for explicit approval. The user may edit the manifest file directly — always re-read it from disk after they say "approved" or "looks good."
After requirements are approved:
```yaml
automation:
  status: in_progress
  substeps:
    requirements: completed
    catalog_item: pending
    automation_code: pending
    e2e_checks: deferred
```
Check deployment mode first:
When deployment_mode: self_published:
Set `substeps.catalog_item: skipped` and proceed to 7c.

When `deployment_mode: rhdp_published`:
"An AgnosticV catalog item is required for RHDP-published projects. Do you have AgnosticV access to create it, or does someone else need to handle this?"
If someone else will handle it, set `substeps.catalog_item: pending_handoff`. Write a worklog entry with the information needed for handoff (infrastructure type, operators, multi-user config from the approved automation manifest). Proceed to 7c.

Otherwise, read `publishing-house/spec/design.md` and the approved `publishing-house/spec/automation-manifest.yaml` and extract the relevant requirements.
Inform the user:
"Using `agnosticv:catalog-builder` to create the AgnosticV catalog for this project."
Invoke agnosticv:catalog-builder in Mode 1 (Full Catalog Creation).
The catalog-builder skill will ask its own questions interactively. Pre-fill answers from the PH context where possible:
- Catalog type from `project.type` — workshop → `lab_multi` or `lab_single`, demo → `demo`. Ask the user to confirm.
- `project.name` from the manifest.
- `project.id` from the manifest.
- `project.owner_name` and `project.owner_github` from the manifest. Ask for the email address.
- The `operators` list.

For questions the catalog-builder asks that aren't covered by the PH spec, let the user answer directly. Do not guess infrastructure-specific configuration.
After the catalog-builder generates files, immediately invoke validation:
Inform the user:
"Running `agnosticv:validator` to check the catalog configuration."
Invoke agnosticv:validator at scope level 2 (Standard).
If validation passes:
Set `substeps.catalog_item: completed`.

If validation fails:
Do not proceed to 7c until the catalog validates at least at Level 2 with no errors.
```yaml
automation:
  substeps:
    requirements: completed
    catalog_item: completed
    automation_code: pending
    e2e_checks: deferred
  catalog_path: "<agv-relative-path>"
  agv_repo: "<local-agv-repo-path>"
```
Write the automation code from the approved automation manifest. If the catalog item (7b) was skipped (self-published projects skip this step), proceed directly from approved requirements (7a) to code.
Read the approved manifest from publishing-house/spec/automation-manifest.yaml.
Write code based on approach:
For approach: ansible:
See @rhdp-publishing-house/skills/automation/references/ansible-automation-guide.md.
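As a sketch of the `operators` mapping described in this section: in many AgnosticD-style workload roles, an operator install becomes an OperatorGroup plus a Subscription task. The module choice (`kubernetes.core.k8s`), operator name, namespace, and channel below are all assumptions, not prescribed values:

```yaml
# tasks/workload.yml (fragment, illustrative)
- name: Create OperatorGroup for lab operators
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: lab-operators
        namespace: lab-operators-ns

- name: Subscribe to the lab operator
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: openshift-pipelines-operator   # hypothetical operator
        namespace: lab-operators-ns
      spec:
        channel: stable                      # assumed channel
        name: openshift-pipelines-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
```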
Create `automation/` with `galaxy.yml` and roles. Each role contains `tasks/main.yml`, `tasks/workload.yml`, `tasks/remove_workload.yml`, `defaults/main.yml`, `meta/main.yml`, and `templates/*.yaml.j2`.

Map manifest entries to tasks:

- `operators` → Subscription + OperatorGroup resources
- `applications` → Deployment + Service + Route templates
- `rbac` → Namespace + RoleBinding tasks
- `seed_data` → ConfigMap/Secret tasks or git clone tasks
- `broken_resources` → Resources with intentional misconfigurations
- `provision_data` → `agnosticd.core.agnosticd_user_info` task

For `approach: gitops`:
See @rhdp-publishing-house/skills/automation/references/gitops-automation-guide.md.
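For shared-cluster labs, the `num_users` parameter can drive per-user tenant resources in a Helm template. This sketch assumes the Sprig `until` and `add` functions available in Helm templates, and the file path and value names are hypothetical:

```yaml
# tenant/bootstrap/templates/user-namespaces.yaml (illustrative)
{{- range $i := until (int .Values.num_users) }}
---
apiVersion: v1
kind: Namespace
metadata:
  name: user{{ add $i 1 }}-lab
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user{{ add $i 1 }}-admin
  namespace: user{{ add $i 1 }}-lab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - kind: User
    name: user{{ add $i 1 }}
{{- end }}
```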
Generate code in automation/ following the ci-template-gitops patterns. Do not clone the template — generate tailored code:
- `cluster/infra/` — operator deployments (cluster-level, deployed once)
- `tenant/bootstrap/` — per-user workloads (deployed per user or per namespace)
- `tenant/labs/<project-id>/` — the lab's Helm chart

Map manifest entries to the appropriate layer:

- `operators` → `cluster/infra/` Helm chart templates
- `applications` → `tenant/labs/<project-id>/` Helm chart (Pattern 2 or 3)
- `rbac` → Inline in `tenant/bootstrap/templates/` (Pattern 1) or in the lab chart
- `seed_data` → ConfigMap/Secret templates in the lab chart
- `broken_resources` → Templates with intentional misconfigurations in the lab chart
- `provision_data` → ConfigMap with the `demo.redhat.com/tenant-<name>: "true"` label in the lab chart. Showroom is always included — provision_data must surface the Showroom URL and all per-user URLs so the lab guide can display them as `{attribute}` variables.

For `approach: both`: generate both the Ansible collection and the GitOps structure described above.
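The tenant-labeled provision-data ConfigMap might be sketched like this. The value names and URL source are assumptions, and the tenant name is assumed to come from chart values:

```yaml
# tenant/labs/<project-id>/templates/provision-data.yaml (illustrative)
apiVersion: v1
kind: ConfigMap
metadata:
  name: provision-data
  labels:
    demo.redhat.com/tenant-{{ .Values.tenant }}: "true"   # tenant name assumed to come from values
data:
  showroom_url: {{ .Values.showroom_url | quote }}        # surfaced as an {attribute} in the lab guide
```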
After writing automation code, run a quick safety check and write all blockers to the worklog.
"Don't hurt yourself" checklist:
- Image tags are pinned (no `latest`)
- Workload references in `common.yaml` match the created roles/charts
- Entries in `requirements_content` are satisfied
- Variable names follow the convention (`ocp4_workload_<project>_*`)

Re-validate catalog (if catalog item exists):
Re-run agnosticv:validator to verify workload references are consistent with the catalog configuration.
Write worklog entries for every pre-testing blocker found:
For each issue found in the checklist, write a worklog entry of type action. Examples:
- "Pin image tag for <image> before testing. Currently using 'latest'. Find the digest or a stable release tag."
- "Build and push <image> to <registry> before testing. Image referenced in automation but not yet published."
- "Update module-03 content: replace Postman with Hoppscotch (callback URL, UI steps). Required before the editing phase."
- "AgnosticV catalog item pending handoff to <person>. Provide: [infrastructure type, operators, multi-user config]."

Invoke `rhdp-publishing-house:worklog` to write each entry. These become visible in "what's outstanding" and persist across sessions.
Note: This is a safety check, not a full code review. The formal code review happens in the Code & Security Review phase. For rhdp_published projects, code review is required. For self_published projects, it is recommended but optional.
After automation code is complete and review cycle passes:
```yaml
automation:
  substeps:
    requirements: completed
    catalog_item: completed
    automation_code: completed
    testing: pending
    e2e_checks: deferred
  automation_files:
    - automation/roles/ocp4_workload_<project-id>/
    - automation/helm/<chart-name>/   # if GitOps
```
The automation code must be deployed and tested on a real environment before the automation phase can be marked complete. This is a human gate — the agent does not deploy or test automation itself.
After automation code is written (7c complete), inform the user:
"Automation code is written and has open worklog items to verify before it can be considered tested. Deploy to a dev environment and work through the open items.
When you're done, describe what you tested:
- What environment did you use? (cluster name, type)
- Which worklog items did you resolve?
- Did all components deploy and function correctly? Any issues still open?
Or let me know if something is blocking you from testing."
Do not use magic-word prompts like "say 'testing done'". Wait for the user to describe results naturally.
When the user describes their testing:
If they provide a description of what was tested: Extract or ask for:
Set substeps.testing: completed and record the evidence in the manifest:
```yaml
testing: completed
testing_notes: "<description of what was tested, environment, items resolved>"
```
Mark resolved worklog items as resolved.
If they say testing is done with no details: Ask for specifics before marking complete — "What environment did you test on, and which of the open worklog items were resolved?"
If they want to skip: Confirm — "Skipping testing means the automation code has not been verified on a real environment. Any issues will surface the first time someone orders this. Are you sure?" If confirmed, set substeps.testing: skipped.
```yaml
automation:
  status: in_progress   # Orchestrator sets to completed
  substeps:
    requirements: completed
    catalog_item: completed
    automation_code: completed
    testing: completed   # or skipped
    e2e_checks: deferred
```
Do not change lifecycle.phases.automation.status or lifecycle.current_phase.
Phase-level transitions are managed by the orchestrator.
After completing each sub-phase, inform the user:
After 7a (Automation Requirements):
"Automation requirements documented and approved. Manifest: `publishing-house/spec/automation-manifest.yaml`

Next: Create the AgnosticV catalog item (7b). Self-published projects skip this step automatically."
After 7b (Catalog Item):
"AgnosticV catalog item created and validated.
Catalog path: <agv-relative-path>
Validation: [PASSED / PASSED WITH WARNINGS]

Next: Write automation code (7c)."
After 7c (Automation Code):
"Automation code complete.
Files: [list of roles/charts created]
Code review: [PASSED / findings addressed]
Catalog re-validation: [PASSED]
Next: Deploy and test the automation (7d). The code has not been tested yet."
After 7d (Testing):
"Automation testing [completed / skipped]. [If completed: testing notes]
Automation phase complete. Next: security review."
Users can skip individual sub-phases:

- Skip 7a → `substeps.requirements: skipped`. The user provides automation code directly without a manifest. Not recommended — requirements drive both catalog and code decisions.
- Skip 7b → `substeps.catalog_item: skipped`. The project is self-published (no catalog item needed), or the user manages the AgV config outside Publishing House. Automation code is built from the approved requirements manifest.
- Skip 7c → `substeps.automation_code: skipped`. Automation handled externally.
- Skip 7d → `substeps.testing: skipped`. Not recommended — the automation code has not been verified on a real environment.

Always confirm skip decisions: "Are you sure? This means [consequence]."