Develop, test, and register PMC workflows: JSON state machines that orchestrate Claude CLI calls, shell commands, and sub-workflows. The process is: 1. DEFINE - create workflow JSON with states and transitions; 2. VALIDATE - pmc validate <workflow.json>; 3. MOCK - create mock scripts for each state; 4. TEST MOCK - pmc run --mock to test transitions; 5. TEST REAL - pmc run with real data; 6. REGISTER - add to registry.json. Use when the user says "create workflow", "new workflow", or "automate", when automating repetitive multi-step processes, or when building CI/CD and development pipelines.
Develop, test, and register PMC workflows: JSON state machines that automate multi-step CLI processes. Use when the user requests "create workflow", "new workflow", or "automate" for repetitive tasks. Triggers include building CI/CD pipelines and orchestrating complex shell/Claude operations with validation gates and mock testing.
/plugin marketplace add jayprimer/pmc-marketplace
/plugin install jayprimer-pmc-plugins-pmc@jayprimer/pmc-marketplace
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Develop, test, and register PMC workflows.
ALWAYS run /pmc:kb first to understand KB structure.
1. DEFINE
└── Create .pmc/workflows/{name}/workflow.json
2. VALIDATE
└── pmc validate .pmc/workflows/{name}/workflow.json
3. MOCK
└── Create .pmc/workflows/{name}/mocks/mocks.json + *.py
4. TEST MOCK
└── pmc run {name} --mock -i param=value
5. TEST REAL
└── pmc run {name} -i param=value
6. REGISTER
└── Add to .pmc/workflows/registry.json
Before defining states, decide on your workflow architecture.
┌──────────────────┐ ┌──────────────┐ ┌──────────────┐
│ Claude: Do Work │ ──▶ │ Shell: Check │ ──▶ │ Terminal │
│ (session: start) │ │ Artifacts │ │ │
└──────────────────┘ └──────────────┘ └──────────────┘
│ fail
▼
┌──────────────┐
│ Claude: Fix │ ─── loop back
│ (continue) │
└──────────────┘
When to use:
Benefits:
Example structure:
{
"states": {
"work": {
"type": "claude",
"prompt_file": "work.md",
"session": "start",
"transitions": [{"condition": {"type": "default"}, "target": "validate"}]
},
"validate": {
"type": "shell",
"command": "python scripts/validate.py",
"transitions": [
{"condition": {"type": "json", "path": "$.ok", "equals": true}, "target": "done"},
{"condition": {"type": "default"}, "target": "fix"}
]
},
"fix": {
"type": "claude",
"prompt_file": "fix.md",
"session": "continue",
"transitions": [{"condition": {"type": "default"}, "target": "validate"}]
},
"done": {"type": "terminal", "status": "success"}
}
}
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌──────────┐
│ State 1 │ ──▶ │ State 2 │ ──▶ │ State 3 │ ──▶ │ Terminal │
└─────────┘ └─────────┘ └─────────┘ └──────────┘
│ │ │
▼ ▼ ▼
(branch) (branch) (branch)
When to use:
Trade-offs:
Reference skills, don't duplicate:
## Task
Execute Step 2 (Scope Determination) from `/pmc:plan` skill.
## Context
Request: {request}
Related PRDs: {related_prds}
## Then Continue
Proceed to create artifacts based on your scope decision.
Why this works:
Use shell states as boolean gates, not complex decision points:
#!/usr/bin/env python3
"""Simple validation - check artifacts exist."""
import json
import sys
from pathlib import Path

docs_dir = Path(sys.argv[1])
ticket_dir = docs_dir / "tickets" / sys.argv[2]

issues = []
for f in ["1-definition.md", "2-plan.md", "3-spec.md"]:
    if not (ticket_dir / f).exists():
        issues.append(f"Missing {f}")

print(json.dumps({
    "ok": len(issues) == 0,
    "issues": issues
}))
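A shell state can then treat the script's JSON as a pure pass/fail gate. A minimal sketch, assuming the workflow declares docs_dir and ticket_id inputs and ships the script in its scripts/ directory (state and target names are illustrative):

"validate-artifacts": {
  "type": "shell",
  "command": "python scripts/validate.py {docs_dir} {ticket_id}",
  "transitions": [
    {"condition": {"type": "json", "path": "$.ok", "equals": true}, "target": "done"},
    {"condition": {"type": "default"}, "target": "fix"}
  ]
}

Keeping the decision to a single boolean keeps the transition conditions trivial to exercise under --mock.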
For multi-Claude-state workflows, use session modes:
| First State | Subsequent States | Context |
|---|---|---|
| session: start | session: continue | Claude remembers all previous work |
| (none) | (none) | Each state is fresh, needs JSON handoff |
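The second row (no session field) means each Claude state starts fresh, so data must be handed forward explicitly through outputs extraction and {variable} substitution. A minimal sketch of that JSON handoff, with hypothetical state names and fields:

"summarize": {
  "type": "claude",
  "prompt": "Summarize {file} and return JSON with a 'summary' field.",
  "outputs": {"summary": "$.summary"},
  "transitions": [{"condition": {"type": "default"}, "target": "review"}]
},
"review": {
  "type": "claude",
  "prompt": "Review this summary for accuracy: {summary}",
  "transitions": [{"condition": {"type": "default"}, "target": "done"}]
}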
Use a nested structure for each workflow:
.pmc/workflows/
├── registry.json
└── {name}/
├── workflow.json # Workflow definition
├── prompts/ # Prompt files for Claude states
│ └── {state-name}.md # Prompt file (referenced by prompt_file)
├── mocks/ # Mock scripts for testing
│ ├── mocks.json # Mock configuration (optional)
│ └── {state-name}.py # Mock script per state
└── scripts/ # Real workflow scripts (optional)
└── *.py
This structure lets mocks be auto-discovered from {name}/mocks/.

Create .pmc/workflows/{name}/workflow.json:
{
"name": "workflow-name",
"description": "What this workflow does",
"initial_state": "first-state",
"inputs": {
"param": {"type": "string", "required": true}
},
"states": {
"first-state": { ... },
"done": {"type": "terminal", "status": "success"}
}
}
| Field | Description |
|---|---|
| name | Unique identifier |
| initial_state | Starting state name |
| states | State definitions |
| Type | Purpose | Key Fields |
|---|---|---|
| shell | Run command | command, outputs, transitions |
| claude | Invoke Claude | prompt, session, outputs, transitions |
| workflow | Sub-workflow | workflow, inputs, transitions |
| fan_out | Parallel items | items, item_var, state, transitions |
| parallel | Spawn workflows | spawn, transitions |
| checkpoint | User approval | message, options, transitions |
| sleep | Wait duration | duration, next |
| terminal | End workflow | status, message |
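checkpoint, sleep, and parallel are the only state types in this table not exemplified later in this document. Here is a hedged sketch of the first two, built only from the key fields listed above; the exact shape of options, the duration syntax, and matching the chosen option with a pattern condition are assumptions, so confirm against references/state-types.md:

"await-approval": {
  "type": "checkpoint",
  "message": "Plan ready for {ticket_id}. Approve to continue?",
  "options": ["approve", "reject"],
  "transitions": [
    {"condition": {"type": "pattern", "match": "approve"}, "target": "cooldown"},
    {"condition": {"type": "default"}, "target": "abort"}
  ]
},
"cooldown": {
  "type": "sleep",
  "duration": "30s",
  "next": "continue-work"
}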
pmc validate .pmc/workflows/{name}/workflow.json
Fix any schema errors before proceeding.
Create .pmc/workflows/{name}/mocks/ directory with mock scripts:
.pmc/workflows/{name}/
├── workflow.json
└── mocks/
├── mocks.json # Optional: mock configuration
├── first-state.py # Mock for "first-state"
├── second-state.py # Mock for "second-state"
└── ...
When running with --mock, the system resolves mocks in this order:
1. mocks/{state-name}.py
2. mocks/{state-name}.sh

Create mocks/mocks.json for fine-grained control:
{
"description": "Mock configuration for my-workflow",
"fallback": "error",
"states": {
"plan-step": {
"type": "script",
"script": "plan-step.py",
"description": "Mock for planning state"
},
"simple-state": {
"type": "inline",
"output": {"status": "success", "value": 42}
},
"skip-state": {
"type": "passthrough",
"output": {}
}
}
}
Mock Types:
| Type | Description | Use Case |
|---|---|---|
| script | Run Python/shell script | Complex logic, file I/O |
| inline | Return static JSON output | Simple success responses |
| passthrough | Return empty {} | States that need no mock |
Fallback Behavior:
| Value | Description |
|---|---|
| error | Fail if no mock found (default) |
| passthrough | Return {} for unmocked states |
| skip | Skip state, use default transition |
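The mocks.json example above uses the default fallback of error. If only one or two states need realistic mocks, a passthrough fallback keeps the rest of the workflow running with empty outputs. A sketch assuming the same schema as the example above (the state name and output fields are illustrative):

{
  "description": "Only mock the expensive Claude state; everything else passes through",
  "fallback": "passthrough",
  "states": {
    "plan-step": {
      "type": "inline",
      "output": {"status": "success", "plan": "mock plan"}
    }
  }
}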
#!/usr/bin/env python3
"""Mock for {state-name} state."""
import os
import json
# Read context from environment
ticket_id = os.environ.get("PMC_VAR_ticket_id", "")
state_name = os.environ.get("PMC_STATE", "")
context = json.loads(os.environ.get("PMC_CONTEXT", "{}"))
# Simulate state logic
# ...
# Output JSON for transitions
output = {
"status": "success",
"data": "mock result"
}
print(json.dumps(output))
# Exit codes: 0=success, 1=failure, 2=blocked
#!/usr/bin/env python3
"""Mock for check-exists state."""
import json
# Simulate file check
output = "EXISTS" # or "NOT_FOUND"
print(output)
#!/usr/bin/env python3
"""Mock for plan-ticket state."""
import json
# Simulate Claude response
output = {
"status": "success",
"test_mode": "script"
}
print(json.dumps(output))
# With nested structure, mock-dir is auto-discovered
pmc run {name} --mock -i param=value -v
# Or with explicit path:
pmc run .pmc/workflows/{name}/workflow.json --mock -i param=value -v
Note: When using the nested structure, the mock directory is automatically discovered at .pmc/workflows/{name}/mocks/. The --mock-dir flag is only needed to override this default.
Mock not found:
No mock found for state 'state-name'
→ Create mocks/state-name.py
Wrong transition:
Unexpected next state
→ Check mock output matches transition conditions
.pmc/workflows/test/mock-data/
├── tickets/T99001/
│ ├── 1-definition.md
│ └── ...
└── ...
pmc run {name} \
-i ticket_id=T99001 \
-i docs_dir=.pmc/workflows/test/mock-data \
-v
# Or with explicit path:
pmc run .pmc/workflows/{name}/workflow.json \
-i ticket_id=T99001 \
-i docs_dir=.pmc/workflows/test/mock-data \
-v
Add to .pmc/workflows/registry.json:
{
"version": "1.0",
"workflows": {
"existing.workflow": { ... },
"{name}": {
"path": "{name}/workflow.json",
"description": "What this workflow does",
"tags": ["category", "tag"],
"entry_point": true
}
}
}
Note: The path is relative to the registry.json location. With the nested structure, use {name}/workflow.json.
| Field | Description |
|---|---|
| path | Relative path to JSON |
| description | Human-readable description |
| tags | Categorization tags |
| entry_point | true if top-level runnable |
"check-file": {
"type": "shell",
"command": "test -f {path} && echo 'EXISTS' || echo 'NOT_FOUND'",
"timeout": "30s",
"working_dir": "{project_root}",
"outputs": {
"file_status": "$.result"
},
"transitions": [
{"condition": {"type": "pattern", "match": "EXISTS"}, "target": "next"},
{"condition": {"type": "default"}, "target": "error"}
]
}
"create-plan": {
"type": "claude",
"prompt_file": "create-plan.md",
"session": "start",
"working_dir": "{project_root}",
"transitions": [
{"condition": {"type": "default"}, "target": "next"}
]
}
Claude State Fields:
| Field | Type | Required | Description |
|---|---|---|---|
type | "claude" | Yes | State type |
prompt | string | One of | Inline prompt template (supports variables) |
prompt_file | string | One of | Path to prompt file in prompts/ directory |
session | string | No | "start" or "continue" |
working_dir | string | No | Working directory for Claude |
outputs | object | No | JSONPath extraction to context |
memory | object | No | Memory injection config |
transitions | array | No | Transition definitions |
Prompt Options (choose one):
| Option | Use Case |
|---|---|
| prompt | Short, inline prompts |
| prompt_file | Complex prompts, easier to maintain separately |
prompt_file:
- Resolved relative to the prompts/ directory in the workflow folder
- Prompt content supports {variable} substitution
- "prompt_file": "analyze.md" loads prompts/analyze.md

Session Modes:
| Mode | Description |
|---|---|
"start" | Begin new Claude session, stores _session_id in context |
"continue" | Resume existing session using _session_id from context |
Session Example (multi-state conversation):
"states": {
"start-session": {
"type": "claude",
"prompt": "Analyze {file} and identify issues.",
"session": "start",
"outputs": {"issues": "$.issues"},
"transitions": [{"condition": {"type": "default"}, "target": "fix-issues"}]
},
"fix-issues": {
"type": "claude",
"prompt": "Fix the issues you identified.",
"session": "continue",
"outputs": {"status": "$.status"},
"transitions": [{"condition": {"type": "default"}, "target": "done"}]
}
}
Note: When using session: "continue", the Claude instance retains context from all previous states in the same session.
"run-subtask": {
"type": "workflow",
"workflow": "subtask.handler",
"inputs": {
"param": "{value}"
},
"transitions": [
{"condition": {"type": "json", "path": "$.status", "equals": "success"}, "target": "next"},
{"condition": {"type": "default"}, "target": "error"}
]
}
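The dotted name suggests sub-workflows are resolved through registry.json, so subtask.handler needs its own entry there. A hedged sketch, assuming registry resolution works this way; the path is hypothetical, and entry_point: false reflects that it is not meant to be run directly:

"subtask.handler": {
  "path": "subtask-handler/workflow.json",
  "description": "Handles a single subtask; invoked by parent workflows",
  "tags": ["subtask"],
  "entry_point": false
}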
"process-items": {
"type": "fan_out",
"items": "{item_list}",
"item_var": "item",
"concurrency": 3,
"state": {
"type": "workflow",
"workflow": "item.handler",
"inputs": {"item": "{item}"}
},
"transitions": [
{"condition": {"type": "all_success"}, "target": "done"},
{"condition": {"type": "any_failed"}, "target": "partial"}
]
}
"success": {
"type": "terminal",
"status": "success",
"message": "Completed {ticket_id}"
}
| Type | Description | Example |
|---|---|---|
| json | JSONPath match | {"type": "json", "path": "$.status", "equals": "success"} |
| pattern | Regex match | {"type": "pattern", "match": "EXISTS"} |
| exit_code | Shell exit | {"type": "exit_code", "equals": 0} |
| default | Fallback | {"type": "default"} |
| all_success | Fan-out all pass | {"type": "all_success"} |
| any_failed | Fan-out any fail | {"type": "any_failed"} |
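exit_code is the only condition type not used in the examples above. A small sketch of a shell state that branches on the command's exit status rather than parsing its output (the test command and target names are illustrative):

"run-tests": {
  "type": "shell",
  "command": "python -m pytest -q",
  "transitions": [
    {"condition": {"type": "exit_code", "equals": 0}, "target": "done"},
    {"condition": {"type": "default"}, "target": "fix"}
  ]
}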
For complete specifications, see:
- references/state-types.md
- references/transitions.md
- references/variables.md
- references/error-handling.md
# List registered workflows
pmc list
# Run workflow by registry name
pmc run <name> -i param=value
# Run workflow by path
pmc run .pmc/workflows/<name>/workflow.json -i param=value
# Validate workflow
pmc validate .pmc/workflows/<name>/workflow.json
# Dry run (no execution)
pmc run <name> --dry-run
# Mock mode (auto-discovers mocks/ in workflow directory)
pmc run <name> --mock -i param=value
# Mock mode with explicit mock directory
pmc run <name> --mock --mock-dir=<path>
# Verbose output
pmc run <name> -v
This skill should be used when the user asks to "create a hookify rule", "write a hook rule", "configure hookify", "add a hookify rule", or needs guidance on hookify rule syntax and patterns.
Create distinctive, production-grade frontend interfaces with high design quality. Use this skill when the user asks to build web components, pages, or applications. Generates creative, polished code that avoids generic AI aesthetics.