Programmatic access to workflow chain operations including validation, metrics, and visualization. Provides APIs for recording workflow runs, tracking step durations, calculating success rates, and identifying bottlenecks. Use when building custom workflow orchestration, analyzing multi-agent performance, or integrating chain tracking into automation. Do NOT use for simple single-agent tasks or when you just want to run an existing workflow - use the workflow command directly instead.
/plugin marketplace add jrc1883/popkit-claude
/plugin install popkit@popkit-marketplace

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Provides programmatic access to workflow chain operations for agents and skills. Use this skill when you need to validate chains, record metrics, or generate visualizations.
Core principle: Understand and track workflow execution for continuous improvement.
Trigger: When working with multi-agent workflows or analyzing workflow performance.
Invoke this skill when you need to validate workflow chains, record run metrics, or generate workflow visualizations.
Check if workflow definitions are valid:
# Run the chain validator
python hooks/chain-validator.py
This reports validation results for each configured workflow.
# Read workflow configurations
python -c "
import json
with open('agents/config.json') as f:
    config = json.load(f)
print(json.dumps(config.get('workflows', {}), indent=2))
"
When starting a workflow:
# Start a new run
echo '{"operation": "start_run", "workflow_id": "feature-dev", "workflow_name": "7-Phase Feature Development"}' | python hooks/chain-metrics.py
# Returns: {"status": "success", "run_id": "abc123"}
When completing a step:
# Record step completion
echo '{"operation": "record_step", "run_id": "abc123", "step_id": "exploration", "step_name": "Exploration", "agent": "code-explorer", "step_status": "completed", "duration_ms": 135000, "confidence": 85}' | python hooks/chain-metrics.py
When completing the workflow:
# Complete the run
echo '{"operation": "complete_run", "run_id": "abc123", "run_status": "completed"}' | python hooks/chain-metrics.py
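The three lifecycle operations above can be wrapped in a small helper instead of shelling out with echo each time. This is a sketch, assuming hooks/chain-metrics.py reads one JSON object on stdin and prints a JSON reply on stdout, as in the examples; the helper names are illustrative, not part of the hook's API.

```python
import json
import subprocess


def build_payload(operation, **fields):
    """Build the JSON object chain-metrics.py expects on stdin."""
    return {"operation": operation, **fields}


def chain_metrics(operation, **fields):
    """Pipe one operation to the metrics hook and return its parsed JSON reply."""
    proc = subprocess.run(
        ["python", "hooks/chain-metrics.py"],
        input=json.dumps(build_payload(operation, **fields)),
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)


# Example lifecycle (uncomment once hooks/chain-metrics.py is on disk):
# result = chain_metrics("start_run", workflow_id="feature-dev",
#                        workflow_name="7-Phase Feature Development")
# run_id = result["run_id"]
# chain_metrics("complete_run", run_id=run_id, run_status="completed")
```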
Get workflow statistics:
# Get stats for a workflow
echo '{"operation": "get_stats", "workflow_id": "feature-dev"}' | python hooks/chain-metrics.py
Get recent runs:
# Get last 10 runs
echo '{"operation": "get_recent", "workflow_id": "feature-dev", "limit": 10}' | python hooks/chain-metrics.py
# Run validator with visualization
python -c "
import sys
sys.path.insert(0, 'hooks')
from chain_validator import ChainValidator
validator = ChainValidator()
for workflow_id in validator.config.get('workflows', {}).keys():
    print(validator.get_workflow_visualization(workflow_id))
    print()
"
Metrics are stored in ~/.claude/chain-metrics.json:
{
  "version": "1.0.0",
  "runs": [
    {
      "run_id": "abc123",
      "workflow_id": "feature-dev",
      "started_at": "2025-01-28T10:00:00Z",
      "ended_at": "2025-01-28T10:12:30Z",
      "status": "completed",
      "steps": [
        {
          "step_id": "exploration",
          "step_name": "Exploration",
          "agent": "code-explorer",
          "status": "completed",
          "duration_ms": 135000,
          "confidence": 85
        }
      ],
      "total_duration_ms": 750000
    }
  ],
  "aggregates": {
    "feature-dev": {
      "total_runs": 15,
      "successful_runs": 13,
      "success_rate": 86.7,
      "avg_duration_ms": 750000,
      "step_metrics": {},
      "bottlenecks": []
    }
  }
}
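The stored aggregates can be recomputed from the raw runs list. A minimal sketch of how success_rate and avg_duration_ms might be derived from the schema above (the exact rounding and bottleneck heuristics inside chain-metrics.py may differ):

```python
def aggregate(runs, workflow_id):
    """Recompute headline aggregates for one workflow from raw run records."""
    mine = [r for r in runs if r.get("workflow_id") == workflow_id]
    done = [r for r in mine if r.get("status") == "completed"]
    durations = [r["total_duration_ms"] for r in mine if "total_duration_ms" in r]
    return {
        "total_runs": len(mine),
        "successful_runs": len(done),
        # Percentage of runs that completed, rounded to one decimal place
        "success_rate": round(100 * len(done) / len(mine), 1) if mine else 0.0,
        "avg_duration_ms": sum(durations) / len(durations) if durations else 0,
    }
```

With 13 completed runs out of 15, this yields a success_rate of 86.7, matching the stored aggregate.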
When an agent is part of a workflow:

Before agent execution: record_step with step_status "running".
After agent completion: record_step with step_status "completed", including duration_ms and confidence.
On failure: record_step with step_status "failed".
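This record-running / record-terminal pattern can be expressed as a context manager that times the step and always records an outcome. A sketch, assuming a record_step(run_id, step_id, step_name, ...) helper exists (hypothetical name, mirroring the JSON fields used above):

```python
import time
from contextlib import contextmanager


@contextmanager
def tracked_step(record_step, run_id, step_id, step_name, agent=None):
    """Record 'running' on entry and 'completed'/'failed' plus duration on exit."""
    record_step(run_id, step_id, step_name, agent=agent, step_status="running")
    start = time.monotonic()
    try:
        yield
    except Exception:
        # Record the failure with elapsed time, then let the error propagate
        elapsed = int((time.monotonic() - start) * 1000)
        record_step(run_id, step_id, step_name, agent=agent,
                    step_status="failed", duration_ms=elapsed)
        raise
    else:
        elapsed = int((time.monotonic() - start) * 1000)
        record_step(run_id, step_id, step_name, agent=agent,
                    step_status="completed", duration_ms=elapsed)
```

Usage: wrap the agent invocation in `with tracked_step(record_step, run_id, "exploration", "Exploration", agent="code-explorer"):` so failures are tracked without extra bookkeeping at each call site.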
# Pseudo-code for tracking a feature-dev workflow
# 1. Start the workflow
run_id = start_run("feature-dev", "Feature: User Authentication")
# 2. Discovery phase (no agent)
record_step(run_id, "discovery", "Discovery", status="completed", duration_ms=45000)
# 3. Exploration phase (code-explorer agent)
record_step(run_id, "exploration", "Exploration",
            agent="code-explorer", status="completed",
            duration_ms=135000, confidence=85)
# 4. Continue through phases...
# 5. Complete the workflow
complete_run(run_id, "completed")
To identify bottlenecks:
# Get aggregates and find slow steps
cat ~/.claude/chain-metrics.json | python -c "
import json, sys
data = json.load(sys.stdin)
for wid, agg in data.get('aggregates', {}).items():
    print(f'{wid}:')
    print(f'  Success rate: {agg.get(\"success_rate\", 0)}%')
    print(f'  Avg duration: {agg.get(\"avg_duration_ms\", 0) / 1000:.1f}s')
    if agg.get('bottlenecks'):
        print('  Bottlenecks:')
        for b in agg['bottlenecks']:
            print(f'    - {b[\"step_id\"]}: {b[\"avg_ms\"] / 1000:.1f}s')
"
Related resources:

/popkit:workflow-viz command - User-facing visualization
chain-validator.py hook - Validation logic
chain-metrics.py hook - Metrics tracking
agents/config.json - Workflow definitions