Visualize planned code changes before implementation, with risk analysis, execution order, and impact metrics. Use when reviewing plans, comparing before/after architecture, assessing risk, or analyzing execution order and impact.
Install:

```
/plugin marketplace add yonatangross/orchestkit
/plugin install orkl@orchestkit
```

This skill is limited to using the following bundled files:

- assets/impact-dashboard.md
- assets/plan-report.md
- assets/tier1-header.md
- references/blast-radius-patterns.md
- references/change-manifest-patterns.md
- references/decision-log-patterns.md
- references/deep-dives.md
- references/execution-swimlane-patterns.md
- references/risk-dashboard-patterns.md
- references/visualization-tiers.md
- rules/_sections.md
- rules/_template.md
- rules/section-rendering.md
- scripts/analyze-impact.sh
- scripts/detect-plan-context.sh

Render planned changes as structured ASCII visualizations with risk analysis, execution order, and impact metrics. Every section answers a specific reviewer question.
Core principle: Encode judgment into visualization, not decoration.
```
/ork:plan-viz                           # Auto-detect from current branch
/ork:plan-viz billing module redesign   # Describe the plan
/ork:plan-viz #234                      # Pull from GitHub issue
```
First, attempt auto-detection by running scripts/detect-plan-context.sh:
```bash
bash "$SKILL_DIR/scripts/detect-plan-context.sh"
```
This outputs branch name, issue number (if any), commit count, and file change summary.
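As a rough sketch of the kind of detection this step relies on, an issue number can be pulled out of a conventional branch name. The branch name and numbering convention here are illustrative; the bundled detect-plan-context.sh may use different logic:

```shell
# Illustrative only: extract the first issue number embedded in a branch name.
branch="feature/234-billing-redesign"   # hypothetical branch name
issue=$(printf '%s' "$branch" | grep -oE '[0-9]+' | head -n1)
echo "branch=$branch issue=#${issue:-none}"
```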
If auto-detection finds a clear plan (branch with commits diverging from main, or issue number in args), proceed to Step 1.
If ambiguous, clarify with AskUserQuestion:
```python
AskUserQuestion(
    questions=[{
        "question": "What should I visualize?",
        "header": "Source",
        "options": [
            {"label": "Current branch changes (Recommended)", "description": "Auto-detect from git diff against main"},
            {"label": "Describe the plan", "description": "I'll explain what I'm planning to change"},
            {"label": "GitHub issue", "description": "Pull plan from a specific issue number"},
            {"label": "Quick file diff only", "description": "Just show the change manifest, skip analysis"}
        ],
        "multiSelect": false
    }]
)
```
Run scripts/analyze-impact.sh for precise counts:
```bash
bash "$SKILL_DIR/scripts/analyze-impact.sh"
```
This produces: files by action (add/modify/delete), line counts, test files affected, and dependency changes.
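A minimal sketch of the kind of aggregation analyze-impact.sh performs, here over hard-coded `git diff --numstat`-style input (added, removed, path per line), since the script's actual implementation isn't shown:

```shell
# Hypothetical input in `git diff --numstat` format: added removed path.
diff_numstat='12 3 src/billing/invoice.ts
45 0 src/billing/tax.ts
0 20 src/legacy/rates.ts'
# Sum additions/removals and count files touched.
summary=$(printf '%s\n' "$diff_numstat" | awk '{a+=$1; r+=$2; f++} END {printf "%d files, +%d -%d", f, a, r}')
echo "$summary"
```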
For architecture-level understanding, spawn an Explore agent on the affected directories:
```python
Task(
    subagent_type="Explore",
    prompt="Explore the architecture of {affected_directories}. Return: component diagram, key data flows, health scores per module. Use the ascii-visualizer skill for diagrams.",
    model="haiku"
)
```
Use assets/tier1-header.md template. See references/visualization-tiers.md for field computation (risk level, confidence, reversibility).
```text
PLAN: {plan_name} ({issue_ref}) | {phase_count} phases | {file_count} files | +{added} -{removed} lines
Risk: {risk_level} | Confidence: {confidence} | Reversible until {last_safe_phase}
Branch: {branch} -> {base_branch}
[1] Changes [2] Execution [3] Risks [4] Decisions [5] Impact [all]
```
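Filled in, the Tier 1 header might read as follows (all values are illustrative, not computed from a real plan):

```text
PLAN: billing-redesign (#234) | 3 phases | 14 files | +820 -310 lines
Risk: MEDIUM | Confidence: HIGH | Reversible until phase 2
Branch: feature/234-billing-redesign -> main
[1] Changes [2] Execution [3] Risks [4] Decisions [5] Impact [all]
```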
```python
AskUserQuestion(
    questions=[{
        "question": "Which sections to render?",
        "header": "Sections",
        "options": [
            {"label": "All sections", "description": "Full visualization with all 5 core sections"},
            {"label": "Changes + Execution", "description": "File diff tree and execution swimlane"},
            {"label": "Risks + Decisions", "description": "Risk dashboard and decision log"},
            {"label": "Impact only", "description": "Just the numbers: files, lines, tests, API surface"}
        ],
        "multiSelect": false
    }]
)
```
Render each requested section following rules/section-rendering.md conventions. Use the corresponding reference for ASCII patterns:
| Section | Reference | Key Convention |
|---|---|---|
| [1] Change Manifest | change-manifest-patterns.md | [A]/[M]/[D] + +N -N per file |
| [2] Execution Swimlane | execution-swimlane-patterns.md | === active, --- blocked, \| deps |
| [3] Risk Dashboard | risk-dashboard-patterns.md | Reversibility timeline + 3 pre-mortems |
| [4] Decision Log | decision-log-patterns.md | ADR-lite: Context/Decision/Alternatives/Tradeoff |
| [5] Impact Summary | assets/impact-dashboard.md | Table: Added/Modified/Deleted/NET + tests/API/deps |
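For instance, a Change Manifest entry following the [A]/[M]/[D] convention might render like this (file names and counts are made up; the authoritative layout is in change-manifest-patterns.md):

```text
[1] CHANGE MANIFEST
  [A] src/billing/tax.ts        +80   -0
  [M] src/billing/invoice.ts   +120  -45
  [D] src/legacy/rates.ts        +0 -210
```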
After rendering, offer next steps:
```python
AskUserQuestion(
    questions=[{
        "question": "What next?",
        "header": "Actions",
        "options": [
            {"label": "Write to designs/", "description": "Save as designs/{branch}.md for PR review"},
            {"label": "Generate GitHub issues", "description": "Create issues from execution phases with labels and milestones"},
            {"label": "Drill deeper", "description": "Expand blast radius, cross-layer check, or migration checklist"},
            {"label": "Done", "description": "Plan visualization complete"}
        ],
        "multiSelect": false
    }]
)
```
Write to file: Save full report to designs/{branch-name}.md using assets/plan-report.md template.
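One detail worth handling when saving to designs/{branch-name}.md: branch names often contain slashes, which would create nested directories. A sketch of sanitizing the name — the exact scheme is an assumption, not taken from plan-report.md:

```shell
# Hypothetical sanitization: replace "/" in the branch name with "-"
# so the report lands directly under designs/. Bash-specific substitution.
branch="feature/234-billing-redesign"   # illustrative branch name
report="designs/${branch//\//-}.md"
echo "$report"
```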
Generate issues: For each execution phase, create a GitHub issue with title [{component}] {phase_description}, labels (component + risk:{level}), milestone, body from plan sections, and blocked-by references.
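The title and label scheme above can be sketched like this; the composed `gh issue create` invocation is printed rather than executed (running it needs the GitHub CLI and auth), and all values are hypothetical:

```shell
component="billing"                          # hypothetical component
phase_description="extract tax calculation"  # hypothetical phase
level="medium"                               # hypothetical risk level
title="[${component}] ${phase_description}"
# Print the command instead of running it.
echo "gh issue create --title \"$title\" --label \"$component\" --label \"risk:$level\""
```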
Available when user selects "Drill deeper". See references/deep-dives.md for cross-layer and migration patterns.
| Section | What It Shows | Reference |
|---|---|---|
| [6] Blast Radius | Concentric rings of impact (direct -> transitive -> tests) | blast-radius-patterns.md |
| [7] Cross-Layer Consistency | Frontend/backend endpoint alignment with gap detection | deep-dives.md |
| [8] Migration Checklist | Ordered runbook with sequential/parallel blocks and time estimates | deep-dives.md |
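As an illustration of the concentric-ring idea behind the Blast Radius section (file names and counts invented; real layout in blast-radius-patterns.md):

```text
┌─ tests (9 files) ───────────────────────────┐
│ ┌─ transitive (5 files) ──────────────────┐ │
│ │ ┌─ direct (2 files) ──────────────────┐ │ │
│ │ │ src/billing/invoice.ts              │ │ │
│ │ │ src/billing/tax.ts                  │ │ │
│ │ └─────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────┘ │
└─────────────────────────────────────────────┘
```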
| Principle | Application |
|---|---|
| Progressive disclosure | Tier 1 header always, sections on request |
| Judgment over decoration | Every section answers a reviewer question |
| Precise over estimated | Use scripts for file/line counts |
| Honest uncertainty | Confidence levels, pre-mortems, tradeoff costs |
| Actionable output | Write to file, generate issues, drill deeper |
| Anti-slop | No generic transitions, no fake precision, no unused sections |
| Rule | Impact | What It Covers |
|---|---|---|
| section-rendering | HIGH | Rendering conventions for all 5 core sections |
| ASCII diagrams | MEDIUM | Via ascii-visualizer skill (box-drawing, file trees, workflows) |
Related skills:

- ork:implement - Execute planned changes
- ork:explore - Understand current architecture
- ork:assess - Evaluate complexity and risks