# workflow-audit
Audits SwiftUI app UI workflows in 5 layers: discovers entry points, traces flows, detects dead ends and broken promises, evaluates UX, verifies data wiring.
Install: `npx claudepluginhub terryc21/workflow-audit --plugin workflow-audit`
> **Quick Ref:** 5-layer UI workflow audit: discover entry points → trace flows → detect issues → evaluate UX → verify data wiring. Output: `.workflow-audit/` in project root.
You are performing a systematic workflow audit on this SwiftUI application.
Required output: Every finding MUST include Urgency, Risk, ROI, and Blast Radius ratings using the Issue Rating Table format. Do not omit these ratings.
| Command | Description |
|---|---|
| `/workflow-audit` | Full 5-layer audit |
| `/workflow-audit layer1` | Discovery only — find all entry points |
| `/workflow-audit layer2` | Trace — trace critical paths |
| `/workflow-audit layer3` | Issues — detect problems across codebase |
| `/workflow-audit layer4` | Evaluate — assess user impact |
| `/workflow-audit layer5` | Data wiring — verify real data usage |
| `/workflow-audit trace "A → B → C"` | Trace a specific user flow path |
| `/workflow-audit diff` | Compare current findings against previous audit |
| `/workflow-audit fix` | Generate fixes for found issues |
| `/workflow-audit status` | Show audit progress and remaining issues |
The Workflow Audit uses a 5-layer approach:
| Layer | Purpose | Output |
|---|---|---|
| Layer 1 | Pattern Discovery - Find all UI entry points | Entry point inventory |
| Layer 2 | Flow Tracing - Trace critical paths in depth | Detailed flow traces |
| Layer 3 | Issue Detection - Categorize issues across codebase | Issue catalog |
| Layer 4 | Semantic Evaluation - Evaluate from user perspective | UX impact analysis |
| Layer 5 | Data Wiring - Verify features use real data | Data integrity report |
Read these files for methodology and patterns (paths relative to this skill's directory):

- `agents/README.md` - Overview and quick start
- `agents/layer1-patterns.md` - Discovery regex patterns
- `agents/layer2-methodology.md` - Flow tracing process
- `agents/layer3-issue-detection.md` - Issue categories
- `agents/layer4-semantic-evaluation.md` - User impact analysis
- `agents/layer5-data-wiring.md` - Data integrity methodology

For templates and examples:

- `agents-skill/templates/` - YAML templates for each layer
- `agents-skill/examples/` - Good and bad patterns

Note: These paths are relative to the skill directory (`~/.claude/skills/workflow-audit/`). When reading these files, resolve from the skill's installed location, not the current working directory.
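The resolution rule above can be sketched in shell. This is illustrative only: `SKILL_DIR` is a hypothetical override variable, not something the skill defines; the default matches the documented install path.

```shell
# Resolve reference files from the skill's installed location,
# never from the current working directory.
# SKILL_DIR is a hypothetical override; default is the documented path.
SKILL_DIR="${SKILL_DIR:-$HOME/.claude/skills/workflow-audit}"

methodology="$SKILL_DIR/agents/layer2-methodology.md"
templates="$SKILL_DIR/agents-skill/templates"
```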
| Category | Severity | Description |
|---|---|---|
| Dead End | 🔴 CRITICAL | Entry point leads nowhere |
| Wrong Destination | 🔴 CRITICAL | Entry point leads to wrong place |
| Mock Data | 🔴 CRITICAL | Shows fabricated data when real data exists |
| Destructive Without Confirm | 🔴 CRITICAL | Delete/clear with no confirmation dialog |
| Silent State Reset | 🔴 CRITICAL | In-progress work lost on navigate away |
| Incomplete Navigation | 🟡 HIGH | Must scroll/search after landing |
| Missing Auto-Activation | 🟡 HIGH | Expected mode/state not set |
| Unwired Data | 🟡 HIGH | Model data exists but feature ignores it |
| Platform Parity Gap | 🟡 HIGH | Works on one platform, broken on another |
| Promise-Scope Mismatch | 🟡 HIGH | Specific CTA opens generic destination |
| Buried Primary Action | 🟡 HIGH | Primary button hidden below scroll fold |
| Dismiss Trap | 🟡 HIGH | Only Cancel/back visible, no forward path |
| Context Dropping | 🟡 HIGH | Item context lost between platforms/notifs |
| Notification Nav Fragility | 🟡 HIGH | Untyped dict used for navigation context |
| Sheet Presentation Asymm | 🟡 HIGH | Different sheet mechanisms per platform |
| Empty State Missing | 🟡 HIGH | Blank screen when list empty |
| Error Recovery Missing | 🟡 HIGH | Error shown but no retry or recovery path |
| Keyboard Obscures Input | 🟡 HIGH | TextField covered by keyboard, no scroll |
| Permission Denied Dead End | 🟡 HIGH | Denied with no path to Settings |
| Modal Stacking | 🟡 HIGH | Multiple sheets/alerts stacked |
| Nav Container Mismatch | 🟡 HIGH | Selection tag invalid for current container |
| Two-Step Flow | 🟢 MEDIUM | Intermediate selection required |
| Missing Feedback | 🟢 MEDIUM | No confirmation of success |
| Gesture-Only Action | 🟢 MEDIUM | Only accessible via swipe/long-press |
| Loading State Trap | 🟢 MEDIUM | Spinner with no cancel/timeout/escape |
| Stale Navigation Context | 🟢 MEDIUM | Cached context never cleared/validated |
| Phantom Touch Target | 🟢 MEDIUM | Looks tappable but has no action |
| Race Condition UX | 🟢 MEDIUM | Conflicting ops triggered simultaneously |
| Invisible Selection | 🟢 MEDIUM | Selected/active but no visual indicator |
| Inconsistent Pattern | ⚪ LOW | Same feature accessed differently |
| Orphaned Code | ⚪ LOW | Feature exists but no entry point |
| Double-Nested Navigation | ⚪ LOW | NavigationStack inside NavigationStack |
When a button/card says "Do X", tapping it should DO X. Not "go somewhere you might find X."
If user's context implies a specific item, skip pickers.
When navigating to a feature, set up the expected state.
Same feature should be accessed the same way everywhere.
If the app tracks data relevant to a feature, the feature must use it. Never show mock/hardcoded data when real user data exists. Never ignore model relationships that would improve decisions.
The primary action must be visible without scrolling after the user completes the key interaction. Pin Save/Continue/Done buttons outside ScrollView or in toolbar. Never bury them below tall content.
Every view must have a visible way to go forward OR back. Cancel alone is not enough after user completes a step.
Every action available via gesture (swipe, long-press) should also be accessible via a visible button or menu.
Base all findings on current source code only. Do not read or reference
files in .agents/, scratch/, or prior audit reports. Ignore cached
findings from auto-memory or previous sessions. Every finding must come
from scanning the actual codebase as it exists now.
On first invocation, ask the user two questions in a single AskUserQuestion call:
Question 1: "What's your experience level with Swift/SwiftUI?"
Question 2: "Would you like a brief explanation of what this skill does?"
Experience-adapted explanations:
Store as USER_EXPERIENCE. Apply to ALL output for the session.
Experience-level auto-apply:
- Beginner: auto-enable `--explain` (user impact explanations), default sort to `impact`
- Senior/Expert: default sort to `effort`

| Output Element | Beginner | Intermediate | Experienced | Senior/Expert |
|---|---|---|---|---|
| Skill intro | Full paragraph | 2-3 sentences | One line | Skip |
| `--explain` | Auto-enabled | Off (suggested) | Off | Off |
| Progress banner | Full with hints | Full with hints | Compact (no hints) | One-line status |
| Finding text | Plain language + "why it matters" | Standard terminology | file:line + description | file:line only |
| Sort default | `--sort impact` | `--sort urgency` | `--sort urgency` | `--sort effort` |
| Design citations | Always cite principle | On non-obvious only | Never | Never |
| Post-fix summary | Full before/after | Brief | Skip | Skip |
See radar-suite-core.md for: Session Persistence, Checkpoint & Resume, Accepted Risks, Wave-Based Fix Presentation, Fix-Forward Bias, Test Hygiene, Plain Language Communication, Work Receipts, Contradiction Detection, Finding Classification, Audit Methodology, Context Exhaustion, Progress Banner, Issue Rating Tables, Known-Intentional Suppression, Pattern Reintroduction Detection, Experience-Level Output Rules, Implementation Sort Algorithm, Handoff YAML schema, Opt-Out.
Path note: workflow-audit uses `.workflow-audit/` instead of `.radar-suite/` for all persistent files (session-prefs.yaml, checkpoint.yaml, known-intentional.yaml, ledger.yaml).
When invoked, perform the workflow audit:
Run all 5 layers sequentially, outputting findings to .workflow-audit/ in the project root
Per-mode behavior:

- layer1: run discovery greps such as `grep -r "activeSheet = \." Sources/`, `grep -r "selectedSection = \." Sources/`, `grep -r "PromotionCard\|CompactPromotionCard" Sources/`, and `grep -r "\.contextMenu" Sources/`; output `layer1-inventory.yaml`
- layer2: output `layer2-traces/flow-XXX.yaml`
- trace (e.g. `trace "Dashboard → Add Item → Photo → Save"`): targeted flow trace — trace a specific user journey described in natural language. Parse the path into steps (split on `→`, `->`, or `,`), then `grep -r` for button labels, sheet cases, and navigation destinations
- layer3: output `layer3-results.yaml`
- layer4: follow `layer4-semantic-evaluation.md`; write `.workflow-audit/persona-handoff.yaml` (see Persona Handoff section)
- layer5: output `layer5-data-wiring.yaml`
- diff: compare the current codebase against the previous audit to show what changed:
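The step-splitting can be sketched as a small shell helper. This is a sketch only; `parse_flow` is a hypothetical name, not part of the skill.

```shell
# Split a flow path like "Dashboard → Add Item -> Save" into one
# step per line, accepting →, ->, or , as separators.
parse_flow() {
  printf '%s\n' "$1" | awk '{ gsub(/ *(→|->|,) */, "\n"); print }'
}

# Prints each step on its own line; each step can then be grepped
# for button labels, sheet cases, and navigation destinations.
parse_flow "Dashboard → Add Item -> Save"
```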
Read `.workflow-audit/layer3-results.yaml` and `.workflow-audit/handoff.yaml` from the previous audit, then print:

Audit Diff: <previous date> → <current date>
✅ Resolved: <count> issues fixed since last audit
🔴 Still Open: <count> issues remain
🆕 New: <count> new issues detected
📁 Changed: <count> files modified since audit (may need re-verification)
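One rough way the resolved/new counts above could be derived, assuming findings are matched on their `finding:` text between the archived and current `layer3-results.yaml` (the skill's actual matching logic may differ; `audit_diff_counts` is a hypothetical helper):

```shell
# Count resolved vs. new findings between two layer3-results.yaml
# snapshots by comparing the literal finding: lines.
audit_diff_counts() {
  prev_sorted=$(grep -o 'finding: .*' "$1" | sort -u)
  curr_sorted=$(grep -o 'finding: .*' "$2" | sort -u)
  # Lines present only in the previous snapshot were resolved;
  # lines present only in the current snapshot are new.
  resolved=$(printf '%s\n' "$prev_sorted" | grep -Fxv "$curr_sorted" | grep -c '^finding')
  new=$(printf '%s\n' "$curr_sorted" | grep -Fxv "$prev_sorted" | grep -c '^finding')
  echo "✅ Resolved: $resolved issues fixed since last audit"
  echo "🆕 New: $new new issues detected"
}
```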
fix mode: read `layer3-results.yaml` and `layer5-data-wiring.yaml` for unfixed issues.

CRITICAL FORMATTING RULE: The Issue Rating Table below IS the output. Do NOT create separate sections for "Critical Issues", "Data Wiring Issues", "Recommendations", or any other vertical breakdown of findings. Every finding — navigation issues, data wiring issues, orphaned code, missing feedback, design violations — goes into ONE table as ONE row. Context goes in the Finding column. No exceptions.
Before rendering, check terminal width with tput cols. If under 100 columns, use the compact 4-column table inline (# / Finding / Urgency / Effort) and write the full 8-column table to the report file only. If the table renders as vertical blocks instead of horizontal rows, tell the user: "The rating table needs a wider terminal to display correctly. Try widening your window or using full-screen mode."
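The width check above might look like this in shell. A sketch only: 100 is the documented threshold, while the fallback of 80 columns when `tput` has no terminal and the name `pick_table_layout` are assumptions.

```shell
# Choose the table layout from the terminal width. Falls back to
# 80 columns when tput cannot detect a terminal (e.g. piped output).
pick_table_layout() {
  cols=${1:-$(tput cols 2>/dev/null || echo 80)}
  if [ "$cols" -lt 100 ]; then
    echo compact   # 4-column table inline; 8-column table to report file
  else
    echo full      # 8-column table inline
  fi
}
```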
After completing the audit, provide:
- `/plan --workflow-audit` if fixes are needed

That's it. Three items. No other sections.
Reference: See `radar-suite-core.md` for full column definitions, indicator scale, and sorting rules.
Hard formatting rule — Table, not list: ALL findings MUST be in a single markdown table. Each finding is ONE ROW. Ratings are COLUMNS read left-to-right. Never expand findings into individual sections, vertical blocks, or bullet-pointed ratings. Do NOT create separate headed sections for categories of findings (e.g., "Data Wiring Issues", "Critical Issues", "Orphaned Views"). ALL categories go in the same table. The Finding column carries the context.
All findings MUST be presented in this format, sorted by Urgency then ROI:
| # | Finding | Urgency | Risk:Fix | Risk:NoFix | ROI | Blast | Effort |
|---|---|---|---|---|---|---|---|
| 1 | Dead end: "View Warranty" → empty sheet | 🔴 Critical | ⚪ Low | 🔴 Crit | 🟠 Exc | 🟢 2f | Trivial |
| 2 | Promise-scope: "Track Price" opens generic list | 🟡 High | 🟢 Med | 🟡 High | 🟠 Exc | 🟡 4f | Small |
Use the Issue Rating scale (see `radar-suite-core.md` for the indicator definitions).
If the user passes --explain (or the project's CLAUDE.md includes explain-findings: true), append a brief explanation for each finding after the Issue Rating Table. See radar-suite-core.md "User Impact Explanations" for the exact format and rules.
After presenting audit results, always print:
💡 To generate a phased fix plan from these findings, run: /plan --workflow-audit
💡 Re-sort: --sort effort (easiest first) · --sort impact (most visible first) · --sort implement (build order)
💡 Explain findings: --explain (adds what's wrong / fix / user experience for each finding)
After completing all layers (full audit) or fix mode, generate .workflow-audit/handoff.yaml for consumption by the planning skill.
Regenerate it whenever fix mode completes (this refreshes the brief with current state):

```yaml
# Handoff Brief — generated by workflow-audit
# Consumed by /plan --workflow-audit
project: <project name from directory>
audit_date: <ISO 8601 date>
source_files_scanned: <count>
summary:
  total_issues: <count>
  critical: <count>
  high: <count>
  medium: <count>
  low: <count>
file_timestamps:
  <file path>: "<ISO 8601 mod date>"
  # one entry per unique file referenced in issues[]
issues:
  - id: <sequential number>
    finding: "<description>"
    category: <dead_end|wrong_destination|mock_data|destructive_no_confirm|silent_state_reset|incomplete_navigation|missing_activation|unwired_data|platform_gap|promise_scope_mismatch|buried_primary_action|dismiss_trap|context_dropping|notif_nav_fragility|sheet_asymmetry|empty_state_missing|error_recovery_missing|keyboard_obscures|permission_dead_end|modal_stacking|nav_container_mismatch|two_step_flow|missing_feedback|gesture_only_action|loading_state_trap|stale_nav_context|phantom_touch_target|race_condition_ux|invisible_selection|inconsistent_pattern|orphaned_code|double_nested_nav>
    urgency: <critical|high|medium|low>
    risk_fix: <critical|high|medium|low>
    risk_no_fix: <critical|high|medium|low>
    roi: <excellent|good|marginal|poor>
    blast_radius: "<description, e.g. '1 file' or '4 files'>"
    fix_effort: <trivial|small|medium|large>
    files:
      - <file path>
    suggested_fix: "<what to do, not how>"
    group_hint: "<optional grouping suggestion, e.g. 'missing_confirmations'>"
```
For each unique file path referenced across all issues, record its modification date at audit time. This enables the planning skill to detect staleness — if a file changed after the audit, affected issues may need re-verification.
```shell
# Get file mod date (macOS)
stat -f "%Sm" -t "%Y-%m-%dT%H:%M:%SZ" "<file path>"
```
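The single-file command above generalizes to emitting the whole `file_timestamps:` block. A sketch, assuming a POSIX shell; the GNU `date -r` fallback (so it also runs off macOS, where BSD `stat -f` is unavailable) and the name `emit_file_timestamps` are assumptions.

```shell
# Emit the file_timestamps: block for every file referenced in issues[].
# Tries BSD/macOS stat first, then falls back to GNU date -r.
emit_file_timestamps() {
  printf 'file_timestamps:\n'
  for f in "$@"; do
    ts=$(stat -f "%Sm" -t "%Y-%m-%dT%H:%M:%SZ" "$f" 2>/dev/null) \
      || ts=$(date -u -r "$f" +"%Y-%m-%dT%H:%M:%SZ")
    printf '  %s: "%s"\n' "$f" "$ts"
  done
}
```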
Optional field suggesting how the planning skill might batch issues:

- Issues sharing the same `group_hint` are candidates for a single task
- Example values: `missing_confirmations`, `missing_feedback`, `orphaned_features`, `dead_code`, `platform_parity`

After completing Layer 4 (full audit or standalone layer4), write `.workflow-audit/persona-handoff.yaml` for consumption by ui-path-radar (if installed):
```yaml
source: workflow-audit
version: <skill version>
timestamp: <ISO 8601>
project: <project name>
personas:
  - name: "Warranty Tracker"
    goal: "Never miss a warranty deadline"
    key_workflows:
      - "Add item with warranty"
      - "See expiring warranties"
    evaluation:
      discovery: 5   # 1-5 star rating
      efficiency: 4
      feedback: 5
      recovery: 5
    issues_found:
      - finding_ref: 3   # issue id from handoff.yaml
        impact: "Breaks trust in promotion cards"
evaluation_matrix:
  - workflow: "Add Item"
    discovery: 5
    efficiency: 4
    feedback: 5
    recovery: 5
checks_performed:
  categories_scanned:   # all 32 category keys
    - dead_end
    - wrong_destination
    - mock_data
    - destructive_no_confirm
    - silent_state_reset
    # ... (all 32)
  persona_evaluation: true
  personas_defined: <count>
```
When to generate: After Layer 4 completes (full audit or standalone layer4 invocation). Not generated for individual layer1/layer2/layer3 runs.
If ui-path-radar is not installed: The file is still written. It costs nothing and will be consumed if ui-path-radar is installed later.
Before starting Layer 3, read companion handoffs (if they exist):
- Read `.agents/ui-audit/ui-path-radar-handoff.yaml` (if it exists)
- Read `.radar-suite/ui-path-radar-handoff.yaml` (if it exists)
If found:
- Use `checks_performed.categories_scanned` to see which categories ui-path-radar already checked
- Tag findings it corroborates with `[via ui-path-radar]`

If not found: proceed normally. No change to audit behavior.
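The lookup is a simple existence check across the two documented locations; a sketch (`find_radar_handoff` is a hypothetical name):

```shell
# Look for a ui-path-radar handoff in either documented location;
# first match wins. Prints nothing when neither file exists.
find_radar_handoff() {
  for p in .agents/ui-audit/ui-path-radar-handoff.yaml \
           .radar-suite/ui-path-radar-handoff.yaml; do
    if [ -f "$p" ]; then
      echo "$p"
      return 0
    fi
  done
  return 1   # not found: proceed normally, no change to audit behavior
}
```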