From radar-suite
Audits end-to-end application workflows for bugs, data safety, performance, and round-trip completeness. Discovers workflows, traces natural-language user flows, and rolls up cross-cutting issues.
Install: `npx claudepluginhub terryc21/radar-suite --plugin radar-suite`

This skill uses the workspace's default tool permissions.
This skill audits application workflows for bugs, data-safety issues, performance problems, and data round-trip completeness. It operates in three steps:
| Command | Description |
|---|---|
| `/roundtrip-radar` | Start with Step 0 (discover), then prompt for Step 1 |
| `/roundtrip-radar discover` | Run Step 0 only — find all workflows |
| `/roundtrip-radar [WORKFLOW]` | Run Step 1 for a specific workflow |
| `/roundtrip-radar rollup` | Run Step 2 — cross-cutting analysis |
| `/roundtrip-radar trace "A → B → C"` | Trace a specific user flow path (see below) |
| `/roundtrip-radar diff` | Compare findings against previous audit |
| `--show-suppressed` | Show findings suppressed by known-intentional entries |
| `--accept-intentional` | Mark current finding as known-intentional (not a bug) |
Targeted flow tracing — trace a specific user journey described in natural language.
/roundtrip-radar trace "Dashboard → Add Item → Photo → Save"
/roundtrip-radar trace "Settings, Export, CSV, Email"
The path is split on `→`, `->`, or `,` into discrete steps. Example output:

Trace: Dashboard → Add Item → Photo → Save
| Step | Action | File | Lines | Data In | Data Out | Finding |
|------|--------|------|-------|---------|----------|---------|
| 1 | Dashboard tap "Add" | DashboardView.swift | 142-145 | — | activeSheet = .addItem | ok |
| 2 | Add Item sheet presents | AddItemView.swift | 1-50 | Item.draft | item.title, item.category | ok |
| 3 | Photo picker | PhotoPicker.swift | 23-89 | item.id | PhotoAttachment | ⚠️ orientation lost |
| 4 | Save item | ItemViewModel.swift | 112-134 | item + attachments | modelContext.save() | ok |
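The separator handling above can be sketched as a small Python helper (illustrative only, not part of the skill itself): the same path parses identically whether the user writes arrows, ASCII arrows, or commas.

```python
import re

def parse_trace(path: str) -> list[str]:
    """Split a natural-language flow path into discrete steps.

    Accepts any of the three separators the trace command understands:
    the arrow character, an ASCII arrow, or a comma.
    """
    steps = re.split(r"\s*(?:→|->|,)\s*", path.strip())
    return [s for s in steps if s]  # drop empties from stray separators

print(parse_trace("Dashboard → Add Item → Photo → Save"))
# ['Dashboard', 'Add Item', 'Photo', 'Save']
print(parse_trace("Settings, Export, CSV, Email"))
# ['Settings', 'Export', 'CSV', 'Email']
```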
Issues Found:
| # | Finding | Urgency | Risk: Fix | Risk: No Fix | ROI | Blast Radius | Fix Effort | Status |
|---|---|---|---|---|---|---|---|---|
On first invocation, ask the user two questions in a single AskUserQuestion call:
Question 1: "What's your experience level with Swift/SwiftUI?"
Question 2: "Would you like a brief explanation of what this skill does?"
Experience-adapted explanations for Roundtrip Radar:
Beginner: "Roundtrip Radar follows your data through complete user journeys — like tracking a package from warehouse to doorstep and back. For example, it checks: if you create an item, back it up, delete it, and restore — does everything come back exactly? It finds bugs where data gets lost, corrupted, or silently dropped along the way. It audits one workflow at a time (backup, add item, sync, etc.) so nothing gets missed."
Intermediate: "Roundtrip Radar audits individual workflows end-to-end for data safety, error handling, concurrency, and round-trip completeness. It traces data through create → modify → export → import cycles, checks transaction boundaries, verifies error recovery paths, and identifies where data is silently lost. Works one workflow at a time to stay thorough."
Experienced: "Per-workflow code audit: data safety, error handling, concurrency, performance, contract mismatches, and round-trip completeness. Discovers workflows, audits each with issue rating tables and fix plans, then rolls up cross-cutting patterns."
Senior/Expert: "Workflow-scoped audit: data safety + error paths + concurrency + round-trip completeness. Rating tables + fix plans + cross-workflow rollup."
Store the experience level as USER_EXPERIENCE and apply to ALL output for the session.
User impact explanations: Can be toggled at any time with --explain / --no-explain. When enabled, each finding gets a 3-line companion explanation (what's wrong, fix, user experience before/after). See the shared rating system doc for format and rules. Store as EXPLAIN_FINDINGS (default: false).
Experience-level auto-apply: If USER_EXPERIENCE = Beginner, auto-set EXPLAIN_FINDINGS = true and default sort to impact. If Senior/Expert, default sort to effort. Apply all output rules from Experience-Level Output Rules table in radar-suite-core.md.
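The auto-apply rules can be sketched as a small function (a hypothetical helper; the fallback for unlisted levels is an assumption, as the rules above only name Beginner and Senior/Expert):

```python
def session_defaults(user_experience: str) -> dict:
    """Derive EXPLAIN_FINDINGS and the default sort order from the
    stored experience level, per the auto-apply rules."""
    explain = user_experience == "Beginner"  # Beginner auto-enables explanations
    if user_experience == "Beginner":
        sort = "impact"
    elif user_experience in ("Senior", "Expert", "Senior/Expert"):
        sort = "effort"
    else:
        sort = "impact"  # assumption: unstated levels fall back to impact
    return {"USER_EXPERIENCE": user_experience,
            "EXPLAIN_FINDINGS": explain,
            "sort": sort}

print(session_defaults("Beginner"))
print(session_defaults("Senior/Expert"))
```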
Subsequent workflows: Do NOT re-ask the full setup questions. Instead, show a one-line reminder before each workflow:
Using: [Beginner] mode, [Auto-fix small issues], [Display only]. Type "adjust" to change, or press Enter to continue.
If the user types "adjust", re-ask only the question(s) they want to change. Users may want to adjust experience level after a few workflows (beginner explanations may feel too simple, expert too terse).
See radar-suite-core.md for: Tier System, Pipeline UX Enhancements, Table Format, Plain Language Communication, Work Receipts, Contradiction Detection, Finding Classification, Audit Methodology, Context Exhaustion, Progress Banner, Issue Rating Tables, Handoff YAML schema, Known-Intentional Suppression, Pattern Reintroduction Detection, Experience-Level Output Rules, Implementation Sort Algorithm, short_title requirement.
Every roundtrip-radar finding must be classified on the 3-axis framework and pass the schema gate in radar-suite-core.md before emission. The framework is defined in skills/radar-suite-axis-classification/SKILL.md.
roundtrip-radar's findings are organized by what part of the round-trip path is broken. Each finding category maps to a default axis, with reclassification rules based on verification checks.
| Finding category | Default axis | Reclassification rule |
|---|---|---|
| Data loss on cancel | axis_1_bug | Stays axis_1 (user-facing data loss) |
| Data loss on error | axis_1_bug | Stays axis_1 |
| Missing feedback after save | axis_1_bug | Stays axis_1 |
| Field written but not read | axis_3_smelly | Reclassify to axis_1_bug ONLY if a user feature depends on the field (check feature flags and view usage) |
| Field read but not written | axis_3_smelly | Reclassify to axis_1_bug if feature claims the field is set |
| Field exists but unwired end-to-end | axis_3_smelly | Reclassify to axis_1_bug if a user action should write it |
| Round-trip path opaque (cannot be traced from UI to persistence) | axis_2_scatter | Stays axis_2 — data flow is correct but impossible to verify |
| Inconsistent serialization across paths (CSV, backup, CloudKit) | axis_1_bug | Stays axis_1 — ONE path loses user data |
| Serialization paths duplicated across multiple managers | axis_2_scatter | Stays axis_2 — correct but hard to maintain |
| Serialization call reaches a dead branch (e.g., guard always true) | axis_3_dead_code | Stays axis_3 |
Every finding must cite the full roundtrip path in its verification_log. The path is the sequence of file:line hops from UI entry point to persistence and back to UI. If the path cannot be traced end-to-end, the finding is axis_2_scatter regardless of its category (the data flow is opaque even if correct).
Example verification_log for a roundtrip finding:
verification_log:
- check: full_path_trace
path:
- ImportCSVView.swift:142 (user taps Import)
- CSVImportManager.swift:420 (parse loop)
- Item.swift:58 (Item init with room: nil ← MISSING FIELD)
- ModelContext (save)
- ItemListView.swift:84 (Query fetches items)
- ItemRowView.swift:29 (displays room, which is nil)
result: "path traced end-to-end; room field is dropped at CSVImportManager.swift:420 and displayed as nil downstream"
- check: pattern_citation_lookup
result: "found existing round-trip pattern at Sources/Managers/BackupManager.swift:NNN which correctly serializes room"
If any hop cannot be found (e.g., "no persistence call in this workflow"), the path entry documents the gap:
path:
- AddItemView.swift:120 (user taps Save)
- AddItemViewModel.swift:85 (validate inputs)
- [GAP: no modelContext.insert found for this workflow in scope]
result: "round-trip path INCOMPLETE; finding classified as axis_2_scatter (opaque flow)"
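The path-completeness override can be expressed as a small predicate (illustrative sketch; hop strings follow the examples above):

```python
def classify_by_path(category_axis: str, path: list[str]) -> str:
    """Any [GAP: ...] hop in the verification_log path forces
    axis_2_scatter, regardless of the finding's default axis."""
    if any(hop.startswith("[GAP") for hop in path):
        return "axis_2_scatter"
    return category_axis

complete = ["AddItemView.swift:120 (user taps Save)",
            "AddItemViewModel.swift:85 (validate inputs)",
            "ModelContext (save)"]
broken = complete[:2] + ["[GAP: no modelContext.insert found for this workflow in scope]"]

print(classify_by_path("axis_1_bug", complete))  # axis_1_bug
print(classify_by_path("axis_1_bug", broken))    # axis_2_scatter
```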
Per radar-suite-core.md, a finding is REJECTED if:
- `axis` field is missing
- `before_after_experience` is missing or incomplete
- `better_approach` lacks a file:line citation matching the pattern shape
- `verification_log` lacks a `pattern_citation_lookup` entry

Rejected findings are NOT silently dropped: downgrade confidence to `possible`, mark as coaching incomplete, and increment `rejected_no_citation` in the handoff's `axis_summary`.
At the end of every roundtrip-radar handoff:
axis_summary:
axis_1_bug: [count] # data loss, missing feedback, broken serialization paths
axis_2_scatter: [count] # opaque flows, duplicated serialization
axis_3_dead_code: [count] # unreachable serialization branches
axis_3_smelly: [count] # unwired fields with no user feature dependency
rejected_no_citation: [count]
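Tallying the summary from classified findings can be sketched as (the finding dict shape is a hypothetical simplification):

```python
from collections import Counter

def axis_summary(findings: list[dict]) -> dict[str, int]:
    """Count findings per axis, plus schema-gate rejections."""
    counts = Counter(f["axis"] for f in findings)
    counts["rejected_no_citation"] = sum(1 for f in findings if f.get("rejected"))
    return dict(counts)

findings = [
    {"axis": "axis_1_bug"},
    {"axis": "axis_1_bug"},
    {"axis": "axis_2_scatter"},
    {"axis": "axis_3_smelly", "rejected": True},  # failed the schema gate
]
print(axis_summary(findings))
# {'axis_1_bug': 2, 'axis_2_scatter': 1, 'axis_3_smelly': 1, 'rejected_no_citation': 1}
```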
Known-intentional check: Read .radar-suite/known-intentional.yaml (if exists). Store as KNOWN_INTENTIONAL. Before presenting any finding during the audit, check it against these entries. If file + pattern match, skip silently and increment intentional_suppressed counter.
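The suppression check can be sketched as follows (the entry shape, a file plus a pattern substring, is an assumption based on the matching rule above):

```python
def is_suppressed(finding: dict, known_intentional: list[dict]) -> bool:
    """A finding is skipped only when BOTH the file and the pattern
    of a known-intentional entry match."""
    return any(entry["file"] == finding["file"]
               and entry["pattern"] in finding["description"]
               for entry in known_intentional)

known = [{"file": "CSVImportManager.swift", "pattern": "drops empty rows"}]
hit  = {"file": "CSVImportManager.swift", "description": "parser drops empty rows silently"}
miss = {"file": "BackupManager.swift",    "description": "parser drops empty rows silently"}

print(is_suppressed(hit, known))   # True  -> skip silently, bump counter
print(is_suppressed(miss, known))  # False -> present the finding
```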
Pattern reintroduction check: Read .radar-suite/ledger.yaml for status: fixed findings with pattern_fingerprint and grep_pattern. For each, grep the codebase. If the pattern appears in a new file without the exclusion_pattern, report as "Reintroduced pattern" at 🟡 HIGH urgency.
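The reintroduction scan can be sketched as (file contents are passed in directly here; the real skill would grep the working tree):

```python
import re

def find_reintroductions(ledger_entry: dict, files: dict[str, str]) -> list[str]:
    """Return files where a previously fixed pattern reappears,
    ignoring lines covered by the entry's exclusion_pattern."""
    pat = re.compile(ledger_entry["grep_pattern"])
    raw_excl = ledger_entry.get("exclusion_pattern")
    excl = re.compile(raw_excl) if raw_excl else None
    hits = []
    for path, text in files.items():
        for line in text.splitlines():
            if pat.search(line) and not (excl and excl.search(line)):
                hits.append(path)
                break
    return hits

entry = {"grep_pattern": r"Double\(", "exclusion_pattern": r"PriceParser\.parse"}
files = {
    "NewFeatureView.swift": "let price = Double(text)",             # reintroduced
    "PriceParser.swift":    "let price = PriceParser.parse(text)",  # sanctioned path
}
print(find_reintroductions(entry, files))  # ['NewFeatureView.swift']
```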
Run first if workflows are unknown or have changed.
Scan the codebase and identify all user-facing workflows.
A workflow is a multi-step user action that:
- Presents UI via `.sheet`, `.fullScreenCover`, or `.navigationDestination`
- Navigates via `NavigationLink` or `TabView` tabs
- Persists via `modelContext.insert`, `modelContext.delete`, or `context.save`
- Progresses through states such as `.idle`, `.processing`, `.complete` via `@State`
- Signals activity via `isProcessing`, `isImporting`, `isSaving` patterns

List each workflow with:
| # | Workflow | Entry Point | Key Files | Complexity | Data Risk |
|---|---|---|---|---|---|
| 1 | [Name] | [Where user starts it] | [2-4 main files] | Low/Med/High | None/Read/Write/Delete |
After listing all workflows, recommend which to audit first based on complexity and data risk.
Do NOT write a report file. Output the table directly.
One workflow per prompt. Run as a separate agent or conversation per workflow to prevent context exhaustion.
Audit the [WORKFLOW NAME] workflow for bugs, data-safety issues, performance problems, and data round-trip completeness.
Ask the user these questions once per session in a single prompt. For subsequent workflows, show the one-line reminder from Skill Introduction and skip to the audit.
Question 1: "What's your experience level with Swift/SwiftUI?"
Question 2: "How should fixes be handled?"
IMPORTANT: Both modes lead to fixes. "Review first" means the user sees the plan before code changes — it does NOT mean "skip fixes and jump to handoff." After presenting findings, ALWAYS offer to fix them regardless of which mode was selected.
Question 3: "How should results be delivered?"
Report file: write to `.agents/research/[DATE]-[WORKFLOW]-audit.md` with minimal conversation output. Before writing, per Artifact Lifecycle (Class 3) in radar-suite-core.md, archive any existing `.agents/research/*-[WORKFLOW]-audit.md` matching the same workflow to `.agents/research/archive/superseded/`.

Question 4: "Will you be stepping away during the audit?"
Guarantees no blocking prompts. The skill will ONLY use these tools:
- `Read` — read file contents
- `Grep` — search file contents
- `Glob` — find files by pattern

It will NOT use:

- `Bash` — no shell commands (grep via Grep tool instead)
- `Edit` / `Write` — no file modifications
- `AskUserQuestion` — no interactive prompts

When the audit completes (or hits a step that needs restricted tools), it prints:
⏱ Hands-free audit complete through Step [N].
Steps requiring action: [list]
Reply to continue with supervised steps.
Full speed, no restrictions. Assumes you've set up permissions. See below.
To avoid permission prompts during audits, pre-allow these read-only patterns in your Claude Code settings. These are safe to auto-approve — they cannot modify your codebase:
# Already safe by default (no setup needed):
Read, Grep, Glob — always auto-approved
# Add these for unattended Bash scans:
Bash(find:*)
Bash(wc:*)
Bash(stat:*)
Do NOT auto-approve (keep these prompted — they modify state):
Edit, Write — file modifications
Bash(rm:*), Bash(git:*) — destructive operations
Tip: Hands-free mode can complete workflow discovery (Step 0) and the full per-workflow audit (Step 1) read-only. Only fix application needs write access.
Base all findings on current source code only. Do not read or reference
files in .agents/, scratch/, or prior audit reports. Ignore cached
findings from auto-memory or previous sessions. Every finding must come
from scanning the actual codebase as it exists now.
If context is running low, prioritize in this order:
Never start a new check you can't finish.
Adjust ALL output based on the user's experience level:
- Intermediate: explain the mechanism behind each finding (e.g., a `Task.sleep` workaround indicates a broken refresh path). Full 9-column tables with brief descriptions.
- Senior/Expert: citation-style findings (e.g., "missing `modelContext.save()` after batch delete in BulkEditViewModel.swift:142"). Full tables, terse text.

Prior suspects from companion handoffs:
[One per line. Include file name, approximate line, and the specific question to answer.]
Example:
BackupDataSheet.swift ~line 351: decryptAndRestore(replaceExisting: false)
— is the user's replace choice preserved through the password prompt?

[If no suspects: "No prior suspects — full exploratory audit."]
[One per line. Include file, what changed, and what to verify.]
Example:
CloudSyncManager.swift: Added maxRetries=3 retry limit
— verify counter resets on all terminal paths (success, non-retryable errors)

[If none: "None"]
Search `Tests/` for [WORKFLOW_KEYWORD] to locate existing coverage.

- Data safety (enumerate-required) — destructive operations, transaction boundaries, edge cases
- Error handling (mixed) — missing catches, silent failures, user-facing error messages
- Concurrency (mixed) — @MainActor compliance, Task isolation, ModelContext thread safety
- Performance (grep-sufficient) — @Query without predicates, O(n²) loops, main-thread blocking
- Contract mismatches (grep-sufficient) — constants vs hardcoded strings, keys defined in one file but consumed in another
- Round-trip completeness (enumerate-required) — does data survive a full create → export → import/restore cycle?
- Interruption handling (enumerate-required) — dismiss mid-operation, app backgrounding, rotation, cancel
- Collection narrowing (enumerate-required) — collections silently reduced to single elements at handoff points (see detection guide below)
- Tests (enumerate-required) — update broken tests, add tests for P0-P1 fixes where logic is testable
- Bridge parity (enumerate-required) — multiple consumers of the same model type must read the same fields (see detection guide below)

Before grading a workflow, produce this table showing what was actually traced:
| Step | Action | File Read | Lines | Receipt | Finding |
|------|--------|-----------|-------|---------|---------|
| 1. Create | [what happens] | [file:line] | [range] | [evidence] | [ok / issue] |
| 2. Save | [what happens] | [file:line] | [range] | [evidence] | [ok / issue] |
| 3. Export | [what happens] | [file:line] | [range] | [evidence] | [ok / issue] |
| 4. Import | [what happens] | [file:line] | [range] | [evidence] | [ok / issue] |
Rules:
What it is: A [T] (array) enters a handoff point but only a T (single element) exits -- silently discarding the rest. The flow works, types are correct, and the user gets a result -- but with degraded quality because most of the input data was dropped.
Why it matters: This bug class is invisible to type checking, navigation audits, and UX flow audits. The flow completes successfully with no errors. Only the quantity of data is wrong, which degrades results without any signal to the user.
Detection patterns:
| Pattern | Code Signature | Likely Bug? |
|---|---|---|
| Array-to-first | array.first or array[0] passed where [T] is accepted | Yes -- receiver can handle the full array |
| Init narrowing | init(item: T) called by a site holding [T] when init(items: [T]) exists | Yes -- wrong init chosen |
| Wrapper narrowing | Class/struct stores T but is created from [T] context | Yes -- wrapper designed for single item, not updated for multi |
| Loop-break narrowing | for item in items { ...; break } or if let first = items.first without processing rest | Maybe -- check if intentional (preview thumbnail) |
Distinguish intentional from accidental:
- Accidental: the receiver accepts `[T]` but the caller passes `.first`. The API was designed for multiple items.
- Accidental: the wrapper stores `T` but is created in a context where `[T]` is available and the downstream service accepts `[T]`.
- Intentional: the receiver only handles `T` and there's no `[T]` alternative. Example: displaying a single thumbnail preview from an array.

How to check during a workflow audit:
- At each handoff point, compare cardinality in vs. out: `[4 images] → [4 images]` = ok; `[4 images] → [1 image]` = flag as collection narrowing.
- When a handoff passes a single `T`, check if a multi-item init exists on the same type.

Origin: Found in Stuffolio Apr 2026 -- user selected 4 photos for AI analysis, but AIAnalysisTask only accepted the first image, and the Stuff Scout sheet passed productImages?.first to a view that had a multi-image init. The flow worked, types were correct, results were degraded.
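The cardinality check can be sketched as a per-handoff comparison (the handoff record shape is hypothetical, for illustration only):

```python
def narrowing_findings(handoffs: list[dict]) -> list[str]:
    """Flag handoff points where a collection shrinks to a single
    element -- the silent-narrowing signature."""
    flags = []
    for h in handoffs:
        if h["count_in"] > 1 and h["count_out"] == 1:
            flags.append(f"{h['site']}: [{h['count_in']}] -> [1] collection narrowing")
    return flags

trace = [
    {"site": "PhotoPicker -> AddItemView",    "count_in": 4, "count_out": 4},  # ok
    {"site": "AddItemView -> AIAnalysisTask", "count_in": 4, "count_out": 1},  # narrowed
]
print(narrowing_findings(trace))
# only the second handoff is flagged
```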
What it is: Multiple functions consume the same model type but read different subsets of its fields -- one was updated when new fields were added, the others were not. The outdated consumer silently drops data.
Why it matters: This bug class is invisible to single-path tracing. Each consumer works correctly in isolation -- types match, no crashes, no errors. The bug only appears when you compare consumers against each other and notice one reads fewer fields. It survives code review because reviewers follow one path at a time.
Detection method -- relative comparison, not absolute counting:
Enumerate all consumers of a given model type. A "consumer" is any function that reads fields from the type to build output (notes, prefill data, export, display). Search for:
- Functions that take the type as a parameter (e.g., `func build(_ session: ScoutSession)`)
- Direct field reads (e.g., `session.aboutItem`, `session.historicalContext`)
- `init(from:)` or conversion methods (`toItem()`, `toPrefillData()`)

For each consumer, record which fields it reads. Build a field-access matrix:
| Field | ConsumerA | ConsumerB | ConsumerC |
|------------------|-----------|-----------|-----------|
| era | ✓ | ✓ | ✓ |
| aboutItem | ✓ | ✓ | ✓ |
| historicalContext | ✓ | | ✓ |
| collectorNotes | ✓ | | ✓ |
| researchTips | ✓ | | ✓ |
Flag asymmetries. If N consumers exist and one reads strictly fewer fields than the others, it is the outlier. The comparison is relative -- you do not need to know the "correct" field count. The mismatch itself is the finding.
Ranking the finding:
| Signal | Confidence |
|---|---|
| One consumer reads < 50% of fields that others read | Almost certain bug -- flag immediately |
| One consumer misses 1-2 fields that all others include | Probable bug -- verify with git blame (was the field added after this consumer was written?) |
| Consumers read different but overlapping sets (no strict subset) | Possible intentional -- different consumers may legitimately need different fields (e.g., summary view vs. detail export). Check if the omitted fields are relevant to the consumer's purpose. |
Distinguish intentional from accidental:
How to check during a workflow audit:
- Rate confidence as verified (you read the code of all consumers), with blast radius = number of consumers that need updating.

Cross-cutting accumulator integration: After finding a bridge parity issue, add the model type to the cross-cutting pattern accumulator. In subsequent workflows, automatically check any new consumer of that type against the established field matrix.
Origin: Found in Stuffolio Apr 2026 -- ScoutSession had 3 consumers building notes. ScoutMergeView and StuffScoutBridge included all 6 narrative fields. ExistingItemScoutFlow.buildNotesFromScout() included only 3. The outlier was written before the other fields were added and never updated. No type error, no crash, no test failure -- users simply lost Historical Context, Collector Notes, and Research Tips when saving via that code path.
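The relative comparison can be sketched as a strict-subset check over the field-access matrix (consumer names echo the origin story above; no "correct" field count is needed):

```python
def bridge_parity_outliers(matrix: dict[str, set[str]]) -> list[str]:
    """Flag consumers whose field set is a strict subset of some
    other consumer's set -- the relative-comparison rule."""
    outliers = []
    for name, fields in matrix.items():
        if any(fields < other for peer, other in matrix.items() if peer != name):
            outliers.append(name)
    return outliers

matrix = {
    "ScoutMergeView":      {"era", "aboutItem", "historicalContext",
                            "collectorNotes", "researchTips"},
    "StuffScoutBridge":    {"era", "aboutItem", "historicalContext",
                            "collectorNotes", "researchTips"},
    "buildNotesFromScout": {"era", "aboutItem"},  # reads strictly fewer fields
}
print(bridge_parity_outliers(matrix))  # ['buildNotesFromScout']
```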
For every finding, use this table format sorted by Urgency (descending), then ROI:
| # | Finding | Confidence | Urgency | Risk: Fix | Risk: No Fix | ROI | Blast Radius | Fix Effort | Status |
|---|---|---|---|---|---|---|---|---|---|
| Column | Meaning |
|---|---|
| Confidence | verified (code read + confirmed), probable (agent reported, not independently verified), needs-runtime (requires running the app to confirm) |
| Urgency | How time-sensitive — must it be fixed before release? |
| Risk: Fix | What could break when making the change |
| Risk: No Fix | Cost of leaving it — crash, data loss, user-visible bug |
| ROI | Return on effort (inverted — 🟠 = excellent, 🔴 = poor) |
| Blast Radius | Number of files the fix touches (e.g., 🟢 3 files, ⚪ 1 file). Do not use <br> tags. Count by grepping for callers/references before rating. |
| Fix Effort | Trivial / Small / Medium / Large |
| Status | Fixed / Documented / Deferred (reason) |
When creating findings, populate these optional fields where relationships are obvious:
- `depends_on`/`enables`: Workflow findings often chain -- a data loss at step 2 enables a corrupt display at step 5. If one fix must come before another, populate with finding IDs.
- `pattern_fingerprint`/`grep_pattern`/`exclusion_pattern`: Assign fingerprints for generalizable anti-patterns (e.g., `silent_data_narrowing`, `missing_error_recovery`, `unguarded_concurrent_write`).

| Indicator | General meaning | ROI meaning |
|---|---|---|
| 🔴 | Critical / high concern | Poor return — reconsider |
| 🟡 | High / notable | Marginal return |
| 🟢 | Medium / moderate | Good return |
| ⚪ | Low / negligible | — |
| 🟠 | Pass / positive | Excellent return |
Do not use prose for ratings. Every finding gets a row in this table.
After all findings, generate a Fix Plan grouped into these sections:
1. Safe fixes (contained, only touching one or two files) Changes contained within the audited files. No behavioral changes outside the workflow.
| # | Finding | Files Changed | Urgency | ROI | Fix Effort |
|---|---|---|---|---|---|
2. Cross-cutting fixes (touch shared code) Changes that affect models, protocols, or utilities used by other features. Review for unintended side effects before approving.
| # | Finding | Files Changed | Urgency | ROI | Fix Effort | Side Effects |
|---|---|---|---|---|---|---|
3. Requires design decision Multiple valid approaches. Needs user input before proceeding.
| # | Finding | Options | Urgency |
|---|---|---|---|
4. Deferred (no action needed now) Documented for future reference. No plan step generated.
| # | Finding | Urgency | Why Deferred |
|---|---|---|---|
5. Shared utility extraction When multiple code paths duplicate the same logic, extract to a shared utility.
| # | Finding | Proposed Utility | Files Affected |
|---|---|---|---|
6. Out of scope Issues discovered here that belong to a different workflow. List them with the affected workflow name so they can be fed into that workflow's audit.
| # | Finding | Affected Workflow | Urgency |
|---|---|---|---|
After applying Safe fixes:
- Write the full report to `.agents/research/[DATE]-[WORKFLOW]-audit.md`. Show only a one-line summary in the conversation (e.g., "Audit complete: 3 critical, 5 high, 2 medium. Report written to .agents/research/2026-03-06-backup-audit.md").

After each workflow audit, append deferred findings to `.agents/research/roundtrip-radar-deferred.md`. This accumulates across workflows so Step 2 rollup can consume them without re-reading all audit output.
Format:
## [Workflow Name] — [Date]
| # | Finding | Urgency | Why Deferred |
|---|---------|---------|--------------|
| 1 | ... | 🟡 HIGH | Needs design decision |
After presenting the Fix Plan, apply fixes in waves. Each wave is a phase from the Fix Plan. After each wave (including commits), always print the progress banner and auto-prompt for the next wave.
| Wave | Fix Plan Section | Est. Time | Description |
|---|---|---|---|
| 1 — Quick fixes | Safe fixes + tests | ~10-15 min | Small, contained fixes (one or two files each). Applied automatically. Tests written for each fix. |
| 2 — Shared code fixes | Cross-cutting fixes + tests | ~15-25 min | Fixes that touch code used by other features. Presented for your review first. Tests written for each fix. |
| 3 — Your call | Design decisions | ~5-15 min | Multiple valid approaches. You pick the direction for each item. |
| 4 — Same bug elsewhere | Pattern Sweep | ~5 min | Search the whole codebase for the same bugs found in this workflow. |
| 5 — Wrap up | Build + Test + Commit | ~5 min | Build both platforms, run tests, stage, commit. |
Every fix must have a test. Do not move to the next wave until tests for the current wave's fixes are written and compiling. The test verifies the fix works; without it, the fix is unverified code.
Skip empty waves (e.g., if no design decisions, go straight from Wave 2 to Wave 4).
If a "cross-cutting fix" turns out to need a design decision during implementation, reclassify it — ask the user via AskUserQuestion with the options, don't proceed without input.
After fixing findings in a workflow, scan the entire codebase for the same anti-pattern. This catches all instances at once instead of rediscovering them workflow-by-workflow.
For each pattern found and fixed in this workflow:
- Derive a grep signature for the pattern (e.g., `Double(` for raw price parsing, `hashValue` for unstable IDs) and sweep the codebase for it.

This is a BLOCKING requirement. After EVERY wave and EVERY commit, your NEXT output MUST be the progress banner followed by the next-wave AskUserQuestion. Do not output anything else first. Do not wait for user input. Do not leave a blank prompt.
After completing each wave, always print:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Fix batch [N] of [total] complete: [plain description]
[X] findings fixed, [Y] remaining, [Z] deferred
⏱ Next: Batch [N+1] — [plain description] (~[time estimate])
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Then immediately ask: "Ready for the next batch?" with options:
After committing all fixes for a workflow, follow this exact sequence:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 Roundtrip Radar Progress
Workflows: [N]/[total] | Fixed: [X] | Deferred: [Y] | Patterns: [Z]
Last: [workflow name] ([N] fixed)
Next: [workflow name] (~[time estimate])
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Immediately ask: "Ready for the next workflow?" with options:
If user proceeds, show the one-line settings reminder (see Skill Introduction) then start the audit. Do NOT re-ask the 4 setup questions.
Never leave the user with a blank prompt between workflows.
When running inside a Tier 2 or Tier 3 pipeline (detected via tier field in .radar-suite/session-prefs.yaml):
- Apply Pipeline UX Enhancement #1 from radar-suite-core.md. If this is the first skill in the pipeline OR experience_level is Beginner/Intermediate, also emit the audit-only statement.

Every finding MUST include a `short_title` field (max 8 words). This is the human-scannable label used in pipeline banners, pre-capstone summaries, and ledger output.
Example: short_title: "Backup drops attachment external storage"
All finding ID references in output (tables, banners, summaries) use the format: RS-NNN (short_title).
Run after all per-workflow audits are complete.
Review the findings from the individual workflow audits below and identify cross-cutting patterns.
[Paste the finding tables from each workflow audit here, or reference the report files if they were written.]
- Recurring anti-patterns across workflows (e.g., `@Query` without predicates, hash-based IDs)

Deliver results according to the user's output preference from Step 1.
Roundtrip Radar complements data-model-radar (model layer), ui-path-radar (navigation paths), ui-enhancer-radar (visual quality), and capstone-radar (ship readiness). Findings from one skill inform the others.
After completing a workflow audit (or Step 2 rollup), write/update .agents/ui-audit/roundtrip-radar-handoff.yaml:
source: roundtrip-radar
date: <ISO 8601>
project: <project name>
workflows_audited: <count>
# File timestamps — enables staleness detection by consuming skills
# If a file changed after the audit, affected issues may need re-verification
file_timestamps:
<file path>: "<ISO 8601 mod date>"
# one entry per unique file referenced in issues
for_ui_path_radar:
# Data issues that may have navigation/entry-point implications
suspects:
- entry_point: "<button/link that triggers this workflow>"
finding: "<data safety issue found>"
file: "<file:line>"
question: "<does the UI reflect this data issue?>"
group_hint: "<optional, e.g. 'data_loss', 'silent_failure'>"
for_ui_enhancer_radar:
# Dead code, orphaned UI, or views with broken data backing
suspects:
- view: "<view file>"
finding: "<data issue that affects this view>"
action: "verify data binding or remove dead UI"
group_hint: "<optional batching suggestion>"
for_capstone_radar:
# Critical/high findings that affect ship readiness
blockers:
- finding: "<description>"
urgency: "<CRITICAL|HIGH>"
workflow: "<workflow name>"
group_hint: "<optional batching suggestion>"
cross_cutting_patterns:
# Patterns found across multiple workflows — useful for all skills
- pattern: "<e.g., Double() price parsing>"
workflows_affected: ["Backup", "Edit Item", "CSV Import"]
status: "fixed" | "deferred"
group_hint: "<optional, e.g. 'price_parsing', 'id_handling'>"
# Example: collection narrowing pattern
# - pattern: "array.first passed to multi-item API"
# workflows_affected: ["Add Item (Photo)", "Stuff Scout"]
# status: "fixed"
# group_hint: "collection_narrowing"
# Example: bridge parity pattern
# - pattern: "ScoutSession consumed by 3 functions, 1 reads 3/6 fields"
# workflows_affected: ["Stuff Scout", "Add Item (Photo)"]
# status: "fixed"
# group_hint: "bridge_parity"
For each unique file path referenced across all issues, record its modification date at audit time:
# Get file mod date (macOS)
stat -f "%Sm" -t "%Y-%m-%dT%H:%M:%SZ" "<file path>"
This enables consuming skills to detect staleness — if a file changed after the audit, affected issues may need re-verification before acting on them.
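The consuming-side staleness check can be sketched as follows (field names follow the `file_timestamps` schema above; the function itself is a hypothetical helper):

```python
import os
from datetime import datetime, timezone

def stale_files(file_timestamps: dict[str, str]) -> list[str]:
    """Return files modified after the audit recorded them --
    issues citing these files need re-verification."""
    stale = []
    for path, recorded in file_timestamps.items():
        audited = datetime.fromisoformat(recorded.replace("Z", "+00:00"))
        current = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
        if current > audited:
            stale.append(path)
    return stale
```

A consuming skill would run this against the handoff's `file_timestamps` map before acting on any finding that cites one of those files.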
Optional field suggesting how consuming skills might batch related issues:
- Issues sharing the same `group_hint` are candidates for a single fix task
- Suggested values: `data_loss`, `silent_failure`, `round_trip_gap`, `error_handling`, `concurrency`, `collection_narrowing`, `bridge_parity`

Automatic: This file is always written so other audit skills can pick up where this one left off. No user action needed.
Per the Artifact Lifecycle rules in radar-suite-core.md, before returning from this skill:
- Sweep `.radar-suite/` (and `.agents/research/` if used).
- Move superseded prose artifacts (`RESUME_PHASE_*.md`, `RESUME_*.md` except `NEXT_STEPS.md`, `*-v[0-9]*.md`) to `.radar-suite/archive/superseded/`.
- Living files (`ledger.yaml`, `session-prefs.yaml`) are in-place rewrites — not dated or versioned.

This prevents `.radar-suite/` from accumulating stale prose artifacts across runs.
After writing the handoff YAML, also write findings to .radar-suite/ledger.yaml following the Ledger Write Rules in radar-suite-core.md:
- Assign each finding an `impact_category` and compute its `file_hash`

Impact category mapping for roundtrip-radar findings:
- Findings map to one of: `data-loss`, `crash`, `ux-broken`, `ux-degraded`

Before Step 0 (or Step 1 if skipping discovery), read the unified ledger and ALL companion handoff YAMLs:
Read .radar-suite/ledger.yaml (if exists) — check for existing findings to avoid duplicates
Read .agents/ui-audit/data-model-radar-handoff.yaml (if exists)
Read .radar-suite/time-bomb-radar-handoff.yaml (if exists)
Read .agents/ui-audit/ui-path-radar-handoff.yaml (if exists)
Read .agents/ui-audit/ui-enhancer-radar-handoff.yaml (if exists)
Read .agents/ui-audit/capstone-radar-handoff.yaml (if exists)
Ledger check: If the ledger contains findings for workflows you're about to audit, note their RS-NNN IDs. When you find the same issue, update the existing finding instead of creating a new one.
Regression check: For any fixed findings in the ledger whose file_hash no longer matches the current file, flag for re-verification per the Regression Detection protocol in radar-suite-core.md.
Parse for_roundtrip_radar sections. Each companion can direct findings to this skill. Look for:
- `for_roundtrip_radar.suspects[]` — workflows or data paths another skill flagged as potentially broken
- `for_roundtrip_radar.priority_workflows[]` — workflows another skill wants audited first

If found, incorporate as priority targets in workflow selection. These are not pre-confirmed findings — verify each one independently.
What each companion provides:
Specific incorporation rules:
If not found, proceed normally.
After every wave/commit/workflow: print progress banner → AskUserQuestion → never blank prompt.