From radar-suite
Audits SwiftData/Core Data models for field completeness, serialization gaps, relationship integrity, semantic ambiguity, dead fields, and migration safety. Detects model bugs causing workflow issues in iOS apps.
npx claudepluginhub terryc21/radar-suite --plugin radar-suite

This skill is limited to using the following tools:
Reviews SwiftData code for patterns including autosave, relationships, safe predicates, CloudKit constraints, indexing, and class inheritance. Use when writing, reviewing, or debugging.
Reviews SwiftData code for model design, queries, concurrency, and migrations. Use when reviewing .swift files with import SwiftData, @Model, @Query, @ModelActor, or VersionedSchema.
Diagnoses SwiftData migration failures: crashes, data loss, relationship breaks, schema mismatches, simulator-device gaps via config checks and device testing.
Audits the @Model layer for completeness, consistency, and round-trip integrity. Finds model-layer bugs before they manifest as workflow bugs.
Anti-shortcut rule: Do not claim a domain is "clean" without evidence. Every domain grade must cite specific files read and patterns checked. "No dead fields detected from structural analysis" without grepping is a failing grade for the auditor, not a passing grade for the model.
| Command | Description |
|---|---|
| /data-model-radar | Full 9-domain audit of all models (+ delegates to time-bomb-radar for Domain 8) |
| /data-model-radar [ModelName] | Audit a single model in depth |
| /data-model-radar models | Show all models with risk ranking (no audit, discovery only) |
| /data-model-radar serialization | Domain 2 only — backup/export coverage |
| /data-model-radar relationships | Domain 3 only — cascade rules, orphan risk |
| /data-model-radar migration | Domain 6 only — schema version safety |
| /data-model-radar dead-fields | Domain 5 only — unused model fields |
| /data-model-radar time-bombs | Domain 8 only — deferred operations on aged data |
| /data-model-radar status | Show audit progress |
| --show-suppressed | Show findings suppressed by known-intentional entries |
| --accept-intentional | Mark current finding as known-intentional (not a bug) |
Data Model Radar audits the foundation your app is built on — the data models. Every UI bug, every sync failure, every round-trip data loss traces back to a model-layer decision. This skill finds those issues at the source instead of waiting for them to surface in workflows.
| Domain | What It Finds | Est. Time |
|---|---|---|
| 1. Field Completeness | Missing fields, enum gaps, semantic holes | ~3-5 min |
| 1.5 Computed Properties | Business logic bugs in computed properties (nil chains, fallback defaults, currency math) | ~5-10 min |
| 2. Serialization Coverage | Backup/export fields that don't round-trip | ~10-20 min |
| 3. Relationship Integrity | Cascade rules, inverse relationships, orphan risk | ~3-5 min |
| 4. Semantic Clarity | nil vs zero ambiguity, missing type distinctions | ~2-3 min |
| 5. Field Usage Mapping | Dead fields (no UI reads), phantom fields (UI shows, model doesn't store) | ~10-20 min |
| 6. Migration Safety | Schema versions, VersionedSchema coverage, migration plan gaps | ~5-10 min |
| 7. Cross-Model Consistency | Identifier strategy, naming conventions, shared pattern violations | ~3-5 min |
| 7.5 Near-Duplicate Detection | Models sharing 70%+ fields that should be consolidated | ~3-5 min |
On first invocation, ask the user two questions in a single AskUserQuestion call:
Question 1: "What's your experience level with Swift/SwiftUI?"
Question 2: "Would you like a brief explanation of what this skill does?"
Experience-adapted explanations for Data Model Radar:
Beginner: "Data Model Radar checks the blueprints your app is built on — the data models that define what an 'item' or 'warranty' or 'photo' looks like in the database. Think of it like inspecting a building's foundation before checking the rooms. If the blueprint says a house has 10 rooms but only 8 have doors, people can't reach 2 rooms. Similarly, if your Item model has 47 fields but your backup only saves 40, those 7 fields are lost forever when someone restores from backup. This skill finds those gaps."
Intermediate: "Data Model Radar audits your @Model classes for field completeness (missing enums, semantic holes), serialization coverage (backup/export round-trip gaps), relationship integrity (cascade rules, orphan risk), nil-vs-zero ambiguity, dead fields (defined but never read), migration safety (schema versions), and cross-model consistency. It finds model-layer bugs that would otherwise surface as workflow bugs across multiple features."
Experienced: "8-domain audit of @Model layer: field completeness, serialization coverage, relationship integrity, semantic clarity, usage mapping, migration safety, cross-model consistency, time bomb detection. Outputs issue rating tables with fix plans. Findings feed roundtrip-radar as suspects."
Senior/Expert: "Model audit: fields → serialization → relationships → semantics → usage → migration → consistency. Rating tables + fix plans."
Store the experience level as USER_EXPERIENCE and apply to ALL output for the session.
User impact explanations: Can be toggled at any time with --explain / --no-explain. When enabled, each finding gets a 3-line companion explanation (what's wrong, fix, user experience before/after). See the shared rating system doc for format and rules. Store as EXPLAIN_FINDINGS (default: false).
Subsequent models (if auditing multiple): Show one-line reminder:
Using: [Beginner] mode, [Auto-fix] or [Review first], [Display only]. Type "adjust" to change, or press Enter to continue.
See radar-suite-core.md for: Rules Summary, Tier System, Pipeline UX Enhancements, Table Format, Rating Table Gate, Plain Language Communication, Work Receipts, Contradiction Detection, Finding Classification, Audit Methodology, Context Exhaustion, Progress Banner, Issue Rating Tables, Handoff YAML schema, Known-Intentional Suppression, Pattern Reintroduction Detection, Experience-Level Output Rules, Implementation Sort Algorithm, short_title requirement.
Known-intentional check: Read .radar-suite/known-intentional.yaml (if exists). Store as KNOWN_INTENTIONAL. Before presenting any finding during the audit, check it against these entries. If file + pattern match, skip silently and increment intentional_suppressed counter.
Pattern reintroduction check: Read .radar-suite/ledger.yaml for status: fixed findings with pattern_fingerprint and grep_pattern. For each, grep the codebase. If the pattern appears in a new file without the exclusion_pattern, report as "Reintroduced pattern" at 🟡 HIGH urgency.
Experience-level auto-apply: If USER_EXPERIENCE = Beginner, auto-set EXPLAIN_FINDINGS = true and default sort to impact. If Senior/Expert, default sort to effort. Apply output rules from Experience-Level Output Rules table in core.
Each domain can be run at two depths:
| Depth | When to Use | What It Does |
|---|---|---|
| Quick | Triage, low-risk models, <10 fields | Structural analysis — read the model, check patterns, report what's visible |
| Deep | High-risk models, >20 fields, multiple serialization targets | Grep every field, read every serialization target, verify every claim |
Default: Deep for models with Risk = High. Quick for Low risk. Ask for Medium risk.
The difference matters. A quick audit of Domain 5 says "no obvious dead fields." A deep audit greps each of the 55 fields and proves it. Quick audits must label their grades with (quick) so the handoff distinguishes verified from unverified.
Do not start verifying until you have ranked where to go deep. The default behavior is to verify whatever is easiest to read (usually backup code) and skip what's harder (CSV import, CloudKit mapping). Risk-ranking inverts this: verify riskiest first, not easiest first.
Signal 1: Prior findings exist. Before choosing depth, check:
- .agents/ui-audit/*-handoff.yaml for companion skill findings
- .agents/research/*-audit.md for previous capstone/codebase audit notes

If prior sessions found CSV import loses fields, go deep on CSV — not backup. This is the strongest signal and the cheapest to collect.
Signal 2: Asymmetry between input and output. Count fields on both sides of any serialization target. If export writes 27 columns but import reads 11, the 16-field gap is where data loss hides. Go deep on the smaller side (the reader, not the writer).
Signal 3: Recently changed code.
git log --since="3 months ago" -p -- Sources/Models/{Model}.swift | grep "^+.*var " | head -20
New fields are most likely to be missing from serialization structs. Cross-reference these against backup/CSV/CloudKit code.
Signal 4: Multiple systems touch the same data. Count how many systems read/write each field. A field serialized across backup + CSV + CloudKit + UI = 4 consumers = high risk. A field only in one view = low risk.
Signal 5: Money, identity, and relationships.
Fields with InCents/Price/Value/Cost, identity fields (cloudSyncID, ownerUserRecordID), and @Relationship properties have higher consequences when gaps exist.
Signal 6: The "looks clean" feeling (meta-signal). When you feel confident about a domain without having produced its required artifact — that IS the signal to stop and do the work. Confidence without evidence is the #1 predictor of shallow work.
Before starting Domain 1, produce a risk-ranking table:
Risk Ranking for [Model]:
GO DEEP:
1. CSV import (Signal 2: 27 export vs 11 import — 16-field asymmetry)
2. Price fields (Signal 5: 6 currency fields across 3 serialization targets)
3. Fields added since Build 24 (Signal 3: 4 new fields in last 3 months)
QUICK OK:
4. Backup (Signal 1: prior audit found full coverage)
5. Relationship integrity (low change rate, all cascade)
NOT APPLICABLE:
6. CloudKit (no CKRecord mapping code found)
This table determines where time is spent. Domains that touch "GO DEEP" items get deep verification. Domains that only touch "QUICK OK" items can use quick verification (labeled accordingly).
Scan the codebase and identify all data models. This step runs automatically before any audit and is also available standalone via /data-model-radar models.
IMPORTANT: Models can live anywhere under Sources/, not just Sources/Models/. Scan the full tree.
- @Model classes across ALL of Sources/: Grep pattern="@Model" glob="**/*.swift" path="Sources/"
- Core Data model containers: Glob pattern="**/*.xcdatamodeld"
- Relationships: Grep pattern="@Relationship" glob="**/*.swift" path="Sources/"
- Recently changed model files: git log --since="3 months ago" --name-only --diff-filter=M -- "Sources/**/*.swift" (note: ** not just Sources/Models/)

Common locations missed if only checking Sources/Models/:
- Sources/Features/*/ (feature-specific models like ScoutBookmark, ScoutRecentScan)
- Sources/Data/ or Sources/Database/
- Sources/ itself

| Risk | Criteria |
|---|---|
| High | Many fields (>20) + multiple serialization targets + external sync or externalStorage |
| Medium | Moderate fields (10-20) + some serialization |
| Low | Simple model (<10 fields), local only, cache/ephemeral |
Risk boosters (promote a model one level):
- @Attribute(.externalStorage) on any field (delete/sync crash risk)
- Prior findings recorded in .radar-suite/ledger.yaml

Present models grouped by risk level, sorted within each group by field count descending. Include recently-changed indicator and prior finding count.
Found [N] @Model classes. Ranked by audit priority:
HIGH RISK (recommend auditing first)
1. Item — 88 fields, 10 rels, Backup+CSV+CloudKit, changed 3d ago
2. ExtendedWarranty — 43 fields, 1 rel, Backup+CloudKit
3. RMARecord — 41 fields, 1 rel, Backup, changed 2w ago
MEDIUM RISK
4. DonationRecord — 30 fields, 0 rels, Backup+CSV, externalStorage ⚠
5. MaintenanceTask — 28 fields, 1 rel, Backup
6. ScoutBookmark — 25 fields, 0 rels, Backup, externalStorage ⚠
...
LOW RISK (cache/ephemeral, audit only if time permits)
18. AIResponseCache — 14 fields, cache only, externalStorage ⚠
...
[N] models with prior findings: [list RS-NNN IDs if any]
Mark externalStorage models with ⚠ since they have elevated delete/sync crash risk.
/data-model-radar models

When invoked with the models argument, run ONLY Step 0 (discovery) and present the risk-ranked inventory. Do not start any audit. Do not ask setup questions. This is a read-only browse command.
After presenting the inventory, offer:
Options:
1. Audit a specific model — pick from the list above
2. Audit all High Risk models — deep audit of [N] models (~[time])
3. Full audit — all [N] models across 7 domains (~[time])
4. Done — exit without auditing
When the user selects a single-model audit (either from the interactive menu or via /data-model-radar [ModelName]), always run Step 0 first to present the risk-ranked inventory before asking which model. Do NOT ask the user to name a model blind.
Exception: If the user provided a model name directly (e.g., /data-model-radar Item), skip the selection menu and audit that model immediately. The user already knows what they want.
After presenting the inventory, ask:
Which model to deep-audit? (enter number, model name, or "all high")
The AskUserQuestion options should list the top 4 models by risk as selectable options, with "Other" allowing any model name.
For high-risk models, identify fields most likely to have serialization gaps:
git log --since="3 months ago" -p -- Sources/Models/Item.swift | grep "^+.*var " | head -20
Fields added recently are highest risk -- they may not have been added to backup/CSV/CloudKit structs yet. Audit these first in Domain 2.
Audit one model at a time across all 9 domains (1, 1.5, 2, 3, 4, 5, 6, 7, 7.5) plus Domain 8 delegation.
Ask setup questions:
"How should fixes be handled?"
IMPORTANT: Both modes lead to fixes. "Review first" means the user sees the plan before code changes — it does NOT mean "skip fixes and jump to handoff." After presenting findings, ALWAYS offer to fix them regardless of which mode was selected.
enumerate-required

What to check:

- Fields that should be enums but are stored as raw strings (e.g., an "inherited" string where an acquisitionType enum should exist)
- Fields every model has except one (e.g., every model defines cloudSyncID except one)
- Every switch statement over model values — are there default cases hiding missing enum values?
- Optionality that doesn't match semantics (e.g., should warrantyMonths really be non-optional with default 12?)

Output per finding: What field is missing, why it should exist, which code paths would use it.
mixed

Models often embed business logic in computed properties (e.g., effectiveReplacementCostInCents, isNearingEndOfLife, warrantyStatus). A wrong computation is a model-layer bug that affects every view consuming it.
What to check:
- Fallback chains: does effectiveReplacementCostInCents correctly fall back from replacementCostInCents to priceInCents to 0? Does the fallback chain match business intent?
- Fallback defaults: returning 0 for a missing price may be wrong if downstream code interprets 0 as "free" rather than "unknown."
- Embedded thresholds: properties like isReplacementCostStale that use assetAge > 2.0 embed a threshold. Is the threshold reasonable? Is it documented? Could it be a user-configurable setting?
- Date math: properties (assetAge, daysRemaining) that use hardcoded values like 365.25 instead of Calendar APIs. Check for leap year handling, timezone assumptions.
- Currency math: computations over *InCents fields. Verify they handle nil correctly and don't accidentally mix dollars and cents.

Method (Deep):
- Grep for var.*:.*{ (computed properties with getters)

Method (Quick):
- Label results (quick)

Output per finding: The property, what it computes, what's wrong, and a concrete example of incorrect output.
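A hedged sketch of the kinds of computed properties this domain inspects, assuming an Item model that stores replacementCostInCents, priceInCents, and purchaseDate (all names, thresholds, and the fallback order are illustrative, not a real implementation):

```swift
import Foundation

extension Item {
    // Fallback chain: falling back to 0 silently conflates "unknown" with "free",
    // and every view consuming this value inherits that ambiguity.
    var effectiveReplacementCostInCents: Int {
        replacementCostInCents ?? priceInCents ?? 0
    }

    // Hardcoded 365.25 instead of Calendar math: close enough for rough ages,
    // but drifts across leap years and ignores time zones.
    var assetAgeInYears: Double {
        guard let purchaseDate else { return 0 }
        return Date().timeIntervalSince(purchaseDate) / (365.25 * 24 * 60 * 60)
    }

    // Embedded business threshold: is 2.0 years documented anywhere,
    // and should it be user-configurable?
    var isReplacementCostStale: Bool {
        assetAgeInYears > 2.0
    }
}
```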
enumerate-required

MANDATORY CHECKLIST — verify each target explicitly:
For the model being audited, check ALL applicable serialization targets. Do not skip any.
- Backup: read the BackupXxx struct. Diff every model field against backup fields. List gaps.
- CSV export: find the export manager (e.g., CSVExportManager, ExportManager). Read it. List which model fields map to CSV columns and which don't.
- CSV import: find the import manager (e.g., CSVImportManager). Read it. List which CSV columns map back to model fields. The export→import gap is where data loss hides.
- CloudKit: find the sync manager (e.g., CloudSyncManager, SharedZoneSyncManager). List synced fields.

Do not report a target as "covered" without reading the actual code. "Backup looks complete" without reading BackupItem is not verification.
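For illustration, a minimal sketch of the round-trip gap this checklist is designed to catch (Gadget, BackupGadget, and serialNumber are hypothetical names, not from any audited codebase):

```swift
import SwiftData

// The model stores serialNumber, but the backup struct never carries it,
// so a backup -> restore cycle silently loses every serial number.
@Model
final class Gadget {
    var title: String = ""
    var serialNumber: String?
    var priceInCents: Int?
    init(title: String) { self.title = title }
}

struct BackupGadget: Codable {
    var title: String
    var priceInCents: Int?
    // serialNumber is missing: this is the gap the coverage table exposes.

    init(from gadget: Gadget) {
        title = gadget.title
        priceInCents = gadget.priceInCents
    }

    func toGadget() -> Gadget {
        let gadget = Gadget(title: title)
        gadget.priceInCents = priceInCents
        return gadget   // serialNumber comes back nil after restore
    }
}
```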
Method:
- List every stored property in the model (excluding @Transient computed properties)

Not every serialization gap is a bug. Some fields are intentionally excluded from certain targets. Before reporting a gap as a finding, classify it:
| Classification | Meaning | Action | Example |
|---|---|---|---|
| Gap (report) | Field carries user data that would be lost on round-trip | Report as finding | purchasePrice missing from CSV import |
| Intentional: format mismatch | Field type doesn't fit the target format | Document, don't report | priceHistoryData (JSON blob) excluded from CSV |
| Intentional: internal metadata | Field is sync/system infrastructure, not user data | Document, don't report | cloudSyncID, cloudKitRecordID excluded from CSV |
| Intentional: relationship data | Nested objects that need their own serialization | Document, note if the child model IS serialized | attachments excluded from CSV (binary data) |
| Intentional: scope boundary | Target intentionally covers a subset | Document, don't report | SharedZone CloudKit syncs 22 fields (household subset) |
How to classify:
Output: In the Domain 2 coverage table, add an Exclusion column for fields not in a target:
| Field | Model | Backup | CSV Export | CSV Import | CloudKit | Exclusion Reason |
|-------|:-----:|:------:|:---------:|:----------:|:--------:|-----------------|
| cloudSyncID | yes | yes | no | no | yes | Internal metadata |
| priceHistoryData | yes | yes | no | no | yes | Format mismatch (JSON blob) |
| attachments | yes | yes | no | no | yes | Relationship data (binary) |
Grade impact: Intentional exclusions do NOT lower the domain grade. Only unclassified gaps or gaps classified as "report" lower the grade.
Output: Side-by-side coverage table:
| Field | Model | Backup | CSV Export | CSV Import | CloudKit |
|-------|:-----:|:------:|:---------:|:----------:|:--------:|
| title | yes | yes | yes | yes | yes |
| cloudSyncID | yes | yes | no | no | yes |
| documents | yes | no | no | no | yes |
Mark fields not checked with ? instead of yes/no. The table must be honest about what was verified.
Before grading serialization coverage, produce this pre-populated table. Read the model file first to get all stored properties, then fill in each cell by reading the actual serialization code.
| Field | Model | Backup | CSV Export | CSV Import | CloudKit | Receipt |
|-------|:-----:|:------:|:---------:|:----------:|:--------:|---------|
| [field1] | yes | ? | ? | ? | ? | (fill after reading each target) |
| [field2] | yes | ? | ? | ? | ? | |
Rules:
- yes = confirmed by reading the code (include file:line in Receipt column)
- no = confirmed absent by reading the code
- ? = not yet checked
- Never grade with a ? still present for a target you claimed to check
- If a target was skipped, say not checked — don't fill in ? and then grade as if you checked it

mixed

What to check:
- Every @Relationship has a correct deleteRule (.cascade, .nullify, .deny, .noAction)
- Every @Relationship has an inverse specified
- Child back-references are optional with a nil default (e.g., item: Item? = nil)
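The checklist above corresponds to declarations of roughly this shape (Album, Photo, and their fields are illustrative, not from a real codebase):

```swift
import SwiftData

@Model
final class Album {
    var name: String = ""
    // Explicit delete rule and explicit inverse, declared on one side only.
    @Relationship(deleteRule: .cascade, inverse: \Photo.album)
    var photos: [Photo] = []
    init(name: String) { self.name = name }
}

@Model
final class Photo {
    var caption: String = ""
    var album: Album? = nil   // optional back-reference, nil by default
    init(caption: String) { self.caption = caption }
}
```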
Method:

- Grep and read every @Relationship declaration

mixed

The problem: A manager or service class creates its own ModelContext (via makeContext(), ModelContext(container), or similar), then receives @Model objects as parameters and mutates them or sets relationships on them. SwiftData forbids relating or mutating objects across different contexts. This crashes at runtime with "attempting to relate model with model context to destination model from destination's model context."
Why static audits miss it: Every line of code is valid Swift. The bug is the relationship between where the object was created (caller's context) and where the mutation happens (method's own context). No single file is wrong in isolation.
How to find them:
Grep pattern="ModelContext\(|makeContext\(\)" glob="**/*.swift" output_mode="files_with_matches"
For each file that creates its own context, check every method that:
- Receives @Model parameters (any @Model type as a function argument)
- Mutates those parameters or sets relationships on them

False positive filter: Methods that re-fetch the passed-in object by persistentModelID before mutation are safe. Example:
let localObject = context.model(for: passedObject.persistentModelID) as? MyModel
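A fuller sketch of the unsafe-vs-safe shape, assuming a hypothetical ThumbnailService and a thumbnailGenerated field that exist only for illustration:

```swift
import SwiftData

final class ThumbnailService {
    let container: ModelContainer
    init(container: ModelContainer) { self.container = container }

    // BOMB: mutates an object owned by the caller's context
    // inside a context this method created itself.
    func markUnsafe(_ item: Item) throws {
        let context = ModelContext(container)
        item.thumbnailGenerated = true      // cross-context mutation, crash risk
        try context.save()
    }

    // Safe: re-fetch the same row by persistentModelID into the local
    // context, then mutate the local copy.
    func markSafe(_ item: Item) throws {
        let context = ModelContext(container)
        guard let local = context.model(for: item.persistentModelID) as? Item else { return }
        local.thumbnailGenerated = true
        try context.save()
    }
}
```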
Classification:
| Behavior | Rating |
|---|---|
| Receives @Model param, creates own context, mutates param directly | BOMB (crash) |
| Receives @Model param, creates own context, re-fetches by ID before mutation | Safe |
| Receives @Model param, uses caller's context (passed as parameter) | Safe |
| Creates own context, only reads/queries (no mutation of passed-in objects) | Safe |
mixed

The problem: A manager method saves changes in its own ModelContext, but the caller passed in an @Model object and expects it to reflect the saved changes. The caller's copy belongs to a different context and won't see the update. The UI shows stale data until the user navigates away and back. Not a crash, but frequently reported as "my data disappeared" or "save didn't work."
How to find them:
Same grep as 3a. For each method that creates its own context and receives @Model parameters, check if:

- The method calls save() on its own context
- The caller keeps using the passed-in object afterwards, expecting it to reflect the saved changes
Classification:
| Behavior | Rating |
|---|---|
| Saves in own context, caller assumes passed-in object is updated | Risky (stale UI) |
| Saves in own context, caller re-fetches after call returns | Safe |
| Uses caller's context for save | Safe |
enumerate-required

What to check:
- For every Int? field, is there UI or documentation that distinguishes "not set" from "zero"? (e.g., priceInCents: Int? — nil = not entered, 0 = free. But does the UI show this difference?)
- Fields carrying multiple meanings (e.g., priceInCents used for both new and used purchases — should there be separate fields?)
- Boolean flags with ambiguous defaults (e.g., isEstimatedPrice — does false mean "confirmed exact" or "user never set this flag"?)
- Raw-string enum storage (e.g., disposalMethodRaw: String — is the enum complete? Are there raw strings in the codebase that don't match any enum case?)
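One way the nil-vs-zero distinction can surface in display code; a sketch with an assumed priceInCents field:

```swift
// Hypothetical detail-row formatter: the model alone cannot tell the UI
// whether a missing price means "unknown" or "free"; the view must decide.
func priceLabel(priceInCents: Int?) -> String {
    switch priceInCents {
    case .none:
        return "Not entered"   // nil: the user never set a price
    case .some(0):
        return "Free"          // zero: the user explicitly entered 0
    case .some(let cents):
        return String(format: "$%.2f", Double(cents) / 100)
    }
}
```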
mixed

What to check:

- Classify each consumer as form (read to populate edit state), detail (read for display in a read-only view), serialization (backup/CSV/CloudKit), or compute (consumed by a computed property). A field with form + serialization consumers but zero detail consumers is a form-only field.

Method (Deep — required for High-risk models):
For models with >20 fields, use a stratified sampling strategy instead of grepping all fields:
- Currency fields (*InCents, *Price*, *Value*, *Cost*) — money = high risk
- Optional fields with nil defaults (var x: Type? = nil) — more likely to be dead than required fields
- Raw-value fields (*Raw) — enum storage, check if enum is used anywhere

For each sampled field:
Grep pattern="\.fieldName" glob="**/*.swift" path="Sources/" output_mode="files_with_matches"
Flag: 0 read hits in Sources/ (excluding the model file itself and Tests/) = dead field candidate.
MANDATORY verification before reporting any dead field:
- Re-read the model file and confirm the declaration var fieldName exists. If the field doesn't exist in the model, it was hallucinated. Do NOT report hallucinated fields.
- Grep extension ModelName across Sources/ and read any matches. A field may appear "dead" because its consumer is in Model+Extension.swift, not the main model file. See Extension Discovery below.

Only after all 4 checks pass with 0 consumers should a field be reported as dead.
Models often split logic across multiple files via extensions (e.g., Item+Warranty.swift, Item+Maintenance.swift). A field consumer in an extension file is invisible if only the main model file is read.
Before running field usage grep for any model, discover all extension files:
Grep pattern="extension ModelName" glob="**/*.swift" path="Sources/" output_mode="files_with_matches"
Also check for naming convention files:
Glob pattern="**/ModelName+*.swift" path="Sources/"
Include all discovered extension files in the "exclude self" list when counting consumer files. A field referenced only in the model file AND its extensions is not dead if the extension provides computed properties consumed by views.
Example: Item.swift has extensions in Item+Warranty.swift, Item+Maintenance.swift, Item+Extensions.swift, Item+PriceWatch.swift, Item+Rating.swift, Item+Timeline.swift, Item+Accessible.swift. A field like warrantyMonths may only appear in Item.swift and Item+Warranty.swift, but Item+Warranty.swift provides computed properties like expirationDate that ARE consumed by views. The field is not dead.
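A compact illustration of that scenario (the property names mirror the example above, but the code itself is hypothetical):

```swift
import Foundation

// warrantyMonths looks "dead" when only Item.swift is grepped,
// because its only consumer lives in Item+Warranty.swift.
extension Item {
    // Item.swift declares: var purchaseDate: Date?  and  var warrantyMonths: Int?
    var warrantyExpirationDate: Date? {
        guard let purchaseDate, let warrantyMonths else { return nil }
        return Calendar.current.date(byAdding: .month, value: warrantyMonths, to: purchaseDate)
    }
}
// Any detail view that shows warrantyExpirationDate is an indirect consumer
// of warrantyMonths, so the field is not dead.
```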
Method (Quick — for Low-risk models):
- Label results (quick) — not fully verified

enumerate-required

What to check (Deep — read the actual schema files):
- Locate schema files: Glob pattern="**/*Schema*.swift" or "**/*Migration*.swift"
- Does a SchemaMigrationPlan exist with proper stage ordering?
- Is the schema as fresh as the models? Compare git log -1 --format="%ai" -- Sources/Models/Item.swift vs git log -1 --format="%ai" -- Sources/Models/AppSchema.swift
- @Attribute modifiers that affect storage (.externalStorage, .unique) — are these in the migration plan?

Do not say "migration infrastructure exists" without reading the schema file. That's the same as saying "backup exists" without reading BackupItem.
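For reference, a minimal sketch of what "migration infrastructure exists" should actually look like when the schema files are read (schema names, versions, and the single lightweight stage are illustrative):

```swift
import SwiftData

enum AppSchemaV1: VersionedSchema {
    static var versionIdentifier = Schema.Version(1, 0, 0)
    static var models: [any PersistentModel.Type] { [Item.self] }
}

enum AppSchemaV2: VersionedSchema {
    static var versionIdentifier = Schema.Version(2, 0, 0)
    static var models: [any PersistentModel.Type] { [Item.self] }
}

enum AppMigrationPlan: SchemaMigrationPlan {
    static var schemas: [any VersionedSchema.Type] {
        [AppSchemaV1.self, AppSchemaV2.self]   // stage ordering matters
    }
    static var stages: [MigrationStage] {
        [.lightweight(fromVersion: AppSchemaV1.self, toVersion: AppSchemaV2.self)]
    }
}
```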
enumerate-required

Minimum model requirement: This domain requires reading at least 3 models to be meaningful. When auditing a single model, this domain outputs one of:
- A note that the domain was skipped (too few models to compare), or
- A grade labeled (partial — N models compared).

What to check:
- Identifier strategy: some models use cloudSyncID, some use persistentModelID.hashValue, some use UUID — should be consistent
- Naming conventions: timestamp, createdAt, date — same concept, different names across models?
- Currency representation: priceInCents vs deductibleInCents vs fairMarketValue (not in cents?) — consistent across models?
- Shared conformances: do models that should be Sendable all conform? Do models with cloudSyncID all use it the same way?

mixed

Detect models that share a high percentage of fields and methods, indicating they should be consolidated via a shared protocol or base pattern.
When to run:
How to detect:
For each pair of models (A, B), compute field overlap:
shared_fields = fields_in_A ∩ fields_in_B (by name AND type)
overlap_ratio = shared_fields / min(fields_in_A, fields_in_B)
Flag pairs with overlap_ratio >= 0.70 (70%+ shared fields)
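A small sketch of the overlap computation, assuming each model's stored properties have already been extracted as "name: Type" strings (the sample sets are hypothetical):

```swift
func overlapRatio(_ fieldsA: Set<String>, _ fieldsB: Set<String>) -> Double {
    let shared = fieldsA.intersection(fieldsB)         // match by name AND type
    let smaller = min(fieldsA.count, fieldsB.count)
    return smaller == 0 ? 0 : Double(shared.count) / Double(smaller)
}

let bookmark: Set = ["title: String", "url: URL", "thumbnailData: Data?"]
let recentScan: Set = ["title: String", "url: URL", "scannedAt: Date"]
let ratio = overlapRatio(bookmark, recentScan)         // 2/3 ≈ 0.67; flag pairs at >= 0.70
```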
For flagged pairs, also check method duplication:
- Diff the func declarations of both models

What to look for beyond field overlap:
- Parallel initializers like init(from session:) that copy the same fields
- Helper functions like generateThumbnail() copy-pasted between the two models
- Conversion methods like modelA.toModelB() that manually copy every field (fragile, breaks when fields are added to one but not the other)

Classification:
| Overlap | Recommendation |
|---|---|
| 90%+ fields, 50%+ methods | Extract shared protocol with default implementations |
| 70-89% fields | Consider shared protocol for common fields; document why they diverge |
| 50-69% fields | Note the similarity; likely intentional domain separation |
| <50% fields | Not duplicates, skip |
Output per finding: The two models, overlap percentage, list of shared fields, list of duplicated methods, and recommended consolidation approach.
Example finding:
ScoutBookmark and ScoutRecentScan share 90% of fields (20/22) and 6 identical
computed properties. Both have duplicated init(from:), toSession(), and
generateThumbnail() methods. Recommend extracting ScoutIdentifiable protocol.
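A hedged sketch of the consolidation such a finding would recommend (the protocol and property names follow the example above but are illustrative, not an existing implementation):

```swift
import Foundation

protocol ScoutIdentifiable {
    var title: String { get }
    var url: URL { get }
    var thumbnailData: Data? { get }
}

extension ScoutIdentifiable {
    // Shared default implementation instead of copy-pasted helpers.
    var displayTitle: String { title.isEmpty ? (url.host ?? "Untitled") : title }
}

// ScoutBookmark and ScoutRecentScan then conform instead of duplicating fields and methods:
// extension ScoutBookmark: ScoutIdentifiable {}
// extension ScoutRecentScan: ScoutIdentifiable {}
```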
Delegated to /time-bomb-radar -- a standalone skill in the radar-suite family.
When running a full data-model-radar audit, invoke /time-bomb-radar after completing Domains 1-7. Its findings feed into the data-model-radar handoff under for_capstone_radar.blockers[].
When running data-model-radar standalone (not as part of a full suite run), note in the handoff that Domain 8 was not run and recommend /time-bomb-radar as a follow-up.
Every domain produces a letter grade. Use this rubric so grades are consistent across sessions, models, and auditors.
| Grade | Score | Meaning |
|---|---|---|
| A | 95-100 | No findings. All checks verified with evidence. |
| A- | 90-94 | Minor observations only (hygiene, documentation). No functional gaps. |
| B+ | 85-89 | 1-2 LOW findings. No data loss or crash risk. |
| B | 80-84 | 1 MEDIUM finding or 3+ LOW findings. |
| B- | 75-79 | 2-3 MEDIUM findings. Functional gaps exist but workarounds available. |
| C+ | 70-74 | 1 HIGH finding or 4+ MEDIUM findings. |
| C | 65-69 | 2+ HIGH findings. Significant gaps affecting data integrity. |
| C- | 60-64 | Multiple HIGH findings with user-visible impact. |
| D | 50-59 | CRITICAL finding present. Ship blocker. |
| F | <50 | Multiple CRITICAL findings or fundamental design flaw. |
Domain 1 (Field Completeness):
Domain 1.5 (Computed Property Correctness):
Domain 2 (Serialization Coverage):
Domain 3 (Relationship Integrity):
Domain 4 (Semantic Clarity):
Domain 5 (Field Usage Mapping):
Domain 6 (Migration Safety):
Domain 7 (Cross-Model Consistency):
Domain 7.5 (Near-Duplicate Detection):
Every grade must include an Evidence block:
**Evidence:** Read [files read with line ranges]. Verified [what was checked].
[What was NOT checked, if anything].
**Confidence:** [verified | quick | partial]
A grade without evidence is not a grade. It's a guess. Downgrade to needs-verification if evidence is absent.
The overall model grade is the weighted average of domain grades:
| Domain | Weight | Rationale |
|---|---|---|
| 1. Field Completeness | 10% | Foundation, but gaps are usually LOW severity |
| 1.5 Computed Properties | 10% | Business logic correctness |
| 2. Serialization Coverage | 25% | Data loss is the highest-impact model bug |
| 3. Relationship Integrity | 15% | Cascade/orphan bugs cause crashes |
| 4. Semantic Clarity | 5% | Important but rarely causes data loss |
| 5. Field Usage Mapping | 10% | Dead fields are hygiene, not crashes |
| 6. Migration Safety | 15% | Wrong migration = app won't launch |
| 7. Cross-Model Consistency | 5% | Consistency aids maintenance |
| 7.5 Near-Duplicate Detection | 5% | Code hygiene |
Convert letter grades to numeric (A=97, A-=92, B+=87, B=82, B-=77, C+=72, C=67, C-=62, D=55, F=40), compute weighted average, convert back to letter.
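A sketch of that roll-up, using the grade points and weights from the tables above (the example grades at the end are hypothetical):

```swift
let gradePoints: [String: Double] = [
    "A": 97, "A-": 92, "B+": 87, "B": 82, "B-": 77,
    "C+": 72, "C": 67, "C-": 62, "D": 55, "F": 40,
]

// Domain weights from the table above (sum to 1.0).
let weights: [(domain: String, weight: Double)] = [
    ("1", 0.10), ("1.5", 0.10), ("2", 0.25), ("3", 0.15), ("4", 0.05),
    ("5", 0.10), ("6", 0.15), ("7", 0.05), ("7.5", 0.05),
]

func overallScore(domainGrades: [String: String]) -> Double {
    weights.reduce(0) { total, entry in
        let letter = domainGrades[entry.domain] ?? "F"
        return total + (gradePoints[letter] ?? 40) * entry.weight
    }
}

// Example: every domain at B+ except Domain 2 at C yields
// 87 * 0.75 + 67 * 0.25 = 82.0, which converts back to B.
```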
For every finding, use this table format sorted by Urgency (descending), then ROI:
| # | Finding | Confidence | Urgency | Risk: Fix | Risk: No Fix | ROI | Blast Radius | Fix Effort | Status |
|---|---|---|---|---|---|---|---|---|---|
| Column | Meaning |
|---|---|
| Confidence | verified (code read + confirmed), probable (structural analysis, not independently verified), needs-runtime (requires running the app to confirm) |
| Urgency | CRITICAL / HIGH / MEDIUM / LOW |
| Risk: Fix | What could break when making the change |
| Risk: No Fix | Cost of leaving it — data loss, crash, user-visible bug |
| ROI | Excellent / Good / Marginal / Poor |
| Blast Radius | Number of files the fix touches (e.g., 3 files, 1 file). Count by grepping for callers/references before rating. |
| Fix Effort | Trivial / Small / Medium / Large |
| Status | Fixed / Documented / Deferred (reason) |
When creating findings, populate these optional fields where relationships are obvious:
- depends_on / enables: If one finding must be fixed before another (e.g., "add Codable conformance" enables "serialize to JSON backup"), populate these fields with finding IDs. Common in data-model-radar: structural changes (new field, new enum case) enable serialization/UI fixes.
- pattern_fingerprint / grep_pattern / exclusion_pattern: If the anti-pattern is generalizable, assign a fingerprint so it can be detected if reintroduced elsewhere. Example: missing_backup_field with grep_pattern: "case .backup" and exclusion checking for the field name.

Group findings into:
1. Safe fixes (isolated, low blast radius)
2. Cross-cutting fixes (touch shared code)
3. Schema changes (require migration): changes that add/modify/remove model fields. These need:
| # | Finding | Migration Required | Backward Compatible | Fix Effort |
|---|---|---|---|---|
4. Shared utility extraction: when multiple models duplicate the same pattern, extract to a shared protocol or utility.
5. Design decisions (need user input)
6. Deferred
Apply fixes in waves with progress tracking.
Scale the number of waves to finding severity:
| Wave | Section | Est. Time | Description |
|---|---|---|---|
| 1 | Safe fixes + tests | ~10-15 min | No schema changes, isolated. Write tests for each fix. |
| 2 | Cross-cutting fixes + tests | ~15-25 min | Touch shared code. Write tests for each fix. |
| 3 | Schema changes + tests | ~20-35 min | Model + migration + backup + tests on real data. |
| 4 | Pattern Sweep | ~5 min | Scan codebase for patterns found. |
| 5 | Build + Test + Commit | ~5 min | Build both platforms, run tests, stage, commit. |
Every fix must have a test. Do not move to the next wave until tests for the current wave's fixes are written and compiling. The test verifies the fix works; without it, the fix is unverified code.
After EVERY wave and EVERY commit, your NEXT output MUST be the progress banner followed by the next-wave AskUserQuestion. Do not output anything else first. Do not leave a blank prompt.
After completing each wave, always print:
Step [N] of [total] complete: [plain description of what was done]
[X] issues fixed, [Y] remaining, [Z] deferred
Next: [plain description of what comes next] (~[time estimate])
Then immediately ask using AskUserQuestion:
After all waves for a model, print scorecard and prompt for next model:
Progress: [N] of [total] models checked | [X] issues fixed | [Y] deferred
Just finished: [model name] — [one-line summary of what was found]
Next up: [model name] (~[time estimate])
Then ask: "Ready to check the next model?" with options including "Explain more" — what this model is and why it matters.
When running inside a Tier 2 or Tier 3 pipeline (detected via tier field in .radar-suite/session-prefs.yaml):
- Emit the pipeline banner per radar-suite-core.md Pipeline UX Enhancements #1. If this is the first skill in the pipeline OR experience_level is Beginner/Intermediate, also emit the audit-only statement.

Every finding MUST include a short_title field (max 8 words). This is the human-scannable label used in pipeline banners, pre-capstone summaries, and ledger output.
Example: short_title: "CSV export drops Room column"
All finding ID references in output (tables, banners, summaries) use the format: RS-NNN (short_title).
Data Model Radar is the foundation layer of the radar family. Run it first — its findings feed every other skill.
After auditing models, write .agents/ui-audit/data-model-radar-handoff.yaml:
source: data-model-radar
date: <ISO 8601>
project: <project name>
models_audited: <count>
audit_depth: <full | partial | quick>
domains_verified: [1, 1.5, 2, 3, 4, 5, 6, 7, 7.5]
domains_at_quick_depth: []
domains_skipped: []

# File timestamps — enables staleness detection by consuming skills
file_timestamps:
  <file path>: "<ISO 8601 mod date>"
  # one entry per unique file referenced in issues

for_roundtrip_radar:
  # Serialization gaps = workflow-specific data loss
  suspects:
    - workflow: "<affected workflow>"
      finding: "<e.g., DocumentAttachment not in BackupItem>"
      model: "<model name>"
      field: "<field name>"
      group_hint: "<optional, e.g. 'backup_gaps', 'csv_gaps', 'cloudkit_gaps'>"

for_ui_path_radar:
  # Dead fields may indicate dead UI paths
  suspects:
    - view: "<view that might reference this field>"
      finding: "<e.g., field exists but no UI reads it>"
      group_hint: "<optional batching suggestion>"
  # Form-only fields: user enters data that vanishes from detail view
  form_only_fields:
    - model: "<model name>"
      field: "<field name>"
      form_view: "<form that edits this field>"
      detail_view: "<expected detail view that should display it>"
      group_hint: "form_to_detail_gap"

for_ui_enhancer_radar:
  # Semantic ambiguity affects how fields should be displayed
  suspects:
    - view: "<view displaying this field>"
      finding: "<e.g., nil vs 0 not distinguished in UI>"
      group_hint: "<optional batching suggestion>"

for_capstone_radar:
  # Model-layer blockers
  blockers:
    - finding: "<description>"
      urgency: "<CRITICAL|HIGH>"
      domain: "Data Safety"
      group_hint: "<optional batching suggestion>"

serialization_coverage:
  # Summary table for roundtrip-radar to reference
  # Only include targets that were ACTUALLY READ AND VERIFIED
  - model: "Item"
    total_fields: 47
    backup_coverage: 40
    backup_verified: true
    csv_export_coverage: 27
    csv_export_verified: true
    csv_import_coverage: 11
    csv_import_verified: true
    cloudkit_verified: false
    missing_from_backup: ["field1", "field2"]
    missing_from_csv_export: ["field3", "field4"]
For each unique file path referenced across all issues, record its modification date:
stat -f "%Sm" -t "%Y-%m-%dT%H:%M:%SZ" "<file path>"
Enables consuming skills to detect staleness — if a model file changed after the audit, serialization coverage data may be stale.
Optional field for batching related issues. Common hints:
- backup_gaps — fields missing from backup
- csv_gaps — fields missing from CSV export/import
- cloudkit_gaps — fields missing from CloudKit sync
- dead_fields — model fields with no UI reads
- semantic_ambiguity — nil vs zero not distinguished

Honesty rule: The handoff must distinguish "verified clean" from "not checked." Use _verified: true/false for each serialization target. Capstone-radar uses these flags to determine how much credit to give.
Per the Artifact Lifecycle rules in radar-suite-core.md, before returning from this skill:
- Keep generated artifacts inside .radar-suite/ (and .agents/ui-audit/ or equivalent if used).
- Move superseded prose artifacts (RESUME_PHASE_*.md, RESUME_*.md except NEXT_STEPS.md, *-v[0-9]*.md) to .radar-suite/archive/superseded/.
- Durable state files (ledger.yaml, session-prefs.yaml) are in-place rewrites — not dated or versioned.

This prevents .radar-suite/ from accumulating stale prose artifacts across runs.
After writing the handoff YAML, also write findings to .radar-suite/ledger.yaml following the Ledger Write Rules in radar-suite-core.md:
- For each finding, set impact_category and compute file_hash

Impact category mapping for data-model-radar findings:
- Serialization gaps → data-loss
- Relationship integrity → crash (if cascade to external storage) or data-loss
- Cross-context mutation → crash
- Semantic ambiguity → ux-degraded
- Dead fields → hygiene
- Migration safety → crash (if breaking) or data-loss
- Cross-model consistency / near-duplicates → hygiene or ux-degraded

Before starting Step 1, read the unified ledger and ALL companion handoff YAMLs:
Read .radar-suite/ledger.yaml (if exists) — check for existing findings to avoid duplicates
Read .agents/ui-audit/roundtrip-radar-handoff.yaml (if exists)
Read .agents/ui-audit/ui-path-radar-handoff.yaml (if exists)
Read .agents/ui-audit/ui-enhancer-radar-handoff.yaml (if exists)
Read .agents/ui-audit/capstone-radar-handoff.yaml (if exists)
Ledger check: If the ledger contains findings for files you're about to audit, note their RS-NNN IDs. When you find the same issue, update the existing finding instead of creating a new one.
Regression check: For any fixed findings in the ledger whose file_hash no longer matches the current file, flag for re-verification per the Regression Detection protocol in radar-suite-core.md.
Parse for_data_model_radar sections. Each companion can direct findings to this skill. Look for:
- for_data_model_radar.suspects[] — models or fields another skill flagged as potentially wrong
- for_data_model_radar.priority_models[] — models another skill wants audited first

If found, incorporate as priority targets in the model audit order. These are not pre-confirmed findings — verify each one independently. If not found, proceed normally.
What each companion provides:
Automatic: This file is always written so other audit skills can pick up where this one left off. No user action needed.
After auditing all high-risk models, identify patterns across models:
Every domain grade MUST cite what was actually done. Use this format after each domain:
**Evidence:** Read BackupItem (BackupManager.swift:32-340). Diffed 55 model fields against 55 backup fields.
Verified toItem() restore at line 342-546. CSV export NOT checked (CSVExportManager not read).
**Confidence:** verified (backup), not-checked (CSV, CloudKit)
A grade without evidence is not a grade — it's a guess.
After every step: print progress banner → AskUserQuestion → never blank prompt.
Anti-shortcut: Domain 2 (Serialization) and Domain 5 (Field Usage) require actual grep verification. No "looks complete" without evidence.