From radar-suite
Detects time bombs: deferred operations that pass every test but crash on aged data -- covering cascade deletes, cache expiry, trial paths, background accumulation, date transitions, and scheduled side effects.
npx claudepluginhub terryc21/radar-suite --plugin radar-suite
> Finds code that works today but crashes after the data gets old enough.
Time bombs are deferred operations that pass every test, every code review, every pattern matcher, then crash your app weeks or months after release. The trigger is data age + environment state, not code paths. They produce 1-star reviews from your most loyal users -- the ones who kept the app long enough for the timer to fire.
Origin: A production-class crash where SafeDeletionManager archived items for 30 days, then cascade-deleted them, triggering a SwiftData _FullFutureBackingData fatal error on unresolved iCloud .externalStorage faults. The bug was invisible during development because no test data was 30 days old. If shipped, every user would have crashed on day 31.
| Command | What it does |
|---|---|
| `/time-bomb-radar` | Full audit across all 7 patterns |
| `/time-bomb-radar deferred-deletes` | Pattern 1 only -- cascade deletes on aged data |
| `/time-bomb-radar cache-expiry` | Pattern 2 only -- cache purge with model relationships |
| `/time-bomb-radar trial-expiry` | Pattern 3 only -- subscription/trial expiry paths |
| `/time-bomb-radar background-tasks` | Pattern 4 only -- accumulated background work |
| `/time-bomb-radar date-transitions` | Pattern 5 only -- date-threshold state changes |
| `/time-bomb-radar scheduled-side-effects` | Pattern 6 only -- notifications/reminders scheduled from aged data |
| `--show-suppressed` | Show findings suppressed by known-intentional entries |
| `--accept-intentional` | Mark current finding as known-intentional (not a bug) |
These concepts appear throughout all the patterns below. Understanding them makes the patterns easier to apply regardless of framework.
Most ORMs don't load related objects until you access them. A User object with 50 photos doesn't load those photos into memory just because you fetched the user. Instead, the photos are represented as faults -- lightweight placeholders that get filled in when you access them.
This is efficient for normal use. It becomes dangerous when the backing data is no longer available at resolution time:
In SwiftData: Faults are _FullFutureBackingData<T> objects. Accessing them triggers resolution. If resolution fails (data not available), it's a fatalError -- not a throwing error. You cannot catch it.
In Core Data: Faults are NSManagedObject subclasses with isFault == true. Accessing a property triggers resolution. If the store is unavailable, you get NSObjectInaccessibleException.
In Django/SQLAlchemy/ActiveRecord: Lazy-loaded relationships raise database errors if the connection is lost or the row was deleted. The ORM equivalent of "this object doesn't exist anymore."
In any ORM with cloud sync: The object exists in the schema but the data hasn't been synced to this device. The fault resolution goes to the network, which may be unavailable.
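A minimal framework-neutral sketch of the fault mechanism (the `Fault` class and `FaultResolutionError` are hypothetical, not any real ORM's API): resolution works on fresh data, and fails only once the backing row has disappeared -- which is exactly why tests on young data never catch it.

```python
class FaultResolutionError(RuntimeError):
    """Raised when a fault's backing data is no longer available."""

class Fault:
    """Lightweight placeholder for a related object, filled in on first access."""
    def __init__(self, store, row_id):
        self._store = store          # backing store: here, a plain dict
        self._row_id = row_id
        self._resolved = None

    def resolve(self):
        if self._resolved is None:
            if self._row_id not in self._store:
                # SwiftData's analogue of this is a fatalError you cannot catch.
                raise FaultResolutionError(f"row {self._row_id} unavailable")
            self._resolved = self._store[self._row_id]
        return self._resolved

store = {1: {"filename": "photo1.jpg"}}
photo = Fault(store, 1)
assert photo.resolve()["filename"] == "photo1.jpg"   # fresh data: resolves fine

del store[1]               # data pruned (sync gap, or deleted on another device)
stale = Fault(store, 1)
try:
    stale.resolve()
except FaultResolutionError:
    pass                   # the "works today, crashes later" failure mode
```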
When you delete a parent object, the ORM can automatically delete its children. This is configured via delete rules (.cascade in SwiftData/Core Data, on_delete=CASCADE in Django, dependent: :destroy in Rails).
The problem: cascade deletion forces the ORM to find and visit every child before deleting them. If any child is a fault whose data isn't locally available, the visit fails.
Object-level cascade delete: ORM loads each child into memory, snapshots it for change tracking, then deletes it. Triggers fault resolution. Dangerous on aged data.
Batch/SQL-level delete: ORM issues DELETE FROM children WHERE parent_id = ? directly. Never loads objects. Never triggers faults. Safe on aged data.
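The two delete modes can be demonstrated with stdlib `sqlite3` (all table names and the `load_blob` stand-in for fault resolution are illustrative): the object-level loop must touch every child's external data and fails on the unsynced one, while the batch delete never loads a child at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parents  (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE children (id INTEGER PRIMARY KEY, parent_id INTEGER,
                           blob_path TEXT);
    INSERT INTO parents  VALUES (1, 'archived');
    INSERT INTO children VALUES (10, 1, 'blobs/10.bin'), (11, 1, 'blobs/11.bin');
""")

external_files = {"blobs/10.bin"}   # blobs/11.bin never synced to this device

def load_blob(path):
    """Stand-in for fault resolution of externally stored data."""
    if path not in external_files:
        raise RuntimeError(f"unresolved external data: {path}")

def object_level_delete(conn):
    # Materializes every child -- resolving its external data -- before deleting.
    rows = conn.execute(
        "SELECT id, blob_path FROM children WHERE parent_id = 1").fetchall()
    for cid, path in rows:
        load_blob(path)             # crashes here on aged/unsynced data
        conn.execute("DELETE FROM children WHERE id = ?", (cid,))

def batch_delete(conn):
    # Single SQL statement: no objects loaded, no fault resolution possible.
    conn.execute("DELETE FROM children WHERE parent_id = 1")
    conn.execute("DELETE FROM parents WHERE id = 1")

try:
    object_level_delete(conn)
    crashed = False
except RuntimeError:
    crashed = True
assert crashed

batch_delete(conn)                  # safe on the same aged data
assert conn.execute("SELECT COUNT(*) FROM children").fetchone()[0] == 0
```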
Some ORMs store large binary data (photos, PDFs, audio) outside the main database file. SwiftData uses .externalStorage to put Data properties on disk instead of inline in SQLite. Core Data has "Allows External Storage" in the model editor. Other frameworks use file references.
External storage is the highest-risk target for time bombs: the database row can outlive the externally stored file it points to (cloud sync may never have delivered the file, or it was pruned), and resolving that gap is a hard failure rather than a recoverable error.
To catch a time bomb manually, you'd need to: create data, archive it, set your device clock forward 30-90 days, disconnect from the network, and relaunch. Nobody does this.
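Automating that check is easier than time-travelling a device: inject the clock instead of reading it. A minimal sketch (names like `purge_expired` and the 30-day constant are illustrative) showing a purge routine that a test can run at "day 29" and "day 31" without touching any real clock:

```python
from datetime import datetime, timedelta

ARCHIVE_RETENTION_DAYS = 30

def purge_expired(archived, now):
    """Return items due for permanent deletion: archived > 30 days ago.

    `now` is injected so a test can time-travel instead of resetting a device.
    """
    cutoff = now - timedelta(days=ARCHIVE_RETENTION_DAYS)
    return [item for item in archived if item["archived_at"] < cutoff]

t0 = datetime(2024, 1, 1)
archived = [{"id": 1, "archived_at": t0}]

assert purge_expired(archived, t0 + timedelta(days=29)) == []        # day 29: quiet
assert purge_expired(archived, t0 + timedelta(days=31)) == archived  # day 31: fires
```

The day-31 case is the one no real test data exercises; with an injected clock it runs in CI on every commit.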
Present to the user based on experience level:
User impact explanations: Can be toggled at any time with --explain / --no-explain. When enabled, each finding gets a 3-line companion explanation (what's wrong, fix, user experience before/after). See the shared rating system doc for format and rules. Store as EXPLAIN_FINDINGS (default: false).
Experience-level auto-apply: If USER_EXPERIENCE = Beginner, auto-set EXPLAIN_FINDINGS = true and default sort to impact. If Senior/Expert, default sort to effort. Apply all output rules from Experience-Level Output Rules table in radar-suite-core.md.
Known-intentional check: Read .radar-suite/known-intentional.yaml (if exists). Store as KNOWN_INTENTIONAL. Before presenting any finding during the audit, check it against these entries. If file + pattern match, skip silently and increment intentional_suppressed counter.
Pattern reintroduction check: Read .radar-suite/ledger.yaml for status: fixed findings with pattern_fingerprint and grep_pattern. For each, grep the codebase. If the pattern appears in a new file without the exclusion_pattern, report as "Reintroduced pattern" at 🟡 HIGH urgency.
Before checking individual patterns, collect baseline information:
- Any `.externalStorage` attributes or large binary data stored outside the main database?

Grep pattern="@Model|NSManagedObject|@Table" glob="**/*.swift" output_mode="files_with_matches"
Grep pattern="\.externalStorage|Allows External Storage" glob="**/*.swift" output_mode="content"
Grep pattern="cloudKit|CKContainer|iCloud|FirebaseFirestore" glob="**/*.swift" output_mode="files_with_matches"
Grep pattern="StoreKit|SubscriptionManager|TrialManager|RevenueCat" glob="**/*.swift" output_mode="files_with_matches"
Grep pattern="on_delete.*CASCADE|dependent.*destroy|CASCADE" glob="**/*.{py,rb,ts,js}" output_mode="content"
Grep pattern="expires_at|ttl|max_age|cache_expiry" glob="**/*.{py,rb,ts,js}" output_mode="content"
Grep pattern="trial|subscription.*expir|free_tier" glob="**/*.{py,rb,ts,js}" output_mode="content"
Grep pattern="cron|scheduler|background_job|sidekiq|celery|delayed_job" glob="**/*.{py,rb,ts,js,yaml,yml}" output_mode="files_with_matches"
Persistence: [framework]
Cloud sync: [yes/no, which service]
External storage: [list of models/properties]
Subscription system: [yes/no, which framework]
This tells you which patterns are relevant. Local-only apps without subscriptions can skip patterns 2, 3, and parts of 1.
The general problem: Code that soft-deletes objects (archive, trash, recycle bin), then permanently deletes them after a time threshold. The permanent delete triggers cascade rules that try to visit related objects. If those objects have remote or externally stored data that isn't locally available, the visit fails.
This is the most dangerous pattern because the crash is usually uncatchable. The ORM hits a fatal error during internal bookkeeping (snapshot creation, change tracking), not during your code.
Severity: CRITICAL when cascade targets include external storage or cloud-synced data.
Grep pattern="byAdding.*day.*value.*-|byAdding.*month.*value.*-" glob="**/*.swift" output_mode="content"
For each hit, check if the same file or calling chain includes:
Grep pattern="\.delete|context\.delete|modelContext\.delete|remove|purge|cleanup" path="[file from above]" output_mode="content"
# Python/Django
Grep pattern="timedelta.*days|datetime.*now.*-" glob="**/*.py" output_mode="content"
# Then check same files for .delete(), bulk_delete, QuerySet.delete()
# Ruby/Rails
Grep pattern="ago|days\.ago|months\.ago" glob="**/*.rb" output_mode="content"
# Then check same files for destroy, destroy_all, delete, delete_all
# Node/TypeScript
Grep pattern="Date\.now.*-|subtract.*days|moment.*subtract" glob="**/*.{ts,js}" output_mode="content"
# Then check same files for .remove(), .delete(), .destroy()
Enumerate-then-verify: Don't stop at "does it have cascade targets with external storage?" Enumerate ALL cascade children, then check each one. The bug hides in the gap between what was handled and what exists.
- Check every child for .externalStorage (SwiftData), Allows External Storage (Core Data), or file references (other ORMs).

Common miss: Existing code already handles the obvious case (e.g., photos) with comments explaining why. A human reading that assumes "they handled it." The skill must verify completeness -- enumerate all children, not just confirm the documented ones.
| Delete method | Cascade target | Rating |
|---|---|---|
| Batch/SQL-level delete | Any | Safe |
| Object-level delete | No cascade | Safe |
| Object-level delete | Cascade to normal properties | Risky |
| Object-level delete | Cascade to external storage or cloud-synced data | BOMB |
Safe: context.delete(model: T.self, where:) operates at the SQL level. Never materializes objects. Never triggers faults.
Unsafe: context.delete(object) with .cascade rule. Forces materialization of all related objects via ModelSnapshot creation. If any child has _FullFutureBackingData (unresolved iCloud .externalStorage), it's a fatal error.
Fix: Two-phase batch delete. Delete children first (by predicate), then delete parents. Requires stored properties used in predicates to be internal (not private).
```swift
// Phase 1: Batch-delete child objects (SQL-level, no materialization)
let childPredicate = #Predicate<ChildModel> {
    $0.parent?.statusRaw == "archived"
}
try? context.delete(model: ChildModel.self, where: childPredicate)

// Phase 2: Batch-delete parent objects (cascade is now a no-op)
let parentPredicate = #Predicate<ParentModel> {
    $0.statusRaw == "archived"
}
try? context.delete(model: ParentModel.self, where: parentPredicate)
```
Safe: MyModel.objects.filter(archived_at__lt=threshold).delete() uses SQL-level deletion. No object loading if no signals/overrides.
Unsafe: Looping with obj.delete() when pre_delete/post_delete signals access related objects that may have been deleted by another process or have stale foreign keys.
Additional risk: a true database-level CASCADE constraint is safe, but Django's on_delete=models.CASCADE is enforced in Python -- the collector loads related objects, and any signal handlers run against them.
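A hedged sketch of the signal-handler hazard (the `pre_delete` registry below is a toy stand-in, not Django's actual signals API): a handler written against fresh data assumes a related blob still exists, and per-object deletion runs that handler where SQL-level deletion never would.

```python
pre_delete_handlers = []

def pre_delete(handler):
    """Toy signal registry: handlers run before each per-object delete."""
    pre_delete_handlers.append(handler)
    return handler

related_store = {"thumb-1": b"\x89PNG..."}   # e.g. a file cloud sync later prunes

@pre_delete
def touch_thumbnail(obj):
    # Assumes the related blob still exists -- weeks later, it may not.
    return related_store[obj["thumbnail_key"]]

def object_level_delete(obj):
    for handler in pre_delete_handlers:
        handler(obj)           # signal fires, loading related data
    # ... the actual row delete would happen here ...

object_level_delete({"id": 1, "thumbnail_key": "thumb-1"})   # fresh data: fine

del related_store["thumb-1"]   # aged data: blob gone
try:
    object_level_delete({"id": 2, "thumbnail_key": "thumb-1"})
    raised = False
except KeyError:
    raised = True
assert raised                  # SQL-level deletion never runs this handler
```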
The general problem: Cache entries (API responses, OCR results, AI outputs, thumbnails) with a TTL that expire after N days. The cache works fine for fresh entries. When the purge runs on old entries, it may trigger relationship resolution or external data access on stale objects.
This is Pattern 1 in disguise, with a different trigger (TTL vs archive age) and often a different location in the codebase (cache managers vs deletion managers).
Grep pattern="cacheExpiry|expiresAt|isExpired|ttl|maxAge|cacheExpiryDays" glob="**/*.swift" output_mode="content"
Exclude warranty/coverage/subscription business logic (those are Pattern 5).
Grep pattern="expires_at|ttl|max_age|cache_timeout|CACHE_TTL" glob="**/*.{py,rb,ts,js,yaml}" output_mode="content"
Grep pattern="redis.*expire|memcache.*expir|cache\.delete" glob="**/*.{py,rb,ts,js}" output_mode="content"
Check ALL delete paths, not just expiry: Once you find a cache model with .externalStorage, check every method that deletes instances of that model -- not just the TTL-triggered purge. User-triggered operations like "Clear Cache" and "Clear Cache for Item" have the same .externalStorage crash risk. The trigger is different (user action vs timer) but the fault resolution crash is identical.
| Cache storage | Relationships | Purge method | Rating |
|---|---|---|---|
| UserDefaults, files, Redis, Memcached | N/A | Any | Safe |
| `@Model` / ORM model | None | Any | Safe |
| `@Model` / ORM model | Has relationships | Batch | Safe |
| `@Model` / ORM model | Has relationships | Object-level | Risky |
| `@Model` / ORM model | `.externalStorage` | Object-level | BOMB |
The general problem: Features gated behind a time-limited trial or subscription. The risk isn't the paywall UI. It's what happens to in-flight operations, initialized sessions, and cached permissions when the authorization state changes after weeks of being valid.
This pattern exists in every app with a freemium model, regardless of platform. The specific risk varies:
Grep pattern="daysRemaining|trialEnd|subscriptionExpir|canUse|isSubscribed|queriesRemaining" glob="**/*.swift" output_mode="content"
Grep pattern="trial_end|subscription_expir|is_subscribed|can_use_feature|free_tier" glob="**/*.{py,rb,ts,js}" output_mode="content"
Grep pattern="billing.*check|license.*valid|entitlement" glob="**/*.{py,rb,ts,js}" output_mode="content"
| Behavior on expiry | Rating |
|---|---|
| Gate checks at view/route level with graceful fallback | Safe |
| User can still read their own data (read-only) | Safe |
| Feature session assumes trial is active, no expiry handling | Risky |
| User loses access to data they created during trial | BOMB |
| Purchase/subscribe button broken because billing not initialized | BOMB |
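A sketch of the Safe rows above -- a gate check with graceful fallback (function and field names are illustrative): expired users drop to read-only access to their own data instead of losing it, and the check takes `now` as a parameter so tests can jump past the trial end.

```python
from datetime import datetime

def feature_access(user, now):
    """Gate check with graceful fallback: never lock users out of
    data they created during the trial."""
    if user["trial_ends_at"] is None:
        return "none"
    if now <= user["trial_ends_at"]:
        return "full"
    return "read_only"        # expired: degrade, don't destroy

user = {"trial_ends_at": datetime(2024, 2, 1)}
assert feature_access(user, datetime(2024, 1, 15)) == "full"       # mid-trial
assert feature_access(user, datetime(2024, 3, 1)) == "read_only"   # post-expiry
```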
Set the device date (or server clock) forward past the trial end date. Launch the app and verify the behaviors in the rating table: gates fall back gracefully, users keep read access to data they created, and the purchase flow still initializes.
The general problem: Background tasks (thumbnail generation, sync reconciliation, data cleanup, analytics upload, email queues) that process accumulated items. They work fine on 5 items. After weeks of the app (or service) being idle, they wake up to hundreds or thousands.
This affects every platform:
- iOS: BGTaskScheduler tasks with 30-second execution limits
- Android: WorkManager jobs with battery-aware scheduling

Grep pattern="BGTaskScheduler|scheduleCleanup|scheduleOnLaunch|performAfter|backgroundTask" glob="**/*.swift" output_mode="content"
Grep pattern="cron|scheduler|background_job|sidekiq|celery|delayed_job|bull|agenda" glob="**/*.{py,rb,ts,js,yaml,yml}" output_mode="files_with_matches"
Grep pattern="WorkManager|JobScheduler|AlarmManager" glob="**/*.{kt,java}" output_mode="files_with_matches"
For each background task, check:

- Does it batch-limit its work (.prefix(50), LIMIT 100)?
- Does it read .externalStorage properties (or equivalent) on the objects it processes? Reading .externalStorage triggers the same fault resolution as deleting -- if the data hasn't synced from iCloud, accessing it in a filter, map, or property check is a fatalError. Use predicates to filter at the SQL level instead of fetching objects and checking properties in Swift/Python/Ruby.

| Behavior | Rating |
|---|---|
| Batch-limited, chunked processing, handles partial failure | Safe |
| No batch limit but lightweight per-item work (no I/O) | Risky (memory) |
| No batch limit, materializes relationships or does I/O per item | BOMB (memory + faults) |
| No timeout handling, can be killed mid-batch with unsaved state | Risky (data corruption) |
```swift
let items = fetchEligibleItems()
for chunk in items.prefix(100).chunks(ofCount: 20) {
    for item in chunk {
        process(item)
    }
    try context.save() // Save after each chunk
}
```
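The same bounded-and-chunked shape in a framework-neutral Python sketch (the `process` and `save` callbacks are placeholders for real per-item work and persistence): cap the batch so a wake-up after weeks of accumulation stays cheap, and persist after each chunk so a mid-run kill loses at most one chunk of work.

```python
def process_backlog(items, process, save, batch_limit=100, chunk_size=20):
    """Bounded, chunked background processing with per-chunk persistence."""
    batch = items[:batch_limit]            # never process unbounded accumulation
    for start in range(0, len(batch), chunk_size):
        for item in batch[start:start + chunk_size]:
            process(item)
        save()                             # persist progress after every chunk

processed, saves = [], []
process_backlog(list(range(250)),          # 250 items accumulated while idle
                processed.append,
                lambda: saves.append(len(processed)))

assert len(processed) == 100               # batch limit respected
assert saves == [20, 40, 60, 80, 100]      # progress saved five times
```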
The general problem: Objects that change state based on date arithmetic. Warranties expiring, loans becoming overdue, subscriptions lapsing, items aging into a different category, passwords expiring, tokens rotating. The transition code runs when the object is next accessed, which could be months later.
The risk: the code that computes the new state assumes the object is fully loaded and its relationships are intact. After months, optional fields may be nil from migration gaps, relationships may have been pruned by cloud sync, or the object may have been deleted on another device.
Grep pattern="byAdding.*day|byAdding.*month" glob="**/*.swift" output_mode="content"
Filter to hits that also involve state changes:
Grep pattern="lifecyclePhase|\.status|isExpired|isOverdue|isDueSoon" glob="**/*.swift" output_mode="content"
Grep pattern="timedelta|relativedelta|date_add|DATE_ADD|dateadd" glob="**/*.{py,rb,ts,js,sql}" output_mode="content"
Grep pattern="status.*expir|is_expired|is_overdue|is_stale" glob="**/*.{py,rb,ts,js}" output_mode="content"
| Behavior | Rating |
|---|---|
| Date comparison with nil guards and graceful fallback | Safe |
| Date comparison that force-unwraps or assumes non-nil | Risky |
| State transition that accesses relationships without nil checks | BOMB |
```swift
// Before (unsafe)
if item.warranty!.expirationDate < Date() { ... }

// After (safe)
guard let warranty = item.warranty,
      let expiration = warranty.expirationDate else { return }
if expiration < Date() { ... }
```
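The equivalent guard in Python (function name illustrative): treat the optional date as potentially missing -- after months, it may be `None` from a migration gap or a relationship pruned by sync -- and fall back instead of raising.

```python
from datetime import datetime
from typing import Optional

def is_warranty_expired(expiration: Optional[datetime], now: datetime) -> bool:
    # Guard the optional: aged data may have lost this field entirely.
    if expiration is None:
        return False          # graceful fallback, not an AttributeError
    return expiration < now

assert is_warranty_expired(None, datetime(2024, 6, 1)) is False
assert is_warranty_expired(datetime(2024, 1, 1), datetime(2024, 6, 1)) is True
```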
The general problem: Code that schedules future side effects (push notifications, calendar events, reminders, emails, webhook triggers) based on date fields in the data model. The scheduling happens when the object is created or updated, but the side effect fires later -- sometimes much later. If the date field is nil from a sync gap, the scheduling produces a wrong or missing result. If the related object has been deleted by the time the side effect fires, the handler crashes or shows garbage.
This is distinct from Pattern 5 (date-threshold state transitions) because the failure mode is different. Pattern 5 produces wrong computed state. Pattern 6 produces wrong or missing real-world actions -- a notification that never fires, a reminder for the wrong date, or a crash in the notification handler when it tries to look up the source object.
Severity: Usually Risky (silent failure) rather than BOMB (crash). But notification handlers that force-unwrap the source object are BOMB.
Grep pattern="UNUserNotificationCenter|UNMutableNotificationContent|UNCalendarNotificationTrigger|UNTimeIntervalNotificationTrigger" glob="**/*.swift" output_mode="files_with_matches"
Grep pattern="EKEvent|EKReminder|EventKit" glob="**/*.swift" output_mode="files_with_matches"
For each hit, check what data feeds the scheduling:
Grep pattern="byAdding.*day|byAdding.*month|expirationDate|dueDate|returnDate" path="[file from above]" output_mode="content"
# Python/Django
Grep pattern="send_mail|celery.*eta|schedule.*send|django_q" glob="**/*.py" output_mode="content"
# Ruby/Rails
Grep pattern="deliver_later|perform_later|notify|ActionMailer" glob="**/*.rb" output_mode="content"
# Node/TypeScript
Grep pattern="setTimeout|agenda\.schedule|bull\.add|cron\.schedule" glob="**/*.{ts,js}" output_mode="content"
| Behavior | Rating |
|---|---|
| Nil-safe scheduling with guard-let, cancels stale events on update | Safe |
| Schedules from optional date without nil check | Risky (wrong time or missed event) |
| Handler force-unwraps source object on fire | BOMB (crash when object deleted) |
| No cancellation of stale events when source data changes | Risky (duplicate/wrong notifications) |
```swift
// Before (unsafe -- if expirationDate is nil, crashes or schedules for epoch)
let trigger = UNCalendarNotificationTrigger(
    dateMatching: Calendar.current.dateComponents([.year, .month, .day], from: item.expirationDate!),
    repeats: false
)

// After (safe)
guard let expirationDate = item.expirationDate else { return }
let reminderDate = Calendar.current.date(byAdding: .day, value: -daysBefore, to: expirationDate)
guard let reminderDate else { return }
let trigger = UNCalendarNotificationTrigger(
    dateMatching: Calendar.current.dateComponents([.year, .month, .day], from: reminderDate),
    repeats: false
)
```
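The same nil-safe scheduling rule, platform-neutral (function name illustrative): compute the fire date only when the source field exists, and return `None` -- schedule nothing -- rather than scheduling a reminder for a garbage date.

```python
from datetime import datetime, timedelta
from typing import Optional

def reminder_fire_date(expiration: Optional[datetime],
                       days_before: int) -> Optional[datetime]:
    """Nil-safe scheduling: a silently skipped reminder beats one
    scheduled for the epoch or a crash in the handler."""
    if expiration is None:
        return None
    return expiration - timedelta(days=days_before)

assert reminder_fire_date(None, 3) is None                            # skip, don't guess
assert reminder_fire_date(datetime(2024, 5, 10), 3) == datetime(2024, 5, 7)
```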
The general problem: A parent @Model object has a .cascade delete rule on a relationship. A view or closure elsewhere holds a direct reference to a child object independently of the parent. When the parent is deleted, SwiftData cascade-deletes the child, but the view/closure still holds and accesses the now-deleted child. Accessing properties of a deleted @Model crashes at runtime.
This is distinct from Pattern 1 (deferred deletion) because the delete is immediate, not time-delayed. The "bomb" is spatial, not temporal: it depends on which views are active when the delete happens, not on how much time has passed. But it belongs in time-bomb-radar because it shares the same diagnostic shape: an action (delete) whose consequences are invisible at the call site and only manifest when a distant piece of code touches the affected object later.
Severity: Usually BOMB (crash) if the child is accessed after deletion. Safe if the view observes the parent and dismisses when the parent is deleted.
Grep pattern="\.cascade|deleteRule.*cascade" glob="**/*.swift" output_mode="files_with_matches"
For each cascade relationship, identify the child type. Then check if any view holds a direct reference to the child type independent of navigating through the parent:
Grep pattern="@Query.*ChildType|@Bindable.*ChildType|let.*: ChildType|var.*: ChildType" glob="**/*.swift" output_mode="files_with_matches"
- Does any view hold the child directly (via @Query, @Bindable, a let/var parameter, or @State) rather than accessing it through parent.children?

| Behavior | Rating |
|---|---|
| View holds child independently, no guard on parent deletion | BOMB (crash on access) |
| View holds child independently, but parent delete pops navigation/dismisses sheet | Safe |
| View accesses child only through parent reference (parent.children) | Safe (parent nil-check guards access) |
| Child has .nullify or .deny delete rule (not .cascade) | Not applicable to this pattern |
SwiftData cascade deletes happen synchronously when the parent is deleted in a context. If a SwiftUI view holds an @Bindable reference to the child, the view body may re-evaluate after the delete and access properties on a deleted object. The fix is either:
- Dismiss the view (pop navigation or dismiss the sheet) when the parent is deleted, or
- Hold the child via @Query filtered by parent ID (the query returns empty when the parent is deleted, so the view shows an empty state)

For every hit, produce a rated finding:
| # | Pattern | File:Line | Trigger | Risk | Evidence |
|---|---|---|---|---|---|
| 1 | Deferred delete | SafeDeletionManager.swift:89 | 30 days after archive | BOMB | Cascade to PhotoAttachment with .externalStorage |
| 2 | Cache expiry | OCRCacheManager.swift:235 | 90 days after cache creation | Risky | Object-level purge, no .externalStorage but has relationships |
| 3 | Trial expiry | AITrialManager.swift:44 | Trial end date | Safe | Gate check at view level with fallback |
| 4 | Cascade delete | Item.swift:405 | Parent delete while child sheet open | BOMB | Child view holds @Bindable ref, no dismiss guard |
| Rating | Meaning |
|---|---|
| BOMB | Will crash or corrupt data on aged data. Fix before release. |
| Risky | May fail under specific conditions (bad network, large accumulation). Test manually. |
| Safe | Handles aged data correctly. Document why it's safe. |
A rating without evidence is a guess, not an audit.
When creating findings, populate these optional fields where relationships are obvious:
- depends_on / enables: If one finding must be fixed before another (e.g., "fix cascade delete" must happen before "add batch purge for cache"), populate with finding IDs.
- pattern_fingerprint / grep_pattern / exclusion_pattern: Time bomb patterns are highly generalizable. Assign fingerprints like cascade_delete_with_external_storage, object_level_purge_with_relationships, unbounded_batch_accumulation.

After fixing any BOMB or Risky finding, re-verify before closing it:
- Re-run the pattern's checks against every delete path on the model. If pruneExpired() was fixed but clearAll() still uses object-level delete, the model is still vulnerable.
- Mark the finding status: fixed in the handoff YAML with evidence of what was changed.

A fix that covers 9 of 10 cascade children is not a fix. Enumerate again after every change.
After completing each pattern scan, print:
---------------------------------------------
TIME BOMB RADAR: Pattern [N]/6 complete
Scanned: [pattern name]
Hits: [count]
Bombs: [count] | Risky: [count] | Safe: [count]
Next: Pattern [N+1] -- [name]
---------------------------------------------
Then AskUserQuestion before proceeding to the next pattern.
When running inside a Tier 2 or Tier 3 pipeline (detected via tier field in .radar-suite/session-prefs.yaml):
- Emit the pipeline progress banner (per radar-suite-core.md Pipeline UX Enhancements #1). If this is the first skill in the pipeline OR experience_level is Beginner/Intermediate, also emit the audit-only statement.

Every finding MUST include a short_title field (max 8 words). This is the human-scannable label used in pipeline banners, pre-capstone summaries, and ledger output.
Example: short_title: "30-day cascade delete crash"
All finding ID references in output (tables, banners, summaries) use the format: RS-NNN (short_title).
See radar-suite-core.md for: Tier System, Pipeline UX Enhancements, Table Format, Progress Banner, Issue Rating Tables, Handoff YAML schema, Experience-Level Output Rules, short_title requirement.
Write findings to .radar-suite/time-bomb-radar-handoff.yaml:
```yaml
source: time-bomb-radar
version: 1.0.0
date: <ISO 8601>
project: <project name>
build: <build number>
patterns_audited: [1, 2, 3, 4, 5, 6]
for_roundtrip_radar:
  suspects:
    - workflow: "<affected workflow>"
      finding: "<time bomb description>"
      trigger_condition: "<e.g., 30 days after archive>"
      file: "<path:line>"
for_capstone_radar:
  blockers:
    - finding: "<BOMB description>"
      urgency: "CRITICAL"
      domain: "Time Bomb"
      pattern: "<pattern number and name>"
findings:
  - id: <unique hash>
    pattern: <1-6>
    description: "<plain language>"
    file: "<path>"
    line: <number>
    trigger: "<when it fires>"
    rating: "BOMB|Risky|Safe"
    confidence: "verified|probable|possible"
    status: "open|fixed|deferred|accepted"
    evidence: "<what was checked>"
```
Per the Artifact Lifecycle rules in radar-suite-core.md, before returning from this skill:
- Keep all artifacts under .radar-suite/ (and .agents/ui-audit/ or equivalent if used).
- Move superseded prose artifacts (RESUME_PHASE_*.md, RESUME_*.md except NEXT_STEPS.md, *-v[0-9]*.md) to .radar-suite/archive/superseded/.
- Living files (ledger.yaml, session-prefs.yaml) are in-place rewrites -- not dated or versioned.

This prevents .radar-suite/ from accumulating stale prose artifacts across runs.
After writing the handoff YAML, also write findings to .radar-suite/ledger.yaml following the Ledger Write Rules in radar-suite-core.md:
- Assign impact_category and compute file_hash for each finding.

Impact category mapping for time-bomb-radar findings: crash, data-loss, ux-degraded.

Check for prior findings that inform this audit:
Read .radar-suite/ledger.yaml (if exists) — check for existing findings to avoid duplicates
If the ledger contains time-bomb findings, note their RS-NNN IDs. When you find the same issue, update the existing finding instead of creating a new one.
Regression check: For any fixed findings in the ledger whose file_hash no longer matches the current file, flag for re-verification per the Regression Detection protocol in radar-suite-core.md.
Read .radar-suite/time-bomb-radar-handoff.yaml (if exists)
If a prior handoff exists, this is a re-run: verify each previously-fixed finding still holds (re-run its checks) instead of rediscovering it.
This prevents the skill from rediscovering everything from scratch on every run while also catching regressions.
Read .radar-suite/data-model-radar-handoff.yaml (if exists)
Look for:
- Models with .externalStorage properties (high-priority targets for Pattern 1)

Time bomb findings feed directly into:
Time bomb radar consumes:
After every pattern: print progress banner, then AskUserQuestion. Never leave a blank prompt.
Any finding rated BOMB is an automatic release blocker. Do not downgrade without evidence that the code path is unreachable.