resource-freshness

Install the plugin:

  npx claudepluginhub cianos95-dev/claude-command-centre --plugin claude-command-centre

To install just this skill: npx claudepluginhub u/[userId]/[slug]
Description

Detect stale resources across the CCC ecosystem: project descriptions, initiative status updates, milestone health, Linear documents, plugin reference docs (README, CONNECTORS, plugin-manifest), and execution context freshness (ctx:* label stale detection for In Progress/In Review issues). Compares actual plugin state from disk against documented state and flags discrepancies. Produces a freshness report with Error/Warning/Info severity ratings. Use when running periodic health checks, auditing resource staleness, checking for drift between plugin state and documentation, detecting stale autonomous agents, or as part of the /ccc:hygiene pipeline. Trigger with phrases like "check resource freshness", "stale resources", "freshness audit", "resource drift", "are my docs stale", "plugin manifest drift", "check project descriptions", "stale agents", "execution context check".

Tool Access

This skill uses the workspace's default tool permissions.

Skill Content

Resource Freshness

Detect stale, outdated, or drifted resources across the CCC ecosystem. This skill audits six resource categories — project descriptions, initiative status updates, milestone health, Linear documents, plugin reference docs, and execution context freshness — and produces a unified freshness report with severity-rated findings.

The skill operates in read-only mode by default. It flags problems but does not auto-remediate. Staleness flags are always advisory — a human decides what to update.

When to Use

  • Periodic health checks — Run weekly or before milestone boundaries to catch staleness early.
  • Pre-planning — Before starting a new cycle, verify all resources are current.
  • Post-milestone — After completing a milestone, check that project descriptions and docs reflect the new state.
  • Hygiene pipeline — Invoked automatically by /ccc:hygiene as the "Resource Freshness" check group.
  • Ad hoc — When you suspect a document or description has drifted from reality.

When NOT to Use

  • For one-time project setup — use issue-lifecycle (Maintenance section) instead.
  • For document creation — use document-lifecycle instead.
  • For milestone carry-forward decisions — use milestone-management instead.
  • For issue-level staleness (Backlog >30 days, etc.) — that's in the /ccc:hygiene Staleness check group, not here.

The Six Check Categories

Category 1: Project Description Staleness

Project descriptions are the primary orientation surface for anyone entering a project. Stale descriptions mislead contributors and cause misrouted work.

Detection logic:

FOR each project in scope:
  FETCH project metadata (description, updatedAt)
  FETCH project milestones (list_milestones)

  days_since_update = today - project.description.updatedAt

  IF days_since_update > staleness_threshold:
    IF any milestone has new Done issues since last description update:
      FLAG as Warning: "Project description may be stale — N issues completed since last update"
    ELSE:
      FLAG as Info: "Project description unchanged for N days (no milestone progress detected)"

  IF project was renamed since last description update:
    FLAG as Error: "Project renamed but description not updated"

  IF milestone count changed since last description update:
    FLAG as Warning: "Milestone structure changed — description may need refresh"
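The Category 1 logic above can be sketched in Python. This is a minimal illustration, not the skill's implementation; the function name and the flag-message wording are illustrative:

```python
def classify_description(days_since_update, threshold, new_done_issues,
                         renamed, milestone_count_changed):
    """Severity classification mirroring the Category 1 pseudocode."""
    findings = []
    if renamed:
        findings.append(("Error", "Project renamed but description not updated"))
    if milestone_count_changed:
        findings.append(("Warning", "Milestone structure changed: description may need refresh"))
    if days_since_update > threshold:
        if new_done_issues > 0:
            findings.append(("Warning",
                             f"Description may be stale: {new_done_issues} issues completed since last update"))
        else:
            findings.append(("Info",
                             f"Description unchanged for {days_since_update} days (no milestone progress detected)"))
    return findings
```

Note that rename and milestone-structure checks fire regardless of age; only the age-based finding is gated on the threshold.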

Dynamic thresholds (evidence-based):

Thresholds are not hardcoded. They are computed at runtime from observed update distributions for the workspace, using the P90 × 1.5 formula. This means the skill adapts as the team's cadence changes.

Threshold computation algorithm:

FUNCTION compute_threshold(resource_type, observed_update_ages):
  IF len(observed_update_ages) < 3:
    RETURN fallback_threshold(resource_type)

  SORT observed_update_ages ascending
  p90_index = floor(len(observed_update_ages) * 0.9)
  p90 = observed_update_ages[p90_index]
  threshold = ceil(p90 * 1.5)

  # Clamp to sensible bounds
  RETURN clamp(threshold, min=3, max=90)

FUNCTION fallback_threshold(resource_type):
  # Used when fewer than 3 data points exist
  MATCH resource_type:
    "project_description" → 20   # Claudian baseline: P90=13, ×1.5=20
    "initiative_update"   → 15   # Claudian baseline: P90=10, ×1.5=15
    "milestone_stall"     → 14   # Operational milestones cycle in ~8-10 days
    "document"            → 9    # Claudian baseline: P90=6, ×1.5=9
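A runnable Python sketch of the two functions above, assuming ages are given in whole days (the `FALLBACKS` values mirror the fallback table):

```python
import math

# Fallback thresholds (days), used when fewer than 3 data points exist.
FALLBACKS = {
    "project_description": 20,
    "initiative_update": 15,
    "milestone_stall": 14,
    "document": 9,
}

def compute_threshold(resource_type, observed_update_ages,
                      multiplier=1.5, min_days=3, max_days=90):
    """P90 x multiplier, clamped to [min_days, max_days]."""
    if len(observed_update_ages) < 3:
        return FALLBACKS[resource_type]
    ages = sorted(observed_update_ages)
    p90 = ages[math.floor(len(ages) * 0.9)]
    return max(min_days, min(max_days, math.ceil(p90 * multiplier)))
```

The floor clamp matters for very active corpora: three one-day-old samples would otherwise produce a 2-day threshold, which fires constantly.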

Calibration evidence (Claudian workspace, Feb 2026):

| Resource | N | Median | P75 | P90 | P90×1.5 (threshold) |
|----------|---|--------|-----|-----|---------------------|
| Project descriptions | 5 | 1d | 10d | 13d | 20d |
| Initiative metadata | 5 | 7d | 8d | 10d | 15d |
| Documents | 50 | 2d | 6d | 6d | 9d |
| Milestones (operational) | 4 | 9d | 10d | 10d | 15d |

Note: Initiative status updates had 0 formal posts — threshold based on updatedAt age. Long-horizon placeholder milestones (conferences, demos >30d) are excluded from calibration; only operational milestones with active issue flow are sampled.

The fallback values above are derived from this calibration data. They are intentionally conservative (round up) so they don't fire on normal cadence. When the skill runs, it attempts to compute live thresholds first; fallbacks are only used when sample size is too small.

Per-project override: Projects can pin a static threshold by including an HTML comment in their description:

<!-- freshness:N -->

Where N is the number of days. This bypasses dynamic computation for that project only. Use for maintenance projects or projects with intentionally slow cadence.

Category 2: Initiative Status Update Freshness

Initiative status updates track strategic progress. This category checks two signals: formal status updates (via save_status_update) and initiative metadata staleness (via updatedAt age).

Detection logic:

FOR each initiative with active projects:
  FETCH status updates (get_status_updates)
  FETCH initiative metadata (updatedAt)

  # Signal 1: Formal status updates
  IF no status updates exist:
    FLAG as Info: "Initiative has no formal status updates — consider posting one"
  ELSE:
    latest_update = most recent status update
    days_since_update = today - latest_update.createdAt
    threshold = compute_threshold("initiative_update", all_initiative_update_gaps)

    IF days_since_update > threshold:
      FLAG as Warning: "Initiative status update overdue — last update N days ago (threshold: M days)"

    IF latest_update.health is "atRisk" or "offTrack":
      FLAG as Info: "Initiative health is [status] — may need attention"

  # Signal 2: Initiative metadata staleness
  initiative_age = today - initiative.updatedAt
  threshold = compute_threshold("initiative_update", all_initiative_ages)

  IF initiative_age > threshold:
    FLAG as Warning: "Initiative [name] metadata stale — last touched N days ago"

Calibration note: As of Feb 2026, zero formal status updates exist in the Claudian workspace. The threshold for formal updates falls back to 15 days (derived from initiative updatedAt P90=10d × 1.5). Once formal updates are posted regularly, the dynamic threshold adapts to the actual posting cadence.

Integration with issue-lifecycle skill (Status Updates section): This category detects when updates are missing or overdue. The issue-lifecycle skill's Status Updates section handles how to generate updates. Resource-freshness never generates updates — it only flags their absence.

Category 3: Milestone Health

Milestones with passed target dates, stalled completion percentages, or orphaned issues indicate execution drift.

Detection logic:

FOR each project with active milestones:
  FETCH milestones (list_milestones)

  FOR each active milestone (not completed, not archived):
    IF milestone.targetDate < today:
      open_issues = count of non-Done issues in milestone
      IF open_issues > 0:
        FLAG as Warning: "Milestone [name] target date passed with N open issues"
      ELSE:
        FLAG as Info: "Milestone [name] target date passed — all issues Done (mark complete?)"

    IF milestone.targetDate is within near_due_window AND completion < 50%:
      # near_due_window = max(3, median_milestone_lifespan * 0.3)
      # Claudian baseline: operational milestones last ~8-10 days, so 3 days is ~30%
      FLAG as Warning: "Milestone [name] due in N days but only M% complete"

    done_count = count of Done issues
    total_count = count of all issues
    stall_threshold = compute_threshold("milestone_stall", completed_milestone_lifespans)
    IF total_count > 0 AND done_count == 0 AND milestone age > stall_threshold:
      FLAG as Warning: "Milestone [name] has N issues but 0% completion after M days — may be stalled"

    IF milestone has no issues:
      FLAG as Info: "Milestone [name] has no issues assigned — empty milestone"

Delegation: This category reads milestone data directly. For carry-forward decisions (moving open issues from expired milestones to the next one), defer to the milestone-management skill. Resource-freshness detects the problem; milestone-management handles the remedy.
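The near-due window from the pseudocode comment above can be sketched as follows (a hedged illustration; the `flag_near_due` helper and its 50% cutoff come straight from the pseudocode, while the rounding choice is an assumption):

```python
from datetime import date

def near_due_window(median_lifespan_days: float) -> int:
    """Days before the target date to start checking completion:
    max(3, median_milestone_lifespan * 0.3), per the comment above."""
    return max(3, round(median_lifespan_days * 0.3))

def flag_near_due(target_date, completion_pct, median_lifespan_days, today=None):
    """True when the milestone is inside the window but under 50% complete."""
    today = today or date.today()
    days_left = (target_date - today).days
    return 0 <= days_left <= near_due_window(median_lifespan_days) and completion_pct < 50
```

With the Claudian baseline (~10-day operational milestones), the window works out to 3 days, matching the "~30%" note in the pseudocode.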

Category 4: Document Staleness

Linear documents have type-specific staleness thresholds defined by the document-lifecycle skill. This category delegates to those thresholds and adds ecosystem-level checks.

Detection logic:

FOR each project in scope:
  IF project description contains "<!-- no-auto-docs -->":
    SKIP document checks for this project
    REPORT: "Document freshness: skipped ([project] opted out)"

  FETCH documents (list_documents, limit: 100)

  FOR each document:
    IDENTIFY type by title pattern (per document-lifecycle taxonomy)
    LOOK UP staleness threshold for type

    CHECK content for <!-- reviewed: YYYY-MM-DD --> marker
    IF marker present AND within threshold:
      SKIP (recently reviewed)

    days_since_update = today - document.updatedAt
    IF days_since_update > threshold:
      FLAG as Warning: "Document [title] stale — last updated N days ago (threshold: M days)"

  IF document count >= 100:
    FLAG as Info: "Document freshness limited to first 100 documents — audit may be incomplete"
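The reviewed-marker check above can be sketched with a small parser (the regex spacing tolerance is an assumption; the marker format is from the pseudocode):

```python
import re
from datetime import date

REVIEWED = re.compile(r"<!--\s*reviewed:\s*(\d{4})-(\d{2})-(\d{2})\s*-->")

def days_since_review(content, today=None):
    """Return days since the <!-- reviewed: YYYY-MM-DD --> marker, or None if absent."""
    m = REVIEWED.search(content)
    if not m:
        return None
    today = today or date.today()
    return (today - date(*map(int, m.groups()))).days
```

A document whose marker is within the type's threshold is skipped as recently reviewed, even if `updatedAt` is older.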

Staleness thresholds:

For typed documents, use document-lifecycle taxonomy thresholds as the primary signal. For untyped documents, apply the dynamic P90 × 1.5 threshold computed from the workspace document corpus.

| Document Type | Threshold Source | Calibrated Value (Claudian Feb 2026) | Pattern |
|---------------|------------------|--------------------------------------|---------|
| Key Resources | document-lifecycle taxonomy | 14 days | exact: Key Resources |
| Decision Log | document-lifecycle taxonomy | 14 days | exact: Decision Log |
| Project Update | No staleness check | n/a | prefix: Project Update — |
| Research Library Index | document-lifecycle taxonomy | 30 days | exact: Research Library Index |
| ADR | document-lifecycle taxonomy | 60 days | prefix: ADR: |
| Untyped / Living Document | Dynamic P90×1.5 | 9 days (P90=6d) | project-specific |

The untyped document threshold of 9 days reflects that the Claudian workspace's 50 documents have a median age of 2 days and P90 of 6 days — a very active corpus. In a slower workspace, the dynamic computation would produce a higher threshold automatically.
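The title-pattern lookup described above might look like this in Python (a sketch; the pattern list mirrors the table, and `None` means "no staleness check"):

```python
# (pattern, match kind, threshold in days); None = no staleness check.
DOC_TYPES = [
    ("Key Resources",          "exact",  14),
    ("Decision Log",           "exact",  14),
    ("Research Library Index", "exact",  30),
    ("Project Update —",       "prefix", None),
    ("ADR:",                   "prefix", 60),
]

def threshold_for(title, dynamic_untyped_days=9):
    """Resolve a document's staleness threshold from its title."""
    for pattern, kind, days in DOC_TYPES:
        if (kind == "exact" and title == pattern) or \
           (kind == "prefix" and title.startswith(pattern)):
            return days
    return dynamic_untyped_days  # untyped: dynamic P90 x 1.5
```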

Per-document override in project description:

<!-- staleness:document-slug:N -->

Category 5: Plugin Reference Doc Drift

Plugin reference documents (README.md, CONNECTORS.md, docs/plugin-manifest.md) can drift from the actual plugin state on disk. This category compares documented state against authoritative sources (marketplace.json, filesystem).

Detection logic:

READ marketplace.json → extract authoritative counts:
  actual_skills = len(plugins[0].skills)
  actual_commands = len(plugins[0].commands)
  actual_agents = len(plugins[0].agents)
  actual_version = plugins[0].version

READ README.md → extract documented counts:
  PARSE "N skills, N commands, N agents, N hooks" pattern
  PARSE version badge or version string

READ CONNECTORS.md → extract agent status fields:
  FOR each documented agent/connector:
    CHECK status field (Evaluating/Adopted/Deprecated)
    CHECK cost estimates
    CHECK trial/adoption dates

READ docs/plugin-manifest.md → extract documented counts and version:
  PARSE skill/command/agent/hook counts
  PARSE version number

COMPARE actual vs documented:
  IF skill count mismatch:
    FLAG as Error: "README skill count (N) != manifest (M)"
  IF command count mismatch:
    FLAG as Error: "README command count (N) != manifest (M)"
  IF agent count mismatch:
    FLAG as Error: "README agent count (N) != manifest (M)"
  IF version mismatch:
    FLAG as Warning: "README version (X) != marketplace.json version (Y)"

  IF CONNECTORS.md has stale status fields:
    FLAG as Warning: "CONNECTORS.md agent [name] status may be stale"
  IF docs/plugin-manifest.md counts differ from manifest:
    FLAG as Warning: "plugin-manifest.md counts drift from marketplace.json"
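A simplified sketch of the count comparison above, assuming the marketplace.json shape from the pseudocode and the "N skills, N commands, N agents" README pattern (findings are returned as plain tuples rather than the full finding format):

```python
import json
import re

def check_count_drift(marketplace_path, readme_text):
    """Compare README-documented counts against marketplace.json."""
    with open(marketplace_path) as f:
        plugin = json.load(f)["plugins"][0]
    actual = {
        "skills": len(plugin.get("skills", [])),
        "commands": len(plugin.get("commands", [])),
        "agents": len(plugin.get("agents", [])),
    }
    findings = []
    m = re.search(r"(\d+)\s+skills?,\s*(\d+)\s+commands?,\s*(\d+)\s+agents?",
                  readme_text)
    if m:
        documented = dict(zip(("skills", "commands", "agents"),
                              map(int, m.groups())))
        for key, doc_n in documented.items():
            if doc_n != actual[key]:
                findings.append(("Error",
                                 f"README {key} count ({doc_n}) != manifest ({actual[key]})"))
    return findings
```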

What counts as "drift":

| Check | Source of Truth | Documented In | Severity |
|-------|-----------------|---------------|----------|
| Skill count | marketplace.json skills[] | README.md | Error |
| Command count | marketplace.json commands[] | README.md | Error |
| Agent count | marketplace.json agents[] | README.md | Error |
| Hook count | hooks/scripts/*.sh on disk | README.md | Error |
| Plugin version | marketplace.json version | README.md, plugin-manifest.md | Warning |
| Agent status | Actual usage (Evaluating/Adopted/Deprecated) | CONNECTORS.md | Warning |
| Skill list | marketplace.json skills[] | docs/plugin-manifest.md | Warning |

Category 6: Execution Context Freshness

Issues with ctx:* labels have context-specific stale thresholds. An issue stuck In Progress with ctx:autonomous for 2 hours likely means the agent failed silently. This category detects those situations.

Detection logic:

FOR each issue with status "In Progress" or "In Review":
  IDENTIFY ctx:* label (if any)

  IF no ctx:* label:
    FLAG as Warning: "Issue [identifier] is In Progress but has no ctx:* label — execution context unknown"
    CONTINUE

  MATCH ctx label:
    "ctx:autonomous":
      stale_ip = 30 minutes
      stale_ir = 4 hours
    "ctx:interactive":
      stale_ip = 2 hours
      stale_ir = 24 hours
    "ctx:review":
      stale_ip = 1 hour
      stale_ir = 8 hours
    "ctx:human":
      stale_ip = 48 hours
      stale_ir = 72 hours

  time_in_status = now - issue.statusChangedAt

  IF status == "In Progress" AND time_in_status > stale_ip:
    FLAG as Warning: "Issue [identifier] (ctx:[context]) stale in In Progress — [time] (threshold: [stale_ip])"
  IF status == "In Review" AND time_in_status > stale_ir:
    FLAG as Warning: "Issue [identifier] (ctx:[context]) stale in In Review — [time] (threshold: [stale_ir])"
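The context thresholds above translate directly into a lookup table; a minimal Python sketch (the function name is illustrative):

```python
from datetime import datetime, timedelta, timezone

# (In Progress threshold, In Review threshold) per execution context.
CTX_THRESHOLDS = {
    "ctx:autonomous":  (timedelta(minutes=30), timedelta(hours=4)),
    "ctx:interactive": (timedelta(hours=2),    timedelta(hours=24)),
    "ctx:review":      (timedelta(hours=1),    timedelta(hours=8)),
    "ctx:human":       (timedelta(hours=48),   timedelta(hours=72)),
}

def is_stale(ctx_label, status, status_changed_at, now=None):
    """True when an In Progress / In Review issue exceeds its context window."""
    now = now or datetime.now(timezone.utc)
    stale_ip, stale_ir = CTX_THRESHOLDS[ctx_label]
    elapsed = now - status_changed_at
    if status == "In Progress":
        return elapsed > stale_ip
    if status == "In Review":
        return elapsed > stale_ir
    return False
```

The spread between `ctx:autonomous` (30 minutes) and `ctx:human` (48 hours) is the point: the same elapsed time means very different things depending on who is executing.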

When this runs: On-demand during session-exit normalization, daily triage sweep, or /ccc:hygiene. Not a daemon — Layer 1 (Linear SLAs) provides passive visual indicators; this skill provides active detection.

Integration with Linear SLAs: Layer 1 SLA fire icons (configured in Linear UI) provide visual indicators. This category adds programmatic detection with context-aware thresholds that Linear SLAs cannot express (sub-hour for autonomous agents).

Freshness Report Output Format

The skill produces a structured report following the same severity model used across all CCC hygiene checks.

## Resource Freshness Report

**Date:** [timestamp]
**Projects audited:** N
**Initiatives audited:** N
**Documents audited:** N
**Reference docs checked:** N

### Summary
- Errors: N
- Warnings: N
- Info: N

### Category Coverage
| Category | Checks Run | Errors | Warnings | Info |
|----------|-----------|--------|----------|------|
| Project Descriptions | N | N | N | N |
| Initiative Updates | N | N | N | N |
| Milestone Health | N | N | N | N |
| Document Staleness | N | N | N | N |
| Reference Doc Drift | N | N | N | N |
| Execution Context Freshness | N | N | N | N |

### Errors (must fix)
| Resource | Category | Details | Suggested Fix |
|----------|----------|---------|---------------|
| [resource name] | [category] | [what's wrong] | [what to do] |

### Warnings (should fix)
| Resource | Category | Details | Suggested Fix |
|----------|----------|---------|---------------|
| [resource name] | [category] | [what's wrong] | [what to do] |

### Info (nice to fix)
| Resource | Category | Details | Suggested Fix |
|----------|----------|---------|---------------|
| [resource name] | [category] | [what's wrong] | [what to do] |

Severity scoring (same as /ccc:hygiene):

| Level | Label | Score Impact | Action Required |
|-------|-------|--------------|-----------------|
| Error | must fix | -10 per finding | Blocking — should be resolved before next cycle |
| Warning | should fix | -3 per finding | Advisory — resolve during current cycle |
| Info | nice to fix | -1 per finding | Informational — resolve when convenient |

Suppress clean categories: If a category has zero findings, omit it from the Errors/Warnings/Info tables (but keep it in the Category Coverage table with zeroes). This follows the planning-preflight convention of suppressing clean results.
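The severity weights above make the score contribution a one-liner; a sketch, assuming findings carry the `severity` key described in the Hygiene Integration section:

```python
from collections import Counter

# Severity weights from the scoring table.
WEIGHTS = {"Error": -10, "Warning": -3, "Info": -1}

def score_impact(findings):
    """Sum the score impact of a list of {severity, ...} findings."""
    counts = Counter(f["severity"] for f in findings)
    return sum(WEIGHTS[sev] * n for sev, n in counts.items())
```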

Hygiene Integration

Resource freshness integrates with /ccc:hygiene as two additional check groups that run after the existing six.

Check group order (updated):

  1. Label Consistency
  2. Metadata Completeness
  3. Staleness (issue-level)
  4. Milestone Health (delegates to milestone-management)
  5. Document Health (delegates to document-lifecycle)
  6. Dependency Health (delegates to issue-lifecycle Dependencies section)
  7. Resource Freshness (delegates to this skill, Categories 1-5)
  8. Execution Context Freshness (delegates to this skill, Category 6)

How it's invoked:

The hygiene command calls the resource-freshness skill after dependency health checks. The skill returns findings in the standard {severity, resource, category, details, suggested_fix} format. The hygiene command merges these findings into the overall hygiene report and adjusts the hygiene score accordingly.

Session cache: The resource-freshness skill uses session cache when available. If list_milestones, list_documents, or get_status_updates have already been called in the current session (by earlier hygiene check groups), reuse those results. Do NOT re-fetch.
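The reuse rule above amounts to simple memoization keyed by the fetch; a sketch (the cache shape and function names are hypothetical, not the plugin's actual mechanism):

```python
_session_cache = {}

def cached_fetch(key, fetch_fn):
    """Return a cached result if an earlier check group already fetched it;
    otherwise fetch once and remember it for the rest of the session."""
    if key not in _session_cache:
        _session_cache[key] = fetch_fn()
    return _session_cache[key]
```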

Scope Control

Default scope: All projects in the current team.

Narrow scope: When invoked with a specific project name (e.g., "check freshness for CCC"), limit to that project only. Skip initiative-level checks unless the initiative contains the specified project.

Plugin reference doc checks (Category 5) only run when the current working directory is a CCC plugin repository (detected by presence of .claude-plugin/marketplace.json). Outside a plugin repo, Category 5 is skipped with a note: "Reference doc drift: skipped (not a plugin repository)."

Graceful Degradation

Every data source can fail. The skill must continue when individual sources are unavailable.

| Failure | Response |
|---------|----------|
| Linear API unavailable | Skip Categories 1-4 and 6. Report: "Linear API unavailable — skipped project/initiative/milestone/document checks." Run Category 5 (local disk) only. |
| list_milestones fails for a project | Skip milestone health for that project. Report: "Milestone health: skipped for [project] (API error)." Continue with other projects. |
| get_status_updates fails | Skip initiative checks. Report: "Initiative freshness: skipped (API error)." Continue with other categories. |
| list_documents fails for a project | Skip document checks for that project. Report: "Document freshness: skipped for [project] (API error)." Continue with other projects. |
| README.md missing | Skip README drift checks. Report: "README drift: skipped (file not found)." |
| CONNECTORS.md missing | Skip CONNECTORS drift checks. Report: "CONNECTORS drift: skipped (file not found)." |
| docs/plugin-manifest.md missing | Skip plugin-manifest drift checks. Report: "Plugin manifest doc drift: skipped (file not found)." |
| marketplace.json missing | Skip ALL Category 5 checks. Report: "Reference doc drift: skipped (no marketplace.json found)." |
| .ccc-preferences.yaml missing | Use default thresholds. No warning needed — defaults are sensible. |

Principle: Never block the entire freshness run because one data source is unavailable. Report what you can, skip what you can't, and be transparent about what was skipped.

Advisory-Only Principle

Resource freshness follows the CCC-wide pattern: detect → flag → present to human → wait for decision.

The skill NEVER:

  • Auto-updates project descriptions
  • Auto-posts initiative status updates
  • Auto-carries-forward milestone issues
  • Auto-updates documents
  • Auto-edits README, CONNECTORS, or plugin-manifest.md

It ONLY:

  • Reads current state
  • Compares against thresholds and authoritative sources
  • Reports findings with suggested fixes
  • Contributes to the hygiene score

Configuration

Threshold Strategy: Dynamic First, Override Second

The skill computes thresholds dynamically from observed data using the P90 × 1.5 formula. This is the primary threshold mechanism — no configuration needed for normal operation.

Configuration is only needed to override the dynamic thresholds in special cases.

.ccc-preferences.yaml Overrides

Static overrides pin thresholds when dynamic computation is inappropriate (e.g., a project with irregular cadence that shouldn't adapt):

resource_freshness:
  # Pin static thresholds (bypasses dynamic computation)
  # Only set these if you need to override the P90×1.5 defaults
  project_description_threshold_days: null  # null = use dynamic (default)
  initiative_update_threshold_days: null
  milestone_stall_threshold_days: null
  document_default_threshold_days: null

  # Category 5 toggle
  reference_doc_drift_enabled: true

  # Dynamic computation parameters
  dynamic:
    multiplier: 1.5         # Applied to P90 (default: 1.5)
    min_sample_size: 3      # Below this, use fallback thresholds
    min_threshold_days: 3   # Floor clamp
    max_threshold_days: 90  # Ceiling clamp

Resolution order: Per-project HTML override → .ccc-preferences.yaml static override → dynamic P90×1.5 → fallback threshold.
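The resolution order above is a first-non-None-wins chain; a trivial sketch:

```python
def resolve_threshold(project_override, prefs_override, dynamic, fallback):
    """Per-project HTML override -> .ccc-preferences.yaml -> dynamic -> fallback."""
    for value in (project_override, prefs_override, dynamic):
        if value is not None:
            return value
    return fallback
```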

Per-Project Overrides

In the project description, use HTML comments:

<!-- freshness:N -->              # Pin project description threshold to N days
<!-- staleness:doc-slug:N -->     # Pin specific document threshold (from document-lifecycle)
<!-- no-auto-docs -->             # Opt out of document checks entirely
<!-- no-freshness -->             # Opt out of ALL resource freshness checks for this project
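Parsing the four override markers might look like this (a sketch; the whitespace tolerance in the regexes is an assumption, and the boolean markers are matched literally):

```python
import re

FRESHNESS = re.compile(r"<!--\s*freshness:(\d+)\s*-->")
DOC_STALENESS = re.compile(r"<!--\s*staleness:([\w-]+):(\d+)\s*-->")

def parse_overrides(description: str) -> dict:
    """Extract per-project override markers from a project description."""
    overrides = {
        "no_freshness": "<!-- no-freshness -->" in description,
        "no_auto_docs": "<!-- no-auto-docs -->" in description,
        "project_threshold": None,
        "document_thresholds": {},
    }
    m = FRESHNESS.search(description)
    if m:
        overrides["project_threshold"] = int(m.group(1))
    for slug, days in DOC_STALENESS.findall(description):
        overrides["document_thresholds"][slug] = int(days)
    return overrides
```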

Performance Budget

| Operation | Expected Cost | Budget |
|-----------|---------------|--------|
| List projects | 1 API call | Cached |
| Project descriptions | 1 call per project | Max 10 projects |
| Initiative status updates | 1 call per initiative | Max 5 initiatives |
| List milestones (per project) | 1 call per project | Session cached |
| List documents (per project) | 1 call per project | Session cached, limit: 100 |
| Local file reads (Cat 5) | 4 file reads | Instant (disk) |

Total budget: ~25 API calls for a typical workspace. Well within the standard session budget.

Session cache integration: Milestone and document data fetched by earlier hygiene check groups (Milestone Health, Document Health) should be reused. The resource-freshness skill checks for cached results before making API calls.

Cross-Skill References

  • document-lifecycle -- Provides document type taxonomy and staleness thresholds for Category 4. Resource-freshness delegates document classification to this skill's references/document-types.md.
  • milestone-management -- Handles milestone carry-forward decisions when Category 3 detects expired milestones with open issues. Resource-freshness detects the problem; milestone-management provides the remedy.
  • issue-lifecycle (Status Updates section) -- Generates initiative and project status updates. Resource-freshness (Category 2) detects when updates are missing or overdue; the Status Updates section handles the generation.
  • issue-lifecycle (Maintenance section) -- One-time project normalization vs. resource-freshness's ongoing monitoring. Use the Maintenance section for initial setup, resource-freshness for maintenance.
  • planning-preflight -- Both skills detect staleness but at different levels. planning-preflight focuses on issue-level staleness (IQR-based detection). Resource-freshness focuses on resource-level staleness (descriptions, docs, milestones).
  • issue-lifecycle -- Defines project hygiene protocol with staleness thresholds for project descriptions. Resource-freshness replaces the static 14-day threshold with a dynamic P90×1.5 computation calibrated from observed workspace data.
  • observability-patterns -- Layer 2 (runtime) monitors skill trigger frequency. Resource-freshness can detect dormant reference docs that may indicate the observability layer is not being consulted.
  • quality-scoring -- Freshness findings feed into the quality score's "documentation health" dimension. Zero errors + zero warnings in resource-freshness contributes to a higher quality score.
Last commit: Feb 26, 2026