Workspace maintenance with health checks, index rebuilding, task sync, memory gap detection, archival, inbox processing, and reference file updates
Performs comprehensive workspace maintenance including health checks, index rebuilding, inbox processing, and task synchronization.
Comprehensive workspace maintenance with five distinct modes: running health checks, syncing tasks and detecting gaps, rebuilding all index files, batch-processing inbox items with parallel sub-agents, and updating workspace reference files to match the latest plugin version.
These operations run silently at the start of the first session each day via the core skill's session-start housekeeping check. They require no user interaction and produce no output unless critical issues are found.
| Operation | Script | What it does | Failure behavior |
|---|---|---|---|
| Archival sweep | scripts/archive.py --auto | Expire ephemeral lines past their [expires:] date, check staleness thresholds, move qualifying files to archive/ | Log failure, continue session |
| Health check | scripts/health-check.py | Validate indexes vs files on disk, check for broken wikilinks, flag naming violations | Log failure, continue session |
| Task sync | scripts/sync.py | Check reference/schedule.md for due recurring/one-time items, scan for orphan tasks | Log failure, continue session |
| Inbox count | (directory listing) | Count files in inbox/pending/ and update .housekeeping-state.yaml | Non-critical, skip on error |
Triggering: The core skill reads reference/.housekeeping-state.yaml at session start. If last_run is not today, it runs the above scripts in sequence, updates the state file, and proceeds to the user's request.
These operations are more expensive, require user judgment, or have side-effects that need confirmation. They are invoked via /maintain <mode> or natural language ("rebuild indexes", "process inbox", "deep sync").
| Operation | Command | Why not automatic |
|---|---|---|
| Full index rebuild | /maintain rebuild | Rewrites all _index.md files. Expensive for large workspaces. Only needed when indexes are known to be stale or corrupted. |
| Inbox processing | /maintain inbox | Spawns sub-agents that create tasks and memory entries. Requires user to review and confirm the processing plan before execution. |
| Comprehensive sync | /maintain sync --comprehensive | Queries MCP sources (project tracker, calendar), scans last 90 days of journal. Higher cost. Surfaces items that need user triage. |
| Reference file update | /maintain update | Updates workspace reference files to match latest plugin version. Preserves user data. Requires user to review changes before applying. |
| Unarchive content | (manual) | Requires user to select specific archived files to restore. No bulk unarchive. |
| Manual health fixes | /maintain health | Script-detected issues that require human judgment (file renames, broken wikilink resolution, frontmatter corrections). |
If the user's environment supports Cowork shortcuts with schedules, a daily-housekeeping shortcut can run maintenance on a cron schedule (e.g., 8 AM daily) instead of relying on session-start detection. This is the preferred approach when available, as it runs even on days the user doesn't start a session. See the shortcut definition in the plugin's shortcut registry.
The session-start check in the core skill serves as a fallback: if the scheduled shortcut already ran today, last_run will be current and the session-start check will be a no-op.
Scan for workspace issues, auto-fix where safe, and report problems requiring manual intervention.
Read the following indexes:
- memory/_index.md (master memory index)
- memory/decisions/_index.md
- memory/people/_index.md
- memory/initiatives/_index.md
- contexts/products/_index.md (if exists)
- reference/replacements.md

Scan memory/decisions/ for naming violations:
Standard pattern: YYYY-MM-DD-{slug}.md
| Violation | Example | Suggested fix |
|---|---|---|
| Missing date prefix | ba-role-definition.md | Extract date from frontmatter, rename |
| Context-first | ai-strategy-2026-01-20.md | Reorder to 2026-01-20-ai-strategy.md |
| No date anywhere | legacy-decision.md | Check frontmatter date field, rename |
Auto-fix (safe): If frontmatter contains date field, suggest rename command.
Scan all memory files for required fields:
| Type | Required fields |
|---|---|
| All memory | title, type, summary, updated |
| person | + tags, aliases |
| decision | + status, decision_maker |
| product-spec | + status, owner |
For decisions, verify status is one of: proposed, decided, implemented, superseded, rejected
For products/product-specs, verify status is one of: active, planned, deprecated
Report invalid values.
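A hedged sketch of the frontmatter validation, using the required-field and status tables above (the function and dict names are illustrative):

```python
REQUIRED = {
    "all": {"title", "type", "summary", "updated"},
    "person": {"tags", "aliases"},
    "decision": {"status", "decision_maker"},
    "product-spec": {"status", "owner"},
}
VALID_STATUS = {
    "decision": {"proposed", "decided", "implemented", "superseded", "rejected"},
    "product-spec": {"active", "planned", "deprecated"},
}

def frontmatter_issues(fm: dict) -> list[str]:
    """Return human-readable issues for one memory file's frontmatter."""
    mem_type = fm.get("type", "")
    required = REQUIRED["all"] | REQUIRED.get(mem_type, set())
    issues = [f"missing field: {f}" for f in sorted(required - fm.keys())]
    status = fm.get("status")
    valid = VALID_STATUS.get(mem_type)
    if valid and status is not None and status not in valid:
        issues.append(f"invalid status: {status!r}")
    return issues
```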
For each category index:
- Compare entries in _index.md against the .md files in the folder

For each file in memory:

- Compare its updated date with the index entry

Auto-fix (safe): Run /maintain rebuild to resync all indexes.
Scan all files in memory/, journal/, and contexts/ for wikilinks:
Pattern: [[Entity Name]]
For each wikilink:
- memory/people/_index.md
- memory/initiatives/_index.md
- memory/products/_index.md
- memory/decisions/_index.md

Flag broken wikilinks (references to non-existent entities).
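The wikilink scan reduces to a set difference against the entity names collected from those indexes; a sketch (the pattern and function name are illustrative, and the real script may handle aliases and piped links differently):

```python
import re

# Capture the target of [[Entity Name]], stopping at ], |, or # suffixes
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def broken_wikilinks(text: str, known_entities: set[str]) -> list[str]:
    """Return wikilink targets that resolve to no entity in any memory index.

    known_entities is the union of names and aliases collected from the
    people, initiatives, products, and decisions _index.md files.
    """
    targets = {m.strip() for m in WIKILINK.findall(text)}
    return sorted(targets - known_entities)
```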
Scan all journal files from the last 30 days for names:
- Compare against reference/replacements.md

Flag names/acronyms that appear multiple times but aren't in replacements.
Auto-fix (safe): Add flagged items to reference/replacements.md with placeholder:
| NewName | ?? (needs canonical form) |
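The coverage scan might look like this sketch; the candidate-name regex is a deliberate simplification of real name detection:

```python
import re
from collections import Counter

def uncovered_names(journal_text: str, replacements: set[str], min_uses: int = 2) -> list[str]:
    """Flag capitalized names/acronyms used repeatedly but absent from replacements.md.

    Candidates are title-cased word runs and all-caps acronyms of 2+ letters,
    which is a rough approximation of the names worth canonicalizing.
    """
    candidates = re.findall(r"\b(?:[A-Z][a-z]+(?: [A-Z][a-z]+)*|[A-Z]{2,})\b", journal_text)
    counts = Counter(candidates)
    return sorted(n for n, c in counts.items() if c >= min_uses and n not in replacements)
```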
Scan for potential duplicates:
- aliases values shared across different files (checked for each person in memory/people/)
Generate report in this format:
## Housekeeping report (YYYY-MM-DD)
### Auto-fixed
| Category | File | Issue | Fix applied |
|----------|------|-------|------------|
| naming | decisions/ba-role-definition.md | Missing date prefix | Renamed to 2026-01-15-ba-role-definition.md |
| index | memory/people/_index.md | Orphan entry "Jane Doe" | Removed from index |
| index | (multiple) | Files not in index | Ran rebuild-indexes.py |
| replacements | reference/replacements.md | "JT" uncovered (5 uses) | Added placeholder entry |
### Manual action required
| Category | File | Issue | Suggested fix |
|----------|------|-------|---------------|
| frontmatter | memory/decisions/old.md | Invalid status "pending" | Change to "proposed" |
| wikilink | journal/2026-01/meeting.md | Broken link [[Unknown Person]] | Create memory entry or fix reference |
### Summary
- Auto-fixed: N issues (N renames, N index fixes, N replacement additions)
- Manual action required: N issues
- Workspace health: {healthy | needs attention | degraded}
Sync tasks from integration, detect memory gaps, and triage stale items. Optional flag: --comprehensive for deep scan mode.
Query task integration (read reference/integrations.md Tasks section for provider details):
- Execute the list operation for all configured lists (default: Active, Delegated, Backlog)
- Execute the overdue operation

Read:
- memory/people/_index.md
- memory/initiatives/_index.md

Run the automated sync script:
python3 scripts/sync.py {workspace_path}
This script checks reference/schedule.md for due items (recurring and one-time) and scans recent journal entries for memory gaps (people and initiatives referenced but not in memory). Parse the JSON output:
- schedule.recurring_due: Recurring items past their next-due date. After completion, advance next-due to the next occurrence in schedule.md.
- schedule.onetime_due: One-time items past their due date. After completion, remove the entry from schedule.md.
- memory_gaps.unknown_people: People referenced in journal but not in memory/people/. Present to user.
- memory_gaps.unknown_initiatives: Initiatives referenced but not in memory/initiatives/. Present to user.

Surface due items in the report under "Scheduled items." Merge memory gaps with Step 4 output.
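Routing the script's JSON into report buckets can be sketched as follows (field names follow the output spec above; the bucket names are illustrative):

```python
import json

def triage_sync_output(raw: str) -> dict:
    """Group scripts/sync.py JSON output into the report's triage buckets."""
    data = json.loads(raw)
    schedule = data.get("schedule", {})
    gaps = data.get("memory_gaps", {})
    return {
        # Both recurring and one-time due items surface under "Scheduled items"
        "scheduled_items": schedule.get("recurring_due", []) + schedule.get("onetime_due", []),
        "unknown_people": gaps.get("unknown_people", []),
        "unknown_initiatives": gaps.get("unknown_initiatives", []),
    }
```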
If scripts/sync.py is not available, fall back to manual schedule checking:
Read reference/schedule.md if it exists. For each entry:
- Recurring items: check whether next-due is today or past. If due, add to triage output as "Scheduled item due."
- One-time items: check whether due is today or past. If due, surface it.

If project tracker integration is configured:
- Create any missing tasks via the add operation in the appropriate list

Scan all reminders/tasks and flag:
| Condition | Flag |
|---|---|
| Past due date (from task integration overdue operation) | OVERDUE |
| Created >30 days ago without update (from notes field) | STALE |
| No initiative in notes field | ORPHAN |
| Owner in notes not in memory/people/ | UNKNOWN OWNER |
Present flagged items grouped by category, with a proposed resolution for each.
Decode all entities referenced in tasks:
- Owners: cross-reference against memory/people/_index.md. List undefined people.
- [[Initiative]] references from notes fields: cross-reference against memory/initiatives/_index.md. List undefined initiatives.
- Terms and acronyms: check against reference/replacements.md or memory indexes. List undefined terms.

Present gaps:
## Memory gaps detected
### Undefined people (referenced in tasks but not in memory)
- "Sarah Chen" (owner of 3 tasks) -- Create memory entry?
- "Mike R." (mentioned in 1 task) -- Add to replacements?
### Undefined initiatives
- "Project Phoenix" (linked to 2 tasks) -- Create initiative entry?
### Undefined terms
- "RBAC" (used in 2 task descriptions) -- Add to replacements?
For each gap, ask user to provide brief context, then create the memory entry or replacement.
## Update complete
| Category | Count |
|----------|-------|
| Tasks synced from project tracker | N |
| Overdue tasks flagged | N |
| Stale tasks flagged | N |
| Orphan tasks flagged | N |
| Memory gaps found | N |
| Memory entries created | N |
| Replacements added | N |
Comprehensive mode (--comprehensive): all of default mode, PLUS:
If project tracker is configured, query it for recent activity across tracked projects.
Query the calendar integration for last 7 days of meetings (see reference/integrations.md Calendar section):
- Using the date in YYYY-MM-DD format, execute the list_events operation with offset=7

If calendar integration is not reachable, skip the calendar scan and note the gap.
Scan memory for staleness:
- Initiatives tagged completed that still have open reminders -> flag
- Entities not referenced in 90+ days -> flag
- Decisions 6+ months old -> flag for relevance review

Present:
## Stale memory candidates
- memory/initiatives/old-project.md -- Tagged completed, 2 open tasks reference it
- memory/people/former-vendor.md -- Not referenced in 90+ days
- memory/decisions/q3-decision.md -- 6+ months old, verify still relevant
From the MCP sources scanned in Step 5, collect new entities not yet in memory.
Append to default report:
### Comprehensive scan results
| Category | Count |
|----------|-------|
| Unprocessed meetings found | N |
| New entities from MCP sources | N |
| Stale memory candidates | N |
Regenerate all _index.md files from current file contents and frontmatter.
For each category in memory/ (people, initiatives, decisions, products, vendors, competitors, organizational-context):
- List all .md files in the folder (excluding _index.md and _template.md)
- Read frontmatter: title, aliases, tags, summary, updated
- Regenerate _index.md with format:

# [Category] index
| Name | Aliases | File | Summary | Updated |
|------|---------|------|---------|---------|
| Entity Name | alias1, alias2 | filename.md | One-line summary | YYYY-MM-DD |
For initiatives, separate into Active and Completed sections based on tags.
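Rendering one index row from a file's frontmatter can be sketched as follows (column order follows the table format above; the function name is illustrative):

```python
def index_row(fm: dict, filename: str) -> str:
    """Render one _index.md table row from a memory file's frontmatter.

    Falls back to the filename when title is absent, so malformed files
    still appear in the index rather than silently disappearing.
    """
    aliases = ", ".join(fm.get("aliases", []))
    return (f"| {fm.get('title', filename)} | {aliases} | {filename} "
            f"| {fm.get('summary', '')} | {fm.get('updated', '')} |")
```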
Generate memory/_index.md:
# Memory index
| Category | Path | Count |
|----------|------|-------|
| People | memory/people/ | N |
| Initiatives | memory/initiatives/ | N |
| Decisions | memory/decisions/ | N |
| Products | memory/products/ | N |
| Vendors | memory/vendors/ | N |
| Competitors | memory/competitors/ | N |
| Organizational context | memory/organizational-context/ | N |
For each month folder in journal/:
- List all .md files (excluding _index.md)
- Read frontmatter: date, type, title, participants, initiatives
- Regenerate journal/YYYY-MM/_index.md:

# [Month Year] journal index
| Date | Type | Title | Participants | Initiatives |
|------|------|-------|-------------|-------------|
| YYYY-MM-DD | meeting | Title | Names | Initiatives |
Scan contexts/products/ for product specification files:
- List all .md files in the folder (excluding _index.md)
- Read frontmatter: title, type, status, owner, summary, updated
- Regenerate contexts/products/_index.md:

# Product specifications index
| Name | Status | Owner | Summary | Updated |
|------|--------|-------|---------|---------|
| Product Name | active | [[Owner Name]] | One-line summary | YYYY-MM-DD |
For files in memory/decisions/:
- Validate filenames against the YYYY-MM-DD-{slug}.md pattern (flag variants such as topic-YYYY-MM-DD.md)

For completed years, generate journal/YYYY-annual-index.md consolidating all month indexes.
Report what was regenerated and any issues found:
## Rebuild complete
### Indexes regenerated
| Area | Count |
|------|-------|
| Memory categories | N |
| Journal months | N |
| Contexts/products | N |
### Issues found
| Type | File | Issue | Suggested fix |
|------|------|-------|---------------|
| missing-frontmatter | path/file.md | No frontmatter | Add required fields |
| naming-violation | decisions/file.md | Missing date prefix | Rename to YYYY-MM-DD-slug.md |
| missing-required | path/file.md | Missing `summary` field | Add summary for index |
Run the automated rebuild script for deterministic index generation:
python3 scripts/rebuild-indexes.py {workspace_path}
This script performs Steps 1-5 deterministically (memory indexes, master index, journal indexes, contexts index, decision naming validation). Parse the JSON output and use it to populate the rebuild report.
The script returns JSON with:
- stats: counts of memory categories, journal months, context products, and total entries rebuilt
- issues: array of problems found (missing-frontmatter, naming-violation, missing-required fields)
- total_issues: count of all issues

Present the stats as the "Indexes regenerated" table. Present issues as the "Issues found" table. The script handles all file I/O; do not duplicate its work by manually reading and rewriting index files.
If scripts/rebuild-indexes.py is not available (e.g., workspace predates script extraction), fall back to the manual procedure above.
Batch-process all pending items in the inbox using isolated parallel sub-agents. Each item is processed by an independent sub-agent with its own context, ensuring no cross-contamination between items.
List all files in inbox/pending/. For each file:
- Detect the content type: transcript, article, email, notes, or unknown

If no pending items, report "Inbox is empty" and exit.
Before spawning sub-agents, present the plan to the user:
## Inbox processing plan
| # | File | Detected type | Proposed action |
|---|------|---------------|-----------------|
| 1 | meeting-2026-02-05.txt | transcript | Process as meeting (tasks + memory) |
| 2 | article-ai-strategy.md | article | Extract wisdom |
| 3 | notes-from-call.txt | notes | Extract tasks + memory |
| 4 | unknown-file.txt | unknown | Skip (manual review needed) |
Process all items? (Confirm before proceeding)
Wait for user confirmation before spawning sub-agents. Allow the user to exclude specific items or change detected types.
Before spawning sub-agents, perform a single pass of name resolution across ALL confirmed transcript items to ensure consistency and minimize user interruptions.
Apply the name resolution protocol (core skill, Memory protocol section):
- Resolve each name against reference/replacements.md and memory/people/_index.md

Pass the resolution table to each sub-agent in its prompt: "Use these resolved canonical names: Christopher = Christopher Smith, Mick = Michael Johnson"
This prevents: (a) each sub-agent independently guessing different resolutions for the same person, (b) the user being asked the same question by multiple sub-agents, (c) wrong names propagating to memory and tasks.
Skip this step if no transcript items are in the confirmed plan.
After user confirmation:
- Move all confirmed items from inbox/pending/ to inbox/processing/ as a sequential batch. Complete all moves before proceeding.

The move-all-first ordering prevents any concurrent session from double-processing items.
Transcript items:
You are processing a meeting transcript from the inbox.
Use the meeting skill pipeline (skills/meeting/SKILL.md) as your execution guide.
Source file: inbox/processing/{filename}
Step 1: Load reference files (MANDATORY before reading transcript)
- Read reference/replacements.md. Apply canonical names to ALL content.
- If the main agent provided a name resolution table, apply those resolved names.
Do NOT re-resolve names in the table. Only resolve names NOT in the table using replacements.md.
- Read reference/integrations.md (Calendar and Tasks sections).
- Read memory indexes: memory/people/_index.md, memory/initiatives/_index.md, memory/decisions/_index.md.
These indexes are required for wikilink validation in Step 4.
Step 2: Resolve transcript content
- Read the transcript file.
- Resolve speaker names: if speakers are generic ("Speaker 1"), infer real names from context.
NEVER use generic speaker labels in output.
- Query calendar integration for this meeting date to retrieve attendee list, official title, and organizer.
If calendar is unavailable, proceed with transcript data and note the gap.
Step 3: Generate structured report
Produce ALL five sections:
- Topics: discussion points as bullets
- Updates: status updates from people other than the user (name, project, date)
- Concerns: risks raised (who, issue, deadline)
- Decisions: what was decided and who made the call
- Action items: classified as For me / For others / Unassigned
Step 4: Save to journal
- Filename: journal/YYYY-MM/YYYY-MM-DD-meeting-{slug}.md
- Frontmatter: date, title, type: meeting, participants, organizer, topics, initiatives, source
- Body: use [[wikilink]] syntax for all entity references (people, initiatives, decisions)
- BEFORE writing any [[Name]], verify the name exists in the memory indexes read in Step 1.
If a name does NOT appear in any index AND is not in replacements.md: add it to reference/replacements.md
with placeholder "?? (needs canonical form)" and include it in the unverified_wikilinks field.
Do NOT fabricate wikilinks for unverified names.
Step 5: Extract tasks
- Apply accountability test (never create tasks for "Team" or "We" without a specific lead)
- Check for duplicates across all configured task lists
- Create via task integration. Resolve relative dates to YYYY-MM-DD.
- Check each tool response. Only count a task as created if the response confirms success.
- After all creation attempts, execute list_reminders for each list that received new tasks.
Verify each task appears by matching title. Add any missing tasks to creation_unverified.
NEVER report tasks as created without verification.
Step 6: Extract memory
- Apply durability test to each insight
- Persist durable insights to memory/ using the folder mapping from the core skill
- Update relevant _index.md files after writing
- Use .lock files for memory writes (cowork protocol)
After all steps complete, move the source file to inbox/completed/{filename}.
Return JSON:
{
"status": "ok" | "partial" | "error",
"source_file": "{filename}",
"content_type": "transcript",
"journal_path": "journal/YYYY-MM/...",
"tasks_created": 0,
"memory_updates": 0,
"creation_unverified": [],
"unverified_wikilinks": [],
"errors": []
}
Status "partial": journal entry was saved but one or more downstream steps failed.
Status "error": the journal entry could not be saved.
Article/wisdom items:
You are extracting wisdom from an article in the inbox.
Use the wisdom extraction pipeline (skills/learn/SKILL.md, Mode B) as your execution guide.
Source file: inbox/processing/{filename}
Step 1: Load reference files (MANDATORY before reading article)
- Read reference/replacements.md. Apply canonical names to ALL content.
- If the main agent provided a name resolution table, apply those resolved names.
Do NOT re-resolve names in the table.
- Read memory indexes: memory/people/_index.md, memory/initiatives/_index.md, memory/decisions/_index.md.
Required for wikilink validation before writing to memory.
Step 2: Read and classify article content.
Step 3: Extract insights
- Apply durability test to each insight (all four criteria from core skill memory protocol).
- Discard insights that fail.
Step 4: Persist durable insights to memory
- Map each passing insight to the correct memory folder (people, initiatives, decisions, etc.)
- BEFORE writing any [[wikilink]], verify the entity name exists in the memory indexes read in Step 1.
If a name does NOT appear in any index: add it to reference/replacements.md with placeholder
"?? (needs canonical form)" and include it in the unverified_wikilinks field.
Do NOT fabricate wikilinks for unverified names.
- Update relevant _index.md after each write.
- Use .lock files for memory writes (cowork protocol).
Step 5: Save extraction report
- Filename: journal/YYYY-MM/YYYY-MM-DD-wisdom-{slug}.md
Step 6: Extract tasks
- Apply accountability test.
- Create via task integration.
- Check each tool response. Only count a task as created if the response confirms success.
- After all creation attempts, execute list_reminders for each list that received new tasks.
Verify each task appears by matching title. Add any missing tasks to creation_unverified.
NEVER report tasks as created without verification.
After all steps complete, move the source file to inbox/completed/{filename}.
Return JSON:
{
"status": "ok" | "partial" | "error",
"source_file": "{filename}",
"content_type": "article",
"journal_path": "journal/YYYY-MM/...",
"insights_persisted": 0,
"tasks_created": 0,
"creation_unverified": [],
"unverified_wikilinks": [],
"errors": []
}
Status "partial": journal entry was saved but one or more downstream steps failed.
Status "error": the journal entry could not be saved.
Notes items:
You are processing notes from the inbox.
Source file: inbox/processing/{filename}
Step 1: Load reference files (MANDATORY before reading notes)
- Read reference/replacements.md. Apply canonical names to ALL content.
- If the main agent provided a name resolution table, apply those resolved names.
Do NOT re-resolve names in the table.
- Read reference/integrations.md Tasks section for task creation.
- Read memory indexes: memory/people/_index.md, memory/initiatives/_index.md, memory/decisions/_index.md.
Required for wikilink validation before writing to memory or journal.
Step 2: Read the notes file.
Step 3: Extract tasks
- Apply accountability test (never create tasks for "Team" or "We" without a specific lead)
- Check for duplicates across all configured task lists
- Create via task integration. Resolve relative dates to YYYY-MM-DD.
- Check each tool response. Only count a task as created if the response confirms success.
- After all creation attempts, execute list_reminders for each list that received new tasks.
Verify each task appears by matching title. Add any missing tasks to creation_unverified.
NEVER report tasks as created without verification.
Step 4: Extract durable memory
- Apply durability test to each insight.
- BEFORE writing any [[wikilink]], verify the entity name exists in the memory indexes read in Step 1.
If a name does NOT appear in any index: add it to reference/replacements.md with placeholder
"?? (needs canonical form)" and include it in the unverified_wikilinks field.
Do NOT fabricate wikilinks for unverified names.
- Persist passing insights to memory/. Update relevant _index.md files.
- Use .lock files for memory writes (cowork protocol).
Step 5: Save notes summary
- Filename: journal/YYYY-MM/YYYY-MM-DD-notes-{slug}.md
- Body: use [[wikilink]] syntax for verified entity references only.
After all steps complete, move the source file to inbox/completed/{filename}.
Return JSON:
{
"status": "ok" | "partial" | "error",
"source_file": "{filename}",
"content_type": "notes",
"journal_path": "journal/YYYY-MM/...",
"tasks_created": 0,
"memory_updates": 0,
"creation_unverified": [],
"unverified_wikilinks": [],
"errors": []
}
Status "partial": journal entry was saved but one or more downstream steps failed.
Status "error": the journal entry could not be saved.
After all sub-agents complete, collect JSON results from each sub-agent and evaluate status:
Status "ok": All steps completed successfully. No action needed beyond reporting.
Status "partial": The journal entry was saved but one or more downstream steps failed.
- Do NOT move the source file to inbox/failed/ (the journal entry exists and is valid).
- Leave the source file in inbox/completed/ (the sub-agent moved it).
- Report the failed steps (from the errors[] array).
- Report unverified_wikilinks[] so the user can resolve them.
- Flag creation_unverified[] tasks prominently so the user knows to check.

Status "error": The journal entry could not be saved. The sub-agent failed entirely.

- Move the source file from inbox/processing/ to inbox/failed/.
- Write a .error file at inbox/failed/{filename}.error containing the errors[] array.

Generate consolidated report (format defined in "Inbox mode output" below).
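The status routing above can be sketched as a small dispatcher (the action-dict shape is illustrative):

```python
def route_result(result: dict) -> dict:
    """Decide post-processing actions from one sub-agent's JSON result."""
    status = result.get("status", "error")
    actions = {"move_to_failed": False, "write_error_file": False, "surface": []}
    if status == "error":
        # Journal entry missing: park the source in inbox/failed/ with a .error file
        actions["move_to_failed"] = True
        actions["write_error_file"] = True
    if status in ("partial", "error"):
        actions["surface"] += result.get("errors", [])
    if status in ("ok", "partial"):
        # Source stays in inbox/completed/; surface anything needing user review
        actions["surface"] += result.get("unverified_wikilinks", [])
        actions["surface"] += result.get("creation_unverified", [])
    return actions
```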
| Content type | Input | Output | Failure mode |
|---|---|---|---|
| Transcript | Source file, replacements.md, integrations.md, memory indexes (people, initiatives, decisions) | JSON: journal path, tasks created, memory updates, creation unverified, unverified wikilinks | ok/partial: source to completed; error: source to failed with .error file |
| Article | Source file, replacements.md, memory indexes (people, initiatives, decisions) | JSON: journal path, insights persisted, tasks created, creation unverified, unverified wikilinks | ok/partial: source to completed; error: source to failed with .error file |
| Notes | Source file, replacements.md, integrations.md, memory indexes (people, initiatives, decisions) | JSON: journal path, tasks created, memory updates, creation unverified, unverified wikilinks | ok/partial: source to completed; error: source to failed with .error file |
Shared constraints for all inbox sub-agents:
- Use .lock files (see core skill cowork protocol)
- Verify task creation via list_reminders after all creation attempts

## Inbox processing complete
### Processed items
| # | File | Type | Journal | Tasks | Memory | Status |
|---|------|------|---------|-------|--------|--------|
| 1 | meeting-2026-02-05.txt | transcript | journal/2026-02/... | 3 | 2 | ok |
| 2 | article-ai-strategy.md | article | journal/2026-02/... | 0 | 4 | partial (tasks failed) |
| 3 | notes-from-call.txt | notes | journal/2026-02/... | 2 | 1 | ok |
### Partial items
| File | Journal saved | Failed steps | Unverified wikilinks | Unverified tasks |
|------|---------------|-------------|----------------------|------------------|
| article-ai-strategy.md | journal/2026-02/... | tasks (0 created) | [[Unknown Vendor]] | -- |
### Failed items
| File | Error |
|------|-------|
| (none) | |
### Summary
- Items processed: N (ok: N, partial: N)
- Items failed: N
- Tasks created: N (total across all items, verified via list_reminders)
- Tasks unverified: N (created but not confirmed in list)
- Memory updates: N (total across all items)
- Journal entries created: N
- Unverified wikilinks: N (names not found in memory indexes)
1. Scan inbox and classify items [in_progress → completed]
2. Present processing plan for approval [pending → completed]
3. Process item: {filename1} (parallel) [pending → completed]
4. Process item: {filename2} (parallel) [pending → completed]
5. Process item: {filename3} (parallel) [pending → completed]
6. Collect results and generate report [pending → completed]
Mark all item-processing todos as in_progress simultaneously when spawning sub-agents. Mark each completed as its sub-agent returns.
Update workspace reference files to match the installed plugin version. Preserves user customizations (name replacements, KPI definitions, schedule items) while applying structural changes from plugin updates.
Read reference/.housekeeping-state.yaml for plugin_version.
Read the plugin's .claude-plugin/plugin.json for the current version.
If both versions match, report "Workspace reference files are up to date (v{version})" and exit.
If the workspace has no plugin_version field, this is the first update — proceed.
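The version gate can be sketched as follows (field names follow the text above; the function name is illustrative):

```python
def update_needed(state: dict, plugin_manifest: dict) -> tuple[bool, str]:
    """Compare the workspace plugin_version with the installed plugin version.

    `state` is reference/.housekeeping-state.yaml; `plugin_manifest` is the
    plugin's .claude-plugin/plugin.json, both already parsed to dicts.
    """
    installed = plugin_manifest.get("version", "")
    recorded = state.get("plugin_version")
    if recorded is None:
        return True, "first update: workspace has no plugin_version field"
    if recorded == installed:
        return False, f"Workspace reference files are up to date (v{installed})"
    return True, f"update from v{recorded} to v{installed}"
```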
Run the update script in preview mode:
python3 scripts/update-reference.py {workspace_path} {plugin_path} --dry-run
Parse the JSON output and present to the user:
## Reference file update preview (v{old} → v{new})
### Files to update
| File | Action | User data preserved |
|------|--------|-------------------|
| taxonomy.md | Full replace | (none — no user data) |
| integrations.md | Section merge | status: configured (Tasks) |
### New files to create
| File | Description |
|------|-------------|
| shortcuts.md | Command reference |
### Files unchanged
- replacements.md, schedule.md
### Warnings
- (any conflicts or issues)
Proceed with update?
Wait for user confirmation.
After user confirmation, run without --dry-run:
python3 scripts/update-reference.py {workspace_path} {plugin_path}
Parse the JSON output and report:
## Reference files updated (v{new})
- Files updated: N
- Files created: N
- Files unchanged: N
- User data preserved: (list)
- Warnings: (list)
If scripts/update-reference.py is not found, provide manual update guidance:
Safe to overwrite (no user data): taxonomy.md, workflows.md, shortcuts.md, guardrails.yaml
Requires manual merge (contain user data):
- integrations.md: Copy new constraint rules from plugin, keep your status: fields
- replacements.md: Copy updated header/instructions from plugin, keep your name/team/product rows
- schedule.md: Copy updated format spec from plugin, keep your recurring/one-time items
- kpis.md: Copy updated header/instructions from plugin, keep your team/initiative sections

State files (machine-managed): .housekeeping-state.yaml, maturity.yaml
## Update complete
| Category | Count |
|----------|-------|
| Files updated | N |
| Files created | N |
| Files unchanged | N |
| User data preserved | N sections |
| Warnings | N |
Before performing manual checks, run the automated scripts for deterministic validation:
python3 scripts/health-check.py {workspace_path}
This script performs Steps 2-6 deterministically (naming validation, frontmatter checks, index sync, wikilink detection, replacements coverage). Parse the JSON output and use it to populate the issues table in the report. Only manually investigate items the script cannot assess (Step 7: information redundancy, cross-reference depth).
python3 scripts/archive.py {workspace_path}
or for preview only:
python3 scripts/archive.py {workspace_path} --dry-run
This script scans memory files for staleness and archives expired content. Run with --dry-run first to preview, then confirm with the user before running without the flag. Parse the JSON output and include archived file counts in the report.
The scripts return JSON. Key fields:
- health-check.py: issues array (each with category, file, issue, suggested_fix), auto_fixes array, summary stats
- archive.py: files_archived, expired_lines_removed, archived_files array with paths and reasons

After parsing health-check.py JSON, classify each issue by fixability and execute safe auto-fixes:
Auto-fixable (execute immediately):
| Issue category | Fix action | Safety condition |
|---|---|---|
| naming (decision files) | Rename file to the suggested_fix target | Only if frontmatter date field exists and is valid YYYY-MM-DD |
| index (orphan entries) | Remove orphan row from the category _index.md | Only if source file confirmed absent from disk |
| index (files not in index / stale summaries) | Run python3 scripts/rebuild-indexes.py {workspace_path} once | Deterministic index regeneration |
| replacements (uncovered names) | Add ?? placeholder entries to reference/replacements.md | Existing behavior |
NOT auto-fixable (present to user):
| Issue category | Why |
|---|---|
| frontmatter (missing fields) | Values require user judgment |
| frontmatter (invalid status) | Correct status requires understanding intent |
| wikilink (broken references) | May need entity creation or reference correction |
| replacements (with ?? placeholder) | User must provide canonical name |
If any auto-fix fails (file locked, permission error, target exists), demote it to the manual-fix list with the error reason.
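The classify-then-demote flow can be sketched as follows (the fixer-registry shape is illustrative):

```python
def apply_auto_fixes(issues: list[dict], fixers: dict) -> tuple[list[dict], list[dict]]:
    """Run safe auto-fixes; demote any failure to the manual-fix list.

    `fixers` maps an auto-fixable issue category to a callable that applies
    the fix. Categories without a fixer go straight to the manual list.
    """
    fixed, manual = [], []
    for issue in issues:
        fixer = fixers.get(issue.get("category"))
        if fixer is None:
            manual.append(issue)
            continue
        try:
            fixer(issue)
            fixed.append(issue)
        except OSError as exc:  # file locked, permission denied, target exists
            manual.append({**issue, "issue": f"{issue.get('issue')} (auto-fix failed: {exc})"})
    return fixed, manual
```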
Health mode:
- All _index.md files
- replacements.md

Sync mode:
- reference/schedule.md if exists

Rebuild mode:
Inbox mode:
- inbox/pending/ file list + first 50 lines of each file for classification
- reference/replacements.md + reference/integrations.md + memory/people/_index.md + memory/initiatives/_index.md + memory/decisions/_index.md (all three indexes mandatory for wikilink validation) + list_reminders queries for task verification

Health mode:
Sync mode:
Rebuild mode:
Inbox mode:
- Use .lock files for memory writes from parallel sub-agents
- Skip items classified as unknown (they require manual review)
- Move all inbox/pending/ files to inbox/processing/ BEFORE spawning any sub-agents
- Never fabricate [[wikilinks]] for names not verified against memory indexes (flag as unverified instead)
- Verify tasks via list_reminders after creation

When building future functionality, consider whether the housekeeping script should be updated to include relevant validation elements.