Archives completed logs to cloud storage with index management and cleanup
Archives completed logs to cloud storage using path-based retention policies from your config file. Automatically triggered when logs expire or work completes, it compresses large files, uploads to type-specific cloud paths, and cleans up local storage while respecting retention exceptions like production logs or open issues.
```
/plugin marketplace add fractary/claude-plugins
/plugin install fractary-logs@fractary
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
- docs/archive-process.md
- docs/partial-upload-recovery.md
- scripts/acquire-lock.sh
- scripts/cleanup-local.sh
- scripts/collect-logs.sh
- scripts/compress-logs.sh
- scripts/copy-to-docs.sh
- scripts/find-old-logs.sh
- scripts/load-retention-policy.sh
- scripts/prepare-upload-metadata.sh
- scripts/release-lock.sh
- scripts/retry-failed-uploads.sh
- scripts/update-file-status.sh
- scripts/update-index.sh
- scripts/validate-config.sh
- workflow/archive-issue-logs.md
- workflow/time-based-cleanup.md

v2.0 Update: Configuration is now centralized - retention policies are defined in .fractary/plugins/logs/config.json with path-based rules. Session logs are kept 7 days local / forever in cloud, test logs 3 days / 7 days, audit logs 90 days / forever. You load retention policies from the user's config file, not from plugin source files.
CRITICAL: Load config from the project working directory (.fractary/plugins/logs/config.json), NOT the plugin installation directory (~/.claude/plugins/marketplaces/...).
You collect logs based on retention rules, match them against path patterns in config, compress large files, upload to cloud storage via fractary-file, maintain a type-aware archive index, and clean up local storage.
</CONTEXT>
<CRITICAL_RULES>
Load config from .fractary/plugins/logs/config.json (in the project working directory, NOT the plugin installation directory).
</CRITICAL_RULES>
<WORKFLOW>
When archiving logs based on retention policy:
Invoke log-lister skill:
Read user's config file: .fractary/plugins/logs/config.json
- retention.default - fallback policy for unmatched paths
- retention.paths array - path-specific retention rules

Example config structure:
```json
{
  "retention": {
    "default": {
      "local_days": 30,
      "cloud_days": "forever",
      "priority": "medium",
      "auto_archive": true,
      "cleanup_after_archive": true
    },
    "paths": [
      {
        "pattern": "sessions/*",
        "log_type": "session",
        "local_days": 7,
        "cloud_days": "forever",
        "priority": "high",
        "auto_archive": true,
        "cleanup_after_archive": false,
        "retention_exceptions": {
          "keep_if_linked_to_open_issue": true,
          "keep_recent_n": 10
        },
        "archive_triggers": {
          "age_days": 7,
          "size_mb": null,
          "status": ["stopped", "error"]
        },
        "compression": {
          "enabled": true,
          "format": "gzip",
          "threshold_mb": 1
        }
      },
      {
        "pattern": "test/*",
        "log_type": "test",
        "local_days": 3,
        "cloud_days": 7,
        "priority": "low",
        "auto_archive": true,
        "cleanup_after_archive": true
      },
      {
        "pattern": "audit/*",
        "log_type": "audit",
        "local_days": 90,
        "cloud_days": "forever",
        "priority": "critical",
        "retention_exceptions": {
          "never_delete_security_incidents": true,
          "never_delete_compliance_audits": true
        }
      }
    ]
  }
}
```
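A quick way to sanity-check this config is a jq query (assuming jq is available); this sketch prints the fallback policy and the configured patterns:

```bash
#!/usr/bin/env bash
# Sketch: confirm the config parses and inspect retention rules.
CONFIG=".fractary/plugins/logs/config.json"

# Fails with a non-zero exit if the JSON is malformed or the key is missing.
jq -e '.retention.default' "$CONFIG"

# List every configured path pattern, in matching order.
jq -r '.retention.paths[].pattern' "$CONFIG"
```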
Path matching algorithm:
- List logs under the /logs/ directory
- Match each log path against the patterns in the retention.paths array (in order)
- Fall back to the retention.default policy when no pattern matches

Execute scripts/check-retention-status.sh:
For each log:
Check retention_exceptions from the matched policy in config.json:
```javascript
// Session example
if (retention_exceptions.keep_if_linked_to_open_issue) {
  // Check whether the linked issue is still open via the GitHub API
  if (issue_is_open) {
    status = "protected";
  }
}

if (retention_exceptions.keep_recent_n) {
  // Keep the N most recent logs regardless of age
  if (log_rank <= retention_exceptions.keep_recent_n) {
    status = "protected";
  }
}

// Deployment example
if (retention_exceptions.never_delete_production && log.environment === "production") {
  status = "protected";
}

// Audit example
if (retention_exceptions.never_delete_security_incidents && log.audit_type === "security") {
  status = "protected";
}
```
Group expired logs by type:
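For illustration, a jq sketch of this grouping, assuming the collected logs are flattened into a JSON array of objects with log_type and path fields (expired-logs.json here is hypothetical):

```bash
#!/usr/bin/env bash
# Sketch: bucket expired logs by their log_type field.
jq 'group_by(.log_type)
    | map({ (.[0].log_type): map(.path) })
    | add' expired-logs.json
```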
Execute scripts/compress-logs.sh:
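A hedged sketch of the decision this script makes, assuming the policy JSON shape from the config example above (this is not the script's actual source):

```bash
#!/usr/bin/env bash
# Sketch: compress a log only when the matched policy enables compression
# and the file size meets threshold_mb.
LOG_FILE="$1"      # path to the log
POLICY="$2"        # matched retention policy JSON

enabled=$(jq -r '.compression.enabled // false' <<<"$POLICY")
threshold_mb=$(jq -r '.compression.threshold_mb // 1' <<<"$POLICY")
size_bytes=$(($(wc -c <"$LOG_FILE")))

if [ "$enabled" = "true" ] && [ "$size_bytes" -ge $((threshold_mb * 1048576)) ]; then
  gzip -k "$LOG_FILE"             # keep the original; upload the .gz
  echo "${LOG_FILE}.gz"
else
  echo "$LOG_FILE"                # disabled or below threshold: upload as-is
fi
```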
Execute scripts/upload-to-cloud.sh:
Upload to the type-specific cloud path: archive/logs/{year}/{month}/{log_type}/{filename}

Execute scripts/update-index.sh:
```json
{
  "version": "2.0",
  "type_aware": true,
  "archives": [
    {
      "log_id": "session-550e8400",
      "log_type": "session",
      "issue_number": 123,
      "archived_at": "2025-11-23T10:00:00Z",
      "local_path": ".fractary/logs/session/session-550e8400.md",
      "cloud_url": "r2://logs/2025/11/session/session-550e8400.md.gz",
      "original_size_bytes": 125000,
      "compressed_size_bytes": 42000,
      "retention_policy": {
        "local_days": 7,
        "cloud_policy": "forever"
      },
      "delete_local_after": "2025-11-30T10:00:00Z"
    }
  ],
  "by_type": {
    "session": {"count": 12, "total_size_mb": 15.2},
    "test": {"count": 45, "total_size_mb": 8.7},
    "audit": {"count": 3, "total_size_mb": 2.1}
  }
}
```
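The by_type summary makes quick queries cheap. For example, with jq against the index path above:

```bash
#!/usr/bin/env bash
# Sketch: read the type-aware archive index.
INDEX=".fractary/logs/.archive-index.json"

# Per-type counts and sizes from the by_type summary.
jq -r '.by_type | to_entries[]
       | "\(.key): \(.value.count) logs, \(.value.total_size_mb) MB"' "$INDEX"

# Cloud URLs of every archived session log.
jq -r '.archives[] | select(.log_type == "session") | .cloud_url' "$INDEX"
```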
Execute scripts/cleanup-local.sh:
If docs_integration.copy_summary_to_docs is enabled in config:
Execute scripts/copy-to-docs.sh:
```bash
./scripts/copy-to-docs.sh \
  --summary-path "$SUMMARY_PATH" \
  --docs-path "$DOCS_PATH" \
  --issue-number "$ISSUE_NUMBER" \
  --update-index "$UPDATE_INDEX"
```
This step:
- Copies the summary into the docs/conversations/ directory
- Names the file {date}-{issue_number}-{slug}.md
- Trims the index to the max_index_entries most recent entries

If archiving issue-related logs:
Report archival results grouped by type
When archiving logs for a completed issue:
Execute scripts/collect-logs.sh:
For each log type found:
When verifying archived logs:
Read .fractary/logs/.archive-index.json
For each archived entry:
```
Archive Verification Report
───────────────────────────────────────
Total archived: 60 logs across 4 types

By type:
✓ session: 12 logs (all verified)
✓ test: 45 logs (all verified)
⚠ build: 2 logs (1 missing in cloud)
✓ audit: 1 log (verified)

Issues:
- build-2025-11-10-001.md.gz: Cloud file not found

Recommendation: Re-upload missing build log
```
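A sketch of the per-entry check behind that report; cloud_stat is a placeholder for whatever existence check your fractary-file setup provides, not a confirmed command:

```bash
#!/usr/bin/env bash
# Sketch: verify that each archive index entry still exists in cloud storage.
INDEX=".fractary/logs/.archive-index.json"

jq -r '.archives[] | "\(.log_id)\t\(.cloud_url)"' "$INDEX" |
while IFS=$'\t' read -r log_id url; do
  # cloud_stat is hypothetical; substitute the real existence check.
  if cloud_stat "$url" >/dev/null 2>&1; then
    echo "OK      $log_id"
  else
    echo "MISSING $log_id -> $url"
  fi
done
```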
</WORKFLOW>
<SCRIPTS>
**scripts/check-retention-status.sh**
Purpose: Calculate retention status per log path
Usage: check-retention-status.sh <log_path> <config_file>
Outputs: JSON with retention status (active/expiring/expired/protected)
v2.0 CHANGE: Reads retention policies from .fractary/plugins/logs/config.json (retention.paths array), matches log path against patterns
**scripts/collect-logs.sh**
Purpose: Find all logs for an issue, grouped by type
Usage: collect-logs.sh <issue_number>
Outputs: JSON with logs grouped by log_type
v2.0 CHANGE: Returns type-grouped structure
**scripts/compress-logs.sh**
Purpose: Compress log based on path-specific compression settings
Usage: compress-logs.sh <log_file> <retention_policy_json>
Outputs: Compressed file path or original if not compressed
v2.0 CHANGE: Respects per-path compression.enabled, compression.format, and compression.threshold_mb from config
**scripts/upload-to-cloud.sh**
Purpose: Upload log to type-specific cloud path
Usage: upload-to-cloud.sh <log_type> <log_file>
Outputs: Cloud URL
v2.0 CHANGE: Uses type-specific path structure
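A sketch of that path construction, following the archive/logs/{year}/{month}/{log_type}/{filename} layout named in the workflow:

```bash
#!/usr/bin/env bash
# Sketch: build the cloud destination for a log upload.
LOG_TYPE="$1"
LOG_FILE="$2"

year=$(date -u +%Y)
month=$(date -u +%m)
echo "archive/logs/${year}/${month}/${LOG_TYPE}/$(basename "$LOG_FILE")"
```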
**scripts/update-index.sh**
Purpose: Update type-aware archive index
Usage: update-index.sh <archive_metadata_json>
Outputs: Updated index path
v2.0 CHANGE: Includes type-specific retention metadata from user config
**scripts/cleanup-local.sh**
Purpose: Remove local logs based on path-specific retention
Usage: cleanup-local.sh <config_file> [--dry-run]
Outputs: List of deleted files by type
v2.0 CHANGE: Reads retention.paths from config, matches logs against patterns, respects per-path cleanup_after_archive and local_days settings
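A dry run is a safe first pass before letting it delete anything:

```bash
# Preview deletions without removing any files.
./scripts/cleanup-local.sh .fractary/plugins/logs/config.json --dry-run
```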
**scripts/load-retention-policy.sh**
Purpose: Load retention policy for a specific log path
Usage: load-retention-policy.sh <log_path> <config_file>
Outputs: JSON with matched retention policy (from paths array or default)
v2.0 NEW: Core script for path-based retention matching - tests log path against all patterns in config, returns first match or default
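A minimal sketch of the first-match-wins lookup this script performs, assuming shell glob semantics for the patterns:

```bash
#!/usr/bin/env bash
# Sketch: return the first retention.paths entry whose pattern matches the
# log path, falling back to retention.default. Order in the config matters.
LOG_PATH="$1"     # e.g., sessions/session-550e8400.md
CONFIG="$2"       # .fractary/plugins/logs/config.json

count=$(jq '.retention.paths | length' "$CONFIG")
for ((i = 0; i < count; i++)); do
  pattern=$(jq -r ".retention.paths[$i].pattern" "$CONFIG")
  case "$LOG_PATH" in
    $pattern) jq ".retention.paths[$i]" "$CONFIG"; exit 0 ;;   # first match wins
  esac
done
jq '.retention.default' "$CONFIG"   # nothing matched: use the fallback policy
```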
**scripts/copy-to-docs.sh**
Purpose: Copy session summaries to docs/conversations/ for project documentation
Usage: copy-to-docs.sh --summary-path <path> --docs-path <path> [--issue-number <num>] [--update-index true|false]
Outputs: JSON with copy results including target path and index update status
v2.0 NEW: Supports docs_integration config for automatic summary archival to project docs
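The target filename follows {date}-{issue_number}-{slug}.md; the slug derivation below is an assumption for illustration, not confirmed script behavior:

```bash
#!/usr/bin/env bash
# Sketch: assemble a docs/conversations/ target name.
ISSUE_NUMBER="$1"
TITLE="$2"        # e.g., the session summary title (assumed slug source)

date=$(date -u +%Y-%m-%d)
slug=$(tr '[:upper:]' '[:lower:]' <<<"$TITLE" | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//')
echo "docs/conversations/${date}-${ISSUE_NUMBER}-${slug}.md"
```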
</SCRIPTS>
<COMPLETION_CRITERIA>
Operation complete when:
- Every matched log has been archived, protected by an exception, or reported as failed
- The archive index reflects all new uploads
- Local cleanup has run according to each path's policy
- Results are reported grouped by type
</COMPLETION_CRITERIA>
<OUTPUTS>
Archive by type:
```
🎯 STARTING: Log Archive
Filter: log_type=test, retention_expired=true
───────────────────────────────────────
Loading retention policies...
✓ test: 3 days local, 7 days cloud
✓ session: 7 days local, forever cloud
✓ build: 3 days local, 30 days cloud

Checking retention status...
✓ Found 52 logs past retention

Retention analysis:
- expired: 45 logs (archive candidates)
- protected: 5 logs (linked to open issues)
- recent_keep: 2 logs (keep_recent_n rule)

Archiving by type:

test: 30 logs
✓ Compressed 5 large logs (2.1 MB → 0.7 MB)
✓ Uploaded to cloud: archive/logs/2025/11/test/
✓ Deleted local copies (expired > 3 days)
Space freed: 2.1 MB

session: 10 logs
✓ Compressed 8 large logs (15.2 MB → 5.1 MB)
✓ Uploaded to cloud: archive/logs/2025/11/session/
✓ Kept local (within 7 day retention)
Space uploaded: 15.2 MB

build: 5 logs
✓ All < 1MB, no compression needed
✓ Uploaded to cloud: archive/logs/2025/11/build/
✓ Deleted local copies (expired > 3 days)
Space freed: 0.8 MB

Updating archive index...
✓ Added 45 entries (type-aware)
✓ Index: .fractary/logs/.archive-index.json

✅ COMPLETED: Log Archive
Archived: 45 logs across 3 types
Protected: 7 logs (retention exceptions)
Space freed: 2.9 MB | Uploaded: 20.3 MB
───────────────────────────────────────
Next: Verify archive with /fractary-logs:verify-archive
```
Retention status:
```
Retention Status by Type
───────────────────────────────────────
session (7d local, forever cloud):
- Active: 8 logs
- Expiring soon: 2 logs (< 3 days)
- Expired: 10 logs
- Protected: 3 logs (open issues)

test (3d local, 7d cloud):
- Active: 12 logs
- Expired: 30 logs

audit (90d local, forever cloud):
- Active: 2 logs
- Protected: 1 log (security incident, never delete)
```
</OUTPUTS>
<DOCUMENTATION>
Archive operations are documented in the **type-aware archive index** at `.fractary/logs/.archive-index.json`. Each entry records the retention policy for its log type.
Retention policies centralized in user config: .fractary/plugins/logs/config.json
- retention.paths array - path-specific rules
- retention.default - fallback policy
</DOCUMENTATION>
<ERROR_HANDLING>
If cloud upload fails: keep the local copies, report a partial failure (see the example below), and retry with scripts/retry-failed-uploads.sh; docs/partial-upload-recovery.md covers recovery in detail.
If multiple exceptions apply:
```
⚠️ CONFLICT: Multiple retention exceptions
Log: deployment-prod-2025-11-01.md
Rules:
- never_delete_production (from deployment retention config)
- keep_recent_n=20 (would delete, rank 25)
Resolution: never_delete takes precedence
Action: Keeping log (protected)
```
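One way to encode that precedence - never_delete rules are consulted before any count- or age-based deletion; a sketch under that assumption, not the plugin's actual implementation:

```bash
#!/usr/bin/env bash
# Sketch: resolve retention-exception conflicts for one log.
EXCEPTIONS="$1"   # the matched policy's retention_exceptions object (JSON)
LOG="$2"          # the log's metadata JSON (environment, rank; rank is assumed)

status="expired"
if [ "$(jq -r '.never_delete_production // false' <<<"$EXCEPTIONS")" = "true" ] &&
   [ "$(jq -r '.environment' <<<"$LOG")" = "production" ]; then
  status="protected"                       # never_delete always wins
elif [ "$(jq -r '.rank // 999999' <<<"$LOG")" -le \
       "$(jq -r '.keep_recent_n // 0' <<<"$EXCEPTIONS")" ]; then
  status="protected"                       # within the keep_recent_n window
fi
echo "$status"
```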
```
❌ PARTIAL FAILURE: Archive operation
Success:
✓ test: 30 logs archived
✓ session: 10 logs archived
Failed:
✗ audit: Cloud upload failed (permission denied)
Action: Audit logs kept locally, other types processed
Retry: /fractary-logs:archive --type audit --retry
```
</ERROR_HANDLING>
What changed:
- Retention policies now live in .fractary/plugins/logs/config.json (not plugin source)
- Path patterns (e.g., sessions/*) match logs to retention policies
- types/{type}/retention-config.json is no longer used

What stayed the same:
- The overall archive flow: collect, compress, upload, index, clean up
Benefits:
- All retention policies live in one place: .fractary/plugins/logs/config.json

Migration path:
- Run /fractary-logs:init --force to generate a new v2.0 config
- Review the retention.paths array and adjust as needed