Archives completed FABER workflow state and artifacts to cloud storage for historical tracking and analysis
Archives completed workflow artifacts to cloud storage when triggered by workflow-manager during Release phase or manual `/fractary-faber:archive` command. Verifies issue completion, uploads specs and logs in dependency order, updates GitHub with archive URLs, and commits index changes while preserving all historical data.
Install via:
/plugin marketplace add fractary/claude-plugins
/plugin install fractary-faber@fractary
This skill inherits all available tools. When active, it can use any tool Claude has access to.
<CONTEXT>
You are invoked by the workflow-manager agent when archive operations are requested, either manually via `/fractary-faber:archive` or automatically during the Release phase (if configured).
</CONTEXT>
<CRITICAL_RULES> NEVER VIOLATE THESE RULES:
- Pre-Conditions
- Archive Order
- GitHub Updates
- State Management
- Error Recovery
</CRITICAL_RULES>
<INPUTS>
Required Parameters:
- operation: "archive"
- issue_number (string): Issue number to archive

Optional Parameters:
- skip_specs (boolean): Skip spec archival (default: false)
- skip_logs (boolean): Skip log archival (default: false)
- force (boolean): Skip pre-checks (default: false)
- skip_checks (boolean): Skip pre-checks (internal, default: false)

Context Provided:
{
"issue_number": "123",
"skip_specs": false,
"skip_logs": false,
"force": false
}
</INPUTS>
<WORKFLOW>
Validate inputs before starting:
# Validate issue number format (prevent injection)
# Allow only alphanumeric, hyphens, and underscores (for Jira-style IDs like "PROJ-123")
if ! echo "$ISSUE_NUMBER" | grep -qE '^[A-Za-z0-9_-]+$'; then
echo "❌ Error: Invalid issue number format: $ISSUE_NUMBER"
echo "Issue numbers must contain only letters, numbers, hyphens, and underscores"
exit 2
fi
# Validate issue number length (reasonable limit)
if [ ${#ISSUE_NUMBER} -gt 50 ]; then
echo "❌ Error: Issue number too long (max 50 characters): $ISSUE_NUMBER"
exit 2
fi
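The pattern above accepts plain numeric IDs as well as Jira-style keys. A quick standalone exercise of the same regex against typical inputs:

```shell
# Exercise the validation pattern used above against typical identifiers.
for id in "123" "PROJ-123" "bad id!"; do
  if echo "$id" | grep -qE '^[A-Za-z0-9_-]+$'; then
    echo "$id: valid"
  else
    echo "$id: invalid"
  fi
done
# → 123: valid
# → PROJ-123: valid
# → bad id!: invalid
```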
🎯 STARTING: Archive Workflow Skill
Issue: #{issue_number}
Options: {skip_specs ? "Skip Specs" : ""} {skip_logs ? "Skip Logs" : ""} {force ? "Force" : ""}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Skip this step if force or skip_checks is true.
Use @agent-fractary-work:work-manager to get issue status:
{
"operation": "get-issue",
"issue_number": "{{issue_number}}"
}
Expected result:
Decision logic:
if issue.state == "closed" OR pr.state == "merged":
  ✅ Issue/PR complete
else:
  ⚠️ Issue still open and PR not merged
  Prompt: "Issue not closed. Continue anyway? (y/n/cancel)"
  if user says no: exit 0
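The decision above can be sketched as a small shell helper (the function name and state strings are illustrative assumptions; the real check uses the work-manager result):

```shell
# Illustrative helper: an issue counts as complete if it is closed or its
# PR is merged, mirroring the decision logic above.
is_complete() {
  local issue_state="$1" pr_state="$2"
  [ "$issue_state" = "closed" ] || [ "$pr_state" = "merged" ]
}

if is_complete "open" "merged"; then
  echo "✅ Issue/PR complete"
else
  echo "⚠️ Issue still open and PR not merged"
fi
```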
Look for recent documentation updates:
# Cross-platform date formatting function
format_timestamp() {
local ts="$1"
if [ "$ts" = "0" ]; then
echo "never"
return
fi
# Try GNU date first (Linux)
if date -d "@$ts" +"%Y-%m-%d" 2>/dev/null; then
return
fi
# Fall back to BSD date (macOS)
if date -r "$ts" +"%Y-%m-%d" 2>/dev/null; then
return
fi
# Last resort
echo "unknown"
}
# Check when docs were last updated (get timestamps for proper comparison)
DOCS_MODIFIED_TS=$(git log -1 --format="%ct" -- docs/ 2>/dev/null || echo "0")
SPEC_CREATED_TS=$(git log -1 --format="%ct" -- specs/ 2>/dev/null || echo "0")
if [ "$DOCS_MODIFIED_TS" -lt "$SPEC_CREATED_TS" ]; then
DOCS_MODIFIED=$(format_timestamp "$DOCS_MODIFIED_TS")
SPEC_CREATED=$(format_timestamp "$SPEC_CREATED_TS")
echo "⚠️ Documentation not updated since spec creation"
echo " Docs last updated: $DOCS_MODIFIED"
echo " Spec created: $SPEC_CREATED"
fi
Decision logic:
if docs_outdated:
  ⚠️ Documentation may be outdated
  Prompt: "Update docs first? (yes/no/cancel)"
  if user says yes:
    Guide user to update docs
    exit 0
  if user says cancel:
    exit 0
Check if specs have been validated:
# Check for validation markers in spec files
SPEC_FILES=$(find specs/ -name "spec-${ISSUE_NUMBER}-*.md" 2>/dev/null)
if [ -n "$SPEC_FILES" ]; then
  # Check if a validation section exists (word splitting on $SPEC_FILES is intentional)
  VALIDATED=$(grep -l "## Validation" $SPEC_FILES 2>/dev/null)
  if [ -z "$VALIDATED" ]; then
    echo "⚠️ Specs not validated"
  fi
fi
Decision logic:
if specs_not_validated:
  ⚠️ Specs not validated
  Note: This is non-blocking, just a warning
Show summary and ask for confirmation:
Ready to archive issue #{{issue_number}}
Status:
{{issue_status}}
{{docs_status}}
{{spec_status}}
Continue with archive? (y/n)
Wait for user response. If "n", exit 0.
Skip this step if skip_specs is true.
Use @agent-fractary-spec:spec-manager to archive specs:
{
"operation": "archive",
"issue_number": "{{issue_number}}",
"skip_checks": true
}
Expected result:
{
"success": true,
"specs_archived": 2,
"spec_urls": [
{"filename": "spec-123-feature.md", "url": "https://...", "size": "12.3 KB"},
{"filename": "spec-123-api.md", "url": "https://...", "size": "8.7 KB"}
],
"index_updated": true
}
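Before moving on to log archival, the skill should guard on the `success` field of the spec-manager result. A minimal plain-shell sketch (field names taken from the expected result above; a JSON-aware tool such as jq would be more robust):

```shell
# Sketch: guard on the spec-manager result before proceeding to logs.
RESULT='{"success": true, "specs_archived": 2, "index_updated": true}'
if echo "$RESULT" | grep -q '"success": true'; then
  # Pull the archived count out of the JSON with sed (crude but dependency-free)
  COUNT=$(echo "$RESULT" | sed -n 's/.*"specs_archived": \([0-9]*\).*/\1/p')
  echo "✅ Specs archived ($COUNT)"
else
  echo "❌ Spec archive failed" >&2
  exit 1
fi
```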
Error handling:
if spec_archive_fails:
  ❌ Spec archive failed
  Error: {error_message}
  This is a critical failure. Cannot proceed with log archival.
  Recovery:
  1. Check cloud storage configuration
  2. Verify network connectivity
  3. Retry: /fractary-faber:archive {{issue_number}}
  exit 1
On success:
✅ Specs archived
- Uploaded {{count}} specifications
- Total size: {{total_size}}
- Index updated
Skip this step if skip_logs is true.
Use @agent-fractary-logs:log-manager to archive logs:
{
"operation": "archive",
"issue_number": "{{issue_number}}",
"skip_checks": true
}
Expected result:
{
"success": true,
"logs_archived": 4,
"log_urls": [
{"type": "session", "filename": "session-123.log", "url": "https://...", "size": "45.2 KB"},
{"type": "build", "filename": "build-123.log", "url": "https://...", "size": "23.1 KB"},
{"type": "test", "filename": "test-123.log", "url": "https://...", "size": "18.9 KB"},
{"type": "debug", "filename": "debug-123.log.gz", "url": "https://...", "size": "102.4 KB"}
],
"compressed": 1,
"index_updated": true
}
Error handling:
if log_archive_fails:
❌ Log archive failed (specs already archived)
Error: {error_message}
Specs were successfully archived, but log archival failed.
Recovery:
1. Manual retry: /fractary-logs:archive {{issue_number}}
2. Or retry full archive: /fractary-faber:archive {{issue_number}} --skip-specs
Continue with GitHub updates anyway? (y/n)
On success:
✅ Logs archived
- Uploaded {{count}} logs ({{types}})
- Compressed: {{compressed_count}} large logs
- Total size: {{total_size}}
- Index updated
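The summary lines report human-readable sizes while the structured result carries `total_size_bytes`. One way to derive the former from the latter (a sketch using integer shell arithmetic only; truncates rather than rounds):

```shell
# Sketch: convert a byte count to the human-readable sizes shown above.
format_size() {
  local bytes="$1"
  if [ "$bytes" -ge 1048576 ]; then
    echo "$((bytes / 1048576)).$(((bytes % 1048576) * 10 / 1048576)) MB"
  elif [ "$bytes" -ge 1024 ]; then
    echo "$((bytes / 1024)).$(((bytes % 1024) * 10 / 1024)) KB"
  else
    echo "${bytes} B"
  fi
}

format_size 204800   # → 200.0 KB
```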
Use @agent-fractary-work:work-manager to post comment:
{
"operation": "comment",
"issue_number": "{{issue_number}}",
"comment": "{{formatted_archive_comment}}"
}
Comment format:
✅ **FABER Workflow Archived**
All artifacts for this work have been permanently archived!
**Specifications** ({{spec_count}}):
{{#each spec_urls}}
- [{{filename}}]({{url}}) ({{size}})
{{/each}}
**Logs** ({{log_count}}):
{{#each log_urls}}
- [{{type}}: {{filename}}]({{url}}) ({{size}})
{{/each}}
**Total Size**: {{total_size}} (compressed)
**Archived**: {{timestamp}}
These artifacts are searchable via:
- `/fractary-spec:read {{issue_number}}`
- `/fractary-logs:read {{issue_number}}`
- `/fractary-logs:search "query"`
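The templated comment above is rendered by the work-manager. As a condensed stand-in, the body could be assembled in shell like this (function and variable names are illustrative assumptions, not part of the real template):

```shell
# Sketch: assemble a condensed archive comment from counts gathered earlier.
build_comment() {
  local spec_count="$1" log_count="$2" total_size="$3"
  cat <<EOF
✅ **FABER Workflow Archived**
**Specifications**: $spec_count | **Logs**: $log_count
**Total Size**: $total_size (compressed)
EOF
}

build_comment 2 4 "210 KB"
```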
If PR URL exists from pre-checks:
{
"operation": "comment-pr",
"pr_number": "{{pr_number}}",
"comment": "{{formatted_pr_comment}}"
}
PR Comment format:
📦 **Artifacts Archived**
Specifications and logs for this PR have been archived to cloud storage.
See issue #{{issue_number}} for complete archive details.
Error handling:
if github_comment_fails:
❌ Failed to comment on GitHub
Error: {error_message}
Archive succeeded, but GitHub comment failed.
You may want to manually comment with the archive URLs.
Continue anyway (archive is complete).
On success:
✅ GitHub updated
- Commented on issue #{{issue_number}}
{{#if pr_exists}}
- Commented on PR #{{pr_number}}
{{/if}}
Clean up local files and commit index changes:
# Stage archive index updates
git add specs/.archive-index.json logs/.archive-index.json 2>/dev/null
# Check if there are actually changes to commit
if git diff --cached --quiet; then
echo "✅ Archive indexes (no changes to commit)"
else
# Commit archive index updates (must succeed before considering cleanup complete)
if ! git commit -m "Archive artifacts for issue #{{issue_number}}"; then
echo "❌ Failed to commit archive index updates"
echo ""
echo "Archives uploaded successfully, but local cleanup incomplete."
echo "Manual cleanup required:"
echo " git add specs/.archive-index.json logs/.archive-index.json"
echo " git commit -m 'Archive cleanup for issue #{{issue_number}}'"
exit 1
fi
echo "✅ Archive indexes committed"
fi
Note: Archived files are removed by fractary-spec and fractary-logs agents during their archive operations (Steps 3 and 4), not in this cleanup step. This step only commits the index updates.
On success:
✅ Local cleanup
- Archived files removed (by spec/log managers)
- Archive indexes committed
Output completion message and return structured result:
✅ COMPLETED: Archive Workflow Skill
Issue: #{{issue_number}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Results:
- Specs archived: {{spec_count}}
- Logs archived: {{log_count}}
- Total size: {{total_size}}
- GitHub updated: {{github_updated}}
- Local cleaned: {{local_cleaned}}
Next: Archive complete! All artifacts permanently stored.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Return structured result to workflow-manager:
{
"success": true,
"issue_number": "{{issue_number}}",
"specs_archived": {{spec_count}},
"logs_archived": {{log_count}},
"total_size_bytes": {{total_size}},
"archive_urls": {
"specs": [...],
"logs": [...]
},
"local_cleaned": true,
"github_updated": true
}
</WORKFLOW>
<ERROR_HANDLING>
To maintain consistency with the archive command, use these exit codes:
Always provide clear next steps:
</ERROR_HANDLING>
<COMPLETION_CRITERIA>
Archive is complete when:
</COMPLETION_CRITERIA>
<OUTPUTS>Return structured JSON result to workflow-manager:
{
"success": true,
"issue_number": "123",
"specs_archived": 2,
"logs_archived": 4,
"total_size_bytes": 204800,
"archive_urls": {
"specs": [
{"filename": "spec-123-feature.md", "url": "https://...", "size": "12.3 KB"}
],
"logs": [
{"type": "session", "filename": "session-123.log", "url": "https://...", "size": "45.2 KB"}
]
},
"local_cleaned": true,
"github_updated": true,
"warnings": []
}
If errors occurred:
{
"success": false,
"issue_number": "123",
"error": "Spec archive failed",
"partial_success": {
"specs_archived": 0,
"logs_archived": 0
},
"recovery_steps": [
"Check cloud storage configuration",
"Verify network connectivity",
"Retry: /fractary-faber:archive 123"
]
}
</OUTPUTS>
<DOCUMENTATION>
This skill documents its work through:
The workflow-manager uses the structured result to: