Classifies logs by type (session, test, build) using path patterns and frontmatter analysis
Analyzes log content, metadata, and context to classify logs into specific types (session, test, build, deployment, etc.) using pattern matching and keyword detection. Triggers when processing logs to determine their category or reclassify existing untyped logs.
Install via:
- `/plugin marketplace add fractary/claude-plugins`
- `/plugin install fractary-logs@fractary`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Supporting scripts:
- `scripts/classify-log.sh`
- `scripts/generate-recommendation.sh`

You work by applying classification rules and pattern matching to identify log characteristics, then recommending the most appropriate type. You can also reclassify existing _untyped logs into specific types. </CONTEXT>
<INPUTS>
For new log classification:
- `content` - Log content (markdown or raw text)
- `metadata` - Optional metadata object (fields, keywords, source)
- `context` - Optional context (command executed, trigger event)

For reclassification:
- `log_path` - Path to existing log file
- `force` - If true, reclassify even if already typed

Example request:
{
"operation": "classify-log",
"content": "Test suite execution results: 45 passed, 3 failed...",
"metadata": {
"command": "pytest",
"exit_code": 1,
"duration": 12.5
}
}
</INPUTS>
<WORKFLOW>
## Step 1: Extract Classification Signals
Analyze input to identify:
- **Keywords**: session_id, build, deploy, test, error, audit, backup, etc.
- **Commands**: pytest, npm build, terraform apply, git commit, etc.
- **Patterns**: UUID patterns, version numbers, timestamps, stack traces
- **Structure**: Frontmatter presence, section headers, metadata fields
- **Metadata**: Exit codes, durations, repositories, environments
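As an illustration, keyword detection for a few of these signals might look like the following sketch. This is an assumption about the approach, not the actual contents of `scripts/classify-log.sh`; the patterns and type names are illustrative.

```shell
# Hypothetical signal-extraction sketch -- NOT the real classify-log.sh.
# Each grep checks the log content for keywords tied to one candidate type
# and appends that type's name to the detected-signals list.
extract_signals() {
  local content_file="$1"
  local signals=""
  grep -qiE 'pytest|jest|test suite|[0-9]+ (passed|failed)' "$content_file" \
    && signals="$signals test"
  grep -qiE 'npm (run )?build|webpack|compil(e|ation)' "$content_file" \
    && signals="$signals build"
  grep -qiE 'terraform apply|kubectl|deploy(ment)?' "$content_file" \
    && signals="$signals deployment"
  echo "signals:$signals"
}
```

For example, a log containing "45 passed, 3 failed" would produce `signals: test`.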
## Step 2: Score Candidate Types
Execute scripts/classify-log.sh with the extracted signals.
For each candidate type, score 0-100 based on how many of the signals above match that type's rules.
Threshold: Recommend a type only if its score >= 70.
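One way to realize 0-100 scoring is additive pattern weights capped at 100. This sketch is a hedged assumption: the weight of 35 per match and the cap are illustrative, and the real rules live in `scripts/classify-log.sh`.

```shell
# Hypothetical additive scoring sketch -- the per-match weight is an assumption.
# Each matched pattern adds a fixed weight; the total is capped at 100.
score_type() {
  local content_file="$1"; shift
  local score=0
  local pattern
  for pattern in "$@"; do
    grep -qiE "$pattern" "$content_file" && score=$((score + 35))
  done
  [ "$score" -gt 100 ] && score=100
  echo "$score"
}
```

With this weighting, two matched patterns (70) clear the recommendation threshold while one (35) does not.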
## Step 3: Generate Recommendation
Execute scripts/generate-recommendation.sh to format the output:
High confidence (>= 90):
{
"recommended_type": "test",
"confidence": 95,
"reasoning": "Strong indicators: pytest command, test counts, coverage metrics",
"matched_patterns": ["test framework", "pass/fail counts", "duration"],
"suggested_fields": {
"test_id": "test-2025-11-16-001",
"test_framework": "pytest",
"total_tests": 48,
"passed_tests": 45,
"failed_tests": 3
}
}
Medium confidence (70-89):
{
"recommended_type": "operational",
"confidence": 75,
"reasoning": "Detected backup operation keywords and duration metrics",
"alternative_types": ["_untyped"],
"review_recommended": true
}
Low confidence (< 70):
{
"recommended_type": "_untyped",
"confidence": 45,
"reasoning": "Insufficient patterns to classify confidently",
"candidates": [
{"type": "debug", "score": 45},
{"type": "operational", "score": 38}
],
"manual_review_required": true
}
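The three confidence bands above reduce to simple thresholding. This helper is a sketch of what `scripts/generate-recommendation.sh` presumably does internally; the function name is hypothetical.

```shell
# Hypothetical confidence banding using the thresholds from this workflow:
# >= 90 high, 70-89 medium, < 70 low (fall back to '_untyped').
band_for_score() {
  local score="$1"
  if [ "$score" -ge 90 ]; then
    echo "high"
  elif [ "$score" -ge 70 ]; then
    echo "medium"
  else
    echo "low"
  fi
}
```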
</WORKFLOW>
<COMPLETION_CRITERIA>
✅ Classification signals extracted from content
✅ All type rules evaluated with scores
✅ Confidence score calculated
✅ Recommendation generated with reasoning
✅ Suggested fields provided (if high confidence)
</COMPLETION_CRITERIA>
<OUTPUTS>
Return to caller:
```
🎯 STARTING: Log Classifier
Content size: {bytes} bytes
Metadata fields: {count}
───────────────────────────────────────
📊 Classification Analysis:
Signals detected: {list}
Type scores: {scores}
───────────────────────────────────────
✅ COMPLETED: Log Classifier
Recommended type: test
Confidence: 95% (high)
Reasoning: {explanation}
───────────────────────────────────────
Next: Use log-writer to create typed log, or log-validator to verify structure
```
</OUTPUTS>
<DOCUMENTATION>
Write to execution log:
- Operation: classify-log
- Recommended type: {type}
- Confidence: {score}
- Alternative types: {list}
- Timestamp: ISO 8601
</DOCUMENTATION>
<ERROR_HANDLING>
**Empty content:**
❌ ERROR: No content provided for classification
Provide either 'content' field or 'log_path' to an existing file
**File not found (reclassification):**
❌ ERROR: Log file not found
Path: {log_path}
Cannot reclassify a non-existent log
**Classification failed:**
⚠️ WARNING: Classification uncertain
All type scores below confidence threshold (< 70)
Defaulting to '_untyped' with manual review flag
Suggestion: Add more context or metadata to improve classification
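A minimal guard covering the first two error cases might look like the following sketch. The function name, return codes, and message wording are assumptions for illustration.

```shell
# Hypothetical input guard for the error cases above.
# Returns 1 when neither content nor a log path is supplied,
# 2 when the given log path does not point to an existing file.
validate_input() {
  local content="$1" log_path="$2"
  if [ -z "$content" ] && [ -z "$log_path" ]; then
    echo "ERROR: No content provided for classification" >&2
    return 1
  fi
  if [ -n "$log_path" ] && [ ! -f "$log_path" ]; then
    echo "ERROR: Log file not found: $log_path" >&2
    return 2
  fi
  return 0
}
```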
</ERROR_HANDLING>
## Scripts
This skill uses two supporting scripts:
1. **`scripts/classify-log.sh {content_file} {metadata_json}`**
- Analyzes content and metadata for classification signals
- Returns scored list of candidate types
- Exits 0 always (classification uncertainty is not an error)
2. **`scripts/generate-recommendation.sh {scores_json}`**
- Formats classification results as recommendation
- Adds reasoning and suggested fields
- Outputs JSON recommendation object