Analyze skill content for optimal placement (Skill vs Passive Context vs Hybrid). Compress markdown to pipe-delimited format (60-80% token reduction). Validate content placement compliance against decision framework. Based on Vercel research showing passive context achieves 100% pass rates vs 53-79% for skills.
Analyzes and optimizes Claude context placement by classifying content, compressing markdown, and validating skill vs passive context decisions.
/plugin marketplace add rjmurillo/ai-agents
/plugin install project-toolkit@ai-agents

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Tooling suite for optimizing Claude Code context placement based on Vercel research demonstrating that passive context (AGENTS.md, @imports) achieves 100% pass rates versus 53-79% for skills due to elimination of decision points.
Use this skill when you need to:
- analyze skill placement or classify content as Skill vs Passive Context
- compress markdown or reduce token count for context files
- validate compliance of skill/passive context placement decisions
- optimize context for lower API costs and better agent performance

Scripts:
- analyze_skill_placement.py to classify content
- compress_markdown_content.py to reduce token counts
- test_skill_passive_compliance.py to check compliance

| Script | Purpose | Exit Codes |
|---|---|---|
| analyze_skill_placement.py | Classify content as Skill/PassiveContext/Hybrid | 0=success, 1=error |
| compress_markdown_content.py | Compress markdown with token reduction metrics | 0=success, 1=error, 2=config, 3=external |
| test_skill_passive_compliance.py | Validate compliance with decision framework | 0=pass, 1=violations |
| path_validation.py | Shared CWE-22 repo-root-anchored path validation | N/A (library module) |
The compression script requires Python 3.10+ with the tiktoken library for local token counting:
# Install project dependencies (includes tiktoken)
uv pip install -e ".[dev]"
# Or install tiktoken directly
pip install tiktoken
Note: tiktoken is an offline tokenizer that uses OpenAI's cl100k_base encoding (GPT-4). It does not require an API key or network connection. While this repository uses Claude (not GPT-4), tiktoken provides consistent relative compression metrics (before vs after token counts) which is sufficient for the compression utility's purpose.
For exact Claude token counts, the script can optionally use Anthropic's API (requires ANTHROPIC_API_KEY environment variable).
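The relative metric described above can be sketched as follows. This is an illustrative sketch, not the script's actual API: `count_tokens` and `reduction_percent` are hypothetical names, and the character-based fallback is an assumption for environments without tiktoken installed.

```python
def count_tokens(text: str) -> int:
    """Count tokens with tiktoken if available, else approximate."""
    try:
        import tiktoken  # offline tokenizer; no API key or network needed
        return len(tiktoken.get_encoding("cl100k_base").encode(text))
    except ImportError:
        # Rough heuristic (~4 characters per token); an assumption, not exact
        return max(1, len(text) // 4)


def reduction_percent(original: str, compressed: str) -> float:
    """Relative reduction; the tokenizer choice cancels out in the ratio."""
    return round(100.0 * (1 - count_tokens(compressed) / count_tokens(original)), 1)
```

Because both counts come from the same tokenizer, the before/after ratio is stable even though cl100k_base is not Claude's tokenizer.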
Script: scripts/analyze_skill_placement.py
Analyzes skill content and recommends whether it should be a Skill, Passive Context, or Hybrid.
Classification Logic: the analyzer scores content on tool calls, action verbs, reference-content ratio, user triggers, and always-needed indicators (the metrics object in the output below), then compares the resulting skill and passive scores against the Classification Thresholds table.
Usage:
# Analyze a skill directory (assuming running from repo root)
python3 .claude/skills/context-optimizer/scripts/analyze_skill_placement.py -p .claude/skills/github
# Analyze a specific SKILL.md
python3 .claude/skills/context-optimizer/scripts/analyze_skill_placement.py -p .claude/skills/github/SKILL.md
# Get detailed metrics
python3 .claude/skills/context-optimizer/scripts/analyze_skill_placement.py -p .claude/skills/github -d
# Analyze content directly
python3 .claude/skills/context-optimizer/scripts/analyze_skill_placement.py -c "Content of .claude/skills/github/SKILL.md"
Output:
{
"classification": "Hybrid",
"confidence": 85,
"reasoning": "High tool execution (12 calls); User-triggered workflow (5 triggers); High reference content ratio (0.75)",
"recommendations": {
"Passive": [
"Routing Rules",
"Classification Framework",
"Reference Data"
],
"Skill": [
"Get-UnaddressedComments.ps1",
"Post-PRCommentReply.ps1",
"Process section"
]
},
"metrics": {
"tool_calls": 12,
"action_verbs": 8,
"reference_content_ratio": 0.75,
"user_triggers": 5,
"always_needed": 2
}
}
Classification Thresholds:
| Classification | Criteria | Confidence |
|---|---|---|
| Skill | skillScore > passiveScore + 3 | 70-90% |
| PassiveContext | passiveScore > skillScore + 3 | 70-90% |
| Hybrid | \|skillScore - passiveScore\| <= 3 | 50-70% |
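The threshold rules above can be sketched as a small decision function. This is a sketch of the documented rules only; the score names and function are illustrative, not the script's actual implementation.

```python
def classify(skill_score: int, passive_score: int) -> str:
    """Apply the documented 3-point margin between the two scores."""
    if skill_score > passive_score + 3:
        return "Skill"
    if passive_score > skill_score + 3:
        return "PassiveContext"
    # Scores within 3 points of each other: mixed indicators
    return "Hybrid"
```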
Hybrid Recommendations: for Hybrid classifications, the analyzer's recommendations object (shown in the output above) lists which sections belong in passive context and which should remain in the skill.
Script: scripts/compress_markdown_content.py
Compress markdown to pipe-delimited format (Vercel pattern) achieving 60-80% token reduction while maintaining 100% information density.
Compression Techniques:
- `|key: value|key2: value2|`
- `[Section] |item1 |item2`

Usage:
# Basic compression (JSON output to stdout)
python3 scripts/compress_markdown_content.py -i README.md -l medium
# Save to file
python3 scripts/compress_markdown_content.py \
-i CRITICAL-CONTEXT.md \
-l aggressive \
-o compressed.txt
# With verbose metrics
python3 scripts/compress_markdown_content.py \
-i input.md -l medium -v
# Programmatic use (parse JSON output)
result=$(python3 scripts/compress_markdown_content.py -i input.md -l medium)
echo "$result" | jq '.metrics.reduction_percent'
Compression Levels:
| Level | Reduction | Techniques |
|---|---|---|
| Light | 40-50% | Headers, tables, whitespace |
| Medium | 50-60% | + redundant words, tighter whitespace |
| Aggressive | 60-80% | + H3 compression, lists, abbreviations |
Output:
{
"success": true,
"compressed_content": "...",
"metrics": {
"original_tokens": 1000,
"compressed_tokens": 250,
"reduction_percent": 75.0,
"original_size": 4000,
"compressed_size": 1000,
"compression_level": "aggressive"
}
}
Examples:
Before (52 tokens):
## Session Protocol
The session protocol has multiple phases:
1. Serena Activation - You must activate Serena
2. Read HANDOFF.md - Read the handoff file
After Aggressive (30 tokens, 42% reduction):
[Session Protocol]
session protocol has multiple phases:
1. Serena Activation - activate Serena
2. Read HANDOFF.md - handoff file
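The header and filler-word techniques in the example above can be sketched with a few regex rules. This is illustrative only: the filler-pattern list is a hypothetical subset, and the real script applies more rules per compression level.

```python
import re

# Filler phrases dropped at the aggressive level (illustrative subset)
FILLER_PATTERNS = [r"\bYou must\s+", r"^The\s+", r"\bRead the\s+"]


def compress_line(line: str) -> str:
    m = re.match(r"^(#{1,6})\s+(.*)$", line)
    if m:
        # '## Session Protocol' -> '[Session Protocol]'
        return f"[{m.group(2)}]"
    for pattern in FILLER_PATTERNS:
        line = re.sub(pattern, "", line)
    return line


def compress(markdown: str) -> str:
    return "\n".join(compress_line(line) for line in markdown.splitlines())
```

Applied to the "Before" example above, these rules reproduce the "After Aggressive" output line by line.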
Script: scripts/test_skill_passive_compliance.py
Validate that content placement follows the skill vs passive context decision framework. Returns structured JSON with violations, warnings, and recommendations.
6 Compliance Checks:
1. CLAUDE.md stays within the 200-line limit
2. Every @import directive points to an existing file
3. Each SKILL.md has frontmatter with name and description fields
4. Skills contain actions (action verbs, scripts, or tool execution)
5. Passive context files contain no executable actions
6. No content is duplicated between skills and passive context

Usage:
# Scan .claude directory (JSON output)
python3 scripts/test_skill_passive_compliance.py
# Scan specific directory with table output
python3 scripts/test_skill_passive_compliance.py \
--path .claude/skills/github \
--format table
# Custom CLAUDE.md path
python3 scripts/test_skill_passive_compliance.py \
--claude-md-path CLAUDE.md \
--format json
Exit Codes:
| Code | Meaning |
|---|---|
| 0 | All compliance checks passed |
| 1 | One or more violations detected |
Example Output (JSON):
{
"timestamp": "2026-02-08T13:45:00.123456",
"path": ".claude",
"claudeMdPath": "CLAUDE.md",
"violations": [
{
"check": "CLAUDE.md Line Count",
"severity": "error",
"message": "CLAUDE.md has 250 lines (exceeds 200 line limit) - use @imports to split",
"recommendation": "Split content into separate files and use @imports"
},
{
"check": "Skill Frontmatter (test-skill)",
"severity": "error",
"message": "Missing required frontmatter field: name",
"recommendation": "Add required frontmatter fields (name, description) to test-skill/SKILL.md"
}
],
"warnings": [
{
"check": "Skill Has Actions (reference-skill)",
"message": "No action verbs, scripts, or tool execution found - consider moving to passive context"
}
],
"recommendations": [
"Consider moving reference-skill to passive context (SKILL-QUICK-REF.md)",
"Extract action patterns from CLAUDE.md to a skill"
],
"summary": {
"total_checks": 10,
"passed": 7,
"failed": 2,
"warnings": 1
}
}
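The JSON report can also be consumed programmatically. A minimal sketch, using the field names from the sample above (`summarize` is a hypothetical helper, not part of the script):

```python
import json


def summarize(report_json: str) -> str:
    """Reduce a compliance report to a one-line pass/fail summary."""
    report = json.loads(report_json)
    errors = [v for v in report["violations"] if v["severity"] == "error"]
    status = "FAIL" if errors else "PASS"
    return f"{status}: {len(errors)} error(s), {len(report['warnings'])} warning(s)"
```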
Example Output (Table):
Skill/Passive Context Compliance Check
======================================================================
Timestamp: 2026-02-08T13:45:00.123456
Path: .claude
CLAUDE.md: CLAUDE.md
Summary:
Total Checks: 10
Passed: 7
Failed: 2
Warnings: 1
Violations:
❌ CLAUDE.md Line Count
Severity: ERROR
Issue: CLAUDE.md has 250 lines (exceeds 200 line limit)
Fix: Split content into separate files and use @imports
❌ Skill Frontmatter (test-skill)
Severity: ERROR
Issue: Missing required frontmatter field: name
Fix: Add required frontmatter fields to test-skill/SKILL.md
Warnings:
⚠️ Skill Has Actions (reference-skill)
No action verbs, scripts, or tool execution found
Recommendations:
💡 Consider moving reference-skill to passive context
💡 Extract action patterns from CLAUDE.md to a skill
[FAIL] Compliance violations detected
Severity Levels:
| Severity | Meaning | Effect |
|---|---|---|
| error | Blocks compliance | Exit code 1 |
| warning | Informational only | Exit code 0 |
| none | Check passed | Exit code 0 |
Common Violations:
| Violation | Fix |
|---|---|
| CLAUDE.md too long | Split into separate files, add @imports |
| Missing @import file | Create file or remove @import directive |
| Skill missing frontmatter | Add --- block with name: and description: |
| Skill has no actions | Add scripts, tool execution, or move to passive context |
| Passive has actions | Extract executable content to a skill |
| Duplicate content | Remove redundant content from skill or passive |
Based on: SKILL-QUICK-REF.md lines 152-203
| Configuration | Pass Rate |
|---|---|
| Baseline (no docs) | 53% |
| Skill (default) | 53% |
| Skill + explicit instructions | 79% |
| AGENTS.md passive context | 100% |
Key Insight: Skills create decision points where agents must choose whether to retrieve documentation. These decision points introduce four failure modes; passive context eliminates all of them by being always available.
Run pytest tests:
# Run all tests
python3 -m pytest tests/
# Run specific tool tests
python3 -m pytest tests/test_skill_passive_compliance_test.py -v
# Run with coverage
python3 -m pytest tests/ --cov=scripts --cov-report=term-missing
Coverage - Compliance Validator (95%, 19/20 tests passing):
Coverage - Analyzer:
Coverage - Compressor:
Research:
- .agents/analysis/vercel-passive-context-vs-skills-research.md
- passive-context-vs-skills-vercel-research

Project Documentation:
- SKILL-QUICK-REF.md lines 152-203
- CRITICAL-CONTEXT.md, SKILL-QUICK-REF.md
- .claude/skills/github/, .claude/skills/pr-comment-responder/

Input: GitHub skill with gh pr create, gh issue close commands
Output:
{
"classification": "Skill",
"confidence": 85,
"reasoning": "High tool execution (8 calls); Many action verbs (12)"
}
Input: Memory hierarchy reference with tables and always-needed patterns
Output:
{
"classification": "PassiveContext",
"confidence": 90,
"reasoning": "High reference content ratio (0.85); Always-needed information (5 indicators)"
}
Input: PR comment responder with routing rules + script execution
Output:
{
"classification": "Hybrid",
"confidence": 65,
"reasoning": "High reference content ratio (0.72); Some tool execution (4 calls); User-triggered workflow (3 triggers); Mixed indicators suggest hybrid approach",
"recommendations": {
"Passive": ["Routing Rules", "Classification Framework"],
"Skill": ["Get-UnaddressedComments.ps1", "Post-PRCommentReply.ps1"]
}
}
These tools enable:
- context-optimizer
- .agents/analysis/vercel-passive-context-vs-skills-research.md
- passive-context-vs-skills-vercel-research