From project-toolkit
Analyzes Claude Code skill content for optimal placement (Skill vs Passive Context vs Hybrid), compresses markdown to pipe-delimited format (60-80% token reduction), and validates compliance with decision framework.
Install via:

```shell
npx claudepluginhub rjmurillo/ai-agents --plugin project-toolkit
```
Tooling suite for optimizing Claude Code context placement. Passive context (AGENTS.md, @imports) achieves 100% pass rates versus 53-79% for skills by eliminating decision points.
Capabilities:

- analyze skill placement - classify content as Skill vs Passive Context
- compress markdown - reduce token count for context files
- validate compliance - check skill/passive context placement decisions
- optimize context - lower API costs and improve agent performance
- extract and index - split markdown into detail files with compact index

Use `analyze_skill_placement.py` to classify content, `compress_markdown_content.py` to reduce token counts, and `test_skill_passive_compliance.py` to check compliance.

| Script | Purpose | Exit Codes |
|---|---|---|
| `analyze_skill_placement.py` | Classify content as Skill/PassiveContext/Hybrid | 0=success, 1=error |
| `compress_markdown_content.py` | Compress markdown with token reduction metrics | 0=success, 1=error, 2=config, 3=external |
| `test_skill_passive_compliance.py` | Validate compliance with decision framework | 0=pass, 1=violations |
| `extract_and_index.py` | Extract sections into detail files with pipe-delimited index | 0=success, 1=error, 2=config, 3=external |
| `path_validation.py` | Shared CWE-22 repo-root-anchored path validation | N/A (library module) |
Python 3.12+ with tiktoken for local token counting:
```shell
uv pip install -e ".[dev]"   # includes tiktoken
pip install tiktoken         # or install directly
```
tiktoken is an offline tokenizer (cl100k_base encoding) that approximates Claude tokenization. No API key is required for these scripts.
| Configuration | Pass Rate |
|---|---|
| Baseline (no docs) | 53% |
| Skill (default) | 53% |
| Skill + explicit instructions | 79% |
| AGENTS.md passive context | 100% |
Skills create decision points where agents must choose whether to retrieve documentation. These decision points introduce four failure modes: late retrieval, partial retrieval, integration failure, and instruction fragility. Passive context eliminates all four because it is always available.
References:

- `.agents/analysis/vercel-passive-context-vs-skills-research.md`
- `passive-context-vs-skills-vercel-research`
- `SKILL-QUICK-REF.md` (see the "Decision Framework" section)

Script: `scripts/analyze_skill_placement.py`
Analyzes skill content and recommends Skill, Passive Context, or Hybrid placement.
Classification Logic: scores content on tool execution, action verbs, and reference-content ratio; see the Classification Thresholds table below.
Usage:
```shell
# Analyze a skill directory (from repo root)
python3 .claude/skills/context-optimizer/scripts/analyze_skill_placement.py -p .claude/skills/github

# Analyze a specific SKILL.md
python3 .claude/skills/context-optimizer/scripts/analyze_skill_placement.py -p .claude/skills/github/SKILL.md

# Get detailed metrics
python3 .claude/skills/context-optimizer/scripts/analyze_skill_placement.py -p .claude/skills/github -d
```
Output:
```json
{
  "classification": "Hybrid",
  "confidence": 85,
  "reasoning": "High tool execution (12 calls); High reference content ratio (0.75)",
  "recommendations": {
    "Passive": ["Routing Rules", "Classification Framework"],
    "Skill": ["Get-UnaddressedComments.ps1", "Post-PRCommentReply.ps1"]
  }
}
```
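Downstream tooling can consume this JSON directly; a minimal sketch that flags borderline results for manual review (field names taken from the example output above, payload hypothetical):

```python
import json

# Example analyzer payload (same fields as the output shown above).
raw = '{"classification": "Hybrid", "confidence": 65, "reasoning": "..."}'
result = json.loads(raw)

# Treat Hybrid or low-confidence results as needing manual review.
needs_review = result["classification"] == "Hybrid" or result["confidence"] < 70
print("needs review:", needs_review)
```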
Classification Thresholds:
| Classification | Criteria | Confidence |
|---|---|---|
| Skill | `skillScore > passiveScore + 3` | 70-90% |
| PassiveContext | `passiveScore > skillScore + 3` | 70-90% |
| Hybrid | `abs(skillScore - passiveScore) <= 3` | 50-70% |
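The threshold table maps to a small decision function (a hypothetical re-implementation for illustration, not the script's actual code):

```python
def classify(skill_score: int, passive_score: int) -> str:
    """Apply the threshold table: a margin of more than 3 points in
    either direction is decisive; anything closer is Hybrid."""
    if skill_score > passive_score + 3:
        return "Skill"
    if passive_score > skill_score + 3:
        return "PassiveContext"
    return "Hybrid"

print(classify(10, 4), classify(2, 9), classify(5, 7))
```

Note the asymmetry this encodes: a 4-point margin is decisive, while a 3-point margin still falls into Hybrid, which is why Hybrid carries lower confidence.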
Script: `scripts/compress_markdown_content.py`
Compresses markdown to a pipe-delimited format, achieving 60-80% token reduction while preserving the original information.
Compression Techniques:

- Key-value rows: `|key: value|key2: value2|`
- Section headers and lists: `[Section] |item1 |item2`

Usage:
```shell
# Basic compression (JSON output to stdout)
python3 scripts/compress_markdown_content.py -i README.md -l medium

# Save to file with aggressive compression
python3 scripts/compress_markdown_content.py -i CRITICAL-CONTEXT.md -l aggressive -o compressed.txt

# With verbose metrics
python3 scripts/compress_markdown_content.py -i input.md -l medium -v
```
Compression Levels:
| Level | Reduction | Techniques |
|---|---|---|
| Light | 40-50% | Headers, tables, whitespace |
| Medium | 50-60% | + redundant words, tighter whitespace |
| Aggressive | 60-80% | + H3 compression, lists, abbreviations |
Example (26 tokens -> 18 tokens, 31% reduction):
Before:

```
## Session Protocol
The session protocol has multiple phases:
1. Serena Activation - You must activate Serena
```

After:

```
[Session Protocol]
session protocol has multiple phases:
1. Serena Activation - activate Serena
```
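One such rewrite, the `## Header` to `[Header]` transformation, can be sketched as follows (illustrative only; the real compressor applies many more rules):

```python
import re

def compress_headers(markdown: str) -> str:
    """Rewrite '## Title' (and deeper) heading lines to compact '[Title]' form,
    leaving H1 headings and body text untouched."""
    return re.sub(r"^#{2,}\s+(.+?)\s*$", r"[\1]", markdown, flags=re.MULTILINE)

print(compress_headers("## Session Protocol\nThe session protocol has multiple phases:"))
```

The bracket form saves the `## ` prefix on every heading while keeping section boundaries visible to the agent.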
Script: `scripts/extract_and_index.py`
Implements the Vercel extract-and-index pattern for 60-80% token reduction. Splits markdown by headings into detail files, generates a compact pipe-delimited index.
Usage:
```shell
# Extract sections and output JSON to stdout
python3 scripts/extract_and_index.py -i AGENTS.md -d .agents-details

# Write index to a file
python3 scripts/extract_and_index.py -i AGENTS.md -d .agents-details -o AGENTS-INDEX.md

# Custom reference path in index
python3 scripts/extract_and_index.py -i AGENTS.md -d .agents-details -r .agents-docs -o AGENTS-INDEX.md
```
Output Index Format (Vercel pattern):
```
[Architecture]
|Layered design with separation of concerns (see: .agents-details/architecture.md)
[Testing]
|80% coverage required for business logic (see: .agents-details/testing.md)
```
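Roughly, the extract-and-index split works as sketched below (a hypothetical helper, not the actual script, which also writes the detail files to disk):

```python
import re

def build_index(markdown: str, ref_dir: str = ".agents-details") -> str:
    """Split markdown on '## ' headings and emit a pipe-delimited index
    pointing each section's first line at a per-section detail file."""
    entries = []
    for match in re.finditer(r"^## (.+?)\n(.+?)(?=^## |\Z)", markdown,
                             flags=re.MULTILINE | re.DOTALL):
        title, body = match.group(1).strip(), match.group(2).strip()
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
        summary = body.splitlines()[0]
        entries.append(f"[{title}]\n|{summary} (see: {ref_dir}/{slug}.md)")
    return "\n".join(entries)

md = "## Testing\n80% coverage required for business logic\nFurther detail...\n"
print(build_index(md))
```

The index stays small enough to live in always-loaded context, while the full section bodies move into the detail files it points at.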
Works with the CLAUDE.md `@import` mechanism; reference the generated index via `@AGENTS-INDEX.md`.
Script: `scripts/test_skill_passive_compliance.py`
Validates content placement against the skill vs passive context decision framework.
Six Compliance Checks:

1. CLAUDE.md stays within its line-count limit
2. Every `@import` target file exists
3. Skills have frontmatter (with `name` and `description`)
4. Skills contain executable actions (scripts)
5. Passive context contains no executable actions
6. No content is duplicated between skills and passive context

Usage:
```shell
# Scan .claude directory (JSON output)
python3 scripts/test_skill_passive_compliance.py

# Scan specific directory with table output
python3 scripts/test_skill_passive_compliance.py --path .claude/skills/github --format table
```
Exit Codes: 0 = all passed, 1 = violations detected
Common Violations:
| Violation | Fix |
|---|---|
| CLAUDE.md too long | Split into separate files, add @imports |
| Missing @import file | Create file or remove @import directive |
| Skill missing frontmatter | Add `---` block with `name:` and `description:` |
| Skill has no actions | Add scripts or move to passive context |
| Passive has actions | Extract executable content to a skill |
| Duplicate content | Remove redundant content from skill or passive |
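As an illustration, the `@import` existence check might look like this (hypothetical code, not the validator's actual implementation):

```python
import re
from pathlib import Path

def find_missing_imports(claude_md: Path) -> list[str]:
    """Return @import targets referenced in a CLAUDE.md that do not
    exist on disk, resolved relative to the file's own directory."""
    text = claude_md.read_text(encoding="utf-8")
    # Match lines consisting solely of an @import directive, e.g. "@AGENTS-INDEX.md".
    targets = re.findall(r"^@([\w./-]+)$", text, flags=re.MULTILINE)
    return [t for t in targets if not (claude_md.parent / t).is_file()]
```

A non-empty return list corresponds to the "Missing @import file" violation in the table above.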
### Clear Skill Classification

**Input**: GitHub skill with `gh pr create`, `gh issue close` commands

```json
{"classification": "Skill", "confidence": 85, "reasoning": "High tool execution (8 calls); Many action verbs (12)"}
```
### Clear Passive Classification
**Input**: Memory hierarchy reference with tables and always-needed patterns
```json
{"classification": "PassiveContext", "confidence": 90, "reasoning": "High reference content ratio (0.85); Always-needed information (5 indicators)"}
```
### Hybrid Classification

**Input**: PR comment responder with routing rules + script execution

```json
{
  "classification": "Hybrid",
  "confidence": 65,
  "reasoning": "High reference content ratio (0.72); Some tool execution (4 calls)",
  "recommendations": {
    "Passive": ["Routing Rules", "Classification Framework"],
    "Skill": ["Get-UnaddressedComments.ps1", "Post-PRCommentReply.ps1"]
  }
}
```
```shell
python3 -m pytest tests/                                           # all tests
python3 -m pytest tests/test_skill_passive_compliance_test.py -v   # specific
python3 -m pytest tests/ --cov=scripts --cov-report=term-missing   # coverage
```
Coverage Summary:
| Component | Tests | Key Areas |
|---|---|---|
| Compliance Validator | 19/20 (95%) | Line count, @imports, frontmatter, duplicates, exit codes |
| Analyzer | Full | Tool calls, action verbs, classification logic, confidence scoring |
| Extract-and-Index | 36 | Slug generation, parsing, index format, 60%+ reduction targets |
| Compressor | Full | All levels, code block preservation, 40-80% reduction targets |