Compresses single artifacts via tiered summarization to fit token budgets for judge evaluation, using isolated forked context, Bash for scripting, and Python for token counting.
```
npx claudepluginhub closedloop-ai/claude-plugins --plugin judges
```

This skill is limited to using the following tools:
Compress individual artifacts within a specified token budget using tiered summarization strategies. This skill operates in isolated forked context to prevent polluting the parent agent's context with large raw artifacts.
You are responsible for compressing a single artifact file to fit within a token budget. Your responsibilities:
Success criteria:
You receive three required parameters:
| Parameter | Type | Description |
|---|---|---|
| `artifact_path` | string | Path to artifact file relative to `$CLOSEDLOOP_WORKDIR` |
| `task_description` | string | Compression guidance (e.g., "preserve function signatures") |
| `token_budget` | integer | Maximum allowed tokens for compressed output |
Read the artifact from its absolute path:
```bash
# Construct full path
ARTIFACT_FULL_PATH="$CLOSEDLOOP_WORKDIR/$artifact_path"
```
Use the Read tool to load the artifact content. If the file does not exist, skip to error handling.
Invoke the `count_tokens.py` script to get an accurate token count:

```bash
cd "$CLOSEDLOOP_WORKDIR" && uv run count_tokens.py "$artifact_path"
```
Expected output format:
```json
{
  "input_tokens": 1234
}
```
Parse the JSON output and extract `input_tokens` as `raw_tokens`.
Error handling: if the script exits with a non-zero code or returns invalid JSON, estimate `raw_tokens = len(content) / 4` and prefix the returned content with `[WARNING: Token count estimated via heuristic due to count_tokens.py failure]`.

Choose a compression tier based on the raw token count relative to the token budget:
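A minimal sketch of the token-counting step with its heuristic fallback. The `count_tokens` helper name, its signature, and the set of caught exceptions are assumptions for illustration, not part of the skill:

```python
import json
import subprocess

def count_tokens(artifact_path, content, workdir="."):
    """Return (raw_tokens, warning): an accurate count from count_tokens.py,
    or the 4-chars-per-token heuristic if the script fails."""
    try:
        result = subprocess.run(
            ["uv", "run", "count_tokens.py", artifact_path],
            cwd=workdir, capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)["input_tokens"], None
    except (subprocess.CalledProcessError, FileNotFoundError,
            json.JSONDecodeError, KeyError):
        # Fallback heuristic: roughly 4 characters per token
        warning = ("[WARNING: Token count estimated via heuristic "
                   "due to count_tokens.py failure]")
        return len(content) // 4, warning
```

Any failure mode (missing script, bad exit code, malformed JSON) collapses into the same heuristic path, matching the error-handling rule above.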
**Tier 1: Pass-through**

Condition: `raw_tokens <= token_budget`
Action: Return the artifact unchanged.
Metadata:
- `compacted_tokens = raw_tokens`
- `truncated = false`

No further processing is needed.
**Tier 2: Structured compression**

Condition: `token_budget < raw_tokens <= token_budget * 1.5`
Action: Apply artifact-type-specific compression that preserves structure.
Compression strategies by artifact type:
| Artifact Type | Strategy |
|---|---|
| Code diffs | Keep function signatures, class declarations, and error messages. Summarize method bodies with // ... (implementation omitted for brevity). Remove comments and blank lines. |
| JSON files | Keep all keys and structure. For arrays longer than 10 items, keep first 5 and last 2, replace middle with {"_truncated": "N items omitted"}. Truncate string values over 200 chars. |
| Log files | Keep all ERROR and WARNING lines. Summarize consecutive INFO lines with ... (N info lines omitted). Keep first and last 10 lines intact. |
| Plan/PRD markdown | Keep all headings, tables, and code blocks. Summarize paragraph text preserving key nouns and action verbs. Remove redundant examples. |
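As an illustration of one row of the table, the JSON-file rule (keep all keys and structure, shorten long arrays and long strings) could be sketched as follows; `compress_json_value` and its parameter defaults are hypothetical:

```python
def compress_json_value(value, max_items=10, keep_head=5, keep_tail=2, max_str=200):
    """Recursively apply the JSON strategy: keep all keys, replace the
    middle of long arrays with a marker, truncate long string values."""
    if isinstance(value, dict):
        return {k: compress_json_value(v) for k, v in value.items()}
    if isinstance(value, list):
        if len(value) > max_items:
            omitted = len(value) - keep_head - keep_tail
            return ([compress_json_value(v) for v in value[:keep_head]]
                    + [{"_truncated": f"{omitted} items omitted"}]
                    + [compress_json_value(v) for v in value[-keep_tail:]])
        return [compress_json_value(v) for v in value]
    if isinstance(value, str) and len(value) > max_str:
        return value[:max_str] + "…"
    return value
```

The marker object keeps the output syntactically valid JSON, which matters for the malformed-content fallback described under error handling.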
Validation:
After compression, count tokens again:
```bash
cd "$CLOSEDLOOP_WORKDIR" && uv run count_tokens.py <(echo "$compressed_content")
```
Decision tree:
- `compacted_tokens <= token_budget`: success → return with `truncated = false`
- `compacted_tokens > token_budget`: compression failed → fall back to Tier 3

**Tier 3: Hard truncation**

Condition: `raw_tokens > token_budget * 1.5`, OR Tier 2 compression exceeded the budget
Action: Hard truncate at character boundary
Algorithm:
1. Compute `char_limit = token_budget * 4` (heuristic: 4 chars per token).
2. Cut the content at the last paragraph break (`\n\n`) before `char_limit`.
3. Compute `truncated_tokens = raw_tokens - token_budget`.
4. Append `[TRUNCATED: content exceeds budget, remaining {truncated_tokens} tokens omitted]`.
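The truncation algorithm can be sketched as follows, assuming the 4-chars-per-token heuristic and a `\n\n` cut point; `hard_truncate` is a hypothetical helper name:

```python
def hard_truncate(content, raw_tokens, token_budget):
    """Tier 3: cut at ~4 chars/token, back off to the last paragraph
    break before the limit, then append the truncation marker."""
    char_limit = token_budget * 4
    cut = content.rfind("\n\n", 0, char_limit)
    if cut == -1:  # no paragraph break before the limit: cut at the raw boundary
        cut = char_limit
    truncated_tokens = raw_tokens - token_budget
    return (content[:cut]
            + "\n\n[TRUNCATED: content exceeds budget, "
            + f"remaining {truncated_tokens} tokens omitted]")
```

Backing off to a paragraph break avoids splitting a line mid-token, at the cost of returning slightly less than the full budget.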
Metadata:
- `compacted_tokens = token_budget` (approximate)
- `truncated = true`

Return structured JSON with a strict schema:
```json
{
  "artifact_name": "path/to/artifact.ext",
  "raw_tokens": 5000,
  "compacted_tokens": 2000,
  "truncated": false,
  "content": "compressed artifact content here..."
}
```
Field descriptions:
| Field | Type | Description |
|---|---|---|
| `artifact_name` | string | Original `artifact_path` parameter |
| `raw_tokens` | integer | Token count from `count_tokens.py` on the raw artifact |
| `compacted_tokens` | integer | Token count after compression (from `count_tokens.py` validation, or an estimate for Tiers 1/3) |
| `truncated` | boolean | `true` if Tier 3 truncation was applied, `false` otherwise |
| `content` | string | Compressed or truncated artifact content |
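Assembling the response might look like this sketch; `build_response` is a hypothetical helper, with field names and types taken from the table above:

```python
import json

def build_response(artifact_name, raw_tokens, compacted_tokens, truncated, content):
    """Serialize the strict output schema for the parent agent."""
    return json.dumps({
        "artifact_name": artifact_name,
        "raw_tokens": int(raw_tokens),
        "compacted_tokens": int(compacted_tokens),
        "truncated": bool(truncated),
        "content": content,
    }, indent=2)
```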
Condition: `artifact_path` does not exist at `$CLOSEDLOOP_WORKDIR/<artifact_path>`
Response:
```json
{
  "artifact_name": "path/to/missing.ext",
  "raw_tokens": 0,
  "compacted_tokens": 0,
  "truncated": true,
  "content": "[ERROR: artifact not found at $CLOSEDLOOP_WORKDIR/path/to/missing.ext]"
}
```
Condition: Script exits with non-zero code or returns invalid JSON
Action:
- Estimate `raw_tokens = len(content) / 4`.
- Prefix the returned content with:

```
[WARNING: Token count estimated via heuristic due to count_tokens.py failure]

<original content follows>
```

- Set `truncated = false` unless Tier 3 is applied.

Condition: Tier 2 compression produces malformed content (e.g., invalid JSON syntax for JSON artifacts)
Action: Immediately fall back to Tier 3 hard truncation with `truncated = true`.
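Illustratively, the initial tier conditions reduce to a short sketch (`choose_tier` is a hypothetical name; the Tier 2 → Tier 3 fallback after validation is handled separately):

```python
def choose_tier(raw_tokens, token_budget):
    """Initial tier selection; Tier 2 may still fall back to Tier 3
    if its output exceeds the budget or is malformed."""
    if raw_tokens <= token_budget:
        return 1  # pass through unchanged
    if raw_tokens <= token_budget * 1.5:
        return 2  # structured compression
    return 3      # hard truncation
```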
Input:
- `artifact_path`: `plan.json`
- `token_budget`: 5000

Output:
```json
{
  "artifact_name": "plan.json",
  "raw_tokens": 3200,
  "compacted_tokens": 3200,
  "truncated": false,
  "content": "<full plan.json content>"
}
```
Input:
- `artifact_path`: `git_diff`
- `token_budget`: 10000

Compression applied: remove comments, summarize method bodies, keep signatures.

Output:
Output:
```json
{
  "artifact_name": "git_diff",
  "raw_tokens": 12000,
  "compacted_tokens": 9500,
  "truncated": false,
  "content": "<compressed diff with function signatures preserved>"
}
```
Input:
- `artifact_path`: `outcomes.log`
- `token_budget`: 3000

Action: hard truncate at ~12000 chars (3000 tokens × 4 chars).

Output:
Output:
```json
{
  "artifact_name": "outcomes.log",
  "raw_tokens": 25000,
  "compacted_tokens": 3000,
  "truncated": true,
  "content": "<first ~2900 tokens of log>\n\n[TRUNCATED: content exceeds budget, remaining 22000 tokens omitted]"
}
```