From shannon

Use automatically when prompts exceed 3,000 characters, files exceed 500 lines, or large files are referenced. Enforces a complete line-by-line reading protocol with quantitative comprehension verification before processing, preventing partial comprehension and missed requirements.

```shell
npx claudepluginhub krzemienski/shannon-framework --plugin shannon
```

This skill uses the workspace's default tool permissions.
Large prompts and files are frequently skimmed rather than read completely, leading to partial comprehension and missed requirements.
Core principle: Automatically enforce complete reading when content exceeds comprehension thresholds.
Shannon enhancement: Quantitative tracking of reading completeness and comprehension verification.
This skill activates AUTOMATICALLY when any of the four triggers below fires.
Shannon detection:
```python
def should_activate_forced_reading(context: dict) -> bool:
    """Determine if the forced reading protocol should activate."""
    prompt = context.get("prompt", "")
    referenced_files = context.get("referenced_files", [])

    # Trigger 1: Long prompt
    if len(prompt) > 3000 or prompt.count("\n") > 100:
        return True

    # Trigger 2: Large file references
    for file_path in referenced_files:
        if file_exists(file_path):
            line_count = count_lines(file_path)
            if line_count > 500:
                return True

    # Trigger 3: Multiple files
    total_lines = sum(count_lines(f) for f in referenced_files if file_exists(f))
    if total_lines > 1000:
        return True

    # Trigger 4: Explicit keywords
    if any(kw in prompt.lower() for kw in ("large file", "long document", "comprehensive spec")):
        return True

    return False
```
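The detector assumes two small helpers, `file_exists` and `count_lines`, which are not defined in this document. A minimal sketch of how they might look:

```python
import os

def file_exists(path: str) -> bool:
    """Helper assumed by should_activate_forced_reading: is this a readable file?"""
    return os.path.isfile(path)

def count_lines(path: str) -> int:
    """Count lines in a file without loading it all into memory."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        return sum(1 for _ in f)
```

Streaming with `sum(1 for _ in f)` keeps memory flat even for the multi-thousand-line files this skill targets.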
WHEN ACTIVATED:
1. MUST read EVERY line before responding
2. MUST NOT skip sections
3. MUST NOT summarize without complete read
4. MUST track progress quantitatively (lines_read / total_lines)
5. MUST verify comprehension with checkpoints
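The five requirements above can be sketched as a checkpoint-driven reading loop. This is a hypothetical sketch, not the framework's implementation; it only shows the quantitative tracking shape (lines_read / total_lines per checkpoint):

```python
def forced_read(lines: list[str], checkpoint_size: int = 100) -> dict:
    """Walk every line in fixed-size checkpoints, tracking progress quantitatively."""
    total = len(lines)
    checkpoints = []
    for start in range(0, total, checkpoint_size):
        chunk = lines[start:start + checkpoint_size]
        lines_read = start + len(chunk)
        checkpoints.append({
            "checkpoint_id": len(checkpoints) + 1,
            "lines_read": lines_read,
            "total_lines": total,
            "progress_percent": round(100.0 * lines_read / total, 1),
        })
    return {"total_lines": total, "lines_read": total, "checkpoints": checkpoints}
```

Because the loop advances in fixed strides, no line can be skipped: the final checkpoint always reports 100% or the loop has not finished.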
When triggered, IMMEDIATELY announce:
## 🔴 FORCED READING PROTOCOL ACTIVATED
**Trigger**: {reason}
**Content size**: {X} characters, {Y} lines
**Estimated reading time**: {Z} minutes
**Shannon requirement**: Complete line-by-line reading before response
**Progress tracking**: Enabled (quantitative)
Read in checkpoints (every 100 lines or 5000 characters):
## Reading Checkpoint 1/5
**Lines**: 1-100 (20% complete)
**Key points extracted**:
- {Point 1}
- {Point 2}
- {Point 3}
**Comprehension verification**: ✅ PASS
**Proceeding to next checkpoint...**
Shannon tracking:
```python
reading_checkpoint = {
    "checkpoint_id": 1,
    "lines_read": 100,
    "total_lines": 500,
    "progress_percent": 20.0,
    "key_points_extracted": 3,
    "comprehension_verified": True,
    "timestamp": ISO_timestamp,  # e.g. datetime.now(timezone.utc).isoformat()
}

serena.write_memory(f"forced_reading/{session_id}/checkpoint_1", reading_checkpoint)
```
After reading ALL content:
## 🔴 FORCED READING COMPLETE
**Total lines read**: 500/500 (100%)
**Total checkpoints**: 5/5
**Reading duration**: 12 minutes
**Key requirements identified**: 47
**Comprehension score**: 0.95/1.00
**Shannon verification**: ✅ ALL LINES READ
**Ready to respond with complete understanding.**
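A completion banner like the one above can be rendered mechanically from the session record. A minimal sketch, assuming the field names used in this document's examples:

```python
def render_completion_summary(session: dict) -> str:
    """Format the forced-reading completion banner from a session record."""
    lines_read = session["lines_read"]
    total = session["total_lines"]
    parts = [
        "## 🔴 FORCED READING COMPLETE",
        f"**Total lines read**: {lines_read}/{total} ({100 * lines_read // total}%)",
        f"**Total checkpoints**: {len(session['checkpoints'])}",
        f"**Comprehension score**: {session['score']:.2f}/1.00",
    ]
    return "\n".join(parts)
```

Generating the banner from the same record that drove the checkpoints keeps the reported numbers honest: the summary cannot claim 100% unless the tracker recorded it.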
EVERY response must cite the specific line ranges that support each claim.
Example:
Based on complete reading:
**Lines 45-67**: Authentication requirements specify JWT with 15min expiry
**Lines 120-145**: Database schema requires 3 tables (users, sessions, logs)
**Lines 230-267**: Performance requirements: <200ms p95 latency
**Checkpoint 4 (lines 301-400)**: Error handling patterns defined
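Citation coverage feeds the `citation_accuracy` factor in the comprehension formula below. A hypothetical checker that counts which claims carry an explicit line citation:

```python
import re

def citation_accuracy(claims: list[str]) -> float:
    """Fraction of response claims that cite an explicit line or line range."""
    pattern = re.compile(r"\bLines?\s+\d+(-\d+)?\b", re.IGNORECASE)
    if not claims:
        return 0.0
    cited = sum(1 for claim in claims if pattern.search(claim))
    return cited / len(claims)
```

A claim like "Lines 45-67: JWT with 15min expiry" counts as cited; "performance must be under 200ms" does not, because it names no source lines.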
Comprehension formula:
```python
def calculate_comprehension_score(reading_session: dict) -> float:
    """Score comprehension quality: 0.00 (poor) to 1.00 (excellent)."""
    checkpoints = reading_session["checkpoints"]

    # Factors
    completion = reading_session["lines_read"] / reading_session["total_lines"]
    key_points_density = reading_session["key_points_extracted"] / reading_session["total_lines"]
    checkpoint_pass_rate = len([c for c in checkpoints if c["verified"]]) / len(checkpoints)
    citation_accuracy = reading_session["citations_used"] / reading_session["response_claims"]

    # Weighted score; density is capped so ~1 key point per 100 lines earns full marks
    score = (
        completion * 0.40                             # Did you read it all?
        + min(1.0, key_points_density * 100) * 0.20   # Did you extract insights?
        + checkpoint_pass_rate * 0.20                 # Did you verify understanding?
        + citation_accuracy * 0.20                    # Did you cite specifics?
    )
    return min(1.0, score)

# Example
comprehension = {
    "score": 0.95,
    "grade": "A",
    "completion": 1.00,    # Read 100%
    "key_points": 0.094,   # 47 points / 500 lines
    "checkpoints": 1.00,   # All checkpoints passed
    "citations": 0.89,     # 89% of claims cited
    "quality": "EXCELLENT",
}
```
Hook: `hooks/user-prompt-submit-hook.sh`
```bash
#!/bin/bash
# Auto-detect large prompts and activate forced reading

PROMPT="$PROMPT_CONTENT"
PROMPT_LENGTH=${#PROMPT}
PROMPT_LINES=$(echo "$PROMPT" | wc -l)

# Detect referenced files (@path mentions)
REFERENCED_FILES=$(echo "$PROMPT" | grep -oE '@[^ ]+' | sed 's/@//' || true)

# Calculate total lines
TOTAL_LINES=$PROMPT_LINES
for file in $REFERENCED_FILES; do
    if [ -f "$file" ]; then
        FILE_LINES=$(wc -l < "$file" 2>/dev/null || echo "0")
        TOTAL_LINES=$((TOTAL_LINES + FILE_LINES))

        # Check individual file size
        if [ "$FILE_LINES" -gt 500 ]; then
            echo "⚠️ LARGE FILE DETECTED: $file ($FILE_LINES lines)"
            echo "🔴 AUTO-ACTIVATING: forced-reading-protocol"
            echo ""
            echo "REQUIREMENT: Must read all $FILE_LINES lines before responding"
            echo "REQUIREMENT: Use checkpoints every 100 lines"
            echo "REQUIREMENT: Verify comprehension at each checkpoint"
            echo "REQUIREMENT: Cite specific line numbers in response"
            echo ""
        fi
    fi
done

# Check total content size
if [ "$PROMPT_LENGTH" -gt 3000 ] || [ "$PROMPT_LINES" -gt 100 ] || [ "$TOTAL_LINES" -gt 1000 ]; then
    echo "🔴 FORCED READING PROTOCOL ACTIVATED"
    echo ""
    echo "**Trigger**: Large content detected"
    echo "**Prompt**: $PROMPT_LENGTH characters, $PROMPT_LINES lines"
    echo "**Referenced files**: $(echo "$REFERENCED_FILES" | wc -w) files"
    echo "**Total lines**: $TOTAL_LINES"
    echo ""
    echo "**Shannon requirement**: Complete line-by-line reading"
    echo "**Progress tracking**: Enabled (checkpoints every 100 lines)"
    echo "**Comprehension verification**: MANDATORY"
    echo ""
    echo "📖 Beginning forced reading protocol..."
    echo ""

    # Log to Serena
    serena_write "forced_reading/activation" "{
        \"prompt_length\": $PROMPT_LENGTH,
        \"prompt_lines\": $PROMPT_LINES,
        \"referenced_files\": $(echo "$REFERENCED_FILES" | wc -w),
        \"total_lines\": $TOTAL_LINES,
        \"timestamp\": \"$(date -Iseconds)\"
    }"
fi
```
Track reading performance:
```python
reading_metrics = {
    "session_id": session_id,
    "content_size": {
        "characters": 15420,
        "lines": 523,
        "words": 3892,
    },
    "reading_performance": {
        "duration_minutes": 12.5,
        "reading_speed_wpm": 311,    # words per minute
        "checkpoint_count": 5,
        "avg_checkpoint_time": 2.5,  # minutes
    },
    "comprehension": {
        "score": 0.95,
        "key_points": 47,
        "citations": 34,
        "verification_passed": True,
    },
    "efficiency": {
        "lines_per_minute": 41.8,
        "checkpoints_per_minute": 0.4,
        "quality_per_minute": 0.076,  # comprehension score / duration
    },
}

serena.write_memory(f"forced_reading/sessions/{session_id}", reading_metrics)
```
Learn from reading history:
```python
# Query historical forced reading sessions
sessions = serena.query_memory("forced_reading/sessions/*")

# Analyze patterns
patterns = {
    "avg_comprehension_by_size": {
        "small_500": 0.97,    # <500 lines
        "medium_1000": 0.92,  # 500-1000 lines
        "large_2000": 0.85,   # 1000-2000 lines
        "xlarge_5000": 0.78,  # 2000+ lines
    },
    "optimal_checkpoint_size": 100,  # lines per checkpoint
    "avg_reading_speed": 320,        # words per minute
    "comprehension_decay": {
        "1_checkpoint": 0.98,
        "5_checkpoints": 0.95,
        "10_checkpoints": 0.88,  # Fatigue sets in
        "20_checkpoints": 0.75,  # Quality drops significantly
    },
    "recommendations": [
        "Break documents >2000 lines into separate sessions",
        "Take a 5min break after 10 checkpoints",
        "Increase checkpoint frequency for dense technical content",
    ],
}
```
If you catch yourself:
ALL of these mean: STOP. Activate forced reading protocol.
Skip forced reading when:
BUT: Always announce if skipping and WHY:
**Forced reading NOT activated**: Content is 245 lines (under 500 threshold)
Proceeding with standard reading approach.
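The activate-or-skip announcement can be generated mechanically so the "always announce WHY" rule is never forgotten. A minimal sketch using the 500-line threshold from this document:

```python
def reading_mode_announcement(total_lines: int, threshold: int = 500) -> str:
    """Announce whether forced reading activates; when skipping, say why."""
    if total_lines >= threshold:
        return (
            f"🔴 FORCED READING PROTOCOL ACTIVATED: "
            f"{total_lines} lines (at or above {threshold}-line threshold)"
        )
    return (
        f"**Forced reading NOT activated**: Content is {total_lines} lines "
        f"(under {threshold} threshold)\n"
        f"Proceeding with standard reading approach."
    )
```

Routing both branches through one function guarantees a skip is always accompanied by its reason, never silent.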
This skill integrates with:
Shannon integration:
Before forced reading:
After forced reading (Shannon data):
```python
improvement = {
    "missed_requirements": {
        "before": 0.23,  # 23% missed
        "after": 0.02,   # 2% missed
        "improvement": "91% reduction",
    },
    "comprehension_score": {
        "before": 0.67,  # Estimated
        "after": 0.95,   # Measured
        "improvement": "+42%",
    },
    "response_quality": {
        "before": "vague summaries",
        "after": "specific citations",
    },
}
```
Large content requires systematic reading, not heroic skimming.
Shannon's auto-activation + quantitative tracking turns thorough reading from aspiration into enforced practice.
Measure completeness. Verify comprehension. Respond with confidence.