Use when analyzing critical documents, specifications, or large files (≥3000 lines), before any synthesis or conclusions - enforces complete line-by-line reading with quantitative verification to prevent the skimming that leads to incomplete understanding.
Enforces complete line-by-line reading of critical documents before any analysis. Automatically activates for files ≥3000 lines, .md specifications, and user prompts >3000 characters. Prevents synthesis until quantitative verification confirms 100% reading completion.
```
/plugin marketplace add krzemienski/shannon-framework
/plugin install shannon@shannon-framework
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Prevent skimming and superficial reading through mandatory line-by-line comprehension with quantitative verification.
Core principle: Thoroughness cannot be optional. Complete reading is architectural enforcement, not a best-practice suggestion.
Violating the letter of this protocol is violating the spirit of this protocol.
Automatic activation for:
- Files ≥3000 lines
- .md files (specifications, plans, documentation)
- User prompts >3000 characters

Manual activation:
- /shannon:read_complete <file> - Force protocol for any file

Don't use for:
- Non-critical files where enforcement is deliberately disabled (override with /sh_read_normal <file>)

NO SYNTHESIS WITHOUT COMPLETE READING FIRST
If you haven't verified lines_read == total_lines, you cannot make conclusions.
MANDATORY: Count total lines BEFORE reading ANY content.
Commands:
```bash
# Count lines first
total_lines=$(wc -l < file.md)
```
Output required:
```
File has {N} lines. Now I will read all {N} lines completely.
```
No exceptions: You cannot read without knowing the total line count first.
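Expressed as a minimal Python sketch (illustrative only, not the Shannon implementation; the function name and signature are hypothetical), the same gate looks like this:

```python
from pathlib import Path

def count_and_announce(file_path: str) -> int:
    """Count every line before reading any content (Step 1 gate)."""
    with Path(file_path).open(encoding="utf-8") as f:
        total_lines = sum(1 for _ in f)  # equivalent to `wc -l`
    print(f"File has {total_lines} lines. "
          f"Now I will read all {total_lines} lines completely.")
    return total_lines
```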
Read EVERY line sequentially from 1 to N.
NOT allowed:
- Searching or grepping for "relevant" sections
- Reading only selected ranges or the first N lines

REQUIRED:
- Sequential reading of every line, from line 1 through line N
Tracking: Count each line as you read it.
Verify: lines_read == total_lines
Formula:
```
IF lines_read < total_lines THEN
    missing_lines = total_lines - lines_read
    ERROR: "INCOMPLETE READING: Missing {missing_lines} lines"
    BLOCK: No analysis, no synthesis, no conclusions
    REQUIRED: Return to Step 2, read missing lines
END IF
```
Verification output:
```
✅ COMPLETE READING VERIFIED
Total lines: {N}
Lines read: {N}
Completeness: 100.0%
Status: READY FOR SYNTHESIS
```
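A minimal Python sketch of this gate (illustrative; the function name is hypothetical and this is not Shannon's actual code):

```python
def verify_complete_reading(lines_read: int, total_lines: int) -> None:
    """Block any synthesis unless every line has been read."""
    if lines_read < total_lines:
        missing_lines = total_lines - lines_read
        # BLOCK: no analysis, no synthesis, no conclusions.
        raise RuntimeError(f"INCOMPLETE READING: Missing {missing_lines} lines")
    print("✅ COMPLETE READING VERIFIED")
    print(f"Total lines: {total_lines}")
    print(f"Lines read: {lines_read}")
    print(f"Completeness: {lines_read / total_lines:.1%}")
    print("Status: READY FOR SYNTHESIS")
```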
Use Sequential MCP for deep thinking about what was read.
Minimum thinking steps, based on file size (see the sizing sketch after this list):
- Small files: 50+ thoughts
- Medium files: 100+ thoughts
- Large files: 200+ thoughts
- Critical specifications: 500+ thoughts

NOT allowed until verified complete:
- Analysis, synthesis, conclusions, or recommendations of any kind

ONLY after verification:
- Sequential MCP synthesis, followed by analysis and conclusions
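The step counts above mirror the minimum_synthesis_steps defaults in the per-project config shown later; the line-count cutoffs in this sketch are assumptions for illustration, not Shannon's actual thresholds:

```python
def minimum_synthesis_steps(total_lines: int, critical: bool = False) -> int:
    """Map file size to a floor on Sequential MCP thinking steps (sketch)."""
    if critical:
        return 500
    if total_lines < 500:       # "small" cutoff: assumed
        return 50
    if total_lines < 2000:      # "medium" cutoff: assumed
        return 100
    return 200                  # "large"
```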
Captured from baseline testing (RED phase):
Baseline behavior: Agent uses grep/search to find "relevant" sections, skips complete reading.
Rationalization captured:
"Let me search for the key requirements first to understand scope"
Shannon counter:
⚠️ STOP. "Relevant sections" = incomplete understanding.
REALITY CHECK:
- You don't know what's relevant until you've read everything
- "Relevant" is determined by your assumptions, not reality
- Critical details are often in "irrelevant" sections
REQUIRED ACTION:
1. Count total lines
2. Read ALL lines sequentially (line 1, 2, 3, ..., N)
3. THEN identify what's relevant (after complete reading)
NO EXCEPTIONS.
Baseline behavior: Agent reads partial file to "get started", never returns to read rest.
Rationalization captured:
"I'll read the first 200 lines to understand structure, then continue"
Shannon counter:
⚠️ STOP. Partial reading is incomplete reading.
REALITY CHECK:
- "Get started" reading = never finish reading
- Lines 201-N contain critical details you'll miss
- "Overview" = superficial understanding
REQUIRED ACTION:
1. Count total lines
2. Read ALL lines (not just first 200)
3. Verify lines_read == total_lines
4. NO synthesis until verification passes
NO EXCEPTIONS.
Baseline behavior: Agent rationalizes skimming for files >2000 lines.
Rationalization captured:
"This 2,500-line specification is quite extensive - I'll skim efficiently for key points"
Shannon counter:
⚠️ STOP. "Too long" is not an exemption.
REALITY CHECK:
- Long files are EXACTLY why this protocol exists
- "Skim efficiently" = miss critical details
- Mission-critical work cannot tolerate "good enough" understanding
REQUIRED ACTION:
1. Yes, read all 2,500 lines
2. Use Sequential MCP for 200+ synthesis thoughts
3. This takes time (30-60 min) - that's acceptable
4. Complete understanding > speed
NO EXCEPTIONS.
Baseline behavior: Agent skips re-reading based on session memory.
Rationalization captured:
"I already read this file earlier in the session"
Shannon counter:
⚠️ STOP. Memory is not verification.
REALITY CHECK:
- Remembering != having read every line
- Files may have changed
- "Key points" = selective memory, not complete understanding
REQUIRED ACTION:
1. Re-count lines (file may have changed)
2. Either verify previous complete reading OR re-read:
- If previous verification exists (100% complete) → synthesis allowed
- If no verification exists → re-read completely
NO EXCEPTIONS.
If you catch yourself using these phrases, STOP - you're about to violate:
Skip-reading triggers:
- "Let me search for the key requirements first"
- "I'll focus on the relevant sections"

Partial-reading triggers:
- "I'll read the first 200 lines to understand the structure"
- "I just need an overview to get started"

Length-based rationalization triggers:
- "This file is quite extensive - I'll skim efficiently for key points"
- "It's too long to read completely"

Memory-based triggers:
- "I already read this file earlier in the session"
- "I remember the key points"
ALL OF THESE MEAN: STOP. Count lines. Read all lines. Verify completeness.
Shannon enhancement: Track reading completeness across session.
Save to Serena:
```python
from datetime import datetime, timezone

completeness = lines_read / total_lines

serena.write_memory(f"shannon/reading/{file_hash}", {
    "file_path": file_path,
    "total_lines": total_lines,
    "lines_read": lines_read,
    "completeness": completeness,
    "synthesis_steps": synthesis_steps,
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "status": "COMPLETE" if completeness == 1.0 else "INCOMPLETE",
})
```
Query reading history:
```python
# Check if file was completely read before
history = serena.read_memory(f"shannon/reading/{file_hash}")

if history and history["completeness"] == 1.0:
    # Previously verified complete:
    # can skip re-reading, or re-verify if the file has changed
    pass
```
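The "re-verify if file changed" branch needs a way to detect change. A hedged sketch using a content hash (the content_hash field and helper name are assumptions; the record fields shown above do not include a hash):

```python
import hashlib
from typing import Optional

def needs_reread(file_path: str, history: Optional[dict]) -> bool:
    """Return True when a complete re-read is required (illustrative sketch)."""
    if not history or history.get("completeness") != 1.0:
        return True  # no verified complete reading on record
    with open(file_path, "rb") as f:
        current_hash = hashlib.sha256(f.read()).hexdigest()
    # If the stored record also kept a content hash and it differs,
    # the file changed since verification and must be re-read.
    return current_hash != history.get("content_hash")
```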
Enhanced commands that enforce this protocol:
/shannon:spec: Before 8D analysis:
1. Count specification lines
2. Read ALL lines sequentially
3. Verify completeness (100%)
4. Sequential MCP synthesis (100+ thoughts)
5. THEN present 8D analysis
/shannon:analyze: Before analysis:
1. For each critical file, count lines
2. Read ALL lines per file
3. Verify per-file completeness
4. Aggregate verification
5. THEN synthesize findings
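For the multi-file case, the aggregate check could look roughly like this (an illustrative sketch; the function name and data shape are assumptions, not Shannon's actual implementation):

```python
def aggregate_verification(per_file_counts) -> bool:
    """per_file_counts maps file path -> (lines_read, total_lines)."""
    all_complete = True
    for path, (lines_read, total_lines) in per_file_counts.items():
        if lines_read < total_lines:
            print(f"INCOMPLETE READING: {path} missing {total_lines - lines_read} lines")
            all_complete = False
    return all_complete  # synthesis only proceeds if this is True
```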
/shannon:wave: Before wave execution:
1. Count wave plan lines
2. Read ALL tasks sequentially
3. Verify plan completeness
4. Sequential synthesis
5. THEN execute waves
Per-project config: .shannon/reading-enforcement.json
```json
{
  "enforcement_enabled": true,
  "critical_file_patterns": [
    "*.md",
    "SPEC_*",
    "PLAN_*",
    "**/skills/**/SKILL.md"
  ],
  "size_threshold": 3000,
  "minimum_synthesis_steps": {
    "small": 50,
    "medium": 100,
    "large": 200,
    "critical": 500
  },
  "override_allowed": true,
  "override_audit": true
}
```
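One way such a config could be consumed (a sketch; the helper name, the simplified glob matching, and the loading approach are assumptions rather than Shannon's actual implementation):

```python
import json
from fnmatch import fnmatch
from pathlib import Path

def is_critical(file_path: str, config_path: str = ".shannon/reading-enforcement.json") -> bool:
    """Would this file trigger enforcement under the config above? (sketch)"""
    cfg = json.loads(Path(config_path).read_text())
    if not cfg.get("enforcement_enabled", True):
        return False
    patterns = cfg.get("critical_file_patterns", [])
    if any(fnmatch(Path(file_path).name, p) or fnmatch(file_path, p) for p in patterns):
        return True
    with open(file_path, encoding="utf-8") as f:
        return sum(1 for _ in f) >= cfg.get("size_threshold", 3000)
```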
Override for legitimate cases:
```
/sh_read_normal <file>   # Disables enforcement
# All overrides logged to Serena for audit
```
| Excuse | Reality |
|---|---|
| "Too simple to read completely" | Simple files hide critical details. Read all lines. |
| "I'll complete reading after initial analysis" | Analysis before complete reading = wrong analysis. |
| "File is too long for complete reading" | Long files are EXACTLY why protocol exists. |
| "Relevant sections are enough" | You don't know what's relevant until you've read everything. |
| "I already know this file's content" | Knowledge ≠ verification. Re-verify or re-read. |
| "Skimming is more efficient" | Efficient incompleteness = unreliable results. |
This skill is prerequisite for:
Complementary skills:
For mission-critical work, complete understanding is mandatory.
Same as NO MOCKS for testing: superficial comprehension = unreliable results.
If you follow "read every line" for code reviews, follow it for specifications.
This protocol is not optional. It's architectural enforcement of thoroughness.
Target domains: Finance, Healthcare, Legal, Security, Aerospace - where AI hallucinations from incomplete reading are unacceptable.