Knowledge about AI context windows, token budgets, and signal-to-noise ratio. Use when assessing AI readiness or explaining how context impacts AI performance.
This skill provides knowledge about AI context windows, token budgets, and signal-to-noise ratio — the foundational concepts behind AI readiness audits.
AI coding assistants (Claude Code, Cursor, GitHub Copilot, etc.) operate within a context window — a fixed-size buffer of tokens that holds everything the AI can "see" at once.
| Model Family | Context Window | Approximate Lines of Code |
|---|---|---|
| Claude 3.5+ | 200K tokens | ~150,000 lines |
| GPT-4 Turbo | 128K tokens | ~96,000 lines |
| Gemini 1.5 | 1M–2M tokens | ~750,000–1,500,000 lines |
Despite large windows, effective context is much smaller. Research shows performance degrades well before the window is full.
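A common rule of thumb for budgeting is roughly four characters per token for English text and code. A minimal sketch (heuristic only — real tokenizers vary by model and content):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token.

    A budgeting heuristic, not a real tokenizer; actual counts
    depend on the model's tokenizer and the content itself.
    """
    return len(text) // 4

print(estimate_tokens("def add(a, b):\n    return a + b\n"))  # → 8
```

This is accurate enough to decide what fits in a budget, but not for billing or hard limits.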
The most critical concept in AI readiness is the signal-to-noise ratio: the ratio of useful, accurate information to irrelevant, misleading, or outdated content in the AI's context.
| Finding | Source | Implication |
|---|---|---|
| Adding just 10% irrelevant content reduces AI accuracy by 23% | Prompt engineering research | Even small amounts of noise significantly degrade output |
| Even with perfect retrieval, performance drops 13.9–85% as input length grows | "Lost in the Middle" research | More context ≠ better results |
| LLMs can track at most 5–10 variables before performance degrades to random guessing | Cognitive load studies | Complex, intertwined code overwhelms AI reasoning |
| AI models treat existing codebase patterns as implicit instructions | Pattern replication studies | Messy code breeds more messy code |
| Codebases are a proven prompt injection attack surface with success rates of 41–84% | Security research | Code content can manipulate AI behavior |
Common noise sources and what they cost:

| Noise Source | Token Cost | Impact |
|---|---|---|
| Commented-out code | High — every line consumed | AI may treat as valid alternatives |
| Stale TODO/FIXME comments | Medium | Creates false urgency, distracts from real work |
| Outdated documentation | High — treated as authoritative rules | AI follows wrong instructions confidently |
| Generated files (bundles, lockfiles) | Very high — thousands of tokens | Consumes budget with zero learning signal |
| Duplicate code | High — repeated patterns amplified | AI replicates duplication patterns |
| Dead code / unused files | Medium-High | Confuses dependency understanding |
| Vendor/node_modules | Extreme | Can dominate entire context |
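Several of these noise sources can be detected mechanically. A minimal sketch of a scanner, assuming a Python codebase — the directory names, lockfile names, and regex below are illustrative heuristics, not an authoritative list:

```python
import re
from pathlib import Path

# Illustrative noise indicators; extend for your own stack.
NOISE_DIRS = {"node_modules", "vendor", "dist", "build", "__pycache__"}
NOISE_FILES = {"package-lock.json", "yarn.lock", "poetry.lock"}
# Lines that look like commented-out Python code rather than prose.
COMMENTED_CODE = re.compile(r"^\s*#\s*(def |class |import |return |if )")

def scan_noise(root: str) -> dict:
    """Count obvious noise sources under `root`."""
    report = {"generated_files": 0, "commented_out_lines": 0}
    for path in Path(root).rglob("*"):
        if any(part in NOISE_DIRS for part in path.parts):
            continue  # pure noise; better handled via ignore files
        if path.name in NOISE_FILES:
            report["generated_files"] += 1
        elif path.suffix == ".py" and path.is_file():
            for line in path.read_text(errors="ignore").splitlines():
                if COMMENTED_CODE.match(line):
                    report["commented_out_lines"] += 1
    return report
```

Counts like these make the noise visible; removing the sources (or excluding them via ignore files) is what actually recovers the token budget.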
High-value signal sources that guide AI output:

| Signal Source | Value | Why |
|---|---|---|
| Type annotations | Very high | Contracts the AI can reason about |
| Well-named functions/variables | High | Intention-revealing code guides AI output |
| Well-written tests | Very high | Executable documentation of expected behavior |
| CLAUDE.md / AI instruction files | High | Direct guidance for AI behavior |
| Consistent patterns | High | Clear templates for AI to follow |
| Accurate inline docs (why, not what) | Medium-High | Explains intent and constraints |
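The AI instruction file is the most direct of these levers. A minimal, hypothetical CLAUDE.md (the conventions below are examples, not requirements):

```markdown
# CLAUDE.md

## Project conventions
- TypeScript strict mode; avoid `any`.
- Tests live next to source files as `*.test.ts`.

## Constraints
- Do not edit files under `generated/`.
- Prefer extending existing patterns over inventing new ones.
```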
Approximate token costs by file type:

| File Type | Avg Tokens/Line | Typical File Size | Token Cost |
|---|---|---|---|
| TypeScript/JavaScript | ~4–6 | 200 lines | 800–1,200 |
| Python | ~3–5 | 150 lines | 450–750 |
| JSON config | ~2–3 | 50 lines | 100–150 |
| Markdown docs | ~4–6 | 100 lines | 400–600 |
| package-lock.json | ~3–4 | 10,000+ lines | 30,000–40,000 |
| Minified JS bundle | ~8–12 | 5,000+ lines | 40,000–60,000 |
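These per-line figures can be turned into a quick budgeting helper. A minimal sketch — the rates are midpoints of the ranges in the table above, and the default fallback is an assumption:

```python
from pathlib import Path

# Midpoints of the per-line figures above; real costs depend on
# the tokenizer and the actual file content.
TOKENS_PER_LINE = {".ts": 5, ".js": 5, ".py": 4, ".json": 2.5, ".md": 5}

def estimate_file_tokens(filename: str, line_count: int) -> int:
    """Rough token budget for a file, given its line count."""
    per_line = TOKENS_PER_LINE.get(Path(filename).suffix, 4)  # default guess
    return int(line_count * per_line)

print(estimate_file_tokens("app.py", 150))      # → 600
print(estimate_file_tokens("config.json", 50))  # → 125
```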
Use `.claudeignore` / `.cursorignore` files to exclude generated files, vendor directories, lock files, and build output from the AI's context.

A useful mental model:

AI Output Quality ≈ f(Signal Quality × Signal Volume / Total Context Size)
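The quality heuristic can be sketched as a toy function; the names and numbers below are illustrative, not measurements:

```python
def ai_output_quality(signal_quality: float, signal_volume: float,
                      total_context: float) -> float:
    """Toy version of the quality heuristic; units are arbitrary.

    signal_quality: 0-1 rating of how accurate/useful the content is.
    signal_volume:  tokens of genuinely relevant content.
    total_context:  all tokens the AI actually sees.
    """
    return signal_quality * signal_volume / total_context

# Pruning noise (a smaller total context) raises the score even
# when the useful content itself is unchanged:
print(ai_output_quality(0.9, 8_000, 40_000))  # → 0.18
print(ai_output_quality(0.9, 8_000, 12_000))  # → 0.6
```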
Improving AI readiness therefore means raising signal quality and volume while shrinking total context: prune noise, strengthen signal sources, and exclude generated content the AI never needs to see.
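The exclusions can be expressed in a gitignore-style ignore file; the entries below are illustrative:

```gitignore
# Generated and vendored content: high token cost, zero learning signal
node_modules/
vendor/
dist/
build/
*.min.js
package-lock.json
yarn.lock
```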