CLAUDE.md instruction quality: writing effective project instructions, diagnosing why Claude ignores rules, routing content to the right layer, and systematic improvement. Invoke whenever task involves any interaction with CLAUDE.md files — writing, reviewing, auditing, improving, or debugging instruction compliance.
Install: `npx claudepluginhub xobotyi/cc-foundry --plugin the-workflow`. This skill uses the workspace's default tool permissions.
CLAUDE.md is a prompt that persists across every session. All prompt engineering principles apply, but the failure modes are specific: instructions exist but get ignored, content drifts from reality, and the file grows until nothing gets followed. The goal is not completeness — it's compliance. A short CLAUDE.md that Claude follows beats a comprehensive one it ignores.
CLAUDE.md is for project-specific instructions that change Claude's default behavior. If removing an instruction wouldn't change output quality, it doesn't belong.
Include: conventions, constraints, commands, and verification workflows that are specific to this project and change Claude's output.
Exclude: anything Claude already does by default, and general best practices a competent engineer would apply unprompted.
Content belongs in the layer where it's most effective:
The test:
- A rule Claude must follow in every task regardless of domain → CLAUDE.md.
- A rule for a specific type of work → a skill.
- A behavior that must happen automatically, without Claude's judgment → a hook.
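Applied to three hypothetical rules (all names and paths invented for illustration), the routing might look like:

```markdown
<!-- CLAUDE.md — universal, applies to every task: -->
- Commit messages follow Conventional Commits: `type(scope): summary`.

<!-- skills/testing/SKILL.md — loads only when writing tests: -->
- Mock network calls with `msw`; never hit live services in unit tests.

<!-- Hook (configured in settings) — automatic, no judgment needed: -->
- Run `prettier --write` on every edited file after each change.
```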
Every instruction must be specific enough that Claude can verify compliance.
Specific, verifiable:
- "Use Vitest for all tests. Test files live in `__tests__/` adjacent to source." Not: "Write comprehensive tests for all changes."
- "Run `yarn lint && yarn test` before committing. Fix all errors." Not: "Ensure code quality before committing."
- "API routes use kebab-case: `/api/user-settings`, not `/api/userSettings`." Not: "Follow consistent naming conventions."

The test: could two different agents reading this instruction produce different behavior? If yes, it's too vague.
Instructions must describe behaviors Claude can actually perform, not qualities it should aspire to.
- "Every public function has a JSDoc comment with `@param` and `@returns`." Not: "Write thorough documentation."
- "Prefer composition over inheritance. No class hierarchies deeper than 2." Not: "Use good object-oriented design."

Every token in CLAUDE.md competes for attention. The IFScale benchmark (2025) demonstrated a clear negative correlation between instruction density and compliance — even frontier models drop to 68% accuracy under high instruction load. Shorter CLAUDE.md files produce higher instruction adherence.
Explanatory prose and terse directives serve different purposes. Using the wrong style in the wrong place degrades compliance.
Prose (1-3 paragraphs) for project identity only. A brief "what is this project and why does it exist" section at the top orients Claude for edge cases where rules don't cover the situation. This is the one place where explaining why earns its tokens — understanding project intent helps the model make correct judgment calls.
Bullet lists for all operational rules. Conventions, constraints, commands, verification workflows — everything Claude must do goes in bullet lists. Research consistently shows that discrete bulleted directives are followed more reliably than the same rules embedded in prose.
The split: Top 10-15% of the file can be explanatory prose (project context). The remaining 85-90% should be structured bullet lists organized by topic.
Exception: If a rule is genuinely counterintuitive and Claude would resist following it without understanding why, a one-sentence rationale after the rule is acceptable. This is rare — most rules don't need justification, they need clarity.
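A minimal skeleton respecting that split, with all project details hypothetical:

```markdown
# CLAUDE.md

Acme Billing is the internal invoicing service: it turns usage events into
monthly invoices for enterprise customers. Correctness beats speed here; a
wrong invoice costs more than a slow one. (Project-identity prose, ~10% of the file.)

## Conventions
- TypeScript strict mode. No `any` without a `// why:` comment.
- API routes use kebab-case: `/api/invoice-runs`.

## Verification
- Run `yarn lint && yarn test` before committing. Fix all errors.
```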
Models follow a U-shaped attention curve: beginning and end are followed most reliably, middle content degrades.
For rules that absolutely must be followed, state them early AND reinforce at the end with different phrasing.
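One way to exploit the curve, sketched with a hypothetical critical rule stated up top and restated in different words at the end:

```markdown
<!-- First lines of CLAUDE.md -->
- CRITICAL: never commit directly to `main`. All changes go through PRs.

<!-- ... middle sections: conventions, commands, structure ... -->

<!-- Last line of CLAUDE.md -->
Final reminder: every change lands via a pull request; direct pushes to `main`
are never acceptable.
```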
CLAUDE.md files cascade: subdirectory overrides project root, which overrides user global. On the same topic, the most specific file wins.
- User global (`~/.claude/CLAUDE.md`) — personal defaults: communication preferences, tool preferences, workflow habits that apply across all projects.
- Project root (`./CLAUDE.md`) — repo-specific: structure, stack, conventions, constraints. This is the primary file. Checked into version control, shared by the team.
- Subdirectory (`./packages/api/CLAUDE.md`) — module-specific overrides. Use sparingly — only when a module has conventions that genuinely differ from the project root (e.g., a Python service in a mostly-TypeScript monorepo).
- Don't repeat. If the project root says "use Vitest", don't restate it in every subdirectory. Subdirectory files should contain only what differs from or adds to the root.
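A subdirectory file that follows the don't-repeat rule contains only the deltas. A hypothetical example for the Python-service-in-a-TypeScript-monorepo case:

```markdown
<!-- packages/py-worker/CLAUDE.md — only what differs from the root -->
- This package is Python, not TypeScript: run tests with `pytest`, not `yarn test`.
- Formatting: `ruff format`. The root Prettier rules do not apply here.
```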
When Claude ignores CLAUDE.md rules, the cause is almost always one of these:
The instruction exists but Claude doesn't follow it. The file is too long, the rule is buried in the middle after hundreds of tokens of context, and attention has decayed.
Diagnosis: The file is painful for a human to skim in under 60 seconds. Critical rules sit inside prose paragraphs rather than standalone bullets. Multiple sections restate the same thing in different words.
Fix: Ruthlessly prune. Move anything Claude already knows by default. Promote buried rules to the top or bottom. Convert prose to bullet lists.
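A hypothetical before/after of the same rule, first buried in prose, then promoted to one skimmable bullet:

```markdown
<!-- Before: rule buried mid-paragraph, attention decays before reaching it -->
Our team has discussed testing at length and we generally feel that changes
should come with tests, ideally run via yarn test before pushing, though
historically this has been inconsistent across contributors.

<!-- After: standalone bullet -->
- Every change ships with tests. Run `yarn test` before pushing; fix all failures.
```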
Claude produces plausible but wrong output — right code style, wrong location; idiomatic patterns, wrong framework.
Diagnosis: Instructions use abstract language ("clean code", "good tests", "consistent naming") without repo-specific specifics. Architecture descriptions are slogans ("hexagonal architecture") without file paths.
Fix: Replace every abstract instruction with a concrete, verifiable directive. Add file paths, command names, and specific patterns.
Claude references files that don't exist, suggests deprecated patterns, or proposes tooling the team abandoned.
Diagnosis: CLAUDE.md mentions old frameworks, directory names, or commands. Major architecture shifts never made it into the file. No one updates it alongside code changes.
Fix: Audit against the actual codebase. Remove dead references. Add a convention: update CLAUDE.md alongside architecture changes, not as a separate task.
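The fix is mechanical once the drift is spotted. A hypothetical example (tool names invented):

```markdown
<!-- Stale: references tooling the team abandoned -->
- Bundle with webpack: `yarn webpack --config webpack.prod.js`

<!-- Current, after auditing against the codebase -->
- Build with Vite: `yarn build` (config in `vite.config.ts`)
- Convention: update CLAUDE.md in the same PR as any architecture change.
```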
Claude's behavior varies between sessions — sometimes adds tests, sometimes doesn't; sometimes uses one tool, sometimes another.
Diagnosis: Different sections disagree. Copy-paste over months introduced conflicting rules. CI scripts embed instructions that override CLAUDE.md.
Fix: Search for opposing directives on the same topic. Establish one canonical rule per concern. Audit hooks and CI for shadow instructions.
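Consolidating a hypothetical contradiction into one canonical rule per concern:

```markdown
<!-- Conflicting rules accumulated over months -->
- Use `npm install` to add dependencies.          <!-- section 2 -->
- Always use `yarn add`; never use npm directly.  <!-- section 7 -->

<!-- One canonical rule -->
- Package manager: yarn only (`yarn add`, `yarn install`). npm is never used.
```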
An instruction is in CLAUDE.md but should be in a skill, hook, or settings. Claude follows it inconsistently because CLAUDE.md rules compete for attention across all tasks, while the rule only matters for specific work.
Diagnosis: Rules like "when writing tests, always..." or "when reviewing PRs, follow this checklist..." — these are domain-specific behavioral rules that belong in skills, where they load only when relevant.
Fix: Move domain-specific instructions to skills. Move automated behaviors to hooks. Keep only universal project rules in CLAUDE.md.
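Moving a hypothetical domain-specific rule out of CLAUDE.md and into a skill (all names and paths invented):

```markdown
<!-- Before: in CLAUDE.md, competing for attention on every task -->
- When writing tests, always mock the payment gateway and seed the test DB.

<!-- After: CLAUDE.md keeps only the universal pointer -->
- Testing conventions live in the `testing` skill; consult it for any test work.

<!-- skills/testing/SKILL.md — loads only when relevant -->
- Mock the payment gateway (`mocks/payments.ts`); seed the DB with `yarn seed:test`.
```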
When improving a CLAUDE.md, follow this sequence: