Lints coding agent instruction files (CLAUDE.md, .cursorrules, AGENTS.md) for vagueness, conflicts, redundancies, ordering issues, misplaced rules, weak emphasis, and token budget. Provides ratings, gap detection, and consolidations.
Install via:

```
npx claudepluginhub mbwsims/claude-universe --plugin universe
```
Analyze instruction files for quality issues: vague rules, conflicts, redundancies, ordering problems, misplaced rules, weak emphasis, and token budget. Then provide deep effectiveness ratings, coverage gap detection, and consolidation suggestions.
Works out of the box with no dependencies. If the alignkit npm package is installed, analysis is enhanced with precise token counts and exhaustive deterministic static analyzers.
Call the alignkit_local_lint tool first (bundled, no external dependency). If unavailable,
fall back to alignkit_lint (external). Pass the file argument if the user specified one;
otherwise omit it for auto-discovery of instruction files.
If no instruction files are found, explain that the project has no CLAUDE.md or similar
files and suggest creating one. Offer to run /discover to find conventions.
If alignkit_lint is unavailable (MCP server not running or tool call fails),
perform manual analysis instead. This fallback is fully self-contained:
Find instruction files using Glob:

- CLAUDE.md (project root)
- .claude/rules/**/*.md
- .claude/agents/**/*.md
- .claude/skills/**/SKILL.md

Read each file and parse rules: lines starting with `-`, `*`, or `N.` under headings.
Count total rules. Strip YAML frontmatter (between --- markers) before parsing.
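The parsing step above can be sketched as follows. This is only an illustration of the heuristic described in the text (strip frontmatter, then collect bullet and numbered lines), not alignkit's actual implementation:

```python
import re

def parse_rules(text: str) -> list[str]:
    """Extract rule lines (-, *, or N.) from a markdown instruction file."""
    # Strip YAML frontmatter between --- markers, if present
    if text.startswith("---"):
        end = text.find("\n---", 3)
        if end != -1:
            text = text[end + 4:]
    rules = []
    for line in text.splitlines():
        stripped = line.strip()
        # Lines starting with -, *, or a number followed by a dot count as rules
        if re.match(r"^(-|\*|\d+\.)\s+", stripped):
            rules.append(stripped)
    return rules

sample = """---
name: example
---
# Testing
- Run vitest before committing
1. Never edit migration files
Prose that is not a rule.
"""
print(len(parse_rules(sample)))  # → 2
```

A real linter would also track which heading each rule sits under, but for counting and token estimation this flat pass is enough.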
Collect project context: read package.json (dependencies, scripts), inspect any
available config files that shape architecture or tooling, and list top-level directories
using Glob.
Run manual diagnostics on each rule, including placement checks: flag rules that are scoped to specific paths (they belong in `.claude/rules/`) or that describe automation (they belong as hooks).

Estimate token count: roughly 1 token per 4 characters. Sum the totals across all instruction files. Context window percentage = tokens / 200000 * 100.
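The token estimate above is a rough heuristic, not a real tokenizer. A minimal sketch, using the 4-characters-per-token ratio and 200k context window stated in the text:

```python
def estimate_context_usage(file_contents: list[str], window: int = 200_000) -> tuple[int, float]:
    """Estimate tokens (~1 token per 4 characters) and context window share."""
    total_chars = sum(len(text) for text in file_contents)
    tokens = total_chars // 4
    percent = tokens / window * 100
    return tokens, round(percent, 2)

tokens, percent = estimate_context_usage(["x" * 8000, "y" * 4000])
print(tokens, percent)  # → 3000 1.5
```

When the alignkit MCP server is available, prefer its precise counts over this estimate.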
Proceed to steps 2-5 below (Deep Effectiveness, Coverage Gaps, Consolidation) using the manually gathered project context.
Note to the user: "Running without alignkit MCP server -- token counts are estimated.
For precise token counting and session-based adherence tracking via /check-rules, ensure the
alignkit MCP server is running."
If alignkit_lint returns an error or empty result, check that the file argument path is correct.

Present the diagnostic breakdown, starting with the highest-impact findings.
Report format:
## Instruction Lint — {file}
{ruleCount} rules · ~{estimatedTokens} tokens ({estimatedPercent}% of context; ~1 token per 4 characters across all files)
### Top Issues
{Prioritize from the diagnostics: list the 3-5 highest-impact findings. For each,
state the issue and the concrete fix. Prioritize CONFLICT and METADATA errors first,
then VAGUE and REDUNDANT warnings with the most token savings.}
### Issues by Type
| Issue | Count | Severity |
|-------|-------|----------|
| Vague rules | {n} | warning |
| Conflicting rules | {n} | warning |
| Redundant rules | {n} | warning |
| Ordering issues | {n} | warning |
| Placement suggestions | {n} | warning |
| Weak emphasis | {n} | warning |
| Linter-job rules | {n} | warning |
| Metadata issues | {n} | error |
When multiple instruction files are found, present a per-file summary first:
### Files Analyzed
| File | Rules | Issues |
|------|-------|--------|
| CLAUDE.md | 12 | 3 |
| .claude/rules/test-patterns.md | 4 | 1 |
| .claude/agents/reviewer.md | 2 | 0 |
Then list each diagnostic with the specific rule text and actionable guidance.
For placement suggestions, be specific about where the rule should move:
- scoped-rule → "Move to .claude/rules/ with a glob pattern"
- hook → "Convert to a Claude Code hook for automated enforcement"
- skill → "Move to .claude/skills/ as a reusable workflow"
- subagent → "Move to .claude/agents/ as a specialized agent"

Consult references/diagnostic-codes.md for the full list of diagnostic codes, their meanings, and how to advise on each.
Using the projectContext from the lint results (dependencies, tsconfig, directory tree),
rate each rule as HIGH, MEDIUM, or LOW effectiveness per the methodology in
references/deep-analysis-guide.md. Present only MEDIUM and LOW rules:
### Effectiveness Issues
| Rule | Rating | Reason |
|------|--------|--------|
| "Follow best practices" | LOW | Too vague — rewrite with specific practices |
| "Use React hooks" | LOW | No React in dependencies |
| "Handle errors properly" | LOW | Claude knows this — remove to save tokens |
| "Run vitest" | MEDIUM | Good but should specify flags: `vitest run` |
For each LOW rule, provide a concrete suggested rewrite or recommend removal with "REMOVE" if the rule wastes instruction budget (e.g., things Claude already knows).
Consult references/deep-analysis-guide.md for detailed effectiveness evaluation methodology.
Analyze the project context to identify 3-5 important behaviors NOT covered by existing rules.
See references/deep-analysis-guide.md for detection methodology. Each gap must reference
specific evidence from the project.
Present as:
### Coverage Gaps
1. **Database migrations** — Prisma is in dependencies but no rules about migration
workflow. Suggested rule: "Run `npx prisma migrate dev` after schema changes.
Never edit migration files directly."
2. **Test organization** — Tests exist in multiple locations but no rule explains where new
tests should live or how they should be named. Suggested rule: "Match each package's
existing test location and naming pattern."
Each gap must reference specific evidence from the project (real dependency names, real directory paths). Never suggest gaps for technologies not present.
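The evidence requirement above can be sketched mechanically: cross-reference declared dependencies against the rule text, and only surface gaps for technologies that are actually present. This is an illustrative heuristic (the function name and substring matching are assumptions, not alignkit's method):

```python
import json

def find_uncovered_deps(package_json: str, rules: list[str]) -> list[str]:
    """Return dependencies never mentioned in any rule — candidate coverage gaps."""
    deps = json.loads(package_json).get("dependencies", {})
    rule_text = " ".join(rules).lower()
    # A dependency that no rule mentions is evidence of a possible gap
    return [dep for dep in deps if dep.lower() not in rule_text]

pkg = '{"dependencies": {"prisma": "^5.0.0", "react": "^18.0.0"}}'
rules = ["Use React hooks for shared state"]
print(find_uncovered_deps(pkg, rules))  # → ['prisma']
```

A mention alone does not prove coverage, so treat the output as candidates to evaluate against the full detection methodology, not as findings.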
Identify groups of related rules that can merge into fewer, stronger rules to save tokens.
Look for clusters of rules that cover the same topic with overlapping or repetitive wording.

Present as:
### Consolidation Opportunities
1. **Merge 3 review-report rules** (~30 tokens saved)
- "State what changed"
- "State what was verified"
- "State what remains unverified"
→ **"Summarize the change, the verification you completed, and any gaps that remain."**
The merged text must preserve all original constraints.
- Start with quick wins — these are the highest-value, lowest-effort improvements
- Be specific in rewrites — reference actual project directories, dependencies, and patterns
- Token budget matters — every rule costs context window. Flag rules that waste budget
- Placement advice is actionable — don't just say "move this" without saying where
- Coverage gaps must be evidence-based — only suggest rules for technologies actually present
Handle tool call edge cases: if no instruction file exists, suggest creating one and offer /discover to find conventions to populate it.

- /discover — Use to find conventions in the codebase that should become rules
- /check-rules — Use to verify whether the rules are actually being followed
- references/diagnostic-codes.md — Full reference for all diagnostic codes and how to advise on each
- references/deep-analysis-guide.md — Detailed methodology for effectiveness ratings, coverage gap detection, and consolidation analysis