Discovers unwritten project conventions from codebase analysis (package.json, configs, source samples, dir structure) and generates paste-ready instruction rules.
Install: `npx claudepluginhub mbwsims/claude-universe --plugin universe`
Reverse-engineer the conventions a project actually follows by analyzing the codebase. Every project has unwritten rules — import patterns, naming conventions, error handling styles, testing approaches, architectural boundaries. This skill makes them explicit and actionable as instruction rules.
Works with zero dependencies. No alignkit installation needed.
Most CLAUDE.md files document what the developer intended. The codebase reveals what actually happened. Conventions that exist in practice but not in instructions are invisible to Claude — it may follow them by accident in one session and violate them in the next. Discovering and codifying these conventions makes Claude's behavior consistent.
If the user specified a focus area (e.g., "imports", "testing", "API routes"), narrow the analysis to that domain. Otherwise, run a broad analysis across all convention categories.
Read the existing instruction files (CLAUDE.md, .claude/rules/*) first to avoid suggesting rules that already exist.
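That dedup check can be sketched in shell; the fixture CLAUDE.md and the keyword below are illustrative, not part of the skill:

```shell
# Sketch: scan existing instruction files for a candidate rule's keywords
# before proposing a duplicate. The fixture CLAUDE.md is illustrative;
# in practice point at the real repo's CLAUDE.md and .claude/rules/.
ROOT="$(mktemp -d)"
printf '# Project Instructions\nAlways use the @/ import alias.\n' > "$ROOT/CLAUDE.md"

keyword="import alias"
if grep -qi "$keyword" "$ROOT"/CLAUDE.md 2>/dev/null; then
  echo "already covered: $keyword"
else
  echo "new rule candidate: $keyword"
fi
```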
Collect foundational data about the project:
Read a representative sample of source files across the project. Do not read every file — sample strategically:
The goal is coverage of different parts of the codebase, not exhaustiveness.
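Strategic sampling can be sketched roughly like this; the fixture tree and the per-directory cap are illustrative assumptions:

```shell
# Stratified sampling sketch: take up to PER_DIR files from each top-level
# directory, so the sample spans the codebase instead of one hot spot.
# The fixture tree here is illustrative; point ROOT at a real repo instead.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/api" "$ROOT/web" "$ROOT/lib"
for d in api web lib; do
  for i in 1 2 3 4 5; do echo "// source" > "$ROOT/$d/file$i.ts"; done
done

PER_DIR=3
sample=$(for dir in "$ROOT"/*/; do
  # first PER_DIR files per directory, sorted for determinism
  find "$dir" -maxdepth 1 -type f | sort | head -n "$PER_DIR"
done)
echo "$sample" | wc -l   # 9 sampled files: 3 per directory
```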
For each category in references/convention-categories.md, look for consistent patterns across the sampled files. A convention must appear in most or all sampled files to qualify — a pattern in 2 out of 10 files is not a convention.
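That threshold can be checked mechanically. A hedged sketch, with a made-up import-alias pattern standing in for a real candidate convention:

```shell
# Does the "@/..." import alias qualify as a convention? Count matching
# files against the sample size. (The fixture files are illustrative.)
ROOT="$(mktemp -d)"
for i in 1 2 3 4 5 6 7 8; do
  printf 'import x from "@/lib/x";\n' > "$ROOT/f$i.ts"
done
for i in 9 10; do
  printf 'import x from "../lib/x";\n' > "$ROOT/f$i.ts"
done

total=$(ls "$ROOT"/*.ts | wc -l)
hits=$(grep -l 'from "@/' "$ROOT"/*.ts | wc -l)
echo "$hits of $total files use the @/ alias"
# qualifies only when the pattern covers most of the sample
if [ $((hits * 100 / total)) -ge 80 ]; then echo convention; else echo coincidence; fi
```

Here 8 of 10 files match, so the pattern qualifies; 2 of 10 would not.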
For each discovered convention:
Format discoveries as a numbered list with evidence. Group by category.
Report format:
## Discovered Conventions — {project name}
{n} conventions found across {m} files sampled
### Import & Module Patterns
1. **{Pattern name}** — {brief description of what's consistent}
   Suggested rule: "{concrete, paste-ready rule text}"
   Evidence: {specific files, counts, or grep results}
For each discovery, include:
Value filtering — this is critical:
Before including a convention, ask: "If Claude violated this, would it cause a real problem?"
High value: Include conventions whose violation would cause bugs, architectural damage, security regressions, or tooling breakage. Typical examples are layer boundaries, ownership checks, API response contracts, and build-sensitive import rules.
Medium value: Violations cause inconsistency but not breakage. Naming conventions, type organization, export style. Include these but mark as medium.
Low value — OMIT THESE: Patterns Claude would follow anyway from reading existing code (function vs arrow syntax, where props are defined, logging format). Also omit implementation details that describe HOW something was built rather than a rule to follow (e.g., "SSE uses TextEncoder" is an implementation detail, not a convention). Also omit patterns that might be gaps rather than intentional choices (e.g., "no Zod for API inputs" might mean they haven't added it yet, not that it's a convention to avoid it).
Aim for 8-12 high/medium conventions, not 17+ with filler. Fewer, stronger rules are more valuable than a comprehensive list that dilutes signal.
Projects with no existing CLAUDE.md: When no instruction files exist, this is a greenfield opportunity. Focus on the highest-value conventions first:
After presenting discoveries, offer to create a new CLAUDE.md with the selected rules organized by category. Use this structure:
# Project Instructions
## Architecture
{architecture boundary rules}
## Code Style
{naming, import, export rules}
## Testing
{test framework, package-specific placement, assertion rules}
## Workflow
{tool constraints, process rules}
Small projects (fewer than 8 source files): read all source files instead of sampling. Adjust evidence thresholds downward, since the sample IS the population:
Note to the user: "This is a small project; conventions may solidify as it grows. These rules reflect what exists now."
After presenting discoveries, offer to add selected rules to CLAUDE.md:
Add rules: [1] [2] [3] [all] — or specify which to add
If the user selects rules, add them to the appropriate section of CLAUDE.md. If no CLAUDE.md exists, create one with the selected rules organized by category.
When adding rules, place them in the most appropriate location:
.claude/rules/ with appropriate glob patterns
/lint-rules — After adding discovered conventions to CLAUDE.md, use /lint-rules to check the overall quality of the instruction file
/check-rules — Use to verify whether the adopted rules still match the codebase
references/convention-categories.md — Detailed list of convention categories to analyze, with specific patterns to look for and grep commands for detection
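A detection probe of the kind that file describes might look like this sketch; the withErrorHandling helper and the fixture files are hypothetical:

```shell
# Hypothetical probe for an error-handling category: do files wrap handlers
# in a shared helper, or use raw try/catch? The fixture files below stand
# in for a real repo's route files.
ROOT="$(mktemp -d)"
printf 'export default withErrorHandling(handler);\n' > "$ROOT/a.ts"
printf 'export default withErrorHandling(handler);\n' > "$ROOT/b.ts"
printf 'try { risky(); } catch (e) { log(e); }\n' > "$ROOT/c.ts"

wrapped=$(grep -rl 'withErrorHandling(' "$ROOT" | wc -l)
raw=$(grep -rl 'try {' "$ROOT" | wc -l)
echo "wrapped: $wrapped, raw try/catch: $raw"
```

The counts become the Evidence line of a discovery entry.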