Reviews codebases, architectures, PRs, and technical plans for vanity engineering—unnecessary complexity driven by ego rather than user or business value.
`npx claudepluginhub bencium/bencium-marketplace --plugin vanity-engineering-review`

This skill uses the workspace's default tool permissions.
A diagnostic skill that identifies code, architecture, and technical decisions built to impress rather than to ship. Vanity engineering is entropy disguised as craftsmanship — it increases complexity without proportional capability gain, and it compounds maintenance cost while delivering zero additional user value.
The only legitimate purpose of engineering is to solve a problem someone actually has.
Everything else — elegant abstractions nobody traverses, microservices that serve one endpoint, custom frameworks that replicate existing tools, type systems more complex than the domain they model — is vanity. It may feel productive. It is not.
This skill does not oppose quality, rigour, or good engineering. It opposes engineering that exists to satisfy the builder rather than the user.
Apply this review to any of:
- A codebase or individual module
- A system architecture or design document
- A pull request
- A technical plan or proposal
Before examining any code, establish what the system actually needs to do. Without this anchor, you cannot distinguish necessary complexity from vanity complexity.
Ask (or determine from context):
- Who uses this system, and what problem does it solve for them?
- What must it do today, as opposed to in an imagined future?
- At what scale does it actually operate?
If the user cannot answer these, that is itself a vanity signal — building without defined requirements.
Scan the codebase or architecture against the detection patterns in
references/detection-patterns.md. Read that file before proceeding.
Score each finding using the Vanity Severity scale (V0-V3).
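As a hedged sketch, the severity scale could be encoded as an enum. The level descriptions below are illustrative assumptions, not the canonical definitions from the skill's reference files:

```python
from enum import Enum

class VanitySeverity(Enum):
    """Illustrative V0-V3 scale; the descriptions are assumptions,
    not canonical definitions."""
    V0 = "cosmetic: harmless flourish, note it and move on"
    V1 = "local: extra complexity contained within one module"
    V2 = "structural: complexity that shapes how others must build"
    V3 = "load-bearing vanity: removal requires a migration plan"
```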
Produce a structured assessment:
## Vanity Engineering Assessment
### Summary
[One paragraph: What this codebase does vs what it is engineered to do.
The gap between these two is the vanity surface area.]
### Requirement-to-Complexity Ratio (RCR)
[Scale 1-10. 1 = minimal viable solution. 10 = PhD thesis disguised as a CRUD app.
Most production systems should score 2-4.]
### Top Findings (max 7)
For each finding:
- What: The specific pattern detected
- Where: File/module/component
- Severity: V0-V3
- Why it is vanity: How it fails the "does a user need this?" test
- What it should be instead: The simpler alternative
- Kill cost: Effort to remove or simplify (hours/days)
### Vanity Debt Estimate
[Total accumulated complexity cost from vanity engineering.
Express as: person-hours of maintenance per month attributable to
vanity patterns rather than actual requirements.]
### The Hard Question
[One direct, uncomfortable question the team needs to answer honestly.
Example: "If you deleted the entire plugin system and hardcoded the
three integrations you actually use, what would you lose?"]
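The Vanity Debt Estimate above reduces to a simple sum over findings of the monthly maintenance hours attributable to each vanity pattern. The findings and figures below are hypothetical:

```python
# Hypothetical findings: (pattern, severity, maintenance person-hours/month
# attributable to vanity rather than actual requirements)
findings = [
    ("plugin system with one plugin", "V3", 12.0),
    ("generic repository layer", "V2", 6.0),
    ("custom config DSL", "V2", 4.5),
]

# Vanity debt is the total monthly maintenance cost across findings.
vanity_debt = sum(hours for _, _, hours in findings)
print(f"Vanity debt: {vanity_debt:.1f} person-hours/month")  # prints 22.5
```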
For every system or feature reviewed, generate a kill criteria framework. This is the most important deliverable — it prevents vanity engineering from recurring.
Read references/kill-criteria-template.md for the full template, then generate a
project-specific version.
Kill criteria exist because humans are bad at stopping things. We are wired to continue what we started (sunk cost), to add rather than remove (addition bias), and to interpret complexity as value (effort justification). Kill criteria counteract all three by making the stop decision automatic, pre-committed, and ego-independent.
These trigger immediate shutdown with no debate. They exist for situations where continuing causes escalating damage. No human approval needed — if the condition is met, the thing dies.
Examples:
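As a hedged illustration of the hard class, kill criteria can be pre-committed as automated checks so that no human approval is involved. The metric names and thresholds here are hypothetical:

```python
# Hard kill criteria as pre-committed, ego-independent checks.
# Metric names and thresholds are hypothetical examples.
HARD_KILL_CRITERIA = {
    "error_budget_burn": lambda m: m["monthly_incidents"] >= 3,
    "zero_usage": lambda m: m["weekly_active_users"] == 0,
    "maintenance_exceeds_value": lambda m: m["maint_hours_month"]
                                           > 4 * m["feature_hours_month"],
}

def check_hard_kill(metrics: dict) -> list[str]:
    """Return the names of all tripped criteria; any hit means kill, no debate."""
    return [name for name, trips in HARD_KILL_CRITERIA.items() if trips(metrics)]
```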
These do not kill automatically but force a mandatory review with a default-to-kill bias. The burden of proof is on continuing, not on stopping.
Examples:
These define what "success" looks like. If these are not met within the defined timeframe, the default is kill. This inverts the normal dynamic where features survive by default.
30-day evaluation window example:
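One hedged sketch of such a window, with hypothetical criteria, shows the inverted default: survival must be earned, and an unmet criterion at the deadline means kill:

```python
from datetime import date, timedelta

# Hypothetical success criteria for a 30-day evaluation window.
# If any criterion is unmet when the window closes, the default is kill.
SUCCESS_CRITERIA = {
    "adopted_by_second_team": False,
    "handles_real_traffic": True,
    "removed_more_code_than_it_added": False,
}

def evaluate(start: date, today: date, criteria: dict[str, bool]) -> str:
    if today < start + timedelta(days=30):
        return "still in window"
    # Survival must be earned: every criterion must hold at the deadline.
    return "keep" if all(criteria.values()) else "kill"
```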
"If I deleted this, who would notice and when?" If the answer is "nobody" or "only the person who built it," it is vanity.
"Could this be replaced by a simpler thing that does 90% of the job?" If yes, the remaining 10% must justify the additional complexity. It rarely does.
"Could a competent engineer new to this codebase understand this in under an hour?" If not, the abstraction serves the author's mental model, not the team's.
"Is this complexity justified by current scale, or by imagined future scale?" Building for 10M users when you have 500 is not prudent engineering. It is fantasy.
"Would removing this technology from the stack make the project less interesting to talk about in an interview?" If yes, that is probably why it is there.
"Does this dependency earn its keep?" Every dependency is a liability. A library that saves 200 lines but adds 50KB to the bundle and an upgrade treadmill is not earning its keep.
"How many concrete implementations does this abstraction have?" One implementation behind an interface is not abstraction. It is indirection. Two is suspicious. Three is where abstraction starts to pay off.
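The indirection-versus-abstraction distinction in that last question can be seen in a minimal sketch; the `Notifier` example is hypothetical:

```python
from abc import ABC, abstractmethod

# One implementation behind an interface: indirection, not abstraction.
class Notifier(ABC):
    @abstractmethod
    def send(self, msg: str) -> str: ...

class EmailNotifier(Notifier):  # the only implementation that exists
    def send(self, msg: str) -> str:
        return f"email: {msg}"

# The simpler alternative until a second real implementation appears:
def send_email(msg: str) -> str:
    return f"email: {msg}"
```

The interface earns its place only when an `SmsNotifier` or similar second implementation actually ships; until then it is a layer the team pays to traverse.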
Vanity engineering is a specific manifestation of entropy. When the negentropy-lens skill is available, cross-reference findings between the two reviews.
Be direct. Be specific. Name the pattern, show the evidence, propose the simpler alternative. Do not soften findings to protect egos — the entire point of this review is to surface what politeness hides.
However: distinguish vanity from learning. A junior developer over-abstracting is learning abstraction. A senior developer over-abstracting is indulging. Calibrate accordingly.
Frame findings as: "This complexity is not justified by the current requirements. Here is what would be." The goal is a better system, not a humiliated engineer.