YAGNI decision frameworks for evaluating whether to build a feature, detecting speculative generality, and preventing unnecessary feature bloat. Use when planning involves new features, design decisions, or "should we build this" questions during brainstorm, scope, or architecture review. Covers four costs of presumptive features, build-vs-not-build decision framework, speculative generality detection, and bloat antipatterns.
```
npx claudepluginhub smileynet/line-cook --plugin code-spice
```

This skill uses the workspace's default tool permissions.
| Evidence | Build It | Don't Build It |
|---|---|---|
| Concrete user request with measured demand | Yes | — |
| Speculative ("we might need it") | — | Wait for evidence |
| One stakeholder's opinion, no data | — | Validate first |
| Enables a committed near-term deliverable | Yes | — |
| "While we're at it" during adjacent work | — | Separate ticket; evaluate independently |
| Framework/library code (multiple consumers) | Yes (interfaces expected) | — |
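As a rough sketch, the evidence rows above can be collapsed into a checklist: build only when at least one concrete-evidence signal is present. The class and function names below are illustrative, not part of any real API; speculative and "while we're at it" requests carry no evidence flags and default to not building.

```python
from dataclasses import dataclass

@dataclass
class FeatureRequest:
    """Hypothetical evidence flags for a proposed feature."""
    has_measured_demand: bool = False          # concrete user request + data
    enables_committed_deliverable: bool = False  # near-term committed work
    is_framework_code: bool = False            # multiple known consumers

def should_build(req: FeatureRequest) -> bool:
    """Build only when at least one concrete-evidence signal is present."""
    return (
        req.has_measured_demand
        or req.enables_committed_deliverable
        or req.is_framework_code
    )

# "We might need it" carries no evidence flags, so the default is no:
assert should_build(FeatureRequest()) is False
assert should_build(FeatureRequest(has_measured_demand=True)) is True
```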
| Cost | What It Means | Why It Matters |
|---|---|---|
| Build | Time and effort to implement now | Diverts resources from confirmed requirements |
| Delay | Opportunity cost — what else could ship instead | Features have time value; delay erodes ROI |
| Carry | Ongoing maintenance, testing, documentation | Every feature is a liability until proven valuable |
| Repair | Refactoring when assumptions prove wrong | Wrong abstractions are harder to fix than missing ones |
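A back-of-envelope comparison makes the carry cost concrete. All numbers below are hypothetical; the point is that build cost plus years of carry often exceeds a later build, even one paying a retrofit premium.

```python
# Illustrative comparison (all numbers hypothetical): build a
# speculative feature now, or defer until demand actually appears.
build_now = 5           # days to implement today
carry_per_year = 2      # days/year of maintenance, tests, docs (carry cost)
years_until_needed = 3  # demand, if it ever arrives, is ~3 years out

# Building now pays the build cost plus carry for the whole wait.
cost_build_now = build_now + carry_per_year * years_until_needed  # 11 days

# Deferring pays an assumed 50% "repair" premium for retrofitting later.
cost_defer = build_now * 1.5  # 7.5 days

assert cost_defer < cost_build_now  # in this scenario, deferral wins
```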
At Microsoft, Kohavi et al. found that only about one-third of ideas tested in controlled experiments improved the metrics they were designed to improve; roughly another third showed no measurable effect, and the remaining third actually made things worse. Similar results have been reported at Amazon, Netflix, and other companies with rigorous A/B testing. The default assumption should be that a feature will fail to deliver value until measured otherwise.
| Signal | What It Looks Like | Action |
|---|---|---|
| Single-implementation interface | IFooService with only FooServiceImpl | Inline; extract interface when second impl arrives |
| Unused extension points | Plugin registry with one plugin | Remove the registry; add when needed |
| Test-only consumers | Code used only in tests, not production | Likely dead; verify and remove |
| One-type factory | Factory that only creates one type | Replace with direct construction |
| Unused parameters | Parameters passed but never read | Delete (Try Delete Then Compile) |
| Unnecessary delegation | Wrapper that just calls through | Inline the wrapper |
| Future-oriented naming | V2, New, Enhanced prefix/suffix | Rename to describe current behavior |
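The first two signals, a single-implementation interface and a one-type factory, often appear together. A minimal before/after sketch in Python (all names hypothetical):

```python
from typing import Protocol

# Before: speculative generality. An interface with exactly one
# implementation, plus a factory that only ever creates one type.
class PaymentGateway(Protocol):
    def charge(self, cents: int) -> bool: ...

class StripeGateway:
    def charge(self, cents: int) -> bool:
        return cents > 0  # stubbed for the sketch

def gateway_factory(kind: str = "stripe") -> PaymentGateway:
    return StripeGateway()  # one-type factory: no branching, one product

# After: direct construction. Extract the Protocol and the factory
# only when a second gateway implementation actually arrives.
gateway = StripeGateway()
assert gateway.charge(500)
```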
```
Step 1: Requirement Concreteness
├── Concrete (user stories, measured demand, committed deliverable)
│   → Proceed to Step 2
├── Speculative ("might need", "just in case", single opinion)
│   → STOP. Don't build. Revisit when evidence appears.
└── No identified users
    → STOP. Apply YAGNI.

Step 2: Cost of Deferral
├── High (security, data integrity, architectural foundation)
│   → Build now — deferral creates larger problems
├── Medium (performance, UX polish)
│   → Build if within current sprint scope; otherwise defer
└── Low (convenience features, edge cases)
    → Defer — build when concrete demand materializes

Step 3: Codebase Malleability
├── Easy to add later (modular, well-tested, clean interfaces)
│   → Defer — you can add it cheaply when needed
└── Hard to add later (tightly coupled, no tests, deep integration)
    → Consider building now if Steps 1-2 support it
```
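The three steps above can be sketched as a single function. The inputs are simplified to flags and a coarse deferral-cost label; a real evaluation is a judgment call, not a lookup.

```python
def decide(concrete: bool, deferral_cost: str, easy_to_add_later: bool) -> str:
    """Hypothetical sketch of the three-step YAGNI decision framework."""
    # Step 1: requirement concreteness
    if not concrete:
        return "don't build (YAGNI)"
    # Step 2: cost of deferral
    if deferral_cost == "high":    # security, data integrity, foundations
        return "build now"
    if deferral_cost == "medium":  # performance, UX polish
        return "build if within current sprint scope, else defer"
    # Step 3: codebase malleability (reached only for low deferral cost)
    if easy_to_add_later:
        return "defer"
    return "consider building now if Steps 1-2 support it"
```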
| Situation | YAGNI (Don't Build) | Good Design (Do Build) |
|---|---|---|
| Interface with no second implementation | YAGNI — inline it | Good design if needed for testing now |
| Error handling for unlikely scenario | YAGNI if truly unlikely | Good design if failure is catastrophic |
| Configuration for values that never change | YAGNI — hardcode it | Good design if ops needs runtime control |
| Abstraction over a single dependency | YAGNI if dependency is stable | Good design if dependency may change |
| Performance optimization | YAGNI without profiling data | Good design if measured hot path |
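The configuration row is the one teams trip over most often. A sketch of the YAGNI side: hardcode the stable value and promote it to configuration only when ops concretely needs runtime control. The constant and helper below are illustrative, not from any real codebase.

```python
# Hardcoded because it has never varied across environments; promote
# to config only when someone actually needs to change it at runtime.
MAX_RETRIES = 3

def fetch_with_retries(fetch, retries: int = MAX_RETRIES):
    """Call `fetch` up to `retries` times, re-raising the last error."""
    last_err = None
    for _ in range(retries):
        try:
            return fetch()
        except Exception as err:
            last_err = err
    raise last_err
```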
| Evidence Level | Estimated Success Rate | Action |
|---|---|---|
| A/B tested with positive results | ~70-80% | Build with confidence |
| Requested by multiple users with data | ~50-60% | Build, but measure |
| Requested by one stakeholder | ~30-40% | Prototype first, validate |
| "We think users will want this" | ~15-25% | Don't build; gather evidence |
| "We might need this someday" | <10% | YAGNI — park it |