From claude-reliability
Guides systematic implementation of new features: clarify intent with scenarios and requirements, verify assumptions via docs, analyze existing code before changes.
npx claudepluginhub drmaciver/claude-reliability --plugin claude-reliability

This skill uses the workspace's default tool permissions.
Before writing any code, ensure you understand **why** this feature exists and **what success looks like**.
1. Clarify the intent:
2. Sketch out usage scenarios:
3. Define requirements for usefulness:
4. Validate understanding:
Document these findings. Write down the intent, scenarios, and requirements somewhere (a work item note, a comment, or just in your working memory). You'll need to verify against them after implementation.
If you are unsure how to do something or believe it is impossible, always check the official documentation for the relevant tools and libraries. Do not make claims about what an API, library, or tool can or cannot do based on assumptions. Incorrect assumptions waste time and erode trust.
Common mistakes to avoid:
When in doubt, check the docs first. When confident, still consider checking the docs.
Before writing any new code:
Write tests before implementing the feature. This ensures:
Steps:
For each piece of functionality:
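The test-first loop can be sketched in Python. The `slugify` function and its expected behavior here are purely illustrative, not from the source; the point is that the test is written, and defines correctness, before the implementation exists.

```python
import re

# Test written first: it pins down what "correct" means before any code exists.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"

# Minimal implementation written only after the test, to make it pass.
def slugify(title: str) -> str:
    """Lowercase, drop punctuation, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # run the test against the implementation
```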
Actually run the code. Tests are necessary but not sufficient. Before declaring a feature complete, execute it in a realistic scenario.
Why this matters:
How to validate:
For features with a user interface or CLI:
For internal features or APIs:
Use throwaway scripts in scratch directories (tmp/, .scratch/).

For bug fixes:
Iterate until it works:
After validation:
Key principle: Never ask "does this work?" when you can answer it yourself by running the code. Self-validate before asking humans.
Revisit the feature intent:
Transform unstructured data into structured types at system boundaries rather than checking validity repeatedly throughout code. Validation discards information (returning a boolean); parsing preserves it by producing a refined type. Concentrate parsing at the system edge so that internal code can trust its inputs structurally.
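A minimal Python sketch of this idea, with a hypothetical `Email` type: the boundary function parses once and produces a refined type, so internal code never re-checks validity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    """A refined type: holding an instance proves the value was parsed."""
    local: str
    domain: str

def parse_email(raw: str) -> Email:
    """Parse at the system edge; raise on bad input instead of returning a bool."""
    local, sep, domain = raw.partition("@")
    if not sep or not local or "." not in domain:
        raise ValueError(f"not an email address: {raw!r}")
    return Email(local, domain)

# Internal code accepts Email, not str, so validity is guaranteed structurally.
def mailbox_name(email: Email) -> str:
    return email.local
```

Contrast with a `is_valid_email(raw) -> bool` check: the boolean discards the parsed structure, forcing every downstream function to either trust a bare string or validate it again.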
Prefer a Map/Dict over a list of pairs when uniqueness is required. Prefer an enum with fixed variants over a string when there are known valid values.

Move conditional logic up toward callers so callees have simpler, unconditional implementations. Move loops down into lower-level functions so they operate on batches.
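Both moves can be sketched together in Python; the `Status` enum and account shape are hypothetical examples, not from the source.

```python
from enum import Enum

class Status(Enum):
    """Fixed variants instead of free-form strings like "actve" (typo-prone)."""
    ACTIVE = "active"
    SUSPENDED = "suspended"

# Loop pushed down: the callee operates on the whole batch unconditionally.
def activate_all(accounts: list) -> None:
    for account in accounts:
        account["status"] = Status.ACTIVE

# Conditional pushed up: the caller decides; the callee stays simple.
def handle(accounts: list, dry_run: bool) -> None:
    if not dry_run:
        activate_all(accounts)
```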
If a function takes an Optional<X> and the first thing it does is return early on None, change the signature to require X and make the caller responsible for the check.

Every line of code is a liability. Optimize for disposability over extensibility. Write straightforward, slightly duplicated code that can be easily understood and deleted, rather than building elaborate abstractions. Write modules with clear boundaries so that removing a feature means deleting a file, not performing surgery across twenty files.
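The signature change can be sketched in Python; `send_receipt` and `checkout` are illustrative names, not from the source.

```python
from typing import Optional

# Before (anti-pattern): the callee accepts Optional and immediately bails on
# None, so callers can't tell from the signature whether anything happened.
def send_receipt_loose(email: Optional[str]) -> Optional[str]:
    if email is None:
        return None
    return f"receipt sent to {email}"

# After: the callee requires a real value. The None check moves to the caller,
# which is the code that actually knows whether an email was collected.
def send_receipt(email: str) -> str:
    return f"receipt sent to {email}"

def checkout(email: Optional[str]) -> Optional[str]:
    return send_receipt(email) if email is not None else None
```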
Wrong abstractions are more damaging than duplication. DRY only makes sense when it reduces the cost of change more than it increases the cost of understanding. Three similar lines of code is better than a premature abstraction. Resist creating abstractions until they insist upon being created — if you have seen only one or two instances, it is too early to abstract.
Default to established tools and patterns already used in the project. Do not introduce new dependencies, frameworks, or languages without a compelling reason that outweighs the cost of adding something unfamiliar to the stack. Each new dependency is a long-term maintenance commitment.
If you need to use a library or tool and aren't sure of the interface, read the documentation or source code. Wrong guesses lead to wasted effort and broken code. In the face of ambiguity, refuse the temptation to guess — read the reference, use a debugger, be thorough.
Implement what's requested. Don't add configuration options, feature flags, or abstractions that weren't asked for. Three lines of straightforward code is better than a premature abstraction. Before reaching for complex architectures, verify that a simple single-threaded implementation would not be sufficient.
If the project has test infrastructure, use it. If there's a coverage requirement, meet it. Run the tests before declaring the work done.
Before writing error handling code, classify the error. Bugs are conditions you didn't expect (null dereferences, logic errors, failed assertions) — the correct response is to fail fast. Recoverable errors are foreseeable conditions (network failures, parse errors, invalid input) — the correct response is explicit handling: retry, fallback, or propagate.
Do not write elaborate recovery logic for "impossible" conditions. Assert and fail instead. Focus recovery code on genuinely recoverable situations.
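The two error classes can be sketched in Python (function names and ranges are illustrative): an assertion for the "impossible" condition, explicit handling for the foreseeable one.

```python
def apply_discount(price_cents: int, percent: int) -> int:
    # Bug class: a negative price means upstream logic is broken.
    # Fail fast with an assertion instead of writing recovery code.
    assert price_cents >= 0, "price must be non-negative"
    return price_cents * (100 - percent) // 100

def parse_percent(raw: str) -> int:
    # Recoverable class: user input can legitimately be malformed.
    # Handle it explicitly and raise an error the caller can catch.
    try:
        percent = int(raw)
    except ValueError:
        raise ValueError(f"not a number: {raw!r}")
    if not 0 <= percent <= 100:
        raise ValueError(f"percent out of range: {percent}")
    return percent
```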
Errors you can't handle must be propagated, not hidden. Silent error handling makes debugging nearly impossible and hides real problems:
Do not use unwrap_or_default() to hide failures, or let _ = ... to discard Results. Instead, propagate with ?, return Result, or log the error visibly before continuing.

The only valid reason to swallow an error is when the operation is genuinely optional and failing to perform it has no user-visible consequences. Even then, prefer logging.
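The same rule in Python terms, with a hypothetical `load_config` helper: let unexpected errors propagate, handle only the one foreseeable case, and even then log the fallback.

```python
import json
import logging

logger = logging.getLogger(__name__)

def load_config(path: str) -> dict:
    # Bad: `except Exception: return {}` would silently hide bad JSON and
    # permission errors behind an empty config, making debugging miserable.
    # Good: handle only the foreseeable case; let everything else propagate.
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        # Genuinely optional: a missing config file means "use defaults",
        # but we log so the fallback stays visible during debugging.
        logger.info("no config at %s, using defaults", path)
        return {}
```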
Validate preconditions at the top of functions. Use assertions at the end of functions to verify your work. Treat assertions as documentation of invariants — they communicate intent to future readers.
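A sketch of that shape in Python (the `normalize_weights` example is illustrative): preconditions at the top state what callers must provide; the postcondition assertion at the end verifies the function's own work and documents the invariant.

```python
def normalize_weights(weights: list) -> list:
    # Preconditions: document what callers must provide, and fail fast if not.
    assert weights, "weights must be non-empty"
    assert all(w >= 0 for w in weights), "weights must be non-negative"
    total = sum(weights)
    assert total > 0, "at least one weight must be positive"

    result = [w / total for w in weights]

    # Postcondition: verify our own work; the invariant doubles as documentation.
    assert abs(sum(result) - 1.0) < 1e-9, "normalized weights must sum to 1"
    return result
```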
When you need to touch an area of code to deliver a feature, clean up that area as you go. Do not propose sweeping refactoring campaigns across code you do not need to touch for the current task. Small, incremental improvements bundled with feature work are easy to review. If you rename a concept or change an approach, do so consistently throughout the codebase — don't leave a refactoring half-done with two patterns coexisting.
Writing tests after implementation often leads to tests that merely confirm the implementation rather than verifying the intended behavior. Writing tests first helps define what "correct" means before you build it.
That said, writing tests after the fact is fine when needed — especially for edge cases discovered during implementation, for coverage gaps, or when the interface wasn't clear enough upfront. The goal is not to need to write tests after the fact, not to avoid it entirely. If you find yourself there, write the tests — it's better to have them than not.
Before diving into complex debugging, verify: Am I editing the right file? Did the changes actually get saved? Am I running the code I think I am? Is there stale build state? Did I forget a step? These take seconds to check and eliminate the most common causes of "impossible" bugs.
Before marking a feature as complete: