test
Guides TDD with red-green-refactor loop using integration tests on public APIs, avoiding internal mocks and implementation coupling.
```
npx claudepluginhub udecode/dotai --plugin test
```

This skill uses the workspace's default tool permissions.
**Core principle**: Tests should verify behavior through public interfaces, not implementation details. Code can change entirely; tests shouldn't.
Good tests are integration-style: they exercise real code paths through public APIs. They describe what the system does, not how it does it. A good test reads like a specification - "user can checkout with valid cart" tells you exactly what capability exists. These tests survive refactors because they don't care about internal structure.
```ts
// GOOD: Tests observable behavior
test("user can checkout with valid cart", async () => {
  const cart = createCart();
  cart.add(product);
  const result = await checkout(cart, paymentMethod);
  expect(result.status).toBe("confirmed");
});
```
```ts
// GOOD: Verifies through interface
test("createUser makes user retrievable", async () => {
  const user = await createUser({ name: "Alice" });
  const retrieved = await getUser(user.id);
  expect(retrieved.name).toBe("Alice");
});
```
Characteristics of good tests:

- Exercise real code paths through the public API
- Describe what the system does, not how it does it
- Read like a specification of a capability
- Survive internal refactors because they don't depend on implementation structure
Bad tests are coupled to implementation. They mock internal collaborators, test private methods, or verify through external means (like querying a database directly instead of using the interface). The warning sign: your test breaks when you refactor, but behavior hasn't changed.
```ts
// BAD: Tests implementation details
test("checkout calls paymentService.process", async () => {
  const processSpy = jest.spyOn(paymentService, "process");
  await checkout(cart, payment);
  expect(processSpy).toHaveBeenCalledWith(cart.total);
});
```
```ts
// BAD: Bypasses interface to verify
test("createUser saves to database", async () => {
  await createUser({ name: "Alice" });
  const row = await db.query("SELECT * FROM users WHERE name = ?", ["Alice"]);
  expect(row).toBeDefined();
});
```
Red flags:

- The test mocks internal collaborators
- The test exercises private methods directly
- The test verifies through external means (querying the database directly) instead of the interface
- The test breaks on refactor even though behavior hasn't changed
Prefer writing tests before implementation. If you've already written code, consider starting fresh from tests rather than retrofitting — tests written after tend to verify what you built, not what's required.
Mock at system boundaries only: external services and third-party APIs the code talks to. Don't mock: your own modules or internal collaborators.
Use dependency injection — pass external dependencies in rather than creating them internally:
```ts
// Easy to mock
function processPayment(order, paymentClient) {
  return paymentClient.charge(order.total);
}

// Hard to mock
function processPayment(order) {
  const client = new StripeClient(process.env.STRIPE_KEY);
  return client.charge(order.total);
}
```
Prefer SDK-style interfaces — specific functions for each external operation:
```ts
// GOOD: Each function is independently mockable
const api = {
  getUser: (id) => fetch(`/users/${id}`),
  getOrders: (userId) => fetch(`/users/${userId}/orders`),
  createOrder: (data) => fetch("/orders", { method: "POST", body: data }),
};

// BAD: Mocking requires conditional logic inside the mock
const api = {
  fetch: (endpoint, options) => fetch(endpoint, options),
};
```
Accept dependencies, don't create them
```ts
// Testable
function processOrder(order, paymentGateway) {}

// Hard to test
function processOrder(order) {
  const gateway = new StripeGateway();
}
```
Return results, don't produce side effects
```ts
// Testable
function calculateDiscount(cart): Discount {}

// Hard to test
function applyDiscount(cart): void {
  cart.total -= discount;
}
```
Small surface area — fewer methods = fewer tests needed, fewer params = simpler test setup
Deep modules (from "A Philosophy of Software Design"): small interface + lots of implementation. When designing, ask: Can I reduce methods? Simplify params? Hide more complexity inside?
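As a rough illustration of the contrast (the file-store names and methods below are hypothetical, not part of this skill):

```ts
// Shallow: wide interface, callers orchestrate every step, each method needs its own tests
interface ShallowFileStore {
  open(path: string): number;
  readChunk(fd: number, size: number): Uint8Array;
  close(fd: number): void;
}

// Deep: one small method hides the same complexity, so one behavior-level test covers it
interface DeepFileStore {
  read(path: string): Uint8Array;
}
```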
DO NOT write all tests first, then all implementation. This is "horizontal slicing" - treating RED as "write all tests" and GREEN as "write all code."
This produces poor tests: they are written blind, before any implementation feedback exists, so they guess at interfaces and bake in assumptions that later cycles would have corrected.
Correct approach: Vertical slices via tracer bullets. One test → one implementation → repeat. Each test responds to what you learned from the previous cycle.
```
WRONG (horizontal):
  RED:   test1, test2, test3, test4, test5
  GREEN: impl1, impl2, impl3, impl4, impl5

RIGHT (vertical):
  RED→GREEN: test1→impl1
  RED→GREEN: test2→impl2
  RED→GREEN: test3→impl3
  ...
```
Before writing any code:
Ask: "What should the public interface look like? Which behaviors are most important to test?"
You can't test everything. Confirm with the user exactly which behaviors matter most. Focus testing effort on critical paths and complex logic, not every possible edge case.
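A planning sketch can be as small as the intended signatures plus the agreed behaviors; everything named below is hypothetical:

```ts
// Planned public interface for a hypothetical checkout feature (no bodies yet)
export type PaymentMethod = { token: string };
export type Cart = { items: string[]; total: number };
export type CheckoutResult = { status: "confirmed" | "rejected" };

export declare function createCart(): Cart;
export declare function checkout(cart: Cart, payment: PaymentMethod): Promise<CheckoutResult>;

// Behaviors confirmed with the user as most important:
// 1. valid cart + valid payment → confirmed result
// 2. empty cart → rejected result
```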
Write ONE test that confirms ONE thing about the system:
RED: Write test → run test → confirm it FAILS correctly
GREEN: Write minimal code → run test → confirm it PASSES
This is your tracer bullet - proves the path works end-to-end.
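Continuing the hypothetical checkout interface sketched above, one tracer-bullet cycle might look like this (test/expect globals assumed, as in the earlier examples):

```ts
// RED: one test, one behavior - run it and watch it fail (checkout doesn't exist yet)
test("checkout confirms a cart with items", async () => {
  const cart = createCart();
  cart.items.push("sku-1");
  const result = await checkout(cart, { token: "tok_test" });
  expect(result.status).toBe("confirmed");
});

// GREEN: the least code that makes this one test pass; later cycles force it to grow
export async function checkout(cart: Cart, payment: PaymentMethod): Promise<CheckoutResult> {
  return { status: "confirmed" };
}
```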
For each remaining behavior:
RED: Write next test → run test → confirm it FAILS correctly
GREEN: Write minimal code → run test → confirm it PASSES
Rules:

- One behavior per test, one test per cycle
- Watch each test fail for the expected reason before writing implementation code
- Write only enough code to make the current test pass
- Keep every previously passing test green before moving on
After all tests pass, look for refactor candidates:

- Duplication → extract function/class
- Long methods → break into private helpers
- Shallow modules → combine or deepen
- Feature envy → move logic to where the data lives
- Primitive obsession → introduce value objects
Never refactor while RED. Get to GREEN first.
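For example, one way the last candidate can play out; the `Money` value object here is illustrative, not prescribed:

```ts
// Before: primitive obsession - raw cents invite unit and rounding mistakes
function applyTaxRaw(totalCents: number, rate: number): number {
  return Math.round(totalCents * (1 + rate));
}

// After: a small value object owns the rounding rule; behavior tests stay green
class Money {
  constructor(readonly cents: number) {}
  times(factor: number): Money {
    return new Money(Math.round(this.cents * factor));
  }
}

function applyTax(total: Money, rate: number): Money {
  return total.times(1 + rate);
}
```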
- [ ] Test describes behavior, not implementation
- [ ] Test uses public interface only
- [ ] Test would survive internal refactor
- [ ] Code is minimal for this test
- [ ] No speculative features added
- [ ] Watched test fail before writing code
- [ ] Failure was for expected reason (missing feature, not typo)
- [ ] All other tests still pass
TDD applies to bug fixes — write a test that reproduces the bug first.
RED: Write a test that reproduces the bug:

```ts
// Bug: empty email passes validation
test("rejects empty email", () => {
  const result = validateEmail("");
  expect(result.valid).toBe(false);
});
```

→ Run test → FAILS (empty string passes validation) ✓

GREEN: Add the check: `if (!email || !email.includes("@")) return { valid: false }`

→ Run test → PASSES ✓

Verify all other validation tests still pass.
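A minimal sketch of the fixed validator, assuming it previously only checked for an "@" somewhere in the string:

```ts
type ValidationResult = { valid: boolean };

function validateEmail(email: string): ValidationResult {
  // Added guard: reject empty/missing input along with strings lacking "@"
  if (!email || !email.includes("@")) return { valid: false };
  return { valid: true };
}
```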
Compile-time type assertions. No runtime — just `bun typecheck`. Catches regressions in generics, conditional types, and type constraints that runtime tests can't see.
When: generic APIs, utility types, complex inference, mapped/conditional types, ensuring invalid usage errors. Not: trivial stuff like a `string` prop accepting a `string`.
Search for a file exporting `Expect` and `Equal`. If none exists, create one:
```ts
export function Expect<T extends true>() {}

export type Equal<X, Y> =
  (<T>() => T extends X ? 1 : 2) extends (<T>() => T extends Y ? 1 : 2)
    ? true
    : false;

export type Not<T extends boolean> = T extends true ? false : true;
export type IsAny<T> = 0 extends 1 & T ? true : false;
export type IsNever<T> = [T] extends [never] ? true : false;
```
- `Expect<T extends true>` — compile error = test failure
- `Equal<X, Y>` — exact type equality (defeats `any` widening)
- `Not`, `IsAny`, `IsNever` — edge case guards (`any`/`never` break naive comparisons)

```ts
import { Expect, Equal, Not, IsAny } from "./utils";

// Block scope each test to avoid name collisions
{
  type Result = ReturnType<typeof myGenericFn<SomeInput>>;
  Expect<Equal<Result, { id: string; name: string }>>;
  Expect<Not<IsAny<Result>>>;
}
```
`@ts-expect-error` must be on the line immediately before the error. Always include a reason. Unused directive = failing test (constraint is missing).
```ts
// ✅ directive on line immediately before error
doSomething({
  // @ts-expect-error - name must be string
  name: 123,
});

// ❌ directive too far from error
doSomething({
  // @ts-expect-error - name must be string
  ...defaults,
  name: 123,
});
```
- `declare const` for mock values without runtime: `declare const ctx: SomeCtx;`
- `type _name = Expect<...>` when you need a type-level-only assertion (no runtime `Expect()` call needed)
- `/* biome-ignore-all lint */` at the top of type-only files — suppresses unused variable warnings

Run with `bun typecheck`. If it compiles, it passes.
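Putting those tips together, a type-only test file might look like this sketch; `buildQuery`, `SomeCtx`, and the `./query` module are placeholders:

```ts
/* biome-ignore-all lint */
import { Expect, Equal, Not, IsAny } from "./utils";
import type { SomeCtx } from "./query";
import { buildQuery } from "./query";

// Mock value without runtime: declared, never constructed
declare const ctx: SomeCtx;

{
  // Block scope keeps Result from colliding with other tests in this file
  type Result = ReturnType<typeof buildQuery<typeof ctx>>;
  Expect<Equal<Result, { sql: string; params: unknown[] }>>;
  Expect<Not<IsAny<Result>>>;
}
```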