Use when implementing features or fixes: test-driven development with the RED-GREEN-REFACTOR cycle and 100% code coverage enforcement.
```shell
/plugin marketplace add troykelly/claude-skills
/plugin install issue-driven-development@troykelly-skills
```
Test-Driven Development with full code coverage.
Core principle: If you didn't watch the test fail, you don't know if it tests the right thing.
Announce at start: "I'm using TDD to implement this feature."
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
Wrote code before a test? Delete it. Start over.
```
     ┌─────────────────────────────────────────────┐
     │                                             │
     ▼                                             │
┌───────┐       ┌───────┐       ┌──────────┐       │
│  RED  │──────►│ GREEN │──────►│ REFACTOR │───────┘
└───────┘       └───────┘       └──────────┘
 Write           Write           Clean
 failing         minimal         up code
 test            code            (stay green)
```
Write ONE test for ONE behavior.
```typescript
// Test one specific thing
test('rejects empty email', async () => {
  const result = await validateEmail('');
  expect(result.valid).toBe(false);
  expect(result.error).toBe('Email is required');
});
```
MANDATORY. Never skip.
```shell
pnpm test --grep "rejects empty email"
```
Confirm:
If test passes → You're testing existing behavior. Fix the test.
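One way to be sure the RED step fails for the right reason is to start from a stub that throws until GREEN implements it. A minimal sketch (the stub and names are illustrative, not part of this skill's API):

```typescript
type ValidationResult = { valid: boolean; error?: string };

// Hypothetical stub: throws until the GREEN step implements it.
function validateEmail(_email: string): ValidationResult {
  throw new Error('not implemented');
}

// Simulating the RED run: the test fails because the stub throws,
// which proves the test exercises code that does not exist yet.
let redFailed = false;
try {
  validateEmail('');
} catch {
  redFailed = true;
}
```

If `redFailed` were false here, the test would be passing against existing behavior and needs fixing before any production code is written.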
Write the SIMPLEST code to pass the test.
```typescript
function validateEmail(email: string): ValidationResult {
  if (!email) {
    return { valid: false, error: 'Email is required' };
  }
  return { valid: true };
}
```
Don't add:
MANDATORY.
```shell
pnpm test --grep "rejects empty email"
```
Confirm:
After green, improve code quality:
Keep tests green during refactoring.
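A sketch of a behavior-preserving refactor of the `validateEmail` example above (the helper names are illustrative). The observable behavior is identical, so the existing test stays green:

```typescript
type ValidationResult = { valid: boolean; error?: string };

// Refactor: extract the magic string into a named constant and
// introduce small result helpers. Behavior is unchanged.
const EMAIL_REQUIRED = 'Email is required';

const invalid = (error: string): ValidationResult => ({ valid: false, error });
const ok = (): ValidationResult => ({ valid: true });

function validateEmail(email: string): ValidationResult {
  if (!email) return invalid(EMAIL_REQUIRED);
  return ok();
}
```

Adding new behavior (e.g. format validation) is NOT refactoring — that starts a new RED cycle.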
Write next failing test for next behavior.
```shell
# Check coverage
pnpm test --coverage

# Verify new code is covered
# Lines: 100%
# Branches: 100%
# Functions: 100%
# Statements: 100%
```
| Covered | Not Covered (Fix It) |
|---|---|
| All branches tested | Some if/else paths missed |
| All functions called | Unused functions |
| All error handlers triggered | Error paths untested |
| All edge cases verified | Only happy path |
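The "branches missed" row is the most common gap. A hedged sketch (function and names are illustrative): line coverage can hit 100% from the happy path alone, but branch coverage also requires triggering the guard:

```typescript
// Hypothetical function with two branches.
function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new Error('Discount must be between 0 and 100');
  }
  return price - (price * percent) / 100;
}

// Happy path exercises one branch...
const discounted = applyDiscount(200, 10);

// ...full branch coverage also requires hitting the error branch.
let threw = false;
try {
  applyDiscount(200, 150);
} catch {
  threw = true;
}
```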
These MAY have lower coverage (discuss with team):
Document exceptions in coverage config:
```javascript
// jest.config.js
module.exports = {
  coverageThreshold: {
    global: {
      branches: 100,
      functions: 100,
      lines: 100,
      statements: 100,
    },
  },
  coveragePathIgnorePatterns: [
    '/node_modules/',
    '/generated/',
    'config.ts',
  ],
};
```
Core principle: Unit tests with mocks are necessary but not sufficient. You MUST ALSO test against real services.
| Layer | Purpose | Uses Mocks? | Uses Real Services? |
|---|---|---|---|
| Unit Tests (TDD) | Verify logic, enable RED-GREEN-REFACTOR | YES | No |
| Integration Tests | Verify real service behavior | No | YES |
Both layers are REQUIRED. Unit tests alone miss real-world failures. Integration tests alone are too slow for TDD.
We've experienced 80% failure rates with ORM migrations when relying on mocks alone. Mocks don't catch: schema mismatches, constraint violations, migration failures, connection issues, or transaction behavior.
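A minimal sketch of why a mock can hide a schema mismatch (the function, column names, and bug are illustrative): the unit test shapes its mock to match the code, so a wrong column name passes; only a row shaped like the real schema exposes it.

```typescript
type Row = Record<string, unknown>;

// Bug: the code reads `emailAddress`, but the real column is `email`.
function getEmail(row: Row): string | undefined {
  return row.emailAddress as string | undefined;
}

// Unit-test mock, shaped to match the (buggy) code: passes.
const mockRow = { emailAddress: 'test@example.com' };
const fromMock = getEmail(mockRow);

// Row shaped like the real schema: the same code silently returns undefined.
const realRow = { email: 'test@example.com' };
const fromReal = getEmail(realRow);
```

An integration test against the real database would surface `fromReal` being undefined; the mock never can.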
| Code Change | Unit Tests (with mocks) | Integration Tests (with real services) |
|---|---|---|
| Database model/migration | ✅ Required | ✅ Also required |
| Repository/ORM layer | ✅ Required | ✅ Also required |
| Cache operations | ✅ Required | ✅ Also required |
| Pub/sub messages | ✅ Required | ✅ Also required |
| Queue workers | ✅ Required | ✅ Also required |
After completing TDD cycle (unit tests with mocks):
1. Start local services (`docker-compose up -d`)
2. Run migrations (`pnpm migrate`)
3. Write integration tests against the real services

```typescript
// LAYER 1: Unit tests with mocks (TDD cycle)
describe('UserRepository (unit)', () => {
  const mockDb = { query: jest.fn() };
  const userRepo = new UserRepository(mockDb); // inject the mock

  it('calls correct SQL for findById', async () => {
    mockDb.query.mockResolvedValue([{ id: 1, email: 'test@example.com' }]);
    await userRepo.findById(1);
    expect(mockDb.query).toHaveBeenCalledWith('SELECT * FROM users WHERE id = $1', [1]);
  });
});
```
```typescript
// LAYER 2: Integration tests with real postgres (ALSO required)
describe('UserRepository (integration)', () => {
  beforeAll(async () => {
    await db.migrate.latest();
  });

  it('actually persists and retrieves users', async () => {
    await userRepo.create({ email: 'test@example.com' });
    const user = await userRepo.findByEmail('test@example.com');
    expect(user).toBeDefined();
    expect(user.email).toBe('test@example.com');
  });

  it('enforces unique email constraint', async () => {
    await userRepo.create({ email: 'unique@example.com' });
    // Real postgres will throw - mocks won't catch this
    await expect(
      userRepo.create({ email: 'unique@example.com' })
    ).rejects.toThrow(/unique constraint/);
  });
});
```
Skill: local-service-testing
```typescript
// GOOD: Clear name, tests one thing
test('calculates tax for positive amount', () => {
  const result = calculateTax(100, 0.08);
  expect(result).toBe(8);
});

test('returns zero tax for zero amount', () => {
  const result = calculateTax(0, 0.08);
  expect(result).toBe(0);
});

test('throws for negative amount', () => {
  expect(() => calculateTax(-100, 0.08)).toThrow('Amount must be positive');
});

// BAD: Tests multiple things
test('calculateTax works', () => {
  expect(calculateTax(100, 0.08)).toBe(8);
  expect(calculateTax(0, 0.08)).toBe(0);
  expect(() => calculateTax(-100, 0.08)).toThrow();
});

// BAD: Tests mock, not real code
test('calls the tax service', () => {
  const mockTaxService = jest.fn().mockReturnValue(8);
  const result = calculateTax(100, 0.08);
  expect(mockTaxService).toHaveBeenCalled(); // Testing mock, not behavior
});
```
```typescript
test('description', () => {
  // Arrange - set up test data
  const user = createTestUser({ email: 'test@example.com' });
  const input = { userId: user.id, action: 'update' };

  // Act - perform the action
  const result = processAction(input);

  // Assert - verify the outcome
  expect(result.success).toBe(true);
  expect(result.timestamp).toBeDefined();
});
```
```typescript
test('throws for invalid input', () => {
  expect(() => validateInput(null)).toThrow(ValidationError);
  expect(() => validateInput(null)).toThrow('Input is required');
});

test('async throws for invalid input', async () => {
  await expect(asyncValidate(null)).rejects.toThrow(ValidationError);
});

test('logs error on failure', async () => {
  const logSpy = jest.spyOn(logger, 'error');
  await processWithFailure();
  expect(logSpy).toHaveBeenCalledWith(
    expect.stringContaining('Failed to process')
  );
});
```
| Mock | Don't Mock |
|---|---|
| External APIs | Your own code |
| Database (in unit tests; covered by integration tests) | Simple functions |
| File system | Pure logic |
| Time/dates | Deterministic code |
| Network requests | Internal modules |
```typescript
// GOOD: Mock the external boundary
const fetchMock = jest.spyOn(global, 'fetch').mockResolvedValue(
  new Response(JSON.stringify({ data: 'test' }))
);

// BAD: Mock internal implementation
const internalMock = jest.spyOn(utils, 'internalHelper');
```
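For the time/dates row above, one option besides jest's fake timers is to inject a clock at the boundary. A sketch under illustrative names (`Clock`, `isExpired` are not from this skill):

```typescript
type Clock = () => number;

// Take the clock as a parameter instead of calling Date.now() directly,
// so tests can pass a fixed value and stay deterministic.
function isExpired(createdAt: number, ttlMs: number, now: Clock = Date.now): boolean {
  return now() - createdAt >= ttlMs;
}

// In tests, inject a fixed clock rather than mocking internals.
const fixedNow: Clock = () => 10_000;
const expired = isExpired(0, 5_000, fixedNow);   // 10s elapsed, 5s TTL
const fresh = isExpired(8_000, 5_000, fixedNow); // only 2s elapsed
```

Production code still defaults to the real `Date.now`, so only tests need to pass a clock.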
| Problem | Solution |
|---|---|
| Test passes when should fail | Check assertion (expect syntax) |
| Test fails unexpectedly | Check test isolation (cleanup) |
| Flaky tests | Remove timing dependencies |
| Hard to test | Improve code design |
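The "test isolation" row usually means shared mutable state leaking between tests. A minimal sketch (the registry and names are illustrative) of the reset-between-tests pattern that fixes it:

```typescript
// Shared mutable state: leaks between tests unless reset.
const registry: string[] = [];

function register(name: string): number {
  registry.push(name);
  return registry.length;
}

function resetRegistry(): void {
  registry.length = 0;
}

// Test A (the reset would normally live in beforeEach)
resetRegistry();
const a = register('alice');

// Test B: without its own reset, this would see Test A's state
// and return 2 instead of 1.
resetRegistry();
const b = register('bob');
```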
Before completing a feature:
This skill is called by:
- issue-driven-development - Steps 7, 8, 11

This skill uses:
- strict-typing - Tests should be typed
- inline-documentation - Document test utilities

This skill ensures: