Guides E2E and integration testing for TypeScript/NestJS projects with Jest, Supertest, Docker (Kafka, PostgreSQL, MongoDB, Redis), and the Given-When-Then pattern. Covers setup, writing, review, running, debugging, and optimizing tests (including flakiness and isolation) for .e2e-spec.ts files and test/e2e/ directories.
npx claudepluginhub bmad-labs/skills --plugin bmad-skills
This skill uses the workspace's default tool permissions.
E2E testing validates complete workflows from user perspective, using real infrastructure via Docker.
For comprehensive step-by-step guidance, use the appropriate workflow:
| Workflow | When to Use |
|---|---|
| Setup E2E Test | Setting up E2E infrastructure for a new or existing project |
| Writing E2E Test | Creating new E2E test cases with proper GWT pattern |
| Review E2E Test | Reviewing existing tests for quality and correctness |
| Running E2E Test | Executing tests with proper verification |
| Debugging E2E Test | Systematically fixing failing tests |
| Optimize E2E Test | Improving test suite performance |
IMPORTANT: Before starting any E2E testing task, identify the user's intent and load the appropriate workflow.
| User Says / Wants | Workflow to Load | File |
|---|---|---|
| "Set up E2E tests", "configure docker-compose", "add E2E to project", "create test helpers" | Setup | workflows/setup/workflow.md |
| "Write E2E tests", "add integration tests", "test this endpoint", "create e2e-spec" | Writing | workflows/writing/workflow.md |
| "Review E2E tests", "check test quality", "audit tests", "is this test correct?" | Reviewing | workflows/review/workflow.md |
| "Run E2E tests", "execute tests", "start docker and test", "check if tests pass" | Running | workflows/running/workflow.md |
| "Fix E2E tests", "debug tests", "tests are failing", "flaky test", "connection error" | Debugging | workflows/debugging/workflow.md |
| "Speed up E2E tests", "optimize tests", "tests are slow", "reduce test time" | Optimizing | workflows/optimize/workflow.md |
References to read

Important: Each workflow includes instructions to load relevant knowledge from the references/ folder before and after completing tasks.
references/
├── common/ # Shared testing fundamentals
│ ├── knowledge.md # Core E2E concepts and test pyramid
│ ├── rules.md # Mandatory testing rules (GWT, timeouts, logging)
│ ├── best-practices.md # Test design and cleanup patterns
│ ├── test-case-creation-guide.md # GWT templates for all scenarios
│ ├── nestjs-setup.md # NestJS app bootstrap and Jest config
│ ├── debugging.md # VS Code config and log analysis
│ └── examples.md # Comprehensive examples by category
│
├── kafka/ # Kafka-specific testing
│ ├── knowledge.md # Why common approaches fail, architecture
│ ├── rules.md # Kafka-specific testing rules
│ ├── test-helper.md # KafkaTestHelper implementation
│ ├── docker-setup.md # Redpanda/Kafka Docker configs
│ ├── performance.md # Optimization techniques
│ ├── isolation.md # Pre-subscription pattern details
│ └── examples.md # Kafka test examples
│
├── postgres/ # PostgreSQL-specific testing
│ ├── knowledge.md # PostgreSQL testing concepts
│ ├── rules.md # Cleanup, transaction, assertion rules
│ ├── test-helper.md # PostgresTestHelper implementation
│ └── examples.md # CRUD, transaction, constraint examples
│
├── mongodb/ # MongoDB-specific testing
│ ├── knowledge.md # MongoDB testing concepts
│ ├── rules.md # Document cleanup and assertion rules
│ ├── test-helper.md # MongoDbTestHelper implementation
│ ├── docker-setup.md # Docker and Memory Server setup
│ └── examples.md # Document and aggregation examples
│
├── redis/ # Redis-specific testing
│ ├── knowledge.md # Redis testing concepts
│ ├── rules.md # TTL and pub/sub rules
│ ├── test-helper.md # RedisTestHelper implementation
│ ├── docker-setup.md # Docker configuration
│ └── examples.md # Cache, session, rate limit examples
│
└── api/ # API testing (REST, GraphQL, gRPC)
├── knowledge.md # API testing concepts
├── rules.md # Request/response assertion rules
├── test-helper.md # Auth and Supertest helpers
├── examples.md # REST, GraphQL, validation examples
└── mocking.md # MSW and Nock external API mocking
Tip: For detailed step-by-step guidance, use the Workflows section above.
Workflow: Setup E2E Test
- references/common/knowledge.md - Understand E2E fundamentals
- references/common/nestjs-setup.md - Project setup
- docker-setup.md files as needed

Workflow: Writing E2E Test
- references/common/rules.md - GWT pattern, timeouts
- references/common/test-case-creation-guide.md - Templates
- Kafka: references/kafka/knowledge.md → test-helper.md → isolation.md
- PostgreSQL: references/postgres/rules.md → test-helper.md
- MongoDB: references/mongodb/rules.md → test-helper.md
- Redis: references/redis/rules.md → test-helper.md
- API: references/api/rules.md → test-helper.md

Workflow: Review E2E Test
- references/common/rules.md - Check against mandatory patterns
- references/common/best-practices.md - Quality standards
- Technology-specific rules.md files

Workflow: Running E2E Test
- npm run test:e2e > /tmp/e2e-${E2E_SESSION}-output.log 2>&1

Workflow: Debugging E2E Test
- references/common/debugging.md
- /tmp/e2e-${E2E_SESSION}-failures.md tracking file

Workflow: Optimize E2E Test
- references/common/best-practices.md - Performance patterns
- references/kafka/performance.md for Kafka tests
- references/common/examples.md for general patterns
- Technology-specific examples.md for detailed scenarios

ALWAYS redirect E2E test output to temp files, NOT console. E2E output is verbose and bloats agent context.
IMPORTANT: Redirect output to temp files only (NO console output). Use unique session ID to prevent conflicts.
# Generate unique session ID at start of debugging session
export E2E_SESSION=$(date +%s)-$$
# Standard pattern - redirect to file only (no console output)
npm run test:e2e > /tmp/e2e-${E2E_SESSION}-output.log 2>&1
# Read summary only (last 50 lines)
tail -50 /tmp/e2e-${E2E_SESSION}-output.log
# Get failure details
grep -B 2 -A 15 "FAIL\|✕" /tmp/e2e-${E2E_SESSION}-output.log
# Cleanup when done
rm -f /tmp/e2e-${E2E_SESSION}-*.log /tmp/e2e-${E2E_SESSION}-*.md
Temp Files (with ${E2E_SESSION} unique per agent):
- /tmp/e2e-${E2E_SESSION}-output.log - Full test output
- /tmp/e2e-${E2E_SESSION}-failures.log - Filtered failure output
- /tmp/e2e-${E2E_SESSION}-failures.md - Tracking file for one-by-one fixing
- /tmp/e2e-${E2E_SESSION}-debug.log - Debug runs
- /tmp/e2e-${E2E_SESSION}-verify.log - Verification runs

Test against actual services via Docker. Never mock databases or message brokers for E2E tests.
ALL E2E tests MUST follow Given-When-Then:
it('should create user and return 201', async () => {
// GIVEN: Valid user data
const userData = { email: 'test@example.com', name: 'Test' };
// WHEN: Creating user
const response = await request(httpServer)
.post('/users')
.send(userData)
.expect(201);
// THEN: User created with correct data
expect(response.body.data.email).toBe('test@example.com');
});
Each test MUST be independent:
- Clean up shared state (rows, documents, keys, message buffers) in beforeEach.

Assert exact values, not just existence:
// WRONG
expect(response.body.data).toBeDefined();
// CORRECT
expect(response.body).toMatchObject({
code: 'SUCCESS',
data: { email: 'test@example.com', name: 'Test' }
});
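Exact assertions pair well with unique per-test data, which keeps tests independent even when cleanup is imperfect. A minimal sketch (the helper name is an illustrative assumption, not one of the skill's helpers):

```typescript
// Hypothetical helper: generate a unique email per test so leftover rows
// from earlier runs can never collide with or satisfy this test's assertions.
function uniqueEmail(prefix = 'test'): string {
  const nonce = `${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;
  return `${prefix}-${nonce}@example.com`;
}

// Usage in a test's GIVEN step:
// const userData = { email: uniqueEmail(), name: 'Test' };
```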
project-root/
├── src/
├── test/
│ ├── e2e/
│ │ ├── feature.e2e-spec.ts
│ │ ├── setup.ts
│ │ └── helpers/
│ │ ├── test-app.helper.ts
│ │ ├── postgres.helper.ts
│ │ ├── mongodb.helper.ts
│ │ ├── redis.helper.ts
│ │ └── kafka.helper.ts
│ └── jest-e2e.config.ts
├── docker-compose.e2e.yml
├── .env.e2e
└── package.json
// test/jest-e2e.config.ts
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  testMatch: ['**/*.e2e-spec.ts'],
  testTimeout: 25000,
  maxWorkers: 1, // CRITICAL: Sequential execution
  clearMocks: true,
  forceExit: true,
  detectOpenHandles: true,
};

export default config;
| Technology | Wait Time | Strategy |
|---|---|---|
| Kafka | 10-20s max (polling) | Smart polling with 50ms intervals |
| PostgreSQL | <1s | Direct queries |
| MongoDB | <1s | Direct queries |
| Redis | <100ms | In-memory operations |
| External API | 1-5s | Network latency |
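The "smart polling" strategy in the table can be sketched as a generic helper that probes a condition every 50ms instead of sleeping for a fixed worst-case time (the function name is an assumption; the skill's test helpers wrap the same idea):

```typescript
// Poll an async probe every `intervalMs` until it returns a defined value,
// failing once the deadline passes. Tests finish as soon as data arrives
// instead of always paying the maximum wait.
async function pollUntil<T>(
  probe: () => Promise<T | undefined>,
  timeoutMs: number,
  intervalMs = 50,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const result = await probe();
    if (result !== undefined) return result;
    if (Date.now() >= deadline) {
      throw new Error(`pollUntil: condition not met within ${timeoutMs}ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

For Kafka this means a 20s timeout is an upper bound, not a fixed cost: most runs return in well under a second.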
CRITICAL: Fix ONE test at a time. NEVER run full suite repeatedly while fixing.
When E2E tests fail:
1. Generate a session ID: export E2E_SESSION=$(date +%s)-$$
2. Create /tmp/e2e-${E2E_SESSION}-failures.md listing all failing tests
3. Debug ONE test at a time: npm run test:e2e -- -t "test name" > /tmp/e2e-${E2E_SESSION}-debug.log 2>&1
4. Read the summary: tail -50 /tmp/e2e-${E2E_SESSION}-debug.log
5. Verify stability (5 consecutive passes): for i in {1..5}; do npm run test:e2e -- -t "test name" > /tmp/e2e-${E2E_SESSION}-run$i.log 2>&1 && echo "Run $i: PASS" || echo "Run $i: FAIL"; done
6. Clean up: rm -f /tmp/e2e-${E2E_SESSION}-*.log /tmp/e2e-${E2E_SESSION}-*.md

WHY: Running the full suite wastes time and context. Each failing test pollutes output, making debugging harder.
beforeEach(async () => {
await new Promise(r => setTimeout(r, 500)); // Wait for in-flight
await repository.clear(); // PostgreSQL
// OR
await model.deleteMany({}); // MongoDB
});
// Use pre-subscription + buffer clearing (NOT fromBeginning: true)
const kafkaHelper = new KafkaTestHelper();
await kafkaHelper.subscribeToTopic(outputTopic, false);
// In beforeEach: kafkaHelper.clearMessages(outputTopic);
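The buffer-clearing idea behind clearMessages can be sketched as a per-topic in-memory buffer fed by one long-lived consumer (the class and method names here are illustrative assumptions, not the skill's actual KafkaTestHelper API):

```typescript
// Illustrative sketch: a long-lived consumer pushes every received message
// into a per-topic buffer; tests clear the buffer in beforeEach instead of
// re-subscribing, so subscription cost is paid once per suite.
class TopicBuffer {
  private buffers = new Map<string, unknown[]>();

  // Called from the consumer's message callback.
  push(topic: string, message: unknown): void {
    const buf = this.buffers.get(topic) ?? [];
    buf.push(message);
    this.buffers.set(topic, buf);
  }

  // Called in beforeEach to drop messages left over from previous tests.
  clearMessages(topic: string): void {
    this.buffers.set(topic, []);
  }

  messages(topic: string): unknown[] {
    return this.buffers.get(topic) ?? [];
  }
}
```

This is why pre-subscription beats fromBeginning: true — old messages never re-enter a test's view, and isolation comes from clearing a local buffer rather than replaying the topic.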
beforeEach(async () => {
await redis.flushdb();
});
mockServer.use(
http.post('https://api.external.com/endpoint', () => {
return HttpResponse.json({ status: 'success' });
})
);
// Use smart polling instead of fixed waits
await kafkaHelper.publishEvent(inputTopic, event, event.id);
const messages = await kafkaHelper.waitForMessages(outputTopic, 1, 20000);
expect(messages[0].value).toMatchObject({ id: event.id });
All commands redirect output to temp files only (no console output).
# Initialize session (once at start)
export E2E_SESSION=$(date +%s)-$$
# Run specific test (no console output)
npm run test:e2e -- -t "should create user" > /tmp/e2e-${E2E_SESSION}-output.log 2>&1 && tail -50 /tmp/e2e-${E2E_SESSION}-output.log
# Run specific file
npm run test:e2e -- test/e2e/user.e2e-spec.ts > /tmp/e2e-${E2E_SESSION}-output.log 2>&1 && tail -50 /tmp/e2e-${E2E_SESSION}-output.log
# Run full suite
npm run test:e2e > /tmp/e2e-${E2E_SESSION}-output.log 2>&1 && tail -50 /tmp/e2e-${E2E_SESSION}-output.log
# Get failure details from last run
grep -B 2 -A 15 "FAIL\|✕" /tmp/e2e-${E2E_SESSION}-output.log
# Debug with breakpoints (requires console for interactive debugging)
node --inspect-brk node_modules/.bin/jest --config test/jest-e2e.config.ts --runInBand
# View application logs (limited)
tail -100 logs/e2e-test.log
grep -i error logs/e2e-test.log | tail -50
# Cleanup session files
rm -f /tmp/e2e-${E2E_SESSION}-*.log /tmp/e2e-${E2E_SESSION}-*.md