Guides walkerOS package testing with env mocking instead of Jest, dev example linkages via it.each, and real behavior assertions for reliable, documented tests.
walkerOS uses a layered testing approach with built-in patterns for mocking and documentation sync. This skill ensures tests are reliable, efficient, and maintainable.
Core principle: Test real behavior using the env pattern, link to dev examples, verify before claiming complete.

## env for Mocking, Not Jest

walkerOS has a built-in dependency injection pattern via `env` in context. This is lighter than Jest mocks, enables documentation generation, and keeps tests in sync with examples.
Wrong:

```typescript
jest.mock('../ga4', () => ({ initGA4: jest.fn() }));
expect(initGA4).toHaveBeenCalledWith(...);
```
Right:

```typescript
import { examples } from '../dev';
import { mockEnv } from '@walkeros/core';

const calls: Array<{ path: string[]; args: unknown[] }> = [];
const testEnv = mockEnv(examples.env.push, (path, args) => {
  calls.push({ path, args });
});

await destination.push(event, { ...context, env: testEnv });

expect(calls).toContainEqual({
  path: ['window', 'gtag'],
  args: ['event', 'page_view', { page_title: 'Home' }],
});
```
## dev Examples

The `dev.ts` export provides `examples.env`, `examples.events`, `examples.mapping`, and `examples.step`. Using these in tests ensures documentation stays in sync.
```typescript
import { examples } from '../dev';

// Use examples.env for mock environment
const testEnv = mockEnv(examples.env.push, interceptor);

// Assert against examples.events (documented expected output)
expect(calls[0].args).toEqual(examples.events.ga4PageView());

// Test with examples.mapping configurations
const config = { mapping: examples.mapping.ecommerce };
```
## it.each

Step examples (`examples.step`) provide `{ in, out }` pairs for each step. Use `it.each` to iterate over them:
```typescript
import { examples } from '../dev';

describe('step examples', () => {
  it.each(Object.entries(examples.step))(
    '%s',
    async (name, { in: input, out: expected }) => {
      const result = await step.push(input, context);
      if (expected === false) {
        expect(result).toBe(false);
      } else {
        expect(result).toEqual(expected);
      }
    },
  );
});
```
See `using-step-examples` for the full lifecycle, including the Three Type Zones and naming conventions.
If you're asserting that a mock was called, you're testing the mock works, not the code.

Red flags:

- `expect(mockFn).toHaveBeenCalled()` without verifying the mock produces real effects
- `*-mock` test IDs

Fix: Test what the code actually does. If external APIs must be mocked, verify the real API would receive correct data.
If you didn't see the test fail, you don't know it tests the right thing.

Process: write the test first, run it, and confirm it fails for the expected reason before making it pass.
Production classes shouldn't have methods only tests use.

Wrong:

```typescript
class Session {
  destroy() {
    /* only used in tests */
  }
}
```

Right:

```typescript
// In test-utils/
export function cleanupSession(session: Session) { ... }
```
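To make the split concrete, here is a minimal self-contained sketch. The `Session` shape and the body of `cleanupSession` are assumptions for illustration, not walkerOS APIs:

```typescript
// Hypothetical Session with a production-only API; the shape is assumed.
class Session {
  readonly id: string;
  active = true;
  constructor(id: string) {
    this.id = id;
  }
  // No destroy() here: production code never needs one.
}

// test-utils/session.ts (sketch): test-only teardown lives outside the class.
function cleanupSession(session: Session): void {
  session.active = false;
}

const session = new Session('abc');
cleanupSession(session);
// session.active is now false
```

The production class stays minimal, and the teardown logic is only shipped with the test utilities.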
"Should pass now" is not verification.

Process: run the test suite and read the actual output before reporting that it passes.
| Type | When to Add | Example |
|---|---|---|
| Integration | New usage pattern, new external API interaction, new data flow path | Collector → Destination → gtag() |
| Unit | Combinatorics, edge cases, pure function logic | Mapping variations, core utilities |
| Contract | Boundary validation | Destination output matches vendor API, source input validation |
Guideline: Integration tests prove things work when stuck together. Unit tests efficiently cover variations. Contract tests catch API drift.
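As a sketch of the contract-test idea from the table above, the check below validates a captured gtag-style call against an expected vendor shape. `validateGa4Event` and the payload shape are illustrative assumptions, not part of walkerOS or the GA4 SDK:

```typescript
// Hypothetical vendor shape for a gtag('event', name, params) call.
interface VendorEvent {
  name: string;
  params: Record<string, unknown>;
}

// Contract check: throws when a captured call drifts from the vendor API.
function validateGa4Event(args: unknown[]): VendorEvent {
  const [command, name, params] = args;
  if (command !== 'event')
    throw new Error(`expected "event" command, got ${String(command)}`);
  if (typeof name !== 'string' || name.length === 0)
    throw new Error('event name must be a non-empty string');
  if (typeof params !== 'object' || params === null)
    throw new Error('params must be an object');
  return { name, params: params as Record<string, unknown> };
}

// A contract test feeds calls captured via mockEnv through the validator:
const parsed = validateGa4Event(['event', 'page_view', { page_title: 'Home' }]);
// parsed.name is 'page_view'
```

Running such a validator over every captured call turns silent API drift into a failing test.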
## Simulation Testing

Simulation testing uses the CLI `push` command with `--simulate` flags. The collector does not export a `simulate()` function — simulation is a CLI concern that maps to mock/disabled config properties at runtime.
CLI usage:

```shell
# Simulate a destination (mocks its push, captures API calls)
walkeros push flow.json -e '{"entity":"page","action":"view"}' --simulate destination.ga4

# Simulate a source (captures events it pushes, disables all destinations)
walkeros push flow.json --simulate source.browser

# Mock a destination with a specific return value
walkeros push flow.json -e event.json --mock destination.ga4='{"status":"ok"}'
```
Programmatic usage:

```typescript
import { push } from '@walkeros/cli';

// Simulate a destination
const destinationResult = await push('flow.json', { entity: 'page', action: 'view' }, {
  simulate: ['destination.ga4'],
});
// destinationResult.usage = API call tracking data from wrapEnv

// Simulate a source
const sourceResult = await push('flow.json', undefined, {
  simulate: ['source.browser'],
});
// sourceResult.captured = events captured from source env.push
```
Key points:

- `--simulate destination.X` sets `config.mock = {}` on the target and `config.disabled = true` on all other destinations
- `--simulate source.X` wraps `env.push` with a capture function and disables all destinations
- `/dev` `env.push` is auto-loaded to provide mock globals (fake `window.gtag`, etc.)
- Returns `PushResult` with `result`, `captured` (source), and `usage` (destination)
- `mockEnv()` and the env pattern examples above remain correct for unit testing individual step functions directly

| Package | Approach |
|---|---|
| core | Unit tests only - pure functions, no env needed |
| collector | Integration tests critical - input/output consistency is paramount |
| browser source | Maintain walker algorithm coverage |
| web destinations | Integration tests per unique pattern + unit tests for mappings, use env pattern |
| server destinations | Same as web destinations |
| cli/docker | Integration tests for spawn behavior, explore dev pattern to reduce duplication |
| sources | Contract tests for input validation, integration tests for event capture |
Each destination/source defines an `env` type that specifies external dependencies:
```typescript
// Destination-specific env type
export interface Env extends DestinationWeb.Env {
  window: {
    gtag: Gtag.Gtag;
    dataLayer: unknown[];
  };
  document: {
    createElement: (tagName: string) => HTMLElement;
    head: { appendChild: (node: unknown) => void };
  };
}
```
The `mockEnv()` function from `@walkeros/core` creates a Proxy that intercepts all function calls:
```typescript
import { mockEnv } from '@walkeros/core';

const calls: Array<{ path: string[]; args: unknown[] }> = [];
const testEnv = mockEnv(examples.env.push, (path, args) => {
  calls.push({ path, args });
  // Optionally return a value
});

// Now use testEnv in your destination context
await destination.push(event, { ...context, env: testEnv });

// Assert on captured calls
expect(calls).toContainEqual({
  path: ['window', 'gtag'],
  args: ['event', 'purchase', expect.objectContaining({ value: 99.99 })],
});
```
Each package with external dependencies should have:
```typescript
// src/dev.ts
export * as schemas from './schemas';
export * as examples from './examples';

// src/examples/index.ts
export * as env from './env';
export * as events from './events';
export * as mapping from './mapping';
export * as step from './step'; // Step examples { in, out }
```
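A step example module might look like the sketch below. The example names and event shapes are made up for illustration, but the `{ in, out }` contract (with `out: false` for dropped events) matches the it.each pattern shown earlier:

```typescript
// src/examples/step.ts (sketch): each named entry is an { in, out } pair.
type StepExample = {
  in: Record<string, unknown>;
  out: Record<string, unknown> | false;
};

const step: Record<string, StepExample> = {
  pageView: {
    in: { entity: 'page', action: 'view' },
    out: { name: 'page_view' },
  },
  droppedDebug: {
    in: { entity: 'internal', action: 'debug' },
    out: false, // false documents that the step drops this event
  },
};

// The it.each pattern iterates over these entries by name:
const names = Object.keys(step);
// names is ['pageView', 'droppedDebug']
```

Each pair doubles as documentation: the names become test titles, and the `out` values are the documented expected outputs.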
Sources accept platform dependencies via `env`. Mock `window`, `document`, or library imports by passing them through `env` instead of mocking globals:
```typescript
// Instead of mocking window.performance globally:
const mockWindow = {
  performance: {
    getEntriesByType: jest.fn().mockReturnValue([{ type: 'navigate' }]),
  },
  location: { href: 'https://test.com/' },
} as unknown as Window & typeof globalThis;

await createSessionSource(collector, undefined, { window: mockWindow });

// Instead of mocking express import:
const mockExpress = Object.assign(jest.fn().mockReturnValue(mockApp), {
  json: jest.fn().mockReturnValue(middleware),
});

await sourceExpress(createSourceContext({}, { express: mockExpress as never }));
```
This pattern avoids global state pollution between tests and enables simulation in non-browser environments.
Avoid `jest.mock()` for internal modules when the env pattern is available; import shared examples from `../dev` instead.

```shell
# Run all tests
npm run test

# Run tests for specific package
cd packages/[name] && npm run test

# Run single test file
npm run test -- path/to/file.test.ts

# Watch mode
npm run test -- --watch
```