Skill: applying-outbox-pattern

Apply when creating, refactoring, changing, planning (plan mode), or reviewing any code that implements the transactional outbox pattern. This includes adding or modifying any outbox-related files/classes, and also applies when performing dual writes in the codebase, especially in the application service layer.
From kotlin-patterns

Install
Run in your terminal:
$ npx claudepluginhub allousas/claude-code-plugins --plugin kotlin-patterns

Tool Access
This skill uses the workspace's default tool permissions.
Supporting Assets
examples.md (view in repository)

Skill Content
Purpose
Guarantee that domain events are published reliably by storing them in an outbox table within the same database transaction as the state change. A separate process reads the outbox and publishes events to the message broker, ensuring at-least-once delivery and avoiding the dual-write problem.
Typical Flow
- Application service performs the domain operation and persists the state change
- In the same transaction, the application service writes the event to the outbox table via `OutboxRepository`
- The event is an integration event (e.g. a protobuf or JSON message), not a domain event object
- The transaction commits atomically (state change + outbox entry)
- A separate poller or CDC process reads unpublished outbox entries
- The poller publishes events to Kafka (or another broker)
- The poller marks outbox entries as published
- Consumers handle events idempotently (at-least-once delivery means duplicates are possible)
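The flow above can be sketched with a minimal in-memory simulation. All names here (`OutboxEntry`, `placeOrder`, `pollOutbox`, the order map) are illustrative stand-ins, not part of the skill; in a real system the two writes in `placeOrder` would sit inside one database transaction.

```kotlin
import java.time.Instant
import java.util.UUID

// One outbox row, mirroring the columns the skill recommends:
// id, aggregate_id, event_type, payload (JSON), created_at, published.
data class OutboxEntry(
    val id: UUID = UUID.randomUUID(),
    val aggregateId: String,
    val eventType: String,
    val payload: String,              // integration event serialized as JSON
    val createdAt: Instant = Instant.now(),
    var published: Boolean = false,
)

// In-memory stand-ins for the database tables and the broker.
val orders = mutableMapOf<String, String>()
val outbox = mutableListOf<OutboxEntry>()
val brokerLog = mutableListOf<String>()

// Steps 1-3: state change and outbox write "commit" together.
fun placeOrder(orderId: String) {
    orders[orderId] = "PLACED"
    outbox += OutboxEntry(
        aggregateId = orderId,
        eventType = "OrderPlaced",
        payload = """{"orderId":"$orderId"}""",
    )
}

// Steps 4-6: poller reads unpublished entries in order, publishes, marks them.
fun pollOutbox() {
    outbox.filter { !it.published }
        .sortedBy { it.createdAt }
        .forEach { entry ->
            brokerLog += entry.payload    // broker delivery first
            entry.published = true        // then mark as published
        }
}

fun main() {
    placeOrder("o-1")
    pollOutbox()
}
```

If the poller crashes between publishing and marking, the entry is re-published on the next run, which is exactly why step 7 (idempotent consumers) is required.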
Guidelines
DO:
- Ensure the application service (or whatever code the upper-layer action triggers) is wrapped in a transaction
- Store events in the outbox table within the same database transaction as the entity state change
- Include `id`, `aggregate_id`, `event_type`, `payload` (JSON), `created_at`, and a `published` flag in the outbox table
- Use a separate scheduled poller or CDC (Change Data Capture) process to read and publish events
- Mark events as published only after successful broker delivery
- Combine with the `DomainEventPublisher` pattern to publish domain events
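One way to shape the `OutboxRepository` mentioned above is the contract below. The method names (`save`, `findUnpublished`, `markPublished`) are assumptions for illustration; in Spring this would typically be a `JpaRepository` or a `JdbcTemplate`-backed class, and the in-memory implementation is only useful for tests.

```kotlin
import java.time.Instant
import java.util.UUID

data class OutboxRecord(
    val id: UUID,
    val aggregateId: String,
    val eventType: String,
    val payload: String,
    val createdAt: Instant,
    val published: Boolean,
)

// Hypothetical repository contract for the outbox table.
interface OutboxRepository {
    fun save(record: OutboxRecord)
    // Returns unpublished entries ordered by created_at (per the guidelines).
    fun findUnpublished(limit: Int): List<OutboxRecord>
    fun markPublished(id: UUID)
}

// Minimal in-memory implementation for unit tests.
class InMemoryOutboxRepository : OutboxRepository {
    private val rows = linkedMapOf<UUID, OutboxRecord>()

    override fun save(record: OutboxRecord) { rows[record.id] = record }

    override fun findUnpublished(limit: Int): List<OutboxRecord> =
        rows.values.filter { !it.published }.sortedBy { it.createdAt }.take(limit)

    override fun markPublished(id: UUID) {
        rows[id]?.let { rows[id] = it.copy(published = true) }
    }
}
```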
DON'T:
- Publish to Kafka and write to the DB in separate transactions (the dual-write problem: one can succeed while the other fails)
- Delete outbox entries immediately after publishing - keep them for debugging/auditing and clean them up with a scheduled job
- Process outbox entries without ordering guarantees - use `created_at` ordering per aggregate
- Skip idempotency in consumers - at-least-once delivery means duplicates will happen
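The idempotency point can be sketched as consumer-side deduplication keyed on the event id. The class and field names here are illustrative; in production the processed-id set would live in a durable store (a DB table or Redis), not in memory.

```kotlin
// Deduplicates deliveries by event id, so a redelivered message
// (normal under at-least-once semantics) is handled exactly once.
class IdempotentConsumer(private val handle: (String) -> Unit) {
    private val processed = mutableSetOf<String>()

    // eventId should be the outbox entry's id, carried in the message headers or body.
    fun onMessage(eventId: String, payload: String) {
        if (!processed.add(eventId)) return  // already seen: skip the duplicate
        handle(payload)
    }
}
```

Keying on the outbox entry id (rather than payload contents) makes deduplication cheap and unambiguous even when two distinct events happen to carry identical payloads.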
Spring specifics
- Use `@Scheduled` for the outbox poller
- The poller runs in its own `@Transactional` to read and mark entries
- Use `@EnableScheduling` in a configuration class
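The three Spring points above can be wired together roughly as follows. This is a sketch, not compile-ready code: `OutboxRepository` and `EventPublisher` are assumed collaborators (a repository with `findUnpublished`/`markPublished` operations and a thin wrapper around the Kafka producer), and the `fixedDelay` value is arbitrary.

```kotlin
import org.springframework.context.annotation.Configuration
import org.springframework.scheduling.annotation.EnableScheduling
import org.springframework.scheduling.annotation.Scheduled
import org.springframework.stereotype.Component
import org.springframework.transaction.annotation.Transactional

// @EnableScheduling lives in a configuration class, per the guidelines.
@Configuration
@EnableScheduling
class SchedulingConfig

@Component
class OutboxPoller(
    private val repository: OutboxRepository,   // assumed contract: findUnpublished / markPublished
    private val publisher: EventPublisher,      // assumed wrapper around the Kafka producer
) {
    // The poller runs in its own transaction: read a batch, publish, mark.
    @Scheduled(fixedDelay = 500)
    @Transactional
    fun poll() {
        repository.findUnpublished(limit = 100).forEach { entry ->
            publisher.publish(entry)            // broker delivery first
            repository.markPublished(entry.id)  // then mark as published
        }
    }
}
```

Marking only after `publish` returns keeps the at-least-once guarantee: a crash between the two calls leads to redelivery, never to a lost event.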
Examples
Always use these examples as a reference: examples.md
Stats
Parent Repo Stars: 1
Parent Repo Forks: 0
Last Commit: Feb 25, 2026