From guidewire-pack
Consume Guidewire App Events into downstream systems (SQS/SNS, Kafka, webhooks) and survive the event-side failures — events not firing because Gosu registration was missed, duplicates from queue redelivery, out-of-order arrival on the same resource, replay from a checkpoint for backfill, and back-pressure when consumers cannot keep up with producers. Use when registering App Events in Gosu, building an event-consumer service, or recovering from a missed-event window. Trigger with "guidewire app events", "guidewire webhooks", "guidewire event consumer", "guidewire event replay", "guidewire idempotent consumer".
npx claudepluginhub flight505/skill-forge --plugin guidewire-pack
Wire Guidewire's event system into downstream consumers — analytics warehouses, fraud-detection services, broker-portal cache invalidators, customer-notification services. Guidewire emits App Events (typed business events fired on entity-state transitions); they are configured server-side in Gosu and routed to a destination (SQS, Kafka, or webhook URL). The consumer side has its own production failure modes that this skill addresses.
Five production failures this skill prevents:
1. **Missed registration**: a claim.bound event that was never registered in Gosu; the consumer waits forever, and no error surfaces because there is nothing to error on.
2. **Duplicates**: queue redelivery hands the consumer the same message more than once.
3. **Out-of-order arrival**: claim.reserve.changed arrives before claim.created; the consumer rejects the reserve event because the parent claim does not yet exist locally.
4. **Missed-event windows**: a consumer outage loses events unless they can be replayed from a checkpoint.
5. **Back-pressure**: producers outrun consumers and queue depth grows without bound.

Prerequisite skills: guidewire-install-auth and guidewire-sdk-patterns.

Build the integration in this order. Each step targets one of the five production failures listed in Overview.
Events that are not registered do not fire. The registration lives in gw.api.messaging.MessageEvents (or a carrier-customized equivalent) and pairs an event code with a Gosu callback that decides whether to emit the event and what payload to build.
// modules/configuration/gsrc/com/acme/messaging/ClaimEventBuilder.gs
package com.acme.messaging
uses gw.api.messaging.MessageContext
uses entity.Claim
class ClaimEventBuilder {
static function buildClaimStatusChangedEvent(ctx: MessageContext, claim: Claim): String {
return new gw.api.web.json.JsonObject() {{
put("eventType", "claim.status.changed")
put("messageId", java.util.UUID.randomUUID().toString())
put("eventTime", java.time.Instant.now().toString())
put("claimId", claim.PublicID)
put("claimNumber", claim.ClaimNumber)
put("oldStatus", ctx.PreviousValue?.toString())
put("newStatus", claim.State.Code)
put("policyNumber", claim.Policy.PolicyNumber)
}}.toString()
}
}
Register the destination in config/Messaging.xml so the InsuranceSuite messaging engine knows which channel (SQS, webhook, Kafka) routes the event. Without that XML entry, the Gosu callback exists but never fires.
Every event payload includes a messageId (a UUID generated by the producer). The consumer dedups on it before processing. The dedup window must exceed the queue's max-redelivery-window — for SQS with 24-hour message retention, dedup TTL ≥ 7 days is safe.
async function handleEvent(msg: SqsMessage): Promise<void> {
const event = JSON.parse(msg.Body);
const isNew = await redis.set(`evt:${event.messageId}`, "1", "EX", 7 * 86400, "NX"); // "OK" if stored, null if the key already existed
if (!isNew) {
return; // duplicate: already processed; ack and skip
}
await processEvent(event); // your business logic
}
SET ... NX (set-if-not-exists) makes the dedup atomic — concurrent workers cannot both decide a duplicate is novel.
Events for the same claim can arrive in arbitrary order; the consumer must tolerate this without rejecting them.
async function processEvent(event: Event): Promise<void> {
const local = await getLocalClaim(event.claimId);
switch (event.eventType) {
case "claim.created":
if (!local) await createLocalClaim(event);
break;
case "claim.status.changed":
if (!local) {
await deferEvent(event, "waiting-on-claim-created");
return;
}
await applyStatusChange(local, event);
break;
}
}
The deferEvent helper writes the event to a holding table; a periodic re-processor retries deferred events when their dependencies might have arrived. Events older than a TTL (e.g., 24h) escalate to manual review — a deferred event still missing dependencies after a day indicates a real producer bug.
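The periodic re-processor described above can be sketched as a loop over injected helpers. The `DeferredEvent` shape and the `load`, `retry`, `escalate`, and `remove` callbacks are illustrative assumptions, not part of any Guidewire API:

```typescript
// Sketch of the deferred-event re-processor. Holding-table access is
// abstracted behind callbacks so the sweep itself stays testable; all
// names here are assumptions.
interface DeferredEvent {
  messageId: string;
  body: string;      // original event JSON
  reason: string;    // e.g. "waiting-on-claim-created"
  deferredAt: Date;
}

const DEFER_TTL_MS = 24 * 3600 * 1000; // after 24h, escalate to manual review

async function reprocessDeferred(
  load: () => Promise<DeferredEvent[]>,
  retry: (e: DeferredEvent) => Promise<boolean>, // true = dependencies arrived and event applied
  escalate: (e: DeferredEvent) => Promise<void>, // route to manual review
  remove: (messageId: string) => Promise<void>,  // delete from holding table
): Promise<void> {
  for (const e of await load()) {
    if (Date.now() - e.deferredAt.getTime() > DEFER_TTL_MS) {
      await escalate(e); // still missing dependencies after a day: likely a real producer bug
      await remove(e.messageId);
    } else if (await retry(e)) {
      await remove(e.messageId);
    }
    // otherwise: leave it for the next sweep
  }
}
```

Run it on a timer; because retries go through the same dedup-protected handler, a sweep that overlaps normal processing is harmless.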
If the consumer goes down or a downstream system needs to be rebuilt, replay events from a checkpoint. Guidewire's messaging system retains events server-side per the configured retention; in addition, the consumer should persist its own checkpoint (last-processed eventTime per event type).
await db.upsert("event_checkpoint", {
consumer: "broker-portal-cache",
event_type: "policy.bound",
last_event_time: maxEventTimeInBatch,
updated_at: new Date(),
});
async function replay(consumer: string, eventType: string, fromTime: Date): Promise<void> {
const res = await fetch(`${BASE}/cc/rest/v1/events?eventType=${eventType}&since=${fromTime.toISOString()}`);
const events: unknown[] = await res.json(); // assumes the endpoint returns a JSON array
for (const e of events) await handleEvent({ Body: JSON.stringify(e) } as any);
}
Replay must be idempotent — that is why the consumer's messageId dedup must outlive the replay window.
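A sketch of a batch loop that keeps both properties: replay stays idempotent, and the checkpoint advances only after every event in the batch has been handled. `BatchEvent`, `handle`, and `saveCheckpoint` are assumed helper names matching the surrounding examples:

```typescript
// Sketch: process a batch, then flush the checkpoint once. If the process
// dies mid-batch the checkpoint stays put, the batch is re-fetched on
// restart, and the messageId dedup absorbs the repeats.
interface BatchEvent { messageId: string; eventTime: string }

async function processBatch(
  events: BatchEvent[],
  handle: (e: BatchEvent) => Promise<void>,
  saveCheckpoint: (lastEventTime: string) => Promise<void>,
): Promise<void> {
  if (events.length === 0) return;
  for (const e of events) {
    await handle(e); // messageId dedup inside the handler makes re-runs safe
  }
  // max eventTime in the batch; ISO-8601 UTC strings sort lexicographically
  const max = events.map((e) => e.eventTime).sort().slice(-1)[0];
  await saveCheckpoint(max); // flushed only after the whole batch succeeded
}
```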
If queue depth grows past a threshold, the consumer is losing ground. Three responses, pre-baked rather than improvised at 3am:
# CloudWatch alarm: SQS queue depth > 10000 for 15min
on-alarm:
- autoscale: increase consumer replicas to 4x
- if not catching up after 15min more:
- emit metric `consumer-saturation` to incident pipeline
- on-call paged
- if queue retention near expiry:
- last-resort: emergency cap on producer-side rate limit
The autoscale path handles transient bursts; the cap path is for sustained saturation that needs a producer-side conversation.
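The escalation ladder above can be pre-baked as a pure decision function, so the 3am response is computed rather than improvised. Thresholds mirror the runbook; the names are illustrative, not an AWS or Guidewire API:

```typescript
// Sketch: map queue-depth telemetry to one of the runbook's responses.
type BackpressureAction = "none" | "autoscale" | "page-oncall" | "cap-producer";

function backpressureResponse(opts: {
  queueDepth: number;              // current SQS approximate depth
  minutesOverThreshold: number;    // how long depth has exceeded the alarm level
  stillGrowingAfterScale: boolean; // depth rising despite 4x replicas
  retentionNearExpiry: boolean;    // oldest message close to queue retention limit
}): BackpressureAction {
  if (opts.retentionNearExpiry) return "cap-producer"; // last resort: imminent data loss
  if (opts.queueDepth <= 10_000 || opts.minutesOverThreshold < 15) return "none";
  if (!opts.stillGrowingAfterScale) return "autoscale"; // transient burst
  return "page-oncall"; // sustained saturation: autoscale did not help
}
```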
A production-grade event integration ships with all of the following:
- A Gosu registration and a Messaging.xml destination entry for every business event the consumer needs; absent registrations explicitly documented as out-of-scope.
- Dedup on messageId with TTL ≥ queue retention window.

Example: a renewal event builder.

class RenewalEventBuilder {
static function build(ctx: MessageContext, policy: Policy): String {
return new gw.api.web.json.JsonObject() {{
put("eventType", "policy.renewed")
put("messageId", java.util.UUID.randomUUID().toString())
put("eventTime", java.time.Instant.now().toString())
put("policyNumber", policy.PolicyNumber)
put("renewedFrom", policy.RenewedFromPolicy?.PolicyNumber)
put("effectiveDate", policy.EffectiveDate.toString())
put("totalPremium", policy.TotalPremium.Amount.toString())
}}.toString()
}
}
case "claim.payment.created": {
  // braces give the case its own block scope for `const claim`
  const claim = await getLocalClaim(event.claimId);
  if (!claim) {
    await deferEvent(event, "missing claim parent");
    return;
  }
  if (!claim.exposures.find(e => e.id === event.exposureId)) {
    await deferEvent(event, "missing exposure parent");
    return;
  }
  await applyPayment(claim, event);
  break;
}
# Replay all policy.bound events since last successful checkpoint
LAST=$(psql -tAc "SELECT last_event_time FROM event_checkpoint WHERE consumer='broker-portal' AND event_type='policy.bound'")
node scripts/replay.js --consumer=broker-portal --type=policy.bound --since="$LAST"
| Symptom | Cause | Solution |
|---|---|---|
| Event subscription set up but no events arriving | Gosu registration missing or Messaging.xml entry missing | confirm both; the Gosu callback alone does not route |
| Same downstream record created twice | consumer not deduping on messageId | wire the Redis SET NX dedup; backfill cleanup of duplicates is painful |
| Consumer rejects event with "parent not found" | out-of-order arrival; parent event has not been processed yet | use the deferred-events queue; do not reject |
| Events lost during consumer outage | no replay tooling | implement checkpoint + replay; without it, outages are data-loss events |
| Queue depth growing 24/7 | producer faster than consumer | scale consumer; if scaling does not help, partition by entity-id |
| Replay creates duplicates downstream | consumer dedup TTL too short, or checkpoint not flushed atomically | extend dedup TTL; flush checkpoint only after batch fully processes |
| Webhook endpoint returning 5xx for valid events | endpoint capacity or bug | Guidewire retries with backoff; eventually goes to DLQ; investigate the endpoint |
| Same messageId showing different payloads in DLQ | producer bug — messageId is supposed to be unique per message | escalate to Guidewire support / config team; consumer cannot fix this |
For deeper coverage (Kafka partitioning strategies, exactly-once semantics across boundaries, schema evolution for event payloads, multi-tenant event fan-out), see implementation guide and API reference.
- guidewire-install-auth — auth between Guidewire and the messaging destination if it requires bearer tokens
- guidewire-core-workflow-a — the bind/issue/renewal events this skill consumes are emitted by that workflow
- guidewire-core-workflow-b — the FNOL/reserve/payment events this skill consumes
- guidewire-observability-and-incident-response — queue-depth and saturation alerts that drive this skill's back-pressure response