From dotnet-developer
Write a Wolverine command handler with aggregate loading and cascading messages.
```bash
npx claudepluginhub hpsgd/turtlestack --plugin dotnet-developer
```
Write a Wolverine handler for $ARGUMENTS.
Before writing the handler:
1. Read existing handlers — match the project's patterns:

   ```bash
   grep -rn "AggregateHandler\|public static.*Handle(" --include="*.cs" | head -20
   ```

2. Identify the aggregate — which Marten aggregate does this handler operate on?
3. Identify the message — what command triggers this handler?
4. Identify the side effects — what downstream messages should cascade?
5. Check for existing messages — reuse existing command/event types where appropriate
Choose the correct handler pattern based on what the handler does:
| Pattern | When to use | Aggregate loading |
|---|---|---|
| [AggregateHandler] with aggregate parameter | Handler operates on a Marten event-sourced aggregate | Automatic — Wolverine loads by convention |
| Static handler with IDocumentSession | Handler operates on document store data | Manual — load in handler |
| Static handler with external service | Handler calls external APIs or infrastructure | No aggregate — orchestration only |
| Handler returning cascading messages | Handler triggers downstream work | Any of the above + return type |
Example — an aggregate handler with a guard and a single cascade:

```csharp
[AggregateHandler]
public static class TriggerCrawlExtractionHandler
{
    public static object? Handle(
        TriggerCrawlExtraction command,
        Crawl crawl,
        IDocumentSession session,
        CancellationToken ct)
    {
        // Guard: only trigger extraction if crawl is in the right state
        if (crawl.Status != CrawlStatus.Completed)
        {
            // Return null — no cascade, no error. Silently skip
            return null;
        }

        // One thing: mark the crawl as extracting
        crawl.Status = CrawlStatus.Extracting;
        crawl.ExtractionStartedAt = DateTimeOffset.UtcNow;
        session.Store(crawl);

        // Cascade: return the next command to process
        return new ExtractCrawlPages(crawl.Id, crawl.Pages.Select(p => p.Id).ToList());
    }
}
```

(The method is synchronous — nothing here is awaited, so `object?` is the right return type rather than `async Task<object?>`.)
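The message types the handler consumes and cascades aren't shown; here is a minimal sketch of what it assumes. The record names and shapes are inferred from the example, not taken from a real codebase:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical message types assumed by the handler above.
// "CrawlId" follows the {AggregateName}Id naming convention,
// which is how [AggregateHandler] resolves which Crawl to load.
public record TriggerCrawlExtraction(Guid CrawlId);

// The cascaded command carries everything downstream handlers need.
public record ExtractCrawlPages(Guid CrawlId, List<Guid> PageIds);
```

Because the identity travels with the command, Wolverine can load the aggregate before invoking `Handle` — the handler never queries for it.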
[AggregateHandler] rules:

- The command must have an Id property (or a property named {AggregateName}Id) that maps to the aggregate identity

Cascading returns are how handlers trigger downstream work: the return value of Handle is automatically published as a message.
```csharp
// Single cascade — return one message
public static CrawlCompleted Handle(CompleteCrawl command, Crawl crawl)
{
    crawl.Status = CrawlStatus.Completed;
    return new CrawlCompleted(crawl.Id);
}

// Multiple cascades — return a tuple
public static (CrawlCompleted, NotifySourceOwner) Handle(CompleteCrawl command, Crawl crawl)
{
    crawl.Status = CrawlStatus.Completed;
    return (
        new CrawlCompleted(crawl.Id),
        new NotifySourceOwner(crawl.SourceId, $"Crawl {crawl.Id} completed")
    );
}

// Polymorphic cascade — return object? for branching
public static object? Handle(ProcessCrawlResult command, Crawl crawl)
{
    return command.Success
        ? new CrawlCompleted(crawl.Id)
        : new CrawlFailed(crawl.Id, command.Error);
}

// No cascade — return void or null
public static void Handle(LogCrawlMetrics command, ILogger logger)
{
    logger.LogInformation("Crawl {CrawlId} processed {Pages} pages", command.CrawlId, command.PageCount);
    // No return — fire and forget
}

// Fan-out — return IEnumerable for N cascading messages
public static IEnumerable<ExtractPage> Handle(
    ExtractCrawlPages command,
    Crawl crawl)
{
    // ONE message per page — not one handler processing N pages inline
    return command.PageIds.Select(pageId => new ExtractPage(crawl.Id, pageId));
}
```
Cascading rules:

- Valid cascade return types: a concrete message type, a tuple, object?, IEnumerable<T>, or void
- For batch work, return IEnumerable<T> and let each item process independently
- Return null (with an object? return type) to skip the cascade — no downstream work needed

Fan out instead of looping. This is the most important rule in Wolverine handler design.
```csharp
// WRONG — processing N items inline
public static async Task Handle(ProcessAllPages command, IDocumentSession session)
{
    var pages = await session.Query<Page>()
        .Where(p => p.CrawlId == command.CrawlId)
        .ToListAsync();

    foreach (var page in pages) // BAD: if page 47 fails, pages 1-46 are lost
    {
        await ExtractContent(page);
        session.Store(page);
    }
}

// CORRECT — fan out to individual handlers
public static IEnumerable<ExtractPage> Handle(
    ExtractCrawlPages command,
    Crawl crawl)
{
    return crawl.Pages.Select(p => new ExtractPage(crawl.Id, p.Id));
}

// Each page is an independent unit of work
[AggregateHandler]
public static class ExtractPageHandler
{
    public static PageExtracted Handle(ExtractPage command, Page page)
    {
        page.Content = ExtractContent(page.Html);
        return new PageExtracted(page.Id);
    }
}
```
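To see concretely why fan-out wins, here is a toy simulation in plain C# — no Wolverine, and the types are stand-ins — comparing an inline loop with independent per-message processing when one item fails:

```csharp
using System;
using System.Collections.Generic;

public static class FanOutDemo
{
    // Inline loop: the first failure aborts the batch, and retrying
    // means redoing (or losing) the already-completed work.
    public static int ProcessInline(IEnumerable<int> pages, Func<int, bool> extract)
    {
        var done = 0;
        foreach (var page in pages)
        {
            if (!extract(page)) throw new InvalidOperationException($"page {page} failed");
            done++;
        }
        return done;
    }

    // Fan-out: each page is its own unit of work; failures are
    // collected for retry while the rest complete normally.
    // (The retry queue stands in for Wolverine's retry/dead-letter.)
    public static (int Succeeded, List<int> RetryQueue) ProcessFannedOut(
        IEnumerable<int> pages, Func<int, bool> extract)
    {
        var retry = new List<int>();
        var ok = 0;
        foreach (var page in pages)
        {
            if (extract(page)) ok++;
            else retry.Add(page);
        }
        return (ok, retry);
    }
}
```

With five pages where page 3 fails, the inline loop throws after two successes; the fanned-out version completes four and queues only page 3 for retry.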
Why: each message gets its own retry, its own transaction, and its own failure handling — a failure on one page doesn't lose the others.

Session management: Wolverine owns the Marten session and the unit of work.
```csharp
// CORRECT — managed session via dependency injection
public static async Task Handle(
    MyCommand command,
    IDocumentSession session, // Wolverine manages the session lifecycle
    CancellationToken ct)
{
    var entity = await session.LoadAsync<MyEntity>(command.Id, ct);
    entity.Update(command);
    session.Store(entity);
    // Wolverine calls SaveChangesAsync automatically
}

// WRONG — creating your own session
public static async Task Handle(
    MyCommand command,
    IDocumentStore store) // BAD: manual session management
{
    await using var session = store.LightweightSession(); // NOT managed by Wolverine
    // ...
    await session.SaveChangesAsync(); // Manual save — bypasses Wolverine's unit of work
}
```
Session rules:
- Inject IDocumentSession — never create sessions from IDocumentStore
- Wolverine calls SaveChangesAsync after Handle succeeds
- Use IQuerySession (read-only) if the handler only reads data
- Never call SaveChangesAsync manually — Wolverine does it. Calling it yourself causes a double-save

```csharp
// Non-fatal errors: catch, log, continue pipeline
public static object? Handle(
    ProcessExternalData command,
    ILogger logger)
{
    try
    {
        var result = ParseExternalPayload(command.Payload);
        return new DataProcessed(result);
    }
    catch (FormatException ex)
    {
        // Non-fatal: log and skip. Don't crash the pipeline
        logger.LogWarning(ex, "Failed to parse payload for {CommandId}", command.Id);
        return null; // No cascade — this item is skipped
    }
}

// Fatal errors: let them propagate — Wolverine handles retry/dead-letter
public static CrawlCompleted Handle(CompleteCrawl command, Crawl crawl)
{
    // No try/catch — if this throws, Wolverine retries per policy
    crawl.Complete();
    return new CrawlCompleted(crawl.Id);
}
```
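The non-fatal vs. fatal split above can be expressed as an exception filter. A plain-C# sketch, with no Wolverine dependency — the classification policy and all names here are assumptions to illustrate the shape, not a real API:

```csharp
using System;

public static class ErrorPolicyDemo
{
    // Assumed policy: parse/validation errors are non-fatal and may be
    // swallowed; everything else propagates so the framework can retry.
    private static bool IsNonFatal(Exception ex) =>
        ex is FormatException or ArgumentException;

    // Returns the "cascaded message" (here just a string) or null to skip.
    public static object? Handle(string payload)
    {
        try
        {
            if (payload.Length == 0) throw new FormatException("empty payload");
            return payload.ToUpperInvariant();
        }
        catch (Exception ex) when (IsNonFatal(ex))
        {
            return null; // non-fatal: skip this item, keep the pipeline alive
        }
        // No catch-all: unexpected exceptions propagate to the retry policy
    }
}
```

The `when` filter keeps the catch narrow: anything outside the non-fatal set never enters the catch block, so it reaches the messaging framework's retry and dead-letter machinery unchanged.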
Error handling rules:

- Catch only expected, non-fatal exceptions (bad input, parse failures) — log and return null to skip
- Let unexpected exceptions propagate — Wolverine applies its retry and dead-letter policies
- Never catch-log-and-swallow infrastructure exceptions; that defeats the retry pipeline
```csharp
// External dependencies: constructor injection on the handler class
public class NotifyExternalServiceHandler
{
    private readonly IHttpClientFactory _httpClientFactory;
    private readonly ILogger<NotifyExternalServiceHandler> _logger;

    public NotifyExternalServiceHandler(
        IHttpClientFactory httpClientFactory,
        ILogger<NotifyExternalServiceHandler> logger)
    {
        _httpClientFactory = httpClientFactory;
        _logger = logger;
    }

    public async Task<object?> Handle(
        NotifyExternalService command,
        CancellationToken ct)
    {
        var client = _httpClientFactory.CreateClient("external");
        var response = await client.PostAsJsonAsync("/webhook", command, ct);

        if (!response.IsSuccessStatusCode)
        {
            _logger.LogWarning("Webhook returned {StatusCode} for {CommandId}", response.StatusCode, command.Id);
            response.EnsureSuccessStatusCode(); // Throw so Wolverine's retry policy applies
        }

        return new ExternalServiceNotified(command.Id);
    }
}
```

(Note: a failed call must throw, not return null — returning null only skips the cascade; it does not trigger a retry.)
Rules:

- [AggregateHandler] handlers are typically static — inject IDocumentSession as a method parameter
- Use IHttpClientFactory for HTTP clients — never new HttpClient()

Unit tests call Handle directly — no Wolverine host needed:

```csharp
public class WhenTriggeringCrawlExtraction
{
    [Fact]
    public void it_returns_extract_command_when_crawl_is_completed()
    {
        // Arrange
        var crawl = CrawlFactory.Create(status: CrawlStatus.Completed, pageCount: 3);
        var command = new TriggerCrawlExtraction(crawl.Id);
        var session = Substitute.For<IDocumentSession>();

        // Act
        var result = TriggerCrawlExtractionHandler.Handle(command, crawl, session, CancellationToken.None);

        // Assert
        result.ShouldBeOfType<ExtractCrawlPages>();
        var extract = (ExtractCrawlPages)result!;
        extract.PageIds.Count.ShouldBe(3);
    }

    [Fact]
    public void it_returns_null_when_crawl_is_not_completed()
    {
        // Arrange
        var crawl = CrawlFactory.Create(status: CrawlStatus.InProgress);
        var command = new TriggerCrawlExtraction(crawl.Id);
        var session = Substitute.For<IDocumentSession>();

        // Act
        var result = TriggerCrawlExtractionHandler.Handle(command, crawl, session, CancellationToken.None);

        // Assert
        result.ShouldBeNull();
    }
}
```
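The tests above reference a CrawlFactory helper that isn't shown. A minimal sketch of the assumed shape — Crawl, CrawlPage, and CrawlStatus are inferred from the handler examples and are illustrations, not a real domain model:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Inferred domain shapes — assumptions for illustration only.
public enum CrawlStatus { InProgress, Completed, Extracting }

public class CrawlPage
{
    public Guid Id { get; init; } = Guid.NewGuid();
}

public class Crawl
{
    public Guid Id { get; init; } = Guid.NewGuid();
    public CrawlStatus Status { get; set; }
    public DateTimeOffset? ExtractionStartedAt { get; set; }
    public List<CrawlPage> Pages { get; init; } = new();
}

// Test-data factory with the signature the tests above assume.
public static class CrawlFactory
{
    public static Crawl Create(CrawlStatus status, int pageCount = 0) => new()
    {
        Status = status,
        Pages = Enumerable.Range(0, pageCount).Select(_ => new CrawlPage()).ToList(),
    };
}
```

Keeping the factory's defaults minimal (zero pages unless asked) makes each test state only what it cares about.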
Integration tests run the full message pipeline through the Wolverine host:

```csharp
public class TriggerCrawlExtractionIntegrationTest : IntegrationContext
{
    [Fact]
    public async Task it_processes_the_full_message_pipeline()
    {
        // Arrange
        var crawl = CrawlFactory.Create(status: CrawlStatus.Completed);
        await using var session = Store.LightweightSession();
        session.Store(crawl);
        await session.SaveChangesAsync();

        // Act
        await Host.InvokeMessageAndWaitAsync(new TriggerCrawlExtraction(crawl.Id));

        // Assert
        var updated = await session.LoadAsync<Crawl>(crawl.Id);
        updated!.Status.ShouldBe(CrawlStatus.Extracting);
    }
}
```
Common mistakes:

- Processing N items inline in one handler — return IEnumerable<T> instead
- Creating sessions with store.LightweightSession() inside a handler — bypasses Wolverine's unit of work
- Catching DbException and logging it — defeats the retry pipeline
- [AggregateHandler] can't load without an Id property matching the aggregate

Deliver:
/dotnet-developer:write-endpoint — handlers are invoked by endpoints. If the handler needs a new HTTP entry point, create the endpoint first.