Migrates DSPy GEPA usage from the original optimizer to gepa-observable, letting teams review each iteration and its lineage to understand how their prompt is evolving. The repository offers a web dashboard for monitoring, but requires a custom GEPA fork that provides custom observers and LM call logging. Use when developers want to add observability to GEPA optimization.
Migrates DSPy GEPA to gepa-observable for real-time dashboard monitoring, LM call logging, and custom observer callbacks. Use when you need to track prompt evolution and optimization lineage during GEPA runs.
```
/plugin marketplace add raveeshbhalla/dspy-gepa-logger
/plugin install raveeshbhalla-gepa-observable-migration@raveeshbhalla/dspy-gepa-logger
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Migrate existing DSPy GEPA code to gepa-observable for integrated observability: real-time dashboard, custom observer callbacks, and LM call capture working together.
Install the package:

```shell
pip install dspy-gepa-logger
```

Then change the import:

```python
# Before
from dspy.teleprompt import GEPA

# After
from gepa_observable import GEPA
```
```python
from gepa_observable import GEPA

optimizer = GEPA(
    metric=my_metric,
    auto="medium",
    # Observability system:
    server_url="http://your-server:3000",  # Dashboard integration
    project_name="My Project",
    capture_lm_calls=True,                 # LM call logging
    capture_stdout=True,                   # Console capture
    verbose=True,                          # LoggingObserver
    observers=[MyCustomObserver()],        # Custom callbacks
)

optimizer.compile(student=program, trainset=train, valset=val)
```

gepa-observable provides an integrated observability system where all components work together:
| Component | Purpose | Enabled By |
|---|---|---|
| ServerObserver | Sends events to web dashboard | `server_url` param |
| LoggingObserver | Console output with summaries | `verbose=True` |
| LM Call Logger | Captures all LM invocations | `capture_lm_calls=True` |
| Custom Observers | Your callbacks for any event | `observers=[...]` |
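The fan-out model behind this table is simple: each lifecycle event is delivered to every configured observer. Here is a minimal pure-Python sketch of that pattern; the dispatcher and observer classes below are illustrative stand-ins, not gepa-observable's internals:

```python
# Illustrative sketch of one event fanning out to several observers.
# Class names and the dict-based events are hypothetical, not the library's API.

class ConsoleObserver:
    """Stand-in for LoggingObserver: formats events as log lines."""
    def __init__(self):
        self.lines = []

    def on_valset_eval(self, event):
        self.lines.append(f"valset score: {event['valset_score']:.2%}")

class HistoryObserver:
    """A custom observer that just accumulates raw scores."""
    def __init__(self):
        self.scores = []

    def on_valset_eval(self, event):
        self.scores.append(event["valset_score"])

def dispatch(observers, event_name, event):
    """Send one lifecycle event to every observer that implements its hook."""
    for obs in observers:
        hook = getattr(obs, f"on_{event_name}", None)
        if hook is not None:
            hook(event)

console, history = ConsoleObserver(), HistoryObserver()
dispatch([console, history], "valset_eval", {"valset_score": 0.72})
print(console.lines[0])  # valset score: 72.00%
print(history.scores)    # [0.72]
```

Because dispatch is just attribute lookup, any object with matching `on_*` methods can sit alongside the built-in observers.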
All observers receive the same 8 lifecycle events:
- `SeedValidationEvent` - Initial validation scores
- `IterationStartEvent` - Each optimization iteration
- `MiniBatchEvalEvent` - Minibatch evaluations
- `ReflectionEvent` - Proposed prompt changes
- `AcceptanceDecisionEvent` - Accept/reject decisions
- `ValsetEvalEvent` - Full validation evaluations
- `MergeEvent` - Candidate merge operations
- `OptimizationCompleteEvent` - Final results

```python
class MyObserver:
    def on_seed_validation(self, event):
        avg = sum(event.valset_scores.values()) / len(event.valset_scores)
        print(f"Seed score: {avg:.2%}")

    def on_iteration_start(self, event):
        print(f"Iteration {event.iteration}, parent score: {event.parent_score:.2%}")

    def on_reflection(self, event):
        for comp, text in event.proposed_texts.items():
            print(f"Proposing for {comp}: {text[:100]}...")

    def on_valset_eval(self, event):
        if event.is_new_best:
            print(f"NEW BEST: {event.valset_score:.2%}")

    def on_optimization_complete(self, event):
        print(f"Done! Best: {event.best_score:.2%} in {event.total_iterations} iters")
```
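Observers only need to implement the hooks they care about. As a sketch, here is an observer that implements just two hooks and records an (iteration, score) history you could plot afterwards; the event field names follow the examples above, but the `SimpleNamespace` events used to exercise it are stand-ins, not real GEPA events:

```python
# Hypothetical observer: implements only two hooks and accumulates
# (iteration, valset_score) pairs. SimpleNamespace events are test stand-ins.
from types import SimpleNamespace

class ScoreHistoryObserver:
    def __init__(self):
        self.history = []           # (iteration, valset_score) pairs
        self.best = float("-inf")   # best score seen so far
        self._iteration = 0

    def on_iteration_start(self, event):
        self._iteration = event.iteration

    def on_valset_eval(self, event):
        self.history.append((self._iteration, event.valset_score))
        self.best = max(self.best, event.valset_score)

obs = ScoreHistoryObserver()
obs.on_iteration_start(SimpleNamespace(iteration=1))
obs.on_valset_eval(SimpleNamespace(valset_score=0.61))
obs.on_iteration_start(SimpleNamespace(iteration=2))
obs.on_valset_eval(SimpleNamespace(valset_score=0.58))
print(obs.best)     # 0.61
print(obs.history)  # [(1, 0.61), (2, 0.58)]
```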
```python
# Use with other observers - they all work together
optimizer = GEPA(
    metric=my_metric,
    auto="medium",
    server_url="http://localhost:3000",
    observers=[MyObserver()],
    verbose=True,  # Also keep LoggingObserver
)
```
To migrate: `pip install dspy-gepa-logger`, change the import to `from gepa_observable import GEPA`, and set `server_url` for the dashboard. To roll back, revert the import to `from dspy.teleprompt import GEPA`.

- `references/api-reference.md` - Complete parameter docs, observer protocol, all event types
- `references/examples.md` - Full before/after code examples