`npx claudepluginhub mlld-lang/mlld --plugin mlld`

This skill uses the workspace's default tool permissions.
- Designing a new mlld pipeline or workflow
Traditional approach: Encode decisions in code (if-else, state machines, rules). The code decides; the LLM executes.
LLM-first approach: Encode decisions in prompts. The LLM decides; the code executes. Your orchestrator becomes a dumb executor that gathers context, asks "what should I do?", does exactly that, verifies results, and repeats.
Your mlld code should be boring:
Your mlld code should NOT:
When you need new behavior, add guidance to the decision prompt. Don't add if-else to the orchestrator.
Example:
- examples/development/index.mld:70-230 — The main loop is mechanical. The `when @decision.action` block is a pure switch with no decision logic.
A "decision call" is: prompt + tools + fresh context → structured action
This is NOT:
Each iteration, you make a fresh call with full context. The LLM reads the context, applies the prompt's guidance, and returns one action. No chat history needed — the context IS the history.
>> Each iteration is a fresh call with fresh context
let @context = @buildContext(@runDir)
let @decision = @claudePoll(@fullPrompt, { model: "opus", tools: @tools, poll: @outputPath })
Example:
- examples/development/index.mld:81-109 — Fresh `@buildContext` + `@claudePoll` each iteration. No conversation history carried forward.
The decision loop inherently handles repair. Each iteration the decision agent sees full current state — including failures, corrupt data, and partial results from the last action. If the last worker produced garbage, the decision agent sees it in context and corrects course: redo the work, take a different approach, or create a new issue to address it.
This happens naturally because:
- `lastWorkerResult` and `lastError` are always in context

You don't need a separate "check" step. The decision loop IS the check.
Two levels of repair:

- Loop-level: the decision agent sees failures in full context each iteration and corrects course strategically.
- Step-level: `=> retry` with `@mx.hint`. A gate stage validates output and retries with feedback before the step completes. Bounded and deterministic. See the Quality Gates section in /mlld:orchestrator.

Both coexist naturally — a worker can use pipeline retry for output quality, while the outer decision loop handles strategic failures.
Example:
- examples/development/lib/context.mld:44-51 — `@buildContext` loads `lastWorkerResult` and `lastError` into context.
- examples/development/index.mld:164-171 — Worker failure is recorded in state and the loop continues, letting the decision agent see the failure next iteration.
- For step-level retry: examples/event-agent/index.mld — Pipeline gate retries with feedback via `@mx.hint`.
The decision call returns structured JSON that the orchestrator executes mechanically:
{
"reasoning": "Brief explanation",
"action": "work|create_issue|close_issue|blocked|complete",
...action-specific fields...
}
Define a schema. Validate against it. The orchestrator becomes a simple switch over action types. No interpretation needed.
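The validate-then-dispatch step is language-agnostic; here is a minimal sketch in Python. The action names match the schema above, but the per-action required fields and the `validate_decision` helper are hypothetical simplifications of a real conditional JSON Schema, not the project's actual decision.json:

```python
import json

# Hypothetical required fields per action type, standing in for a
# conditional JSON Schema (if/then per action).
REQUIRED = {
    "work": ["issue"],
    "create_issue": ["title"],
    "close_issue": ["issue"],
    "blocked": ["questions"],
    "complete": [],
}

def validate_decision(decision: dict) -> dict:
    """Reject anything that doesn't match the schema, so the orchestrator
    never has to interpret free-form output."""
    action = decision.get("action")
    if action not in REQUIRED:
        raise ValueError(f"unknown action: {action!r}")
    missing = [f for f in REQUIRED[action] if f not in decision]
    if missing:
        raise ValueError(f"action {action!r} missing fields: {missing}")
    return decision

decision = validate_decision(json.loads(
    '{"reasoning": "Issue 42 is ready", "action": "work", "issue": 42}'
))

# Mechanical switch over action types: pure dispatch, no decision logic.
if decision["action"] == "work":
    result = f"run worker on issue {decision['issue']}"
elif decision["action"] == "blocked":
    result = "write questions and exit"
else:
    result = decision["action"]
```

Invalid decisions fail loudly before anything executes, which keeps the switch itself trivial.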
Example:
- examples/development/schemas/decision.json — Conditional JSON Schema with five action types and action-specific required fields.
- examples/development/index.mld:121-227 — Pure `when @decision.action` switch.
Consistent context structure makes prompts more durable and reusable.
Standard context sections:
<goal>
What we're trying to accomplish
</goal>
<state>
Current state: tickets, files, progress
</state>
<recent_events>
Recent actions from events.jsonl
</recent_events>
<last_result>
Output from the previous action
</last_result>
<last_error>
Error from last iteration (if any)
</last_error>
Why conventions matter:
Example:
- examples/development/prompts/decision/core.att:107-129 — Uses `<issues>`, `<recent_events>`, `<last_worker_result>`, `<last_error>`, `<human_answers>` sections.
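Assembling these sections can be a single pure function. A hedged sketch in Python (the section names follow the convention above; `build_context` and its arguments are illustrative, not the project's API):

```python
def build_context(goal, state, events, last_result=None, last_error=None):
    """Render the standard context sections in a fixed order so every
    decision prompt can rely on the same structure each iteration."""
    sections = [
        ("goal", goal),
        ("state", state),
        ("recent_events", "\n".join(events)),
        ("last_result", last_result or "(none)"),
        ("last_error", last_error or "(none)"),
    ]
    return "\n".join(f"<{name}>\n{body}\n</{name}>" for name, body in sections)

ctx = build_context(
    "Ship v1",
    "3 open tickets, 2 closed",
    ["created issue 7", "worker finished issue 5"],
)
```

Empty sections render as `(none)` rather than disappearing, so the prompt's shape never changes between iterations.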
Instead of conditionals in code, encode rules as prompt guidance:
Bad (code predicate):
if @ticket.status == "blocked" && @urgency == "high" [
if @otherTickets.filter(t => t.ready).length == 0 [
>> exit logic
]
]
Good (prompt guidance):
### Blocking Detection
If all remaining tickets require human input and no other work is available,
return a "blocked" action with clear questions for the human.
The model applies guidance contextually, handling variations you didn't anticipate.
Example:
- examples/development/prompts/decision/core.att:86-105 — Workflow guidance encodes dependency analysis, information gain, adversarial gates, and issue lifecycle rules as prose, not conditionals.
Complex jobs have phases (e.g., Discover → Assess → Synthesize → Invalidate). Let the decision agent track phases and progress via guidance in prompts and its access to history logs, rather than creating a programmatic state machine.
The pattern:
Example:
- examples/research/prompts/decision/core.att:68-77 — Phase inference from filesystem state: no inventory → discover, assessments incomplete → assess, etc.
- examples/research/index.mld — The orchestrator never checks what phase it's in.
When you discover an edge case:
Old way: Add conditional → code grows → becomes unmaintainable
LLM-first way: Add guidance to prompt → model handles it and similar cases
### Orphaned Work
If you see a started ticket with no recent progress in the history,
it may have been orphaned by a crash. Decide whether to continue
from where it left off or reset and retry.
This extends to understanding existing behavior. Before changing something, consider why current behavior might be intentional — encode that instinct in the prompt rather than building checks into code.
Example:
- examples/development/prompts/decision/core.att:97-105 — Issue lifecycle guidance handles edge cases like premature closure and verification requirements as prose rules.
Decision agent can inject per-action context into worker calls.
{
"action": "work",
"issue": 42,
"task_type": "implement",
"guidance": "Focus on error handling. The happy path already works."
}
Workers get targeted instructions. The decision agent's reasoning flows to execution. More effective than generic worker prompts.
Example:
- examples/development/schemas/decision.json:21 — `guidance` field on work action.
- examples/development/index.mld:131-136 — Worker prompt built with `@decision.guidance`.
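Threading the guidance into the worker call is a one-line append. A sketch in Python (the `worker_prompt` helper and the `<guidance>` tag are illustrative assumptions, not the project's actual prompt format):

```python
def worker_prompt(base_prompt: str, decision: dict) -> str:
    """Append the decision agent's per-action guidance so its reasoning
    reaches the worker; fall back to the generic prompt otherwise."""
    guidance = decision.get("guidance")
    if not guidance:
        return base_prompt
    return f"{base_prompt}\n\n<guidance>\n{guidance}\n</guidance>"

prompt = worker_prompt(
    "Implement issue 42.",
    {"action": "work", "issue": 42,
     "guidance": "Focus on error handling. The happy path already works."},
)
```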
Don't flood context with everything. Load only what's relevant to the current decision.
The pattern:
Benefits:
Example:
- examples/development/lib/context.mld:44-51 — `@buildContext` loads last 30 events (not all), plus summary state. The decision agent has Read/Glob/Grep tools to investigate further when needed.
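Bounding the event history amounts to taking the tail of the JSONL log. A sketch in Python, assuming one JSON object per line in `events.jsonl` (the `recent_events` name and the demo log are illustrative; the 30-event cap matches the example above):

```python
import json, os, tempfile
from pathlib import Path

def recent_events(run_dir: str, limit: int = 30) -> list:
    """Load only the last `limit` events, not the whole history."""
    path = Path(run_dir) / "events.jsonl"
    if not path.exists():
        return []
    lines = path.read_text().splitlines()
    return [json.loads(line) for line in lines[-limit:] if line.strip()]

# Demo against a throwaway log holding 50 events.
run_dir = tempfile.mkdtemp()
with open(os.path.join(run_dir, "events.jsonl"), "w") as f:
    for i in range(50):
        f.write(json.dumps({"event": "iteration", "n": i}) + "\n")

events = recent_events(run_dir)  # only the last 30 survive
```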
The orchestrator SHOULD verify outcomes:
The orchestrator should NOT decide what to do about failures:
>> Good: record result and let decision agent handle it
let @result = <@workerOutputPath>?
if !@result [
@logEvent(@runDir, "error", { type: "worker", error: "no output" })
let @errState = { ...@runState, lastError: "Worker failed", lastWorkerResult: null }
@saveRunState(@runDir, @errState)
continue
]
>> Decision agent sees the error next iteration and decides what to do
Example:
- examples/development/index.mld:164-171 — Worker failure is recorded in state. No retry logic, no error handling conditionals. The decision agent sees the failure next iteration.
When stuck, exit cleanly with questions. Don't poll or block.
"blocked" => [
@writeQuestionsFile(@runDir, @decision.questions, @resolvedRunId)
@logEvent(@runDir, "run_paused", { reason: "needs_human" })
show `Resume with: mlld <this-script> --run @resolvedRunId`
done
]
Human answers at their leisure. Human resumes. Decision call reads answers from context and continues.
Example:
- examples/development/index.mld:207-213 — Blocked handler writes questions file and exits.
- examples/development/lib/questions.mld — Structured questions with context and resume instructions.
State machines hide why you're in a state. Event logs show everything.
@logEvent(@runDir, "iteration", {
decision: @decision.action,
reasoning: @decision.reasoning
})
Debug by reading the log. See the model's reasoning at each step.
Example:
- examples/development/lib/context.mld:7-12 — `@logEvent` appends structured events to JSONL.
- examples/development/index.mld:118 — Every iteration logs the decision and reasoning.
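The append-only log itself is a few lines in any language. A Python sketch of the idea (the `log_event` signature and timestamp field are assumptions, not the mlld helper's exact shape):

```python
import json, tempfile, time
from pathlib import Path

def log_event(run_dir: str, event: str, data: dict) -> None:
    """Append one structured event per line; the log is the audit trail."""
    record = {"ts": time.time(), "event": event, **data}
    with open(Path(run_dir) / "events.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

run_dir = tempfile.mkdtemp()
log_event(run_dir, "iteration", {"decision": "work", "reasoning": "Issue 42 is ready"})
log_event(run_dir, "iteration", {"decision": "complete", "reasoning": "All issues closed"})

lines = (Path(run_dir) / "events.jsonl").read_text().splitlines()
```

Because writes are append-only, a crash mid-run leaves every earlier event intact and readable.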
Enable clean resume after interruption or human handoff.
The pattern:
- Run ID, e.g. `2026-01-31` (date-based default)
- `run.json` with `lastWorkerResult`, `lastError`
- `events.jsonl` for history
- Resume with `--run <id>`

Example:
- examples/development/lib/context.mld:31-41 — `@loadRunState` / `@saveRunState` manage `run.json`.
- examples/development/index.mld:42-60 — Run ID resolution and state initialization.
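State persistence under these conventions can be sketched in a few functions. A hedged Python version (function names are illustrative; the file name `run.json` and the `lastWorkerResult` / `lastError` fields follow the pattern above):

```python
import datetime, json, tempfile
from pathlib import Path

def resolve_run_id(explicit=None) -> str:
    """--run <id> wins; otherwise default to today's date."""
    return explicit or datetime.date.today().isoformat()

def load_run_state(run_dir: Path) -> dict:
    path = run_dir / "run.json"
    if path.exists():
        return json.loads(path.read_text())
    return {"lastWorkerResult": None, "lastError": None}  # fresh run

def save_run_state(run_dir: Path, state: dict) -> None:
    (run_dir / "run.json").write_text(json.dumps(state, indent=2))

run_dir = Path(tempfile.mkdtemp())
state = load_run_state(run_dir)           # fresh: defaults
state["lastError"] = "Worker failed"
save_run_state(run_dir, state)
resumed = load_run_state(run_dir)         # resume: same state back
```

Because the state is a plain file keyed by run ID, a human (or a second orchestrator) can inspect or edit it between runs.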
Instead of parsing LLM streaming output, tell the agent to write JSON to a specific path.
let @outputPath = `@runDir/decision-@iteration.json`
let @fullPrompt = `@prompt
IMPORTANT: Write your JSON response to @outputPath using the Write tool.`
@claudePoll(@fullPrompt, { model: "opus", tools: @tools, poll: @outputPath })
let @decision = <@outputPath>?
The orchestrator reads the file after the agent finishes. The file doubles as a debugging artifact.
Example:
- examples/development/index.mld:94-109 — Decision agent writes to specific path; orchestrator reads file.
- examples/research/index.mld:78-86 — Same pattern for research decisions.
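The polling side reduces to waiting for the file to appear and parsing it. A Python sketch of the mechanics (the agent call itself is stubbed; `poll_for_json` is an illustrative helper, not mlld's `@claudePoll`):

```python
import json, tempfile, time
from pathlib import Path

def poll_for_json(path: Path, timeout: float = 300.0, interval: float = 0.05):
    """Wait for the agent to write its output file, then parse it.
    Returns None on timeout so the caller can record an error and continue."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if path.exists():
            try:
                return json.loads(path.read_text())
            except json.JSONDecodeError:
                pass  # file may be mid-write; poll again
        time.sleep(interval)
    return None

out = Path(tempfile.mkdtemp()) / "decision-1.json"
# Stand-in for the agent: in the real flow the LLM writes this via its Write tool.
out.write_text('{"action": "work", "issue": 42}')
decision = poll_for_json(out, timeout=1.0)
```

Tolerating a transient parse error covers the window where the file exists but is only partially written.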
Save prompts to files for debugging failed runs.
>> Save prompt for debugging
output @workerFullPrompt to "@runDir/worker-@issueNum-@iterationCount\.prompt.md"
Can replay failed prompts manually. Inspect exactly what the model saw. Essential for debugging complex pipelines.
Example:
- examples/development/index.mld:146-153 — Worker prompts saved to `@workerPromptPath` before execution. Prompt path also logged in the event.
Separate state management from orchestration. Use external systems for issues, tickets, etc.
The pattern:
- Read state: `gh issue list --json ...`
- Write state: `gh issue create`, `gh issue close`

State survives orchestrator crashes. Multiple orchestrators can coordinate. State is inspectable outside the pipeline.
Example:
- examples/development/index.mld:83-88 — Loads issues via `gh issue list`.
- examples/development/index.mld:188-205 — Creates and closes issues via the `gh` CLI.
>> Initialize
var @run = @initRun(@config)
>> The loop
loop(endless) [
>> 1. Gather context (fresh each iteration)
let @context = @buildContext(@runDir)
>> 2. Decision call
let @decision = @claudePoll(@fullPrompt, { model: "opus", tools: @tools, poll: @outputPath })
>> 3. Execute action (mechanical switch)
when @decision.action [
"work" => [...]
"blocked" => [ @writeQuestions(...); done ]
"complete" => done
]
>> 4. Log
@logEvent(@runDir, @decision.action, { reasoning: @decision.reasoning })
]
State Machine Trap: "I need to track which phase we're in..." → Give full context. The model infers the phase.
Validation Trap: "What if the model makes a wrong decision? I should add checks..." → Fix the prompt. The decision loop self-corrects on the next iteration.
Special Case Trap: "But this situation needs different handling..." → Add guidance to the prompt.
Efficiency Trap: "Calling a model every iteration is expensive..." → Better decisions = fewer iterations. Cheaper than maintaining complex code.
Control Trap: "I don't trust the model to decide this..." → Then you're building the wrong kind of application. Use traditional code.
Context Flooding Trap: "I'll just include the whole spec..." → Selective loading. Bound events, summarize state, give tools for deeper digs.
Escalation Ladder Trap: "I need if-else for different escalation levels..." → Put escalation rules in the prompt. Let the model apply judgment.
Use traditional code for:
The model decides. The code executes and verifies.
Orchestration code that's 70% smaller, handles edge cases gracefully, and is maintainable — because the logic lives in prompts you can read and update, not scattered conditionals.