From formal-verify
Apply Model-First Reasoning (MFR) to code generation tasks. Use when the user requests "model-first", "MFR", "formal modeling before coding", "model then implement", or when tasks involve complex logic, state machines, constraint systems, or any implementation requiring formal correctness guarantees. Enforces strict separation between modeling and implementation phases.
A rigorous methodology that REQUIRES constructing an explicit problem MODEL before any reasoning or implementation. The model becomes a frozen contract that governs all downstream work.
Based on Kumar & Rana (2025), "Model-First Reasoning LLM Agents: Reducing Hallucinations through Explicit Problem Modeling" (arXiv:2512.14474)
Hallucination is not merely the generation of false statements—it is a symptom of reasoning performed without a clearly defined model of the problem space.
Reasoning does not create structure; it operates on structure. When that structure is implicit or unstable, reasoning becomes unreliable. MFR provides "soft symbolic grounding"—enough structure to stabilize reasoning without imposing rigid formalism.
Phase 1 produces the MODEL. Phase 2 reasons/implements ONLY within the model.
This prevents the common failure mode where reasoning introduces ad-hoc decisions, missing constraints, or invented behavior not grounded in the problem definition.
If required information is missing, output MODEL INCOMPLETE plus what needs to be added, then STOP.

After creating the model, run a MODEL AUDIT before coding:
| Check | Description |
|---|---|
| Coverage | Every user requirement is represented in exactly one of: a constraint, the goal/acceptance criteria, or an action precondition/effect |
| Operability | Every operation your plan would require is present as an action |
| Consistency | Constraints don't contradict each other; action effects don't violate invariants |
| Testability | Every constraint has ≥1 test oracle |
If any audit check fails, revise the model (still Phase 1) until it passes.
Once the audit passes, treat the model as the read-only source of truth.
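As an illustration (the function and type names here are hypothetical, not part of the skill), the testability and coverage checks can be mechanized directly over the JSON model format defined below:

```typescript
// Minimal audit sketch over a JSON model. Covers two of the four
// audit gates: testability (every constraint has >= 1 test oracle)
// and coverage (every requirement trace points at a constraint,
// action, or goal item that actually exists in the model).
type Model = {
  constraints: { id: string }[];
  actions: { name: string }[];
  goal: string[];
  requirement_trace: { requirement: string; represented_as: string; ref: string }[];
  test_oracles: { id: string; maps_to: string[] }[];
};

type AuditIssue = { type: "coverage" | "testability"; detail: string };

function auditModel(m: Model): { audit_pass: boolean; issues: AuditIssue[] } {
  const issues: AuditIssue[] = [];

  // Testability: every constraint id must appear in some oracle's maps_to.
  const covered = new Set(m.test_oracles.flatMap((t) => t.maps_to));
  for (const c of m.constraints) {
    if (!covered.has(c.id)) {
      issues.push({ type: "testability", detail: `${c.id} has no test oracle` });
    }
  }

  // Coverage: every requirement trace ref must resolve to a model element.
  const refs = new Set([
    ...m.constraints.map((c) => c.id),
    ...m.actions.map((a) => a.name),
    ...m.goal,
  ]);
  for (const r of m.requirement_trace) {
    if (!refs.has(r.ref)) {
      issues.push({ type: "coverage", detail: `trace ref ${r.ref} not found in model` });
    }
  }

  return { audit_pass: issues.length === 0, issues };
}
```

Operability and consistency remain judgment calls for the agent; the point of the sketch is that the other two gates are cheap, mechanical checks once the model is explicit JSON.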
If you later discover missing information during implementation, stop and issue a MODEL PATCH (a minimal change to the model) before continuing.

After creating the model, write it to model.json and run the validator:
```
python scripts/validate-model.py model.json
```
Exit codes:
- 0 = Valid, ready for Phase 2
- 1 = Invalid structure (fix and retry)
- 2 = Valid but has unknowns (STOP after Phase 1)

The model may be expressed in natural language, semi-structured text, or JSON. Flexibility improves compliance; what matters is that the representation is explicit, inspectable, and stable.
For code generation tasks, the structured format below is recommended. Use MODEL_TEMPLATE.json as a reference:
```json
{
  "deliverable": {
    "description": "What we're building",
    "files_expected": ["path/to/file.ts", ...]
  },
  "entities": [
    {"name": "EntityName", "description": "...", "properties": [...]}
  ],
  "state_variables": [
    {"name": "varName", "type": "...", "initial": "...", "description": "..."}
  ],
  "actions": [
    {
      "name": "actionName",
      "description": "...",
      "preconditions": ["..."],
      "effects": ["..."],
      "parameters": [...]
    }
  ],
  "constraints": [
    {"id": "C1", "statement": "...", "type": "invariant|precondition|postcondition"}
  ],
  "initial_state": ["description of starting conditions"],
  "goal": ["acceptance criteria"],
  "assumptions": ["things we assume to be true"],
  "unknowns": ["questions that must be answered before proceeding"],
  "requirement_trace": [
    {
      "requirement": "<verbatim from user>",
      "represented_as": "goal|constraint|action",
      "ref": "C1|action_name|goal_item"
    }
  ],
  "test_oracles": [
    {"id": "T1", "maps_to": ["C1"], "description": "how to verify constraint"}
  ]
}
```
Critical: If unknowns is non-empty, STOP after Phase 1. Do not implement until unknowns are resolved.
Return:
```json
{
  "audit_pass": true|false,
  "issues": [
    {"type": "coverage|operability|consistency|testability", "detail": "..."}
  ]
}
```
If audit_pass is false, STOP and return to Phase 1 to revise the model.
Using ONLY the frozen model:
Numbered steps where each step must be an instance of a defined action:
```
Step 1: [action_name]
- Preconditions check: [list which preconditions are satisfied]
- Effects applied: [what state changes]
- Constraints check: [C1, C2, ...]
```
Create all files in deliverable.files_expected:
| Model Element | Code Translation |
|---|---|
| entities / state_variables | Types, interfaces, data models |
| actions | Functions/modules with validation + explicit failure modes |
| constraints | Runtime checks, defensive parsing, invariants |
Implement all test_oracles. Every constraint must be covered by ≥1 test.
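A minimal sketch of this translation, using a deliberately tiny hypothetical model (one state variable, one invariant C1: `count >= 0`, one action) rather than any example prescribed by the skill:

```typescript
// state_variables -> typed state; constraints -> runtime checks;
// actions -> functions that validate preconditions and apply effects.
type CounterState = { count: number };

// C1 (invariant): count >= 0
function assertC1(s: CounterState): void {
  if (s.count < 0) throw new Error("C1 violated: count must be >= 0");
}

// action: decrement
// precondition: count > 0; effect: count decreases by 1
function decrement(s: CounterState): CounterState {
  if (s.count <= 0) throw new Error("precondition failed: count > 0");
  const next = { count: s.count - 1 };
  assertC1(next); // invariant re-checked after applying the effect
  return next;
}

// test oracle T1 (maps_to: C1): decrementing at zero must fail
// loudly, never silently produce a negative count.
function oracleT1(): boolean {
  try {
    decrement({ count: 0 });
    return false; // should have thrown
  } catch {
    return true;
  }
}
```

Note that the constraint appears twice: as a precondition guard and as a post-effect invariant check. That redundancy is intentional; it makes a violated constraint diagnosable at the point of violation rather than downstream.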
For each constraint, document where it is enforced in the code and which test oracle verifies it.
MFR is most valuable for complex logic, state machines, constraint systems, and any implementation requiring formal correctness guarantees.
When NOT to use: Simple, single-step tasks where the overhead of explicit modeling exceeds its benefit.
MFR is complementary to other reasoning strategies, not a competitor:
MFR provides a foundational layer that improves robustness of any reasoning strategy in constraint-heavy domains.
User: Build a shopping cart that enforces: max 10 items, no duplicate SKUs,
total can't exceed $1000
You: [PHASE 1: MODEL]
```json
{
  "entities": [
    {"name": "CartItem", "properties": ["sku", "price", "quantity"]}
  ],
  "constraints": [
    {"id": "C1", "statement": "cart.items.length <= 10", "type": "invariant"},
    {"id": "C2", "statement": "all SKUs in cart are unique", "type": "invariant"},
    {"id": "C3", "statement": "sum(items.price * items.quantity) <= 1000", "type": "invariant"}
  ],
  "actions": [
    {
      "name": "addItem",
      "preconditions": ["cart.items.length < 10", "sku not in cart", "new total <= 1000"],
      "effects": ["item added to cart"]
    }
  ],
  ...
}
```
[PHASE 1.5: MODEL AUDIT]
```json
{
  "audit_pass": true,
  "issues": []
}
```
[PHASE 2: IMPLEMENTATION]
// Now implementing strictly from the model...
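Continuing the example, here is a sketch of what Phase 2 might produce for `addItem`; the exact API shape (immutable cart, thrown errors) is an assumption, since the model constrains behavior but not style:

```typescript
type CartItem = { sku: string; price: number; quantity: number };
type Cart = { items: CartItem[] };

const total = (c: Cart): number =>
  c.items.reduce((sum, i) => sum + i.price * i.quantity, 0);

// action addItem: each precondition from the model maps to an explicit
// check, so the effect can never violate invariants C1-C3.
function addItem(cart: Cart, item: CartItem): Cart {
  if (cart.items.length >= 10)
    throw new Error("C1: cart may hold at most 10 items");
  if (cart.items.some((i) => i.sku === item.sku))
    throw new Error("C2: duplicate SKU");
  if (total(cart) + item.price * item.quantity > 1000)
    throw new Error("C3: total may not exceed $1000");
  return { items: [...cart.items, item] };
}
```

Each error message cites a constraint id, keeping the code traceable back to the frozen model.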
The model is not overhead—it IS the specification. Most failures in complex reasoning are representational, not inferential: the reasoning was fine, but it operated on an incomplete or unstable understanding of the problem.
By externalizing the model, we make assumptions inspectable, constraints enforceable, and errors diagnosable. The model becomes the contract between intent and implementation.
Model first. Then reason. Never invert this.