formal-verify
Build a compilable type-level skeleton from a high-level architecture spec before writing any implementation logic. Use when you have an architectural assessment, design doc, or restructuring plan and need to prove the new architecture is sound before migrating code. Also use when asked to "scaffold the new architecture", "create type stubs", "build the shell", "flesh out this spec", "skeleton the modules", or any request to turn architectural intent into verified structure. This skill follows the "Human Builds the Shell" paradigm: types are hard constraints that the compiler enforces, so if the skeleton compiles, the architecture is structurally sound. Especially valuable for large refactors where you don't trust agents to maintain coherence.
npx claudepluginhub petekp/agent-skills --plugin literate-guide

This skill uses the workspace's default tool permissions.
Turn a high-level architecture spec into a compilable type skeleton, then prove it's sound with the compiler before anyone writes a line of logic.
Large refactors fail when agents jump straight to implementation. They lose the thread, make local decisions that contradict the global design, and you end up with a different mess than the one you started with. The "Human Builds the Shell" paradigm (Mengdi Chen, 2026) solves this by separating structure from logic: the shell (modules, types, signatures) is built and compiler-verified first, and implementation logic is only filled in afterwards.
This skill operates at the structural layer. By the time you're done, every module, every function signature, every protocol/trait, and every type relationship exists as real code that the compiler has verified. No logic yet — just the architectural skeleton. An agent literally cannot hallucinate past a compiler error.
The skill runs in three phases:

1. **Map.** Read the spec. Produce a structured outline of every module, type, and signature.
2. **Scaffold.** Write real source files with real types and stub bodies. Compile layer by layer until it all passes.
3. **Migration mapping.** For each stub, determine whether existing logic can be ported or needs rewriting.
Read the assessment or design document the user provides. You're looking for the intended modules, their responsibilities, the types and signatures they expose, and the dependency directions between them.
Produce a module map — a structured outline that captures this. Don't write code yet. The module map is your intermediate representation between the prose spec and the code skeleton.
# Module Map
## Layer 1: Domain Types (innermost — no dependencies on other project modules)
### module: domain
Location: core/capacitor-core/src/domain/
Responsibility: Shared value types, identities, and enums used across all modules
Types:
- Project { id, name, path, ... }
- RuntimeSnapshot { active_project, hooks, timestamp, ... }
- HookEvent (enum: session_start, session_end, tool_use, ...)
Dependency rule: This module imports nothing from the project. Everything else may import this.
## Layer 2: Service Contracts (depend only on domain types)
### module: RuntimeEngine
Location: core/capacitor-core/src/runtime/
Responsibility: Core runtime lifecycle — start, stop, snapshot reads
Dependencies: domain (inward only)
Exposes:
- RuntimeEngine (protocol/trait)
- start(config: RuntimeConfig) -> Result<(), RuntimeError>
- stop() -> Result<(), RuntimeError>
- current_snapshot() -> RuntimeSnapshot
Types:
- RuntimeConfig { storage_path, poll_interval, ... }
- RuntimeError (enum: already_running, storage_unavailable, ...)
### module: SetupService
Location: core/capacitor-core/src/setup/
Responsibility: First-run and configuration workflows
Dependencies: domain (inward only)
Exposes:
- SetupService (protocol/trait)
- validate_setup() -> SetupStatus
- perform_setup(config: SetupConfig) -> Result<(), SetupError>
...
## Layer 3: FFI Boundary (translates between layers)
### module: ffi
Location: core/capacitor-core/src/ffi/
Responsibility: Expose Rust services to Swift via C-compatible interface
Dependencies: Layer 2 services, domain types
Boundary types (must be repr(C) or serializable):
- FFIRuntimeConfig, FFIRuntimeSnapshot, ...
Binding mechanism: [cbindgen / uniffi / manual C headers — match what the project uses]
## Layer 4: Swift Application Layer (outermost — depends on FFI)
### module: RuntimeSupervisor
Location: apps/swift/Sources/Capacitor/Services/RuntimeSupervisor.swift
...
The module map must declare explicit dependency rules per layer. These become enforceable constraints during verification:
Layer 1 (domain) → imports nothing from the project
Layer 2 (services) → imports only Layer 1
Layer 3 (FFI) → imports Layers 1 and 2
Layer 4 (Swift app) → imports only Layer 3's public interface
These rules are what prevent the architecture from drifting back to spaghetti. Write them down. They'll be verified mechanically in Phase 2.
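The dependency rules can be sketched as an inline Rust module tree (all names here are illustrative, not from any real codebase; in the actual crate each `mod` would be its own directory):

```rust
// Layers live as sibling modules; inner layers never name outer ones.
pub mod domain {
    // Layer 1: note there is no `use crate::` line anywhere in here.
    #[derive(Debug, Clone, PartialEq)]
    pub struct ProjectId(pub u64);
}

pub mod runtime {
    // Layer 2: may only reach into Layer 1.
    use crate::domain::ProjectId;

    pub fn describe(id: &ProjectId) -> String {
        format!("project {}", id.0)
    }
}

pub mod ffi {
    // Layer 3: may use Layers 1 and 2, never the other way around.
    use crate::domain::ProjectId;
    use crate::runtime;

    pub fn describe_for_ffi(raw: u64) -> String {
        runtime::describe(&ProjectId(raw))
    }
}
```

A violation — say, `domain` importing from `ffi` — would show up as a `use crate::ffi` line inside the Layer 1 module, which is exactly what the mechanical check in Phase 2 greps for.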
Each function signature should include parameter names, parameter types, and return types. If the assessment doesn't specify them, infer them from the described responsibilities and the existing codebase. Flag anything you're uncertain about — the user should confirm before you proceed to code.
The assessment will inevitably leave gaps. Common ones:
For example: two services both reference a shared type like `Project`. Where does the type live? In the domain layer that both depend on (dependencies point inward). When in doubt, ask the user. A five-second clarification now prevents an hour of rework later.
If the assessment names specific files, functions, and line numbers (and a good one will), use those as anchors when inferring signatures. Don't search the codebase from scratch when the assessment already points you to CoreRuntime.initialize() in lib.rs:276. Read what's there and design the new signature from it.
Show the module map to the user before writing any code. They should confirm the module boundaries, the dependency rules, and any signatures you flagged as uncertain.
Now you write real code. Every module, every type, every function signature — but no implementation logic. Bodies are stubs.
Start on a dedicated branch:

git checkout -b architecture-scaffold
Where new files go matters — it determines the module graph. Follow these principles:
- New modules get new files. If you're splitting `CoreRuntime` into `RuntimeEngine` + `SetupService`, create `src/runtime/mod.rs` and `src/setup/mod.rs` — don't try to carve them out of `lib.rs` yet.
- Register new files with the build system (`mod` statements, Swift package targets) so the compiler sees them.

Don't write all the skeleton files at once and then compile. Build from the inside out, compiling at each layer. This keeps errors localized and prevents cascading failures.
**Step 1: Domain types (Layer 1).** Write the shared types — structs, enums, type aliases. Compile. These have no dependencies, so if they don't compile, it's a self-contained problem.
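A sketch of what Layer 1 might look like, using the types from the module-map example (the exact fields are illustrative assumptions, not from a real codebase):

```rust
// Hypothetical Layer 1 domain types. There is deliberately no `use crate::`
// line anywhere — this layer imports nothing from the rest of the project.
#[derive(Debug, Clone, PartialEq)]
pub struct Project {
    pub id: u64,
    pub name: String,
    pub path: String,
}

#[derive(Debug, Clone, PartialEq)]
pub enum HookEvent {
    SessionStart,
    SessionEnd,
    ToolUse,
}

#[derive(Debug, Clone)]
pub struct RuntimeSnapshot {
    pub active_project: Option<Project>,
    pub hooks: Vec<HookEvent>,
    pub timestamp_ms: u64,
}
```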
**Step 2: Service contracts (Layer 2).** Write the protocols/traits and their associated types. Write stub implementations. Compile. If this fails, it's either a bad import (trivial) or a type mismatch with the domain layer (architectural — see below).
**Step 3: FFI boundary (Layer 3).** Write the FFI types and the translation layer between Rust services and Swift. This layer must use whatever binding mechanism the project already uses (cbindgen, uniffi, manual C headers, etc.). Compile both sides — Rust and Swift — and verify the generated bindings match.
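On the Rust side, a boundary stub might look like the following sketch (names and error codes are hypothetical; the real binding mechanism should match whatever the project already uses — cbindgen, uniffi, or manual headers):

```rust
use std::os::raw::c_char;

// Hypothetical repr(C) boundary type: the C-compatible layout is what lets
// Swift and Rust agree on the struct across the FFI boundary.
#[repr(C)]
pub struct FFIRuntimeConfig {
    pub storage_path: *const c_char, // borrowed, NUL-terminated
    pub poll_interval_ms: u64,
}

// Errors crossing the boundary are plain integer codes; 0 would mean success.
pub const FFI_ERR_UNIMPLEMENTED: i32 = -1;

/// Stub entry point: C-callable and compilable today, real logic arrives
/// during migration. Returning an error code keeps callers honest.
#[no_mangle]
pub extern "C" fn capacitor_runtime_start(_config: FFIRuntimeConfig) -> i32 {
    FFI_ERR_UNIMPLEMENTED
}
```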
**Step 4: Swift application layer (Layer 4).** Write the Swift protocols, stub implementations, and any SwiftUI-facing types. Compile.
See references/language-patterns.md for full examples in Swift, Rust, and TypeScript. The key pattern is the same everywhere: define the contract (protocol/trait/interface), define all types with all fields, and provide a stub implementation that compiles but panics if called (fatalError in Swift, todo!() in Rust, throw new Error in TypeScript).
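In Rust, that pattern might look like this sketch (type and trait names are taken from the module-map example above, so they're illustrative rather than real):

```rust
// Full types, full signatures, panicking bodies — the whole skeleton pattern.
#[derive(Debug, Clone)]
pub struct RuntimeConfig {
    pub storage_path: String,
    pub poll_interval_ms: u64,
}

#[derive(Debug)]
pub enum RuntimeError {
    AlreadyRunning,
    StorageUnavailable,
}

pub trait RuntimeEngine {
    fn start(&mut self, config: RuntimeConfig) -> Result<(), RuntimeError>;
    fn stop(&mut self) -> Result<(), RuntimeError>;
}

/// Compiles today, panics if called — exactly what a skeleton stub should do.
pub struct RuntimeEngineStub;

impl RuntimeEngine for RuntimeEngineStub {
    fn start(&mut self, _config: RuntimeConfig) -> Result<(), RuntimeError> {
        todo!("ported from the legacy runtime during migration")
    }
    fn stop(&mut self) -> Result<(), RuntimeError> {
        todo!("ported from the legacy runtime during migration")
    }
}
```

The compiler checks every caller against the trait's signatures even though no body exists yet — that's the "hard constraint" the paradigm relies on.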
Include: every public type with all of its fields, every function signature with named parameters and full types, protocol/trait declarations and their conformances, and error types.
Exclude: implementation logic, private helpers, and tests — anything beyond what the contracts need in order to compile.
Not all compiler errors are equal. Distinguish between two kinds:
Incidental errors — typos, missing imports, forgotten visibility modifiers, a type name that doesn't match between declaration and use. Fix these immediately and recompile. They're noise.
Architectural signals — these mean the module map is wrong. Typical examples: a dependency cycle between two modules, a type that can't be expressed without importing from an outer layer, or a signature that forces a dependency the declared rules forbid.
When you hit an architectural signal: stop compiling, update the module map, get user sign-off if the change is significant, then resume. Don't patch around the problem to make the compiler happy — that's how the mess started.
After the skeleton compiles, verify that the declared dependency rules hold. For each layer:
Rust: Grep use statements in each module directory. Every use crate::... path should only reference modules in the same or lower layers.
# Example: verify the Layer 2 runtime module imports only Layer 1 (or itself)
grep -rn "use crate::" core/capacitor-core/src/runtime/ \
  | grep -vE "use crate::(domain|runtime)"
# Should return nothing — any hit is a dependency violation
Swift: Check import statements. Swift modules should only reference their declared dependencies.
If violations are found, they're architectural flaws. Fix the module map, not the imports.
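Under the same assumed directory layout as the module-map example, the per-layer check can be scripted; the paths and allowed sets below are illustrative and should be adapted to the real module map:

```shell
#!/bin/sh
# Hypothetical layer check: each module may only `use crate::` paths from
# its allowed set. Any surviving grep hit is a dependency violation.
set -e

check_layer() {
  dir="$1"; allowed="$2"
  hits=$(grep -rn "use crate::" "$dir" 2>/dev/null \
    | grep -vE "use crate::($allowed)" || true)
  if [ -n "$hits" ]; then
    echo "DEPENDENCY VIOLATION in $dir:"
    echo "$hits"
    exit 1
  fi
}

# Each layer lists itself plus everything below it.
check_layer core/capacitor-core/src/domain  "domain"
check_layer core/capacitor-core/src/runtime "domain|runtime"
check_layer core/capacitor-core/src/setup   "domain|setup"
check_layer core/capacitor-core/src/ffi     "domain|runtime|setup|ffi"
echo "dependency direction: PASS"
```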
The skeleton is "sound" when every build target compiles (e.g. cargo check and swift build both pass), the generated FFI bindings match on both sides, and the dependency-direction check finds no violations.
Record the verification result:
## Skeleton Verification
- Rust (cargo check): PASS
- Swift (swift build): PASS
- FFI bindings: PASS (generated headers match Swift imports)
- Dependency direction: PASS (no violations found)
- Module count: 8 (4 Rust, 4 Swift)
- Protocol/trait count: 6
- Stub implementation count: 6
- FFI boundary types: 12 shared types verified
With a proven skeleton in hand, you now go back to the existing code and create a precise mapping: for each stub in the new architecture, where does the logic come from?
If the assessment names specific files, functions, and line numbers — use them. Don't re-search the codebase for things the assessment already located. For example, if the assessment says CoreRuntime.initialize() in lib.rs:276-340 handles startup, that's your anchor for mapping RuntimeEngine.start().
For stubs where the assessment doesn't point to specific code, then search the existing codebase by responsibility.
Create migration-manifest.md in the project root:
# Migration Manifest
Source assessment: [path to assessment]
Skeleton branch: architecture-scaffold
Generated: [date]
## RuntimeEngineImpl
### start(config:)
- **Source:** `CoreRuntime.initialize()` in `core/capacitor-core/src/lib.rs:276-340`
- **Action:** PORT
- **Confidence:** HIGH
- **Notes:** Currently also does setup validation; that moves to SetupService.
Signature change: takes RuntimeConfig instead of raw path + options.
### stop()
- **Source:** `CoreRuntime.shutdown()` in `core/capacitor-core/src/lib.rs:342-380`
- **Action:** PORT
- **Confidence:** HIGH
- **Notes:** Straightforward extraction, no entangled dependencies.
### currentSnapshot()
- **Source:** `CoreRuntime.get_snapshot()` in `core/capacitor-core/src/lib.rs:382-395`
- **Action:** ADAPT
- **Confidence:** HIGH
- **Notes:** Remove the global state bypass that check_hook_health uses.
The function is simple, but it currently reads from a global rather than
the instance's storage — that's the adaptation.
## SetupServiceImpl
### performSetup(config:)
- **Source:** `CoreRuntime.run_setup()` + `CoreRuntime.validate_config()`
- **Action:** REWRITE
- **Confidence:** MEDIUM
- **Notes:** Logic is scattered across the god object and interleaved with
runtime concerns. Gather requirements from both sources but write fresh.
Pay attention to the error handling in validate_config — it has edge cases
the rewrite needs to preserve.
For each stub, assign exactly one action:
- **PORT** — the existing logic moves over largely unchanged.
- **ADAPT** — the logic moves, but needs modification to fit the new contract.
- **REWRITE** — gather requirements from the old code, but write the implementation fresh.
Rate your confidence in each mapping: HIGH, MEDIUM, or LOW.
LOW-confidence mappings should be flagged for the user to review. They're the ones most likely to cause problems during migration.
Show the migration manifest to the user. Key things they should check: the LOW-confidence mappings, every REWRITE action, and whether the cited source locations match their understanding of the codebase.
When complete, the user has: a compiling skeleton on the architecture-scaffold branch, a verified module map with explicit dependency rules, and a migration manifest that maps every stub to its source logic.
These artifacts are the input to the next phase: actual implementation, either via the architectural-refactor skill or manual development. The skeleton branch becomes the target that all implementation work builds toward, and the migration manifest tells the agents exactly where to find the logic they need.
If the user plans to use the architectural-refactor skill for execution, the migration manifest needs to be converted into that skill's format. Each stub mapping becomes a chunk in the refactor plan:
Group related stubs into single chunks (e.g., all PORT stubs for one module = one chunk). Order by risk: PORT first, ADAPT second, REWRITE third. Each chunk's exit criteria should include the same compilation check used in Phase 2 — if the skeleton still compiles after filling in real logic, the architecture hasn't drifted.