From lattice
Plans whole-codebase architectural transformations from current to target state, producing a living .lattice/transform/plan.md for incremental execution. For audits, roadmaps, migrations, drift assessments.
npx claudepluginhub techygarg/lattice --plugin lattice

This skill uses the workspace's default tool permissions.
Read, apply:
- framework:knowledge-priming -- Load codebase context: language, framework, structure, conventions (always)
- framework:architecture -- Architectural audit lens and target architecture guardrails (always)
- framework:domain-driven-design -- Strategic DDD only: bounded contexts, domain seams, core vs supporting subdomains (conditional: only when domain complexity warrants it)
- framework:collaborative-judgment -- Surface judgment calls with structured options during co-design rounds (always)
- framework:context-anchoring -- Write and maintain .lattice/transform/plan.md as a living document (always)

Check for an existing plan first. If .lattice/transform/plan.md already exists:
If no existing plan: proceed from Step 2.
Check for .lattice/config.yaml. If present, load it — read knowledge-base.md and architecture.md from .lattice/standards/ if they exist. These shape the audit lens and the to-be proposal.
If no .lattice/ config exists, offer to run lattice-init first (recommended — the architecture standards refiner produces context that makes the to-be proposal significantly sharper). If the user declines, proceed with defaults inferred from the codebase scan.
Use framework:knowledge-priming to establish codebase identity before any analysis.
Do not ask any questions yet. Read the codebase and form a hypothesis first.
A large codebase cannot be read exhaustively. This is signal extraction, not a full read. Execute in order; stop reading a module once its responsibility, dependencies, and layer fit are clear.
Scanning protocol (10 steps, ~15–25 targeted reads total):
1. Directory tree (3 levels deep) — reveals intended organization, whether layers exist as explicit directories, naming conventions across the entire codebase. Do this before opening any file.
2. Dependency manifests — package.json, pom.xml, go.mod, requirements.txt. Language, framework, key external dependencies.
3. Architecture documents — README.md, ARCHITECTURE.md, docs/, ADR directories. The intended architecture often lives here. The gap between intention and reality is itself a finding.
4. Archaeology — before analysing flows, reduce scope:
5. Seam identification and viability — look for natural boundaries where one side can change without the other knowing:
6. Import and dependency patterns — grep import statements across all source files. Do not open full bodies yet. Reveals dependency direction, load-bearing modules, and layer violations cheaply.
7. Entry points — 3–5 files: routes, controllers, CLI handlers, event consumers. Reveals the outermost layer.
8. Interface and contract files — interfaces, abstract classes, ports. Reveals intended architectural boundaries, whether or not they are consistently followed.
9. One representative file per top-level module — confirm responsibility and catch what the import grep missed.
10. Stop. Form the hypothesis:
If a module's responsibility, dependencies, and layer fit remain unclear after Step 9, read one additional representative file from that module before stopping. This is the only permitted extension of the scan.
Skip entirely: full method implementations, test files, generated code, vendor directories, migration files, static assets. Never open a file when the import grep already answered the question.
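The import-pattern step of the protocol can be sketched mechanically. A minimal sketch, assuming a Python codebase laid out as one directory per top-level module; the regex, the `.py` glob, and the skip list are illustrative and should be adapted to whatever language step 2 actually found:

```python
import re
from collections import defaultdict
from pathlib import Path

# Matches "import foo" and "from foo.bar import baz" at line start.
IMPORT_RE = re.compile(r"^\s*(?:from|import)\s+([\w.]+)", re.MULTILINE)

def import_edges(root: str) -> dict[str, set[str]]:
    """Map each top-level module to the modules it imports."""
    edges: dict[str, set[str]] = defaultdict(set)
    for path in Path(root).rglob("*.py"):
        if {"tests", "vendor"} & set(path.parts):
            continue  # skipped per the protocol: tests and vendor code
        module = path.relative_to(root).parts[0]
        for target in IMPORT_RE.findall(path.read_text(errors="ignore")):
            top = target.split(".")[0]
            if top != module:
                edges[module].add(top)
    return edges
```

The resulting edge map is enough to see dependency direction and load-bearing modules without opening a single function body.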
In practice: 5–7 questions. Skip any the scan already answered — do not ask what can be inferred. The header "8–10" is the ceiling, not the target. A sharp 5-question interview is better than a thorough 10-question form.
Present only the relevant questions grouped by type:
Context (what the scan cannot tell):
Intent (what the team wants):
Clarification (only for genuine scan ambiguities): Only ask if the scan produced something that cannot be interpreted without team context. Examples:
"services/ mixing business logic and DB calls — is this drift or intentional?"

Present the architectural snapshot from the scan. The goal is a shared, accurate map — not a critique.
Present:
graph TD
API --> Services
Services --> DB["Database Models"]
Services --> Domain
Domain --> DB
style DB fill:#f96
After presenting the snapshot, ask specifically: "Does this map accurately reflect how the codebase is structured today? What's missing, wrong, or intentional that I've marked as a violation?" Update the map based on feedback. Repeat until the user explicitly confirms the map is accurate.
Do NOT advance to Step 5 until the user explicitly confirms the current state map is correct. This gate is non-negotiable — a target architecture designed against an inaccurate current state produces a gap analysis full of phantom problems.
Use framework:collaborative-judgment to surface any genuine ambiguity that cannot be resolved by the scan alone.
Propose a target architecture tailored to this codebase — not a generic template. Name specific layers, specific module moves, specific dependency rules derived from what was actually found.
Carry the drift/mismatch determination forward:
Minimum viable target principle: Propose the simplest structure that resolves the stated pain. The test: is each slice an improvement in itself, or does the target only pay off at the very end? A target that requires six months before anything is better is the wrong target. Resist the pull toward the theoretically perfect architecture.
Apply framework:architecture guardrails to validate the proposal. The non-negotiable rule to enforce: domain must have zero dependency on infrastructure — infrastructure depends on domain. Every other dependency rule follows from this inversion. Flag any proposed structure that violates it.
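The inversion rule can be made concrete: the domain declares the port, the infrastructure supplies the adapter. A minimal Python sketch under that assumption; the `OrderRepository` name and the module layout in the comments are illustrative, not taken from any real codebase:

```python
from typing import Protocol

# domain/ports.py — the domain owns the contract and imports nothing
# from infrastructure.
class OrderRepository(Protocol):
    def save(self, order_id: str, total: float) -> None: ...
    def total_for(self, order_id: str) -> float: ...

# domain/checkout.py — business logic depends only on the port.
def checkout(repo: OrderRepository, order_id: str, total: float) -> float:
    repo.save(order_id, total)
    return repo.total_for(order_id)

# infrastructure/memory_repo.py — the adapter depends on the domain,
# never the other way around.
class InMemoryOrderRepository:
    def __init__(self) -> None:
        self._orders: dict[str, float] = {}

    def save(self, order_id: str, total: float) -> None:
        self._orders[order_id] = total

    def total_for(self, order_id: str) -> float:
        return self._orders[order_id]
```

Any proposed target in which `domain/` would need an import from `infrastructure/` fails this shape and should be flagged.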
Apply framework:domain-driven-design (strategic level only) when: the codebase has multiple distinct business capabilities, different parts change at different rates, different teams own different areas, or different data ownership boundaries exist. If none of these apply, skip DDD — a layered architecture without explicit bounded contexts is the right target.
The proposal covers:
graph TD
API --> Application
Application --> Domain
Infrastructure --> Domain
API --> Infrastructure
style Domain fill:#6f9
src/
├── api/ ← HTTP handlers, request/response models
├── application/ ← use cases, orchestration
├── domain/ ← core business logic, interfaces (ports)
└── infrastructure/ ← DB, external services, adapters
Annotate key representative files per layer. Do not list every file.
The plan is a hypothesis, not a specification. State this explicitly in the proposal: the target architecture is the best current understanding and will be refined as execution reveals new information.
After presenting the proposal, ask specifically: "Does this target architecture reflect what you want to reach? Are there constraints, preferences, or non-negotiables that should change this proposal?" Refine until the target is agreed and written down. This is the north star for all subsequent slices.
Do NOT advance to Step 6 until the user explicitly confirms the target architecture. Every execution decision — strategy, sequencing, slice scope — derives from this agreement. Proceeding without it produces a plan with no stable foundation.
This step is also a valid stopping point. If the team only needs current + target architecture agreed, the session can end here with a partial plan document. The gap analysis and slice backlog can be built in a follow-up session by resuming from Step 6.
With both states agreed, derive the path.
Gap analysis — structural items only:
Do not include tactical items (anemic models, poor naming, missing tests) in the gap analysis. These are execution concerns handled by code-forge and refactor-safely when slices run. Including them dilutes the plan and makes prioritisation impossible.
Transformation strategy — choose one approach and justify it using the selection criteria below:
State: sequencing constraints (what must happen before what), which tracks can run in parallel, where temporary adapters are needed and when they must be removed. Adapters that are not given an explicit removal condition become permanent — name the removal trigger for each one.
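One way to keep a removal trigger explicit is to name it in the adapter itself. A hypothetical sketch — the class, the slice name in the docstring, and the legacy signature are all illustrative:

```python
import warnings

class LegacyBillingAdapter:
    """Temporary bridge from the old billing function to the new port.

    REMOVAL TRIGGER: delete this class once the billing-extraction slice
    completes and no caller imports the legacy billing module.
    """

    def __init__(self, legacy_charge):
        self._legacy_charge = legacy_charge  # old callable being wrapped

    def charge(self, customer_id: str, amount_cents: int) -> bool:
        warnings.warn(
            "LegacyBillingAdapter is transitional; see its removal trigger.",
            DeprecationWarning,
            stacklevel=2,
        )
        # Translate the new call shape (cents) into the legacy one (dollars).
        return self._legacy_charge(customer_id, amount_cents / 100)
```

A deprecation warning plus a named trigger in the docstring makes the adapter loud enough that it cannot quietly become permanent.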
Present the strategy. Use framework:collaborative-judgment to resolve the approach if there are genuine tradeoffs between options. Ask specifically: "Does this migration approach and sequencing match your team's capacity and risk tolerance?"
Do NOT advance to Step 7 until the user explicitly confirms the execution strategy. The slice backlog is derived from the strategy — building slices against an unagreed strategy produces a backlog the team will not follow.
Derive slices from the gap analysis and agreed strategy. Every slice must map to a specific structural delta. No slice exists without a reason.
Each slice:
**[Slice Name]**
- Scope: which module(s), layer(s), or bounded context
- Structural change: what moves, what gets created, what gets deleted
- Pre-conditions: which slices must complete first
- What still works after this slice: explicit statement of which system
capabilities remain intact. Example: "All existing API endpoints respond
correctly; payment processing continues to function; user authentication
is unaffected." If this cannot be answered, the slice is too large — split it.
- Risk: Low / Medium / High — with rationale
- Success criteria: structural (e.g., "Domain layer has no imports from Infrastructure")
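A structural success criterion like the example above can be checked mechanically rather than by inspection. A hand-rolled sketch, assuming a Python codebase with the layer names from the example target tree; in practice a dedicated tool (e.g. import-linter) covers the same ground:

```python
import re
from pathlib import Path

def find_violations(src: str, clean_layer: str = "domain",
                    forbidden: str = "infrastructure") -> list[str]:
    """List files in the clean layer that import from the forbidden one."""
    pattern = re.compile(rf"^\s*(?:from|import)\s+{forbidden}\b", re.MULTILINE)
    violations = []
    for path in Path(src, clean_layer).rglob("*.py"):
        if pattern.search(path.read_text(errors="ignore")):
            violations.append(str(path))
    return violations
```

An empty result is the slice's structural success criterion, phrased as code.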
Order slices: (1) unblocks others first, (2) highest severity, (3) lowest blast radius.
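The three ordering rules can be applied mechanically once pre-conditions are recorded: topological order satisfies (1), and severity and blast radius break ties for (2) and (3). A sketch assuming slices are plain dicts; the field names `pre`, `severity`, and `blast_radius` are illustrative, not a required schema:

```python
from graphlib import TopologicalSorter

def order_slices(slices: dict[str, dict]) -> list[str]:
    """Order by pre-conditions first, then severity desc, blast radius asc."""
    ts = TopologicalSorter({name: s.get("pre", []) for name, s in slices.items()})
    ts.prepare()
    ordered: list[str] = []
    while ts.is_active():
        # Among slices whose pre-conditions are met, do the most severe,
        # least risky ones first.
        ready = sorted(
            ts.get_ready(),
            key=lambda n: (-slices[n]["severity"], slices[n]["blast_radius"]),
        )
        for name in ready:
            ordered.append(name)
            ts.done(name)
    return ordered
```

This is only a tie-breaking heuristic; the agreed strategy from Step 6 still overrides it where the two conflict.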
Flag any slice where execution would change a public API or external contract — these require explicit user sign-off before execution begins.
Present the full slice backlog to the user. Ask: "Does this backlog accurately represent the transformation work? Are any slices missing, too large, or incorrectly sequenced?" Refine until agreed.
Do NOT advance to Step 8 until the user confirms the slice backlog. The plan document is the written record of this agreement — writing it before agreement produces a document nobody trusts.
Produce .lattice/transform/plan.md using framework:context-anchoring.
Required structure:
# Transformation Plan — [Project Name]
## Codebase Identity
Language, framework, size, team context, delivery constraints.
## Archaeology Findings
Dead code identified. Duplicates to reconcile. Implicit coupling.
Hidden integration points. Quick wins before transformation begins.
## Domain Map
Core domain. Bounded contexts and seams. Core / supporting / generic subdomains.
## Current Architecture
Drift or mismatch determination with rationale.
Layer structure and module inventory.
[Mermaid diagram — layers and violations]
Key violations — specific and named.
## Target Architecture
Architecture style and rationale.
Layer definitions and dependency rules.
[Mermaid diagram — clean target]
[Annotated target repository structure tree]
[Bounded context map — if applicable]
## Gap Analysis
Must change / Should change / Explicitly deferred / Leave alone.
## Transformation Strategy
Approach chosen and rationale.
Sequencing constraints. Parallel vs. sequential tracks.
Adapter/bridge strategy and removal criteria.
## Slice Backlog
- [ ] Slice 1 — scope, structural change, pre-conditions, what still works, risk, success criteria
- [ ] Slice 2 — ...
## Progress Log
Updated as slices complete. Date, what changed, decisions made, new findings.
The document must be complete enough that a new AI session — or a new team member — can read it alone and know exactly what was decided, what has been done, and what to do next. No re-briefing required.
State explicitly in the document: "This target architecture is the best current understanding. It will be refined as execution reveals new information."