# Codebase Map — Brownfield Codebase Analysis & Spec Generation

From hatch3r. Analyzes a brownfield codebase via static analysis to reverse-engineer modules, dependencies, tech debt, business domain, and architecture. Generates specs in docs/specs/, ADRs, and AGENTS.md.

Install: `npx claudepluginhub hatch3r/hatch3r`
Analyze an existing codebase to reverse-engineer project documentation across two dimensions: business domain analysis and technical architecture analysis. Discovers modules, dependencies, conventions, tech stack, technical debt, business logic, domain models, and production readiness using parallel analyzer sub-agents. Outputs structured specs to docs/specs/business/ (business domain, market context, production readiness) and docs/specs/technical/ (modules, conventions, stack, debt), plus inferred architectural decision records to docs/adr/. Optionally generates a root-level AGENTS.md as the project's "README for agents." This command is purely read-only until the final write step — all analysis is static (file reading, pattern matching). Works for any language or framework.
| Stage | Agent(s) | Parallel | Required |
|---|---|---|---|
| 1. Analysis | hatch3r-researcher analyzers (6 parallel: module, conventions, tech stack, concerns/debt, business domain, production readiness) | Yes | Yes |
| 2. Document Generation | hatch3r-docs-writer (parallel: technical spec, business spec, ADRs, health report) | Yes | Yes |
| 3. AGENTS.md | hatch3r-docs-writer (AGENTS.md generation/rework) | No | Yes |
Read hatch3r-board-shared at the start of the run if available. It contains GitHub Context, Project Reference, and tooling directives. While this command does not perform board operations directly, the shared context establishes owner/repo and tooling hierarchy for any follow-up commands.
Execute these steps in order. Do not skip any step. Ask the user at every checkpoint marked with ASK. When in doubt, ASK — it is better to ask one question too many than to make one wrong assumption. Discovery questions are never wasted.
Classify the codebase analysis request before delegating:
If Tier 1, run the reduced analyzer set and skip Step 5 (ADRs) unless decisions are obvious. If Tier 2, run the standard pipeline below. If Tier 3, run the full pipeline including the production-readiness scorecard and confirm scope with the user before file writes.
Perform a lightweight scan of the project root to build a project fingerprint, then gather business context.
Scan for:
| Signal | Ecosystem |
|---|---|
package.json | Node.js / JavaScript / TypeScript |
Cargo.toml | Rust |
go.mod | Go |
requirements.txt, pyproject.toml, setup.py, Pipfile | Python |
Gemfile | Ruby |
pom.xml, build.gradle | Java / Kotlin |
*.csproj, *.sln | .NET / C# |
composer.json | PHP |
pubspec.yaml | Dart / Flutter |
mix.exs | Elixir |
Also detect: Dockerfile, docker-compose.yml, .github/workflows/, .gitlab-ci.yml, Makefile, tsconfig.json, .eslintrc.*, .prettierrc.*, turbo.json, nx.json, lerna.json, pnpm-workspace.yaml.
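For illustration, the marker-file table above could be encoded as a simple lookup. A minimal Python sketch, not part of the command itself (the glob-style .NET markers *.csproj/*.sln are omitted for brevity):

```python
from pathlib import Path

# Marker files from the table above, mapped to their ecosystems.
ECOSYSTEM_MARKERS = {
    "package.json": "Node.js / JavaScript / TypeScript",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
    "requirements.txt": "Python",
    "pyproject.toml": "Python",
    "setup.py": "Python",
    "Pipfile": "Python",
    "Gemfile": "Ruby",
    "pom.xml": "Java / Kotlin",
    "build.gradle": "Java / Kotlin",
    "composer.json": "PHP",
    "pubspec.yaml": "Dart / Flutter",
    "mix.exs": "Elixir",
}

def detect_ecosystems(root: str) -> set:
    """Return the ecosystems whose marker files exist at the project root."""
    root_path = Path(root)
    return {
        eco
        for marker, eco in ECOSYSTEM_MARKERS.items()
        if (root_path / marker).is_file()
    }
```

A monorepo may match several ecosystems at once, which is why this returns a set rather than a single label.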
From config files and top-level imports, identify:
- File tree overview (excluding node_modules/, vendor/, dist/, build/, .git/)
- docs/specs/ — if exists, note contents (including business/ and technical/ subdirectories)
- docs/adr/ — if exists, note contents
- README.md, CONTRIBUTING.md, ARCHITECTURE.md, or similar
- .agents/hatch.json — if exists, this project already has hatch3r configuration
- AGENTS.md — if exists, note its contents

If docs/specs/ or docs/adr/ already exist:
ASK: "Existing documentation found at docs/specs/ and/or docs/adr/. (a) Supplement — keep existing files and add new ones, (b) Replace — archive existing and generate fresh, (c) Abort."
Project Fingerprint
===================
Root: {project root path}
Languages: {language1} ({N files}), {language2} ({N files}), ...
Frameworks: {framework1}, {framework2}, ...
Databases: {db1}, {db2}, ... (or "None detected")
Package Mgr: {npm/cargo/pip/...}
Build Tools: {webpack/vite/tsc/make/...}
CI/CD: {GitHub Actions/GitLab CI/...} (or "None detected")
Infra: {Docker/K8s/Terraform/...} (or "None detected")
Project Size: {N files}, ~{N}K LOC
Monorepo: {yes — N workspaces / no}
Existing Docs: {docs/specs/ (N files), docs/adr/ (N files) / None}
AGENTS.md: {found / not found}
ASK: "Should I analyze the full product, or only specific parts? If specific, list the directories, modules, or domains to focus on. Options: (a) full codebase analysis, (b) specific directories only — list them, (c) exclude directories — list them (e.g., vendor, generated code)."
ASK: "To calibrate the analysis depth and recommendations to your situation, tell me about your company/project stage:
Cache the stage assessment. It drives stage-adaptive depth throughout the analysis:
Before asking, attempt to reverse-engineer business context from the codebase: look for payment/billing code, user roles, analytics events, domain models, README descriptions, and marketing copy in the repo.
Present what was inferred, then ASK to fill gaps:
"Based on the codebase, I inferred the following business context. Please confirm or correct, and fill in any gaps:
Any additional business context I should know?"
If running as part of a pipeline after another hatch3r command that already gathered this context, check for .hatch3r-session.json. If found, pre-fill company stage and business context from the session file. Confirm with the user rather than re-asking.
Launch one analyzer sub-agent per domain below in parallel — as many as the platform supports — using the Task tool with subagent_type: "generalPurpose". Each analyzer receives the project fingerprint, confirmed scope, company stage assessment, and business context from Step 1.
Each sub-agent prompt must include:
- Documentation lookup tools (resolve-library-id then query-docs) for understanding framework conventions and library APIs

**Sub-Agent 1: Module Analyzer**

Prompt context: Project fingerprint, confirmed scope.
Task:
- Module boundaries from directory structure (e.g., src/auth/, src/api/, src/models/)
- Framework module conventions (app/ routes, Django apps, Rails controllers/models)

Output format:
## Module Map
| Module | Path | Type | Description | Key Exports |
| ------ | ---- | ---- | ----------- | ----------- |
| ... | ... | ... | ... | ... |
## Internal Dependency Graph
{module} → {module} (via {import path})
...
## Entry Points
- {path} — {description}
## Shared Utilities
- {path} — used by {N} modules
## Concerns
- Circular: {A} ↔ {B}
- Orphaned: {path} (no importers)
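The Concerns section's circular and orphan checks can be derived from the import edges in the dependency graph. A hedged sketch (the edge shape and module names are illustrative, not the analyzer's real data model; only the simplest two-module cycle is handled):

```python
def find_concerns(edges, modules, entry_points=()):
    """Report two-module import cycles and orphaned modules.

    `edges` is a list of (a, b) pairs meaning "a imports b".
    """
    edge_set = set(edges)
    imported = {b for _, b in edge_set}
    # A <-> B: each imports the other (the simplest circular-dependency case).
    cycles = sorted({tuple(sorted(pair)) for pair in edge_set
                     if (pair[1], pair[0]) in edge_set})
    # Orphaned: nothing imports the module and it is not an entry point.
    orphans = sorted(m for m in modules
                     if m not in imported and m not in set(entry_points))
    return cycles, orphans
```

Real cycle detection would walk longer chains (A → B → C → A), e.g. with a depth-first search; the mutual-import check above only catches the direct case.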
**Sub-Agent 2: Conventions Analyzer**

Prompt context: Project fingerprint, confirmed scope.
Task:
- Test conventions: file locations (*.test.*, *.spec.*, __tests__/), frameworks used, fixture patterns

Output format:
## Conventions
### Naming
- Files: {pattern}
- Functions: {pattern}
- Classes: {pattern}
- Constants: {pattern}
### File Structure
- {pattern description}
### Exports
- {pattern description}
## Architectural Patterns
### Architecture Style
{MVC / Clean Architecture / Layered / Modular / Monolithic / ...}
Evidence: {file paths and patterns observed}
### Error Handling
- {pattern description with examples}
### State Management
- {pattern description}
### API Design
- {pattern description}
### Data Access
- {pattern description}
### Testing
- Framework: {jest/vitest/pytest/...}
- Location: {co-located / separate test directory}
- Naming: {pattern}
- Coverage: {estimated from test file presence}
## Code Style
- Indentation: {tabs/spaces, width}
- Quotes: {single/double}
- Semicolons: {yes/no}
- Notable: {any other consistent patterns}
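The Code Style fields above can be estimated by majority vote over raw source text. A rough heuristic sketch, assuming file contents are already read (it deliberately ignores strings and comments, so it can miscount):

```python
def infer_style(sources):
    """Estimate indentation and quote style from raw source texts."""
    tabs = spaces = single = double = 0
    for text in sources:
        for line in text.splitlines():
            # Count which indentation character opens indented lines.
            if line.startswith("\t"):
                tabs += 1
            elif line.startswith(" "):
                spaces += 1
        single += text.count("'")
        double += text.count('"')
    return {
        "indentation": "tabs" if tabs > spaces else "spaces",
        "quotes": "single" if single > double else "double",
    }
```

In practice a linter or formatter config (.editorconfig, .prettierrc) is more authoritative than counting, so the heuristic should only fill gaps when no config exists.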
**Sub-Agent 3: Tech Stack Analyzer**

Prompt context: Project fingerprint, confirmed scope.
Task:
- Environment configuration (.env.example, config files — never read actual .env files)

Output format:
## Dependencies
### Runtime ({N} packages)
| Package | Version | Purpose | Health |
| ------- | ------- | ------- | ------ |
| ... | ... | ... | ... |
### Dev ({N} packages)
| Package | Version | Purpose |
| ------- | ------- | ------- |
| ... | ... | ... |
## Build Pipeline
- Tool: {webpack/vite/tsc/esbuild/...}
- Scripts: {key npm scripts or Makefile targets}
- Output: {dist directory, bundle format}
## CI/CD
- Platform: {GitHub Actions / GitLab CI / ...}
- Stages: {lint → test → build → deploy}
- Deploy Target: {Vercel / AWS / GCP / self-hosted / ...}
## Environment
- Config approach: {env vars / config files / ...}
- Required env vars: {list from .env.example}
## Infrastructure
- Containerized: {yes/no}
- IaC: {Terraform / CloudFormation / none}
- Cloud: {AWS / GCP / Azure / none detected}
## Health Assessment
- Outdated: {N packages need updates}
- Missing tooling: {linter/formatter/type checker not configured}
- Security: {known advisory matches from lockfile}
**Sub-Agent 4: Concerns & Debt Analyzer**

Prompt context: Project fingerprint, confirmed scope.
Task:
- TODO, FIXME, HACK, XXX, WORKAROUND comments — capture location and content
- @ts-ignore, @ts-expect-error, # type: ignore, # noqa, // nolint — suppression markers
- any type usage (TypeScript), untyped parameters, missing return types

Output format:
## Technical Debt Register
### Critical (address immediately)
| # | Type | Location | Description | Effort |
| - | ---- | -------- | ----------- | ------ |
| 1 | Security | {path}:{line} | {description} | {S/M/L} |
### High (address soon)
| # | Type | Location | Description | Effort |
| - | ---- | -------- | ----------- | ------ |
### Medium (plan for)
| # | Type | Location | Description | Effort |
| - | ---- | -------- | ----------- | ------ |
### Low (nice to have)
| # | Type | Location | Description | Effort |
| - | ---- | -------- | ----------- | ------ |
## Summary
- TODO/FIXME count: {N}
- Type suppressions: {N}
- Dead code files: {N}
- Functions >50 LOC: {N}
- Files >300 LOC: {N}
- Untested modules: {N} of {total}
- Security concerns: {N}
- Performance hotspots: {N}
## Top 5 Debt Items
1. {item} — {severity} — {effort}
2. ...
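The marker sweep behind this register can be sketched with a regex scan. The severity mapping below is an assumption for illustration only, not this command's actual rubric:

```python
import re

MARKER_RE = re.compile(r"\b(TODO|FIXME|HACK|XXX|WORKAROUND)\b[:\s]*(.*)")
# Illustrative severities; the real triage is done by the analyzer's judgment.
SEVERITY = {"FIXME": "high", "HACK": "high", "XXX": "medium",
            "WORKAROUND": "medium", "TODO": "low"}

def scan_debt(path, text):
    """Return one debt record per marker comment: location, content, severity."""
    items = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        m = MARKER_RE.search(line)
        if m:
            items.append({
                "location": f"{path}:{lineno}",
                "type": m.group(1),
                "description": m.group(2).strip(),
                "severity": SEVERITY[m.group(1)],
            })
    return items
```

Records in this shape can be grouped by severity to fill the Critical/High/Medium/Low tables directly.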
**Sub-Agent 5: Business Domain Analyzer**

Prompt context: Project fingerprint, confirmed scope, company stage, business context from Step 1h.
Task: Reverse-engineer the business logic embedded in the codebase. Use the business context from Step 1h to guide interpretation. Use web search to research the product's industry, domain patterns, and competitor approaches where helpful.
Output format:
## Business Domain Map
### Domain Entities
| Entity | Location | Type | Relationships | Business Significance |
| ------ | -------- | ---- | ------------- | --------------------- |
| {name} | {path} | aggregate root / entity / value object | {relations} | {why it matters} |
### Entity Relationship Diagram
{Mermaid ER diagram or textual description of key entity relationships}
## Business Rules Register
| # | Rule | Location | Type | Enforcement | Confidence |
| - | ---- | -------- | ---- | ----------- | ---------- |
| 1 | {rule description} | {path}:{line} | validation / state machine / authorization / pricing | {how enforced} | high/medium/low |
## Revenue Flow
### Payment & Billing
- Payment processor: {Stripe / PayPal / custom / none detected}
- Billing model: {subscription / one-time / usage-based / none detected}
- Key paths: {list of file paths involved in payment flow}
### Monetization Touchpoints
- {touchpoint}: {path} — {description}
## User Journey Code Map
| Journey | Entry Point | Key Steps | Exit/Completion | Gaps |
| ------- | ----------- | --------- | --------------- | ---- |
| {journey name} | {path} | {step flow through code} | {completion path} | {missing steps} |
## Business Metrics & Analytics
| Event/Metric | Location | Provider | What It Tracks |
| ------------ | -------- | -------- | -------------- |
| {event name} | {path} | {analytics provider} | {description} |
## Business Invariants
| Invariant | Location | Enforcement | Risk if Violated |
| --------- | -------- | ----------- | ---------------- |
| {rule} | {path} | {how enforced} | {business impact} |
## Uncertainties
- {business logic that is unclear from static analysis — marked for human review}
**Sub-Agent 6: Production Readiness Analyzer**

Prompt context: Project fingerprint, confirmed scope, company stage from Step 1g.
Task: Evaluate infrastructure maturity relative to the company stage. Use web search to research current best practices for the detected stack, cloud provider recommendations, and SLA benchmarks for the industry. Grade each dimension relative to what is appropriate for the company's stage — a seed-stage startup has different production readiness needs than a Series B company.
Output format:
## Production Readiness Scorecard
Company Stage: {stage from Step 1g}
Grading Baseline: {what "good" looks like for this stage}
### Deployment Maturity
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}
### Observability
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}
### Scaling Readiness
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}
### Reliability
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}
### Incident Readiness
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}
### Cost Efficiency
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}
### Data Management
- Grade: {A/B/C/D/F} (for stage)
- Current state: {description}
- Gap to stage-appropriate: {what's missing}
- Recommendation: {next step}
## Overall Production Readiness
- Overall Grade: {A/B/C/D/F} (for stage)
- Launch Readiness: {ready / not ready — list blockers}
- Top 3 Production Risks:
1. {risk} — {mitigation}
2. {risk} — {mitigation}
3. {risk} — {mitigation}
## Stage-Appropriate Recommendations
{Ordered list of actions calibrated to the company stage — do not recommend enterprise-grade solutions for pre-revenue startups}
Collect all sub-agent outputs and produce a merged codebase map with both business and technical dimensions.
Codebase Map Summary
====================
Architecture: {detected pattern} (confidence: high/medium/low)
Module Count: {N} modules
Entry Points: {N}
Dependency Health: {healthy/warning/critical} ({N outdated, N vulnerable})
Test Coverage: {estimated} ({N}/{M} modules have tests)
Technical Debt: {low/medium/high} ({N items: X critical, Y high, Z medium})
Conventions: {consistent/mostly consistent/inconsistent}
Business Domain:
Domain Entities: {N} entities identified
Business Rules: {N} rules mapped (confidence: high/medium/low)
Revenue Paths: {payment provider} — {billing model}
User Journeys: {N} journeys traced
Analytics Coverage: {comprehensive/partial/minimal/none}
Production Readiness:
Overall Grade: {A/B/C/D/F} (for {stage})
Launch Readiness: {ready/not ready}
Top Gaps: {list}
Key Findings:
1. {finding} — {impact}
2. {finding} — {impact}
3. {finding} — {impact}
Cross-Reference Alerts:
- Convention "{X}" violated in {N} locations (see debt items #...)
- Module "{Y}" depends on outdated package "{Z}" (see tech stack)
- Business rule "{R}" in revenue path has no test coverage
- Production gap "{G}" blocks business milestone "{M}"
- ...
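One of these alerts, business rules in revenue paths without test coverage, amounts to a set join over analyzer outputs. A sketch with assumed field shapes (the real merge works over the analyzers' markdown tables):

```python
def untested_revenue_rules(rules, revenue_paths, tested_paths):
    """Flag business rules located in revenue-path files that no test covers.

    `rules` is a list of (rule_name, file_path) pairs from the Business Domain
    Analyzer; `revenue_paths` comes from its revenue-flow map and
    `tested_paths` from the Conventions Analyzer's test inventory.
    """
    revenue, tested = set(revenue_paths), set(tested_paths)
    return [
        f'Business rule "{name}" in revenue path has no test coverage ({path})'
        for name, path in rules
        if path in revenue and path not in tested
    ]
```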
If any sub-agent failed, present partial results and note the gap.
ASK: "Here is the merged codebase map with business and technical dimensions. Review the findings. (a) Confirm and proceed to spec generation, (b) flag corrections — list what needs adjusting, (c) re-run a specific analyzer with adjusted scope, (d) I have additional business context to add."
From the merged analyzer outputs, draft spec documents in two separate directories: business specs and technical specs. These specs document what exists, not what should be. Mark any gaps or uncertainties explicitly.
Write technical specs to docs/specs/technical/:

- docs/specs/technical/00_glossary.md — entity IDs follow {prefix}_{name} (e.g., mod_auth, evt_user_login, ent_user)
- docs/specs/technical/01_overview.md
- docs/specs/technical/02_{module_name}.md (numbered sequentially), following this template:

# {Module Name}
> Status: Inferred — reverse-engineered from codebase analysis
## Overview
{What this module does, based on code analysis}
## Current State
### Structure
{Directory layout, key files}
### Key Components
{Classes, functions, exports with brief descriptions}
### Integration Points
{How this module connects to other modules — imports, exports, API contracts}
## Patterns
{Module-specific conventions and patterns observed}
## Test Coverage
{Existing tests, estimated coverage, gaps}
## Technical Debt
{Debt items from the Concerns Analyzer specific to this module}
## Uncertainties
{Anything unclear from static analysis — marked for human review}
Write business specs to docs/specs/business/:

- docs/specs/business/00_business_glossary.md — entity IDs follow {prefix}_{name} (e.g., biz_subscription, evt_payment_completed, dom_pricing_tier)
- docs/specs/business/01_business_overview.md, following this template:

# {Project Name} — Business Overview
> Status: Inferred — reverse-engineered from codebase analysis and user input
## Business Model
{Business model type, revenue model — from Step 1h and Business Domain Analyzer}
## Market Context
{Target market, ICP, competitors — from Step 1h}
## Value Proposition
{Inferred from code: what does the product do for users?}
## Personas & User Segments
{Inferred from auth roles, user types, permission models in code}
| Persona | Code Evidence | Primary Goals | Key Flows |
| ------- | ------------- | ------------- | --------- |
| {name} | {user type/role in code} | {goals} | {flows} |
## Key Business Metrics
{From Business Domain Analyzer — analytics events, KPI computation}
| Metric | Tracked | Location | Notes |
| ------ | ------- | -------- | ----- |
| {metric} | yes/inferred/not tracked | {path} | {notes} |
## Company Stage Context
{From Step 1g — stage, team, users, funding}
- docs/specs/business/02_{domain}.md (one per business domain), following this template:

# {Business Domain Name}
> Status: Inferred — reverse-engineered from codebase analysis
## Domain Overview
{What this business domain covers}
## Business Rules
| # | Rule | Enforcement | Test Coverage | Confidence |
| - | ---- | ----------- | ------------- | ---------- |
| 1 | {rule} | {how} | {covered/gap} | {high/med/low} |
## User Journeys
| Journey | Steps | Code Path | Completeness |
| ------- | ----- | --------- | ------------ |
| {name} | {steps} | {file paths} | {complete/gaps noted} |
## Domain Invariants
| Invariant | Enforcement | Business Impact if Violated |
| --------- | ----------- | --------------------------- |
| {rule} | {how enforced} | {impact} |
## Revenue Relevance
{How this domain relates to revenue — payment flows, conversion, retention}
## Uncertainties
{Business logic unclear from static analysis — marked for human review}
- docs/specs/business/03_production_readiness.md — the full production readiness scorecard from Sub-Agent 6, formatted for the business audience, emphasizing the business impact of each gap rather than purely technical descriptions.
Present the list of specs to be generated with a brief summary of each, organized by business and technical.
ASK: "Here are the specs I will generate across both business and technical dimensions. Review the outlines:
Technical specs (docs/specs/technical/):
- 00_glossary.md — {N} technical entities
- 01_overview.md — architecture & stack overview

Business specs (docs/specs/business/):
- 00_business_glossary.md — {N} business entities
- 01_business_overview.md — business model & market context
- 03_production_readiness.md — production scorecard

(a) Confirm and proceed, (b) adjust module/domain boundaries or naming, (c) add/remove items."
From conventions, architecture patterns, and tech stack choices discovered by the analyzers, infer architectural decisions. Include both technical and business-driven decisions.
Look for:
For each inferred decision, draft docs/adr/0001_{decision_slug}.md (numbered sequentially):
# {N}. {Decision Title}
**Date:** Inferred {today's date}
**Status:** Inferred
**Scope:** {Technical / Business / Both}
> This ADR was reverse-engineered from codebase analysis, not from original
> decision documentation. Review and change status to "Accepted" if accurate.
## Context
{What problem or need this decision addresses, inferred from code patterns}
## Decision
{The decision that was made, inferred from what exists in the codebase}
## Evidence
{Specific files, patterns, and configurations that support this inference}
- {file path}: {what it shows}
- {pattern}: {where observed}
## Consequences
{Observed consequences of this decision — both positive and negative}
## Uncertainties
{Aspects of this decision that are unclear from static analysis}
ASK: "Here are the inferred ADRs (including both technical and business-scope decisions). Each has status 'Inferred'. Review and: (a) confirm all, (b) change status to 'Accepted' for confirmed decisions — list numbers, (c) reject/remove specific ADRs — list numbers, (d) adjust content."
Compile a summary health report from all 6 analyzer outputs, covering both technical and business health.
Codebase Health Report
======================
Project: {name} ({owner}/{repo} if available)
Analysis Date: {today's date}
Analyzer Version: hatch3r-codebase-map v2
Company Stage: {stage from Step 1g}
— Technical Health —
Architecture: {detected pattern}
Module Count: {N}
Dependency Health: {healthy/warning/critical}
- Runtime deps: {N} ({X outdated, Y vulnerable})
- Dev deps: {N} ({X outdated})
Test Coverage: {estimated percentage or qualitative} ({N}/{M} modules with tests)
Technical Debt: {low/medium/high} ({N total items})
- Critical: {N}
- High: {N}
- Medium: {N}
- Low: {N}
Convention Consistency: {high/medium/low}
— Business Health —
Business Logic Coverage: {what % of business rules have test coverage}
Revenue Path Reliability: {error handling quality in payment/billing flows}
User Journey Completeness: {gaps in critical user flows}
Analytics Instrumentation: {comprehensive/partial/minimal/none}
Business Rule Test Coverage: {N}/{M} rules have corresponding tests
— Production Readiness —
Overall Grade: {A/B/C/D/F} (for {stage})
Deployment: {grade}
Observability: {grade}
Scaling: {grade}
Reliability: {grade}
Incident Ready: {grade}
Top 5 Technical Concerns:
1. {concern} — {severity} — {recommended action}
2. {concern} — {severity} — {recommended action}
3. {concern} — {severity} — {recommended action}
4. {concern} — {severity} — {recommended action}
5. {concern} — {severity} — {recommended action}
Top 5 Business Concerns:
1. {concern} — {severity} — {recommended action}
2. {concern} — {severity} — {recommended action}
3. {concern} — {severity} — {recommended action}
4. {concern} — {severity} — {recommended action}
5. {concern} — {severity} — {recommended action}
Strengths:
- {strength observed}
- {strength observed}
- ...
ASK: "Codebase health report above (technical + business + production readiness). (a) Write report to docs/codebase-health.md? (b) Generate a todo.md with prioritized improvement items? (c) Both? (d) Neither — display only. Answer for each."
Spawn parallel hatch3r-docs-writer sub-agents via the Task tool (subagent_type: "generalPurpose") to generate and write the documentation. Each docs-writer receives the merged analyzer output from Steps 3-6 and is responsible for one document category. All docs-writers run in parallel and follow the hatch3r-docs-writer agent protocol.
| Docs-Writer | Responsibility | Input |
|---|---|---|
| Technical Spec Writer | docs/specs/technical/ (glossary, overview, module specs) | Merged analyzer outputs from Sub-Agents 1-4 |
| Business Spec Writer | docs/specs/business/ (glossary, overview, domain specs, production readiness) | Merged analyzer outputs from Sub-Agents 5-6, business context from Step 1 |
| ADR Writer | docs/adr/ (all architectural decision records) | Inferred decisions from Step 5 |
| Health Report Writer | docs/codebase-health.md (if user confirmed in Step 6) | All 6 analyzer outputs, cross-reference findings from Step 3 |
Each docs-writer prompt must include:
- The hatch3r-docs-writer agent protocol

Create the output directories before writing: `mkdir -p docs/specs/technical docs/specs/business docs/adr`
If user chose Replace in Step 1d: archive existing docs before writing.
- Create docs/.archive-{timestamp}/ (e.g., docs/.archive-20250223T120000/).
- Move existing docs/specs/ and docs/adr/ into the archive directory.
- Restart ADR numbering at 0001_ (no continuation from archived numbers).

Write each technical spec file confirmed in Step 4a:
- docs/specs/technical/00_glossary.md
- docs/specs/technical/01_overview.md
- docs/specs/technical/02_{module}.md (one per module)

If supplementing existing specs (Step 1d option "a"), do not overwrite existing files. Add new files alongside them.
Write each business spec file confirmed in Step 4b:
- docs/specs/business/00_business_glossary.md
- docs/specs/business/01_business_overview.md
- docs/specs/business/02_{domain}.md (one per business domain)
- docs/specs/business/03_production_readiness.md

Write each ADR confirmed in Step 5:
- docs/adr/0001_{decision}.md (numbered sequentially)

If supplementing (option "a") and docs/adr/ already contains ADRs, continue numbering from the highest existing number.
If Replace was chosen (option "b"), start at 0001_.
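The numbering rule (continue from the highest existing number when supplementing, restart at 0001_ when replacing) could look like this; the helper itself is illustrative, only the NNNN_{slug}.md filename pattern comes from this document:

```python
import re
from pathlib import Path

def next_adr_number(adr_dir, replace=False):
    """Return the zero-padded prefix for the next ADR file.

    Replace mode restarts at 0001; supplement mode continues from the
    highest existing NNNN_{slug}.md number in the directory.
    """
    if replace:
        return "0001"
    numbers = [
        int(m.group(1))
        for p in Path(adr_dir).glob("*.md")
        if (m := re.match(r"(\d{4})_", p.name))
    ]
    return f"{max(numbers, default=0) + 1:04d}"
```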
If user confirmed in Step 6:
- docs/codebase-health.md — health report
- todo.md — prioritized improvement items (if todo.md already exists, ASK before overwriting or appending)

Write .hatch3r-session.json to the project root with the company stage assessment and business context gathered in Step 1. This allows subsequent hatch3r commands (hatch3r-project-spec, hatch3r-roadmap) to skip re-asking the same discovery questions.
{
"timestamp": "{ISO timestamp}",
"command": "hatch3r-codebase-map",
"companyStage": { ... },
"businessContext": { ... },
"scope": "{full / specific parts}"
}
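A sketch of how a follow-up command might pre-fill from this session file; only the fields shown in the JSON above are assumed:

```python
import json
from pathlib import Path

SESSION_FILE = ".hatch3r-session.json"

def load_session(root):
    """Return cached stage/business context from a prior hatch3r run, if any."""
    path = Path(root) / SESSION_FILE
    if not path.is_file():
        return None
    data = json.loads(path.read_text())
    # Pre-fill only the fields this command knows about.
    return {
        "companyStage": data.get("companyStage"),
        "businessContext": data.get("businessContext"),
        "scope": data.get("scope"),
    }
```

Per Step 1, a command loading this file should still confirm the cached values with the user rather than silently reusing them.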
Files Written:
docs/specs/technical/
- 00_glossary.md
- 01_overview.md
- 02_{module_1}.md
- 02_{module_2}.md
- ...
docs/specs/business/
- 00_business_glossary.md
- 01_business_overview.md
- 02_{domain_1}.md
- ...
- 03_production_readiness.md
docs/adr/
- 0001_{decision_1}.md
- 0002_{decision_2}.md
- ...
docs/codebase-health.md (if requested)
todo.md (if requested)
.hatch3r-session.json
Total: {N} files created, {M} directories created
Next steps:
- Review generated specs and correct any inaccuracies
- Change ADR statuses from "Inferred" to "Accepted" for confirmed decisions
- Run `hatch3r-board-fill` to create issues from todo.md (if generated)
- Run `hatch3r-healthcheck` for deep QA audit of each module
- Run `hatch3r-security-audit` for full security audit of each module
This step is MANDATORY, not optional.
If AGENTS.md exists at project root:
ASK: "Your existing AGENTS.md may be outdated after generating new documentation. Would you like to rework it based on the new specs?"
- If yes: spawn a hatch3r-docs-writer sub-agent via the Task tool (subagent_type: "generalPurpose") to regenerate AGENTS.md, incorporating the newly generated specs, architecture overview, module map, and conventions. The docs-writer follows the hatch3r-docs-writer agent protocol.
Generate AGENTS.md — there is no opt-out. Spawn a hatch3r-docs-writer sub-agent via the Task tool (subagent_type: "generalPurpose") to create AGENTS.md following the hatch3r-docs-writer agent protocol. The docs-writer receives the full merged analyzer output and generates AGENTS.md with:
The generated AGENTS.md should follow this structure:
# {Project Name} — Agent Instructions
> Auto-generated by hatch3r-codebase-map on {today's date}. Review and adjust before use.
## Project Purpose
{One-paragraph vision/purpose from business overview}
## Business Context
- **Business model**: {type}
- **Revenue model**: {model}
- **Company stage**: {stage}
- **Target market**: {segments}
- **Key metrics**: {KPIs}
## Technology Stack
{Concise stack summary — languages, frameworks, databases, infrastructure}
## Architecture Overview
{Architecture style, key components, deployment topology — 3-5 sentences}
## Module Map
| Module | Purpose |
| ------ | ------- |
| {module} | {one-line description} |
## Key Business Rules & Domain Constraints
{Top 5-10 business rules that agents must respect when making changes}
- {rule}: {constraint}
## Conventions
{Key coding conventions agents should follow — naming, patterns, testing}
## Documentation Reference
- Business specs: `docs/specs/business/`
- Technical specs: `docs/specs/technical/`
- Architecture decisions: `docs/adr/`
- Health report: `docs/codebase-health.md`
ASK: "Analysis complete. Recommended next steps:
- hatch3r-project-spec to create forward-looking specs and fill gaps identified in the analysis
- hatch3r-roadmap to generate a phased roadmap from these specs
- hatch3r-board-fill to create GitHub issues from todo.md (if generated)

Which would you like to run next? (or none)"
- Scope analysis to primary source directories (e.g., src/, app/, lib/). Exclude generated code, vendored dependencies, and build artifacts. Present the scoping decision before spawning analyzers.
- Always skip: node_modules/, vendor/, dist/, build/, .git/, __pycache__/, .venv/, target/ (Rust), bin/, obj/ (.NET).
- Never read .env files or actual secrets. Only read .env.example or similar templates. If a hardcoded secret is found during analysis, flag it as a security concern but do not include the actual value in any output.
- Detect monorepos (workspaces in package.json, pnpm-workspace.yaml, turbo.json, nx.json, lerna.json, Cargo workspaces, Go workspaces) and analyze each package as a separate module.
- If todo.md already exists, ASK before overwriting or appending.
- Never overwrite AGENTS.md without explicit user confirmation.