This skill should be used when the user asks to "discover the current architecture", "map the existing system", "document the as-is state", "inventory the codebase", "trace dependencies", or needs to produce a comprehensive model of an existing brownfield codebase from actual code evidence rather than documentation claims.
From pm-architect-brownfield: `npx claudepluginhub nbkm8y5/claude-plugins --plugin pm-architect-brownfield`. This skill uses the workspace's default tool permissions.
Produce a comprehensive, evidence-backed model of an existing codebase's actual architecture. This skill reads real code, traces real dependencies, and maps real data flows — never relying on documentation claims without verification. Every statement in the output must cite [file:line] evidence.
[REVIEW_BRIEF.md + ALIGNMENT_MATRIX.md + Codebase] --> **CODEBASE DISCOVERY (AS-IS)** --> [AS_IS_SYSTEM_MODEL.md] --> Reverse PRD --> ...
Input: REVIEW_BRIEF.md, ALIGNMENT_MATRIX.md, actual codebase (via Read/Glob/Grep)
Output: AS_IS_SYSTEM_MODEL.md written to artifacts directory
The output artifact follows the standard artifact template:
# [ARTIFACT TITLE]
## Summary
## Inputs
## Outputs
## Assumptions
## Open Questions
## Main Content
## Acceptance Criteria
### Step 1: Read Inputs

- REVIEW_BRIEF.md to understand scope, goals, and constraints
- ALIGNMENT_MATRIX.md to understand which codebase areas map to which goals

### Step 2: Map Repository Layout

Classify top-level directories by role:

- Source code (`src/`, `lib/`, `app/`, `pkg/`)
- Tests (`tests/`, `__tests__/`, `spec/`, `test/`)
- Configuration and CI (`config/`, `.github/`, `deploy/`)
- Build output (`dist/`, `build/`, `target/`, `out/`)
- Documentation (`docs/`, `doc/`)
- Static assets (`public/`, `static/`, `assets/`)
- Data and migrations (`migrations/`, `seeds/`, `fixtures/`)

Record each observation with evidence, e.g.:

> Source code organized in src/ directory [src/:directory]

### Step 3: Detect Technology Stack

Scan for technology indicators and record with version evidence.
Package manifests (read each found file):
- `package.json` -> Node.js/npm ecosystem; extract `dependencies`, `devDependencies`, `engines`
- `requirements.txt` / `pyproject.toml` / `setup.py` / `Pipfile` -> Python ecosystem
- `Cargo.toml` -> Rust ecosystem; extract `[dependencies]`
- `go.mod` -> Go ecosystem; extract module path and `require` block
- `pom.xml` / `build.gradle` / `build.gradle.kts` -> JVM ecosystem
- `*.csproj` / `*.sln` -> .NET ecosystem
- `Gemfile` -> Ruby ecosystem
- `Package.swift` -> Swift ecosystem
- `composer.json` -> PHP ecosystem

Framework detection (Grep for framework-specific patterns):
Infrastructure detection:
- `Dockerfile` / `docker-compose.yml` -> Containerization
- `terraform/` / `*.tf` -> Infrastructure as Code
- `k8s/` / `kubernetes/` / `*.yaml` with `kind:` -> Kubernetes
- `.github/workflows/` -> GitHub Actions CI/CD
- `serverless.yml` / `sam-template.yaml` -> Serverless

Record technology table:
| Technology | Version | Evidence | Purpose |
|-----------|---------|----------|---------|
| Node.js | 18.x | [package.json:4] | Runtime |
| TypeScript | 5.3.2 | [package.json:18] | Language |
| Express | 4.18.2 | [package.json:12] | HTTP framework |
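The manifest scan for a Node.js project can be sketched as follows. This is a minimal illustration, not part of the skill's tooling; the helper name `detect_node_stack` and the sample manifest are hypothetical.

```python
import json

def detect_node_stack(manifest_text: str, path: str = "package.json"):
    """Extract technology rows (name, version, evidence, purpose) from a package.json.

    Line numbers are found by scanning the raw text so each evidence
    citation points at the actual declaration, not the parsed object.
    """
    manifest = json.loads(manifest_text)
    lines = manifest_text.splitlines()

    def line_of(key: str) -> int:
        # First line mentioning the quoted key, 1-indexed; crude but evidence-grade.
        for i, line in enumerate(lines, start=1):
            if f'"{key}"' in line:
                return i
        return 0

    rows = []
    node_version = manifest.get("engines", {}).get("node")
    if node_version:
        rows.append(("Node.js", node_version, f"[{path}:{line_of('node')}]", "Runtime"))
    for dep, version in manifest.get("dependencies", {}).items():
        rows.append((dep, version, f"[{path}:{line_of(dep)}]", "Dependency"))
    return rows

sample = """{
  "name": "demo",
  "engines": { "node": "18.x" },
  "dependencies": {
    "express": "4.18.2"
  }
}"""
print(detect_node_stack(sample))
```

A real pass would apply the same pattern per ecosystem (Cargo.toml, go.mod, and so on), always recording the line number where the version string appears.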
### Step 4: Inventory Components

Identify discrete components, modules, or services in the codebase.
Identify component boundaries:
For each component, record:
- Path, type, entry point, responsibility, and test coverage, each backed by [file:line] evidence

Grep for entry points:

- Process entry points: `main()`, `app.listen()`, `createServer()`, `if __name__`, `func main()`, `fn main()`
- Module entry points (`index.ts`, `__init__.py`, `mod.rs`)
- Executable scripts (`bin/`, shebang lines)

### Step 5: Map Dependencies

Map inter-component and external dependencies.
1. **Internal dependencies** (between components):
   - Example: `COMP-0001 --> COMP-0003 [src/api/routes.ts:5]`
2. **External dependencies** (third-party):
   - Extracted from each manifest's `dependencies` section
3. **Dependency Mermaid diagram**:
```mermaid
graph TD
COMP-0001[API Gateway] --> COMP-0002[Auth Service]
COMP-0001 --> COMP-0003[Data Layer]
COMP-0002 --> COMP-0003
COMP-0003 --> ext_db[(PostgreSQL)]
COMP-0001 --> ext_cache[(Redis)]
```
4. **Circular dependency detection**:
- If A imports B and B imports A (directly or transitively), flag as a finding
- Record cycle chain with file evidence
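The cycle check can be sketched as a depth-first search over the component import graph. The graph below is a made-up example; `find_cycles` is an illustrative helper, not part of the skill.

```python
def find_cycles(imports: dict) -> list:
    """Detect import cycles via depth-first search over a component graph.

    `imports` maps a component ID to the component IDs it imports.
    Each cycle is reported once, as the chain of IDs that closes the loop.
    """
    seen = set()
    cycles = []

    def dfs(node, stack):
        if node in stack:
            cycle = stack[stack.index(node):]
            key = frozenset(cycle)
            if key not in seen:  # dedupe rotations of the same cycle
                seen.add(key)
                cycles.append(cycle + [node])  # repeat the start to show closure
            return
        for dep in imports.get(node, []):
            dfs(dep, stack + [node])

    for comp in sorted(imports):  # sorted start order keeps output deterministic
        dfs(comp, [])
    return cycles

graph = {
    "COMP-0001": ["COMP-0002"],
    "COMP-0002": ["COMP-0003"],
    "COMP-0003": ["COMP-0001"],  # closes the loop
}
print(find_cycles(graph))
```

Each reported chain would then be written up as a finding, with the `[file:line]` of every import statement in the loop.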
### Step 6: Map Data Flows
Trace how data enters, transforms, and exits the system.
1. **Entry points**: HTTP routes, CLI arguments, message queue consumers, file watchers, cron jobs
- Grep for route definitions: `app.get`, `@app.route`, `router.Handle`, `#[get]`
- Grep for queue consumers: `subscribe`, `consume`, `on('message'`
- Record each with `[file:line]` evidence
2. **Data transformations**:
- Model/schema definitions (Grep for class/interface/struct definitions in model directories)
- Validation layers (Grep for validation libraries: Joi, Zod, Pydantic, serde)
- Serialization/deserialization points
3. **Exit points**: HTTP responses, database writes, queue publishes, file writes, external API calls
- Grep for: `res.send`, `res.json`, `return Response`, `db.insert`, `publish`, `fetch(`, `axios`
- Record each with `[file:line]` evidence
4. **Data flow Mermaid diagram**:
```mermaid
flowchart LR
Client -->|HTTP| API[API Layer]
API -->|Validate| Service[Service Layer]
Service -->|Query| DB[(Database)]
Service -->|Publish| Queue[Message Queue]
Queue -->|Consume| Worker[Background Worker]
Worker -->|Write| Storage[File Storage]
```
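The entry- and exit-point greps above can be sketched as a single regex pass over source lines. The patterns and the sample snippet are illustrative only; a real scan would cover many more frameworks.

```python
import re

# Illustrative patterns; extend per framework in a real scan.
ENTRY_PATTERNS = {
    "HTTP route": re.compile(r"\b(app\.(get|post|put|delete)|router\.Handle)\b"),
    "Queue consumer": re.compile(r"\b(subscribe|consume)\(|on\('message'"),
}
EXIT_PATTERNS = {
    "HTTP response": re.compile(r"\bres\.(send|json)\b"),
    "External call": re.compile(r"\b(fetch|axios)\("),
}

def scan(path: str, source: str):
    """Return (kind, matched line, [file:line]) triples for entry/exit points."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for kind, pattern in {**ENTRY_PATTERNS, **EXIT_PATTERNS}.items():
            if pattern.search(line):
                findings.append((kind, line.strip(), f"[{path}:{lineno}]"))
    return findings

sample = """app.get('/users', listUsers);
res.json(users);
"""
print(scan("src/routes.ts", sample))
```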
### Step 7: Identify Configuration and Environment
1. **Configuration files**: Grep for config loading patterns
- Environment variables: `process.env`, `os.environ`, `os.Getenv`, `std::env`
- Config files: `.env`, `config/`, `application.yml`, `settings.py`
- Feature flags: LaunchDarkly, Unleash, custom flag systems
2. **Environment separation**:
- Development, staging, production configs
- Environment-specific overrides
- Secret management approach (env vars, vault, config files)
3. **Record configuration inventory**:
```markdown
| Config Source | Type | Evidence | Secrets Present |
|--------------|------|----------|-----------------|
| .env | Environment variables | [.env.example:1-15] | Yes (redacted) |
| config/database.yml | YAML config | [config/database.yml:1] | Connection strings |
```
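The environment-variable portion of the inventory can be sketched as a regex scan that records the first sighting of each variable. The helper `inventory_env_vars` and the sample file content are hypothetical.

```python
import re

ENV_PATTERNS = [
    re.compile(r"process\.env\.([A-Z0-9_]+)"),              # Node.js
    re.compile(r"os\.environ\[['\"]([A-Z0-9_]+)['\"]\]"),   # Python
    re.compile(r"os\.Getenv\(\"([A-Z0-9_]+)\"\)"),          # Go
]

def inventory_env_vars(path: str, source: str) -> dict:
    """Collect environment variable reads with [file:line] evidence."""
    found = {}
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern in ENV_PATTERNS:
            for name in pattern.findall(line):
                found.setdefault(name, f"[{path}:{lineno}]")  # first sighting wins
    return found

sample = """const port = process.env.PORT ?? 3000;
const dbUrl = process.env.DATABASE_URL;
"""
print(inventory_env_vars("src/config.ts", sample))
```

The resulting map feeds the configuration table directly: one row per source, with the evidence column pointing at the first read site.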
### Step 8: Assemble the Artifact

Assemble all findings into the final artifact.
Write AS_IS_SYSTEM_MODEL.md following the template, with every claim backed by a [file:line] citation:

# As-Is System Model: [Project Name]
## Summary
[3-5 sentence evidence-backed overview of the system as it exists today]
## Inputs
- REVIEW_BRIEF.md
- ALIGNMENT_MATRIX.md
- Codebase at: [path]
## Outputs
- AS_IS_SYSTEM_MODEL.md (this document)
## Assumptions
- [ASM-NNNN]: [Assumption with rationale]
## Open Questions
- [OQ-NNNN]: [Question about unclear or unverifiable aspects]
## Main Content
### System Overview
> [Evidence-backed description] [file:line]
### Technology Stack
| Technology | Version | Evidence | Purpose |
|-----------|---------|----------|---------|
### Component Inventory
#### COMP-0001: [Name]
- **Path**: [relative path]
- **Type**: [Service | Library | Module | ...]
- **Entry Point**: [file:line]
- **Responsibility**: [description]
- **Test Coverage**: [Yes/No, path]
### Dependency Map
```mermaid
[Component dependency diagram]
```

| From | To | Evidence | Type |
|---|---|---|---|

### External Dependencies
| Package | Version | Category | Evidence |
|---|---|---|---|

### Circular Dependencies
[List or "None detected"]

### Data Flows
```mermaid
[Data flow diagram]
```

### Entry Points
| Type | Path/Pattern | Handler | Evidence |
|---|---|---|---|

### Exit Points
| Type | Destination | Handler | Evidence |
|---|---|---|---|

### Configuration Inventory
| Config Source | Type | Evidence | Notes |
|---|---|---|---|

### Evidence Index
[Consolidated list of all file:line citations organized by section]
**Output path**: `artifacts/brownfield/<project_name>/AS_IS_SYSTEM_MODEL.md`
## Determinism Rules
These rules ensure reproducible output regardless of when or how many times the skill is invoked on the same inputs.
1. **COMP-NNNN IDs**: Sort all components alphabetically by component name (case-insensitive), then assign sequential four-digit numbers starting at 0001
2. **Technology table rows**: Sort alphabetically by Technology column
3. **Dependency table rows**: Sort by From component ID, then by To component ID
4. **Entry/Exit point rows**: Sort by Type, then by Path/Pattern alphabetically
5. **Configuration rows**: Sort by Config Source alphabetically
6. **Sections**: Always in template order, never reordered
7. **No timestamps**: Do not include generation timestamps in artifact body
8. **Mermaid node IDs**: Use COMP-NNNN IDs as node identifiers for consistency
9. **Evidence Index**: Sort by section heading, then by file path alphabetically
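Rule 1 can be sketched as a two-line helper; the function name and input list are illustrative.

```python
def assign_component_ids(names):
    """Sort component names case-insensitively, then number from COMP-0001."""
    ordered = sorted(names, key=str.casefold)
    return {name: f"COMP-{i:04d}" for i, name in enumerate(ordered, start=1)}

print(assign_component_ids(["data-layer", "API Gateway", "auth"]))
```

Because the ordering depends only on the component names, re-running the skill on the same codebase yields the same IDs every time.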
## Evidence Citation Rules
Every architectural claim, observation, or assertion about the codebase MUST include evidence citations.
**Format**: `> Claim text [relative/path/to/file.ext:line_number]`
**Examples**:
```markdown
> The API server listens on port 3000 by default [src/server.ts:45]
> Database models use Prisma ORM with PostgreSQL provider [prisma/schema.prisma:3-5]
> Authentication uses bcrypt for password hashing [src/auth/hash.ts:12]
> Worker processes Redis queue jobs every 30 seconds [src/workers/processor.ts:8]
```
Rules:
- Single line: `[file:line]`
- Line range: `[file:start-end]`
- Multiple files: `[file1:line] [file2:line]`
- Directory-level observation: `[directory/:structure]`
- Unverifiable claims: mark `[UNVERIFIED]` and add to Open Questions

Before finalizing the artifact, verify:

- Every claim carries a `[file:line]` citation
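The final check can be sketched as a lint pass over the artifact: flag any claim line (one starting with `>`) that carries neither a citation nor an `[UNVERIFIED]` marker. The helper `uncited_claims` and the two-line sample are illustrative.

```python
import re

# Matches [file:line], [file:start-end], [directory/:structure], or [UNVERIFIED].
CITATION = re.compile(r"\[[^\[\]\s]+:(\d+(-\d+)?|structure)\]|\[UNVERIFIED\]")

def uncited_claims(markdown: str) -> list:
    """Return line numbers of claim lines ('> ...') lacking evidence."""
    missing = []
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        if line.lstrip().startswith(">") and not CITATION.search(line):
            missing.append(lineno)
    return missing

doc = """> API listens on port 3000 [src/server.ts:45]
> Uses Prisma ORM
"""
print(uncited_claims(doc))
```

Any line number it returns is a claim that must either gain a citation or be moved to Open Questions before the artifact ships.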