Technical Requirements Document generation through interactive interview
Generate a comprehensive Technical Requirements Document (TECH_REQ.md) through an interactive interview process. Use this when you need to define architecture, technology stack, data models, and security decisions before implementation planning.
/plugin marketplace add aws-solutions-library-samples/guidance-for-claude-code-with-amazon-bedrock
/plugin install epcc-workflow@aws-claude-code-plugins
Usage: /trd [initial-technical-context-or-project-name]
Generate comprehensive TECH_REQ.md through collaborative technical discovery. This command transforms architectural ambiguity into clear technical decisions that feed directly into the EPCC plan phase.
Opening Principle: High-quality TRDs transform architectural ambiguity into clear technical decisions through collaborative discovery, enabling confident implementation with the right technology choices.
@../docs/EPCC_BEST_PRACTICES.md - Comprehensive guide covering sub-agent delegation, clarification strategies, error handling patterns, and technical requirements workflow optimization
Purpose: Create Technical Requirements Document (TECH_REQ.md) that defines:
Output: TECH_REQ.md file
Position in workflow:
/epcc-plan (strategic implementation planning)
Opening Principle: Discover technical requirements through structured questions and collaborative dialogue, not assumptions.
✅ Do (Default Behavior):
❌ Don't:
Remember: You're discovering technical requirements, not implementing. Focus on WHAT technologies and WHY, defer HOW to CODE phase.
What we're discovering:
Architecture (Patterns, service boundaries, component structure)
Technology Stack (Languages, frameworks, tools, libraries)
Data Models (Storage, schemas, relationships)
Integrations (APIs, third-party services, authentication)
Security (Auth, compliance, data protection)
Performance (Scalability, caching, optimization)
Depth adaptation:
✅ Use AskUserQuestion for (default for all technical decisions):
Pattern:
AskUserQuestion({
questions: [{
question: "What database technology fits your needs?",
header: "Database",
multiSelect: false,
options: [
{
label: "PostgreSQL",
description: "Relational, ACID compliant, complex queries, JSON support, mature ecosystem"
},
{
label: "MongoDB",
description: "Document store, flexible schema, good for JSON-heavy data, horizontal scaling"
},
{
label: "MySQL",
description: "Relational, widely supported, proven at scale, simpler than PostgreSQL"
},
{
label: "DynamoDB",
description: "AWS managed NoSQL, serverless, auto-scaling, simple key-value or documents"
}
]
}]
})
✅ Use conversation for:
❌ Don't use conversation for:
Before asking questions:
if [ -f "PRD.md" ]; then
# Read PRD.md to understand:
# - Features (what needs technical support?)
# - Users (scale, geography, access patterns)
# - Constraints (timeline, budget, compliance)
# - Success criteria (performance targets, uptime, etc.)
# Then ask technical questions informed by product context
fi
Reference PRD dynamically:
If PRD.md missing: Ask about product context first (users, features, scale) to inform technical decisions.
Research & Exploration:
Decision heuristic: Research when comparing options or learning domain; explore brownfield for existing patterns; skip if user provided sufficient context.
Present mode choice to user with clear time/depth tradeoffs:
Quick TRD (20-30 min):
When to use:
Coverage:
Question count: ~8-12 structured questions focused on essentials
Comprehensive TRD (60-90 min):
When to use:
Coverage:
Question count: ~25-35 structured questions + conversational deep-dives
I can help create your Technical Requirements Document.
Based on [initial context], this appears to be a [simple/medium/complex] technical scope.
I can create either:
1. **Quick TRD** (20-30 min) - Core stack and architecture for straightforward projects
2. **Comprehensive TRD** (60-90 min) - Deep technical exploration for complex systems
Which approach works better for your project?
Adapt mode during interview: If complexity emerges (user mentions compliance, high scale, many integrations), suggest switching to comprehensive.
Goal: Define high-level structure and component organization.
Context: Research with WebSearch/WebFetch("[architecture] patterns 2025"), explore with /epcc-explore (brownfield).
Use AskUserQuestion for:
// Architecture Pattern
{
question: "What architectural pattern fits your project?",
header: "Architecture",
options: [
{
label: "Monolith",
description: "Single codebase, simpler deployment, good for small teams, faster initial development"
},
{
label: "Microservices",
description: "Independent services, complex deployment, team autonomy, scales components independently"
},
{
label: "Serverless",
description: "Function-based, auto-scaling, pay-per-use, less infrastructure management"
},
{
label: "JAMstack",
description: "Static generation + APIs, excellent performance, simple hosting, limited dynamic features"
}
]
}
// Design Patterns (if complex project)
{
question: "What design patterns are important for your system?",
header: "Patterns",
multiSelect: true,
options: [
{label: "Event-driven", description: "Async communication, decoupled components, eventual consistency"},
{label: "CQRS", description: "Separate read/write models, optimized queries, complex to implement"},
{label: "Repository", description: "Data access abstraction, testable, clean architecture"},
{label: "Factory", description: "Object creation patterns, dependency injection, flexible instantiation"}
]
}
Converse about:
From PRD.md (if available): Features → Architectural needs (real-time? background jobs? file processing?)
Goal: Select languages, frameworks, hosting, and deployment approach.
Context: Research with WebSearch/WebFetch("[tech-stack] best practices 2025"), explore with /epcc-explore (brownfield).
Use AskUserQuestion for:
// Backend Language
{
question: "What backend language/runtime fits your needs?",
header: "Backend",
options: [
{label: "Node.js", description: "JavaScript/TypeScript, async I/O, npm ecosystem, good for APIs"},
{label: "Python", description: "Django/Flask/FastAPI, AI/ML libraries, readable, slower than compiled"},
{label: "Go", description: "Compiled, fast, simple concurrency, strong typing, smaller ecosystem"},
{label: "Java/Kotlin", description: "Enterprise-grade, JVM, Spring ecosystem, verbose, battle-tested"}
]
}
// Frontend Framework
{
question: "What frontend approach do you want?",
header: "Frontend",
options: [
{label: "React", description: "Popular, large ecosystem, component-based, JSX syntax, flexible"},
{label: "Vue", description: "Simpler than React, good docs, template syntax, smaller ecosystem"},
{label: "Svelte", description: "Compile-time framework, fast, less boilerplate, newer ecosystem"},
{label: "Vanilla JS", description: "No framework, full control, smaller bundle, more manual work"}
]
}
// Hosting Infrastructure
{
question: "Where will you host this application?",
header: "Hosting",
options: [
{label: "AWS", description: "Full service suite, complex, powerful, enterprise-ready, higher cost"},
{label: "Google Cloud", description: "Good for AI/ML, Kubernetes native, competitive pricing"},
{label: "Azure", description: "Enterprise integration, Microsoft stack, hybrid cloud"},
{label: "Vercel/Netlify", description: "Simple deployment, great DX, limited backend, good for JAMstack"}
]
}
Converse about:
From PRD.md (if available): Budget → Hosting costs, Timeline → Deployment complexity
Goal: Define data storage strategy, schemas, and caching.
Context: Research with WebSearch/WebFetch("[database] best practices 2025"), explore with /epcc-explore (brownfield).
Use AskUserQuestion for:
// Database Selection
{
question: "What database technology fits your data model?",
header: "Database",
options: [
{label: "PostgreSQL", description: "Relational, ACID, complex queries, JSON support, mature"},
{label: "MongoDB", description: "Document store, flexible schema, good for JSON, horizontal scaling"},
{label: "MySQL", description: "Relational, widely supported, proven at scale, simpler than Postgres"},
{label: "DynamoDB", description: "AWS NoSQL, serverless, auto-scaling, key-value or documents"}
]
}
// Caching Strategy (if medium/complex)
{
question: "What caching approach do you need?",
header: "Caching",
multiSelect: true,
options: [
{label: "Redis", description: "In-memory, fast, pub/sub, session storage, requires management"},
{label: "CDN", description: "Edge caching, static assets, global distribution, reduces origin load"},
{label: "Application cache", description: "In-process, simple, no network, lost on restart"},
{label: "Database query cache", description: "Built-in, automatic, limited control"}
]
}
Converse about:
From PRD.md (if available): Features → Data entities, Users → Access patterns
Goal: Define API design, authentication, and third-party integrations.
Context: Research with WebSearch/WebFetch("[API/auth] best practices 2025"), explore with /epcc-explore (brownfield).
Use AskUserQuestion for:
// API Style
{
question: "What API style fits your needs?",
header: "API",
options: [
{label: "REST", description: "Standard HTTP, widely understood, simple, over-fetching/under-fetching"},
{label: "GraphQL", description: "Flexible queries, precise data fetching, complex setup, learning curve"},
{label: "gRPC", description: "High performance, typed, binary protocol, requires code generation"},
{label: "tRPC", description: "Type-safe, TypeScript end-to-end, simple, ecosystem smaller"}
]
}
// Authentication Method
{
question: "How will users authenticate?",
header: "Auth",
options: [
{label: "JWT", description: "Stateless, scalable, client stores token, can't revoke easily"},
{label: "Session", description: "Server-side state, easy to revoke, requires session store"},
{label: "OAuth2", description: "Third-party login (Google, GitHub), complex setup, better UX"},
{label: "Auth0/Clerk", description: "Managed service, fast setup, monthly cost, less control"}
]
}
// Third-Party Services (multiSelect)
{
question: "What third-party services do you need?",
header: "Services",
multiSelect: true,
options: [
{label: "Payment", description: "Stripe, PayPal, Square - transaction processing"},
{label: "Email", description: "SendGrid, Mailgun, AWS SES - transactional emails"},
{label: "Storage", description: "S3, Cloudinary, Uploadcare - file uploads and CDN"},
{label: "Analytics", description: "Mixpanel, Amplitude, PostHog - user behavior tracking"}
]
}
Converse about:
From PRD.md (if available): Features → Required integrations (payments, notifications, etc.)
Goal: Define authentication, authorization, data protection, and compliance.
Context: Research with WebSearch/WebFetch("[security/compliance] requirements 2025"), explore with /epcc-explore (brownfield).
Use AskUserQuestion for:
// Authorization Model
{
question: "What authorization model do you need?",
header: "Authz",
options: [
{label: "RBAC", description: "Role-based, simple, roles assigned to users, good for most apps"},
{label: "ABAC", description: "Attribute-based, flexible, complex policies, enterprise use cases"},
{label: "Simple ownership", description: "Users own resources, basic access control, simplest"},
{label: "Multi-tenancy", description: "Isolated data per tenant, complex, SaaS products"}
]
}
// Compliance Requirements (multiSelect, if applicable)
{
question: "What compliance standards apply?",
header: "Compliance",
multiSelect: true,
options: [
{label: "GDPR", description: "EU data privacy, right to deletion, consent management"},
{label: "HIPAA", description: "Healthcare data, strict security, audit logs, encryption"},
{label: "SOC2", description: "Security controls, audit reports, enterprise customers"},
{label: "PCI DSS", description: "Payment card data, strict requirements, third-party audits"}
]
}
Converse about:
From PRD.md (if available): Constraints → Compliance requirements, Data sensitivity
Goal: Define performance targets, scaling strategy, and optimization priorities.
Context: Research with WebSearch/WebFetch("[performance/scaling] patterns 2025"), explore with /epcc-explore (brownfield).
Use AskUserQuestion for:
// Expected Scale
{
question: "What scale are you targeting?",
header: "Scale",
options: [
{label: "Small (<1K users)", description: "Single server, minimal caching, simple deployment"},
{label: "Medium (1K-100K users)", description: "Load balancer, caching layer, horizontal scaling"},
{label: "Large (100K-1M users)", description: "Multi-region, CDN, advanced caching, auto-scaling"},
{label: "Massive (>1M users)", description: "Global infrastructure, edge computing, complex architecture"}
]
}
// Performance Priorities (multiSelect)
{
question: "What performance aspects are most critical?",
header: "Performance",
multiSelect: true,
options: [
{label: "Page load speed", description: "Initial render, time to interactive, Core Web Vitals"},
{label: "API latency", description: "Response times, database query optimization"},
{label: "Real-time updates", description: "WebSocket, SSE, sub-second data freshness"},
{label: "Background jobs", description: "Async processing, job queues, worker scaling"}
]
}
Converse about:
From PRD.md (if available): Success criteria → Performance targets, Users → Scale expectations
Match question depth to project complexity (discovered dynamically):
- Simple → Adapt: Focus on Stack + Data + Basic Security (~10-12 questions)
- Medium → Adapt: All 6 phases with moderate depth (~20-25 questions)
- Complex (indicators: 100K+ users, 5+ integrations) → Adapt: Comprehensive exploration of all 6 phases (~30-40 questions)
Dynamic adjustment: If user mentions compliance/high-scale/many integrations during simple TRD → offer to switch to comprehensive mode.
Forbidden patterns:
TRD structure - 6 core dimensions:
When: Single service, standard stack, minimal integrations, <10K users
# Technical Requirements: [Project Name]
**Created**: [Date] | **Complexity**: Simple | **From PRD**: [Yes/No]
## Architecture
**Pattern**: [Monolith/SPA/JAMstack]
**Rationale**: [Why this pattern fits the project]
## Technology Stack
**Backend**: [Language + Framework] - [Rationale]
**Frontend**: [Framework/Vanilla] - [Rationale]
**Database**: [Database] - [Rationale]
**Hosting**: [Platform] - [Rationale]
## Data Model
**Core Entities**: [List 3-5 main entities]
**Relationships**: [Key relationships]
**Migrations**: [Strategy: tool/approach]
## Security
**Authentication**: [Method] - [Rationale]
**Authorization**: [Approach] - [Rationale]
**Data Protection**: [Encryption strategy]
## Integrations
[List essential integrations with rationale, or "None" if standalone]
## Performance
**Expected Scale**: [<1K users, load expectations]
**Caching**: [Strategy if needed, or "Not required initially"]
## PRD Alignment
[If PRD.md exists, reference how technical choices support product requirements]
## Next Steps
Technical requirements defined. Ready for:
- Brownfield: `/epcc-explore` then `/epcc-plan`
- Greenfield: `/epcc-plan` (skip explore)
When: Multiple services, moderate complexity, several integrations, 10K-100K users
Add to simple structure:
When: Distributed system, compliance requirements, high scale, many integrations
Add to medium structure:
Depth heuristic: TRD complexity should match technical complexity. Don't write distributed systems TRD for simple CRUD app.
# Technical Requirements Document: [Project Name]
**Created**: [Date]
**Version**: 1.0
**Complexity**: [Simple/Medium/Complex]
**PRD Reference**: [PRD.md if available, or "Standalone"]
---
## Executive Summary
[2-3 sentence technical overview]
## Research & Exploration
**Key Insights** (from WebSearch/WebFetch/exploration):
- **[Technology choice]**: [Research finding, benchmark, or rationale]
- **[Pattern/approach]**: [Best practice discovered or code pattern leveraged]
- **[Existing component]**: [Reusable code discovered from exploration]
**Documentation Identified**:
- **[Doc type]**: Priority [H/M/L] - [Why needed for this project]
## Architecture
### Pattern
[Monolith/Microservices/Serverless/JAMstack/Hybrid]
**Rationale**: [Why this pattern? Considered alternatives?]
### Component Structure
[List main components/services and their responsibilities]
### Data Flow
[How data moves through the system - simple description or diagram]
### Design Patterns
[Key patterns: Event-driven, CQRS, Repository, etc.]
## Technology Stack
### Backend
**Language/Runtime**: [Choice] - [Rationale vs alternatives]
**Framework**: [Choice] - [Rationale vs alternatives]
### Frontend
**Framework**: [React/Vue/Svelte/Vanilla] - [Rationale vs alternatives]
**Build Tools**: [Vite/Webpack/etc.] - [Rationale]
### Database
**Primary Database**: [PostgreSQL/MongoDB/MySQL/etc.] - [Rationale vs alternatives]
**Caching**: [Redis/CDN/Application cache] - [Strategy]
### Infrastructure
**Hosting**: [AWS/GCP/Azure/Vercel/etc.] - [Rationale vs alternatives]
**Deployment**: [Containers/Serverless/VMs] - [Rationale]
**CI/CD**: [GitHub Actions/GitLab CI/CircleCI/etc.] - [Strategy]
## Environment Setup
**init.sh required**: [Yes/No]
**Triggers** (if any apply, init.sh is needed):
- [ ] Web server / API backend
- [ ] Database setup required
- [ ] External services (Redis, Elasticsearch, etc.)
- [ ] Complex dependency installation
- [ ] Environment variables required
**Components to initialize** (if init.sh required):
- [ ] Virtual environment / package installation
- [ ] Database setup/migration
- [ ] Service dependencies: [list services]
- [ ] Environment variables: [list vars, no secrets]
- [ ] Development server startup
**Startup command**: [e.g., "npm run dev", "uvicorn main:app --reload"]
**Health check**: [e.g., "curl localhost:3000/health"]
## Data Architecture
### Core Entities
1. **[Entity Name]**
- Purpose: [What it represents]
- Key attributes: [Essential fields]
- Relationships: [Connections to other entities]
2. **[Entity Name]**
- [Same structure]
### Schema Design
**Approach**: [Normalized/Denormalized/Hybrid] - [Rationale]
**Migrations**: [Tool: Prisma/TypeORM/Alembic/etc.] - [Strategy]
### Data Access Patterns
- [Read-heavy? Write-heavy? Analytics?]
- [Query optimization strategy]
## API Design
### API Style
**Choice**: [REST/GraphQL/gRPC/tRPC] - [Rationale vs alternatives]
### Endpoints (if REST)
[High-level endpoint groups, not exhaustive list]
### Authentication
**Method**: [JWT/Session/OAuth2/Auth0] - [Rationale vs alternatives]
**Token Storage**: [Where tokens stored, expiry strategy]
### Authorization
**Model**: [RBAC/ABAC/Ownership/Multi-tenancy] - [Rationale]
### Rate Limiting
[Strategy if needed]
## Integrations
### Third-Party Services
1. **[Service Name]** (e.g., Stripe for payments)
- Purpose: [What it does]
- Rationale: [Why this vs alternatives]
- Integration approach: [API/SDK/Webhook]
2. **[Service Name]**
- [Same structure]
### External APIs
[Any external APIs to consume]
### Webhooks
[If handling incoming webhooks]
## Security
### Authentication & Authorization
**Authentication**: [Detailed approach from API Design]
**Authorization**: [Detailed model from API Design]
### Data Protection
**Encryption at Rest**: [Yes/No - approach if yes]
**Encryption in Transit**: [TLS configuration]
**Sensitive Data**: [PII handling, secrets management]
### OWASP Considerations
[Key OWASP Top 10 items relevant to this project]
### Compliance (if applicable)
**Requirements**: [GDPR/HIPAA/SOC2/PCI DSS/etc.]
**Implementation**: [How compliance requirements are met]
**Audit Logging**: [What's logged, retention period]
## Performance & Scalability
### Scale Targets
**Users**: [Expected user count]
**Requests**: [Expected req/sec or req/day]
**Data Volume**: [Expected data growth]
### Performance Budgets
- **Page Load**: [Target: <2s]
- **API Latency**: [Target: <100ms p95]
- **Database Queries**: [Target: <50ms p95]
### Caching Strategy
**Layers**:
1. **CDN**: [Static assets, edge caching]
2. **Application Cache**: [Redis/in-memory, what's cached]
3. **Database Query Cache**: [If applicable]
**Invalidation**: [Strategy for cache freshness]
### Scaling Approach
**Horizontal vs Vertical**: [Choice and rationale]
**Auto-scaling**: [Triggers, min/max instances]
**Load Balancing**: [Strategy]
### Monitoring & Observability
**Metrics**: [What to track: latency, errors, throughput]
**Logging**: [Structured logging approach]
**Tracing**: [Distributed tracing if microservices]
**Tools**: [DataDog/New Relic/Prometheus/etc.]
## Deployment Strategy
### Environments
- **Development**: [Local/shared dev environment]
- **Staging**: [Pre-production testing]
- **Production**: [Live environment]
### CI/CD Pipeline
1. [Build step]
2. [Test step]
3. [Deploy step]
### Rollback Strategy
[How to revert if deployment fails]
### Zero-Downtime Deployment
[Blue-green? Rolling? Canary?]
## Disaster Recovery (Complex projects)
### Backup Strategy
**Frequency**: [Hourly/Daily/etc.]
**Retention**: [How long backups kept]
**Testing**: [Backup restore testing frequency]
### Failover
**RTO** (Recovery Time Objective): [Target downtime]
**RPO** (Recovery Point Objective): [Acceptable data loss]
## Migration Plan (If applicable)
[If replacing existing system or migrating data]
### Migration Strategy
- [Approach: Big bang? Phased? Strangler pattern?]
### Data Migration
- [Source → Target mapping]
- [Validation strategy]
### Rollback Plan
- [How to revert if migration fails]
## Risks & Mitigation
| Risk | Impact | Likelihood | Mitigation |
|------|--------|------------|------------|
| [Technical risk] | H/M/L | H/M/L | [How to address] |
## Assumptions
[Critical technical assumptions that could change the plan]
## Out of Scope
[Technical decisions deferred or explicitly excluded]
## PRD Alignment
[If PRD.md exists]
**Product Requirements Supported**:
- [Feature from PRD] → [Technical approach]
- [Constraint from PRD] → [How technical design respects it]
- [Success criteria from PRD] → [How architecture enables measurement]
**Technical Decisions Informing Product**:
- [Technology limitation] → [Product implication]
- [Performance characteristic] → [User experience impact]
## Next Steps
This TRD feeds into the EPCC workflow. Choose your entry point:
**For Greenfield Projects** (new codebase):
1. Review & approve this TRD
2. Run `/epcc-plan` to create implementation plan (can skip Explore)
3. Begin development with `/epcc-code`
4. Finalize with `/epcc-commit`
**For Brownfield Projects** (existing codebase):
1. Review & approve this TRD
2. Run `/epcc-explore` to understand existing codebase and patterns
3. Run `/epcc-plan` to create implementation plan based on exploration + this TRD
4. Begin development with `/epcc-code`
5. Finalize with `/epcc-commit`
**Note**: The core EPCC workflow is: **Explore → Plan → Code → Commit**. This TRD is the optional technical preparation step before that cycle begins.
---
**End of TRD**
Completeness heuristic: TRD is ready when you can answer:
Anti-patterns:
Remember: Match TRD depth to technical complexity. Simple project = simple TRD. Focus on WHAT and WHY, defer HOW to CODE phase.
Confirm completeness:
✅ TECH_REQ.md generated and saved
This document captures:
- Architecture: [Pattern chosen]
- Tech Stack: [Key technologies with rationale]
- Data Model: [Storage approach]
- Security: [Auth/compliance approach]
- Scalability: [Scale strategy]
[+ PRD Alignment if PRD.md existed]
Next steps - Enter the EPCC workflow:
- Review the TRD and let me know if anything needs adjustment
- When ready, begin EPCC cycle with `/epcc-explore` (brownfield) or `/epcc-plan` (greenfield)
Questions or changes to the TRD?
After generating TECH_REQ.md, enrich the feature list with technical subtasks if epcc-features.json exists.
if [ -f "epcc-features.json" ]; then
# Feature list exists from PRD - enrich with technical subtasks
echo "Found epcc-features.json - enriching features with technical details..."
else
# No feature list - will be created during /epcc-plan
echo "No epcc-features.json found - technical decisions will inform /epcc-plan"
fi
For each feature in epcc-features.json, add technical subtasks based on TRD decisions:
{
"features": [
{
"id": "F001",
"name": "User Authentication",
"subtasks": [
{"name": "Set up [Auth provider] integration", "status": "pending", "source": "TECH_REQ.md#authentication"},
{"name": "Implement [JWT/Session] token handling", "status": "pending", "source": "TECH_REQ.md#authentication"},
{"name": "Create [Database] user schema", "status": "pending", "source": "TECH_REQ.md#data-model"},
{"name": "Configure [bcrypt/argon2] password hashing", "status": "pending", "source": "TECH_REQ.md#security"},
{"name": "Add rate limiting middleware", "status": "pending", "source": "TECH_REQ.md#security"}
]
}
]
}
Subtask generation rules:
Add new features for infrastructure tasks not covered by product features:
{
"features": [
// ... existing features from PRD ...
// NEW: Infrastructure features from TRD
{
"id": "INFRA-001",
"name": "Database Setup",
"description": "Set up [PostgreSQL] database with schemas and migrations",
"priority": "P0",
"status": "pending",
"passes": false,
"acceptanceCriteria": [
"Database provisioned and accessible",
"All migrations run successfully",
"Connection pooling configured",
"Backup strategy in place"
],
"subtasks": [],
"source": "TECH_REQ.md#data-model"
},
{
"id": "INFRA-002",
"name": "CI/CD Pipeline",
"description": "Set up continuous integration and deployment",
"priority": "P1",
"status": "pending",
"passes": false,
"acceptanceCriteria": [
"Tests run on every commit",
"Automated deployment to staging",
"Production deployment with approval gate"
],
"subtasks": [],
"source": "TECH_REQ.md#deployment"
},
{
"id": "INFRA-003",
"name": "Monitoring & Logging",
"description": "Set up application monitoring and centralized logging",
"priority": "P1",
"status": "pending",
"passes": false,
"acceptanceCriteria": [
"Error tracking configured",
"Performance monitoring in place",
"Logs aggregated and searchable"
],
"subtasks": [],
"source": "TECH_REQ.md#monitoring"
}
]
}
Infrastructure feature rules:
Append TRD session to epcc-progress.md:
---
## Session [N]: TRD Created - [Date]
### Summary
Technical Requirements Document created with architecture and technology decisions.
### Technical Decisions
- **Architecture**: [Pattern chosen]
- **Backend**: [Technology + rationale]
- **Frontend**: [Technology + rationale]
- **Database**: [Technology + rationale]
- **Hosting**: [Platform chosen]
- **Authentication**: [Method chosen]
### Feature Enrichment
- Updated [X] features with technical subtasks
- Added [Y] infrastructure features:
- INFRA-001: Database Setup
- INFRA-002: CI/CD Pipeline
[...]
### Feature Summary (Updated)
- **Total Features**: [N] (was [M] from PRD)
- **Product Features**: [X] (with technical subtasks)
- **Infrastructure Features**: [Y] (new from TRD)
### Next Session
Run `/epcc-plan` to finalize implementation order and create detailed task breakdown.
---
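A minimal sketch of the append step itself, assuming the session block has been filled in from the template above (the session number, date, and summary below are hypothetical):
# Sketch: append the completed TRD session block to epcc-progress.md (hypothetical values)
cat >> epcc-progress.md <<'EOF'
---
## Session 3: TRD Created - 2025-01-15
### Summary
Technical Requirements Document created with architecture and technology decisions.
EOF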
## Technical Requirements Complete
✅ **TECH_REQ.md** - Technical decisions documented
✅ **epcc-features.json** - Features enriched with technical details:
- [X] existing features updated with subtasks
- [Y] infrastructure features added
- Total features: [N]
✅ **epcc-progress.md** - TRD session logged
### Technical Subtasks Added
| Feature | Subtasks Added | Source |
|---------|----------------|--------|
| F001: User Auth | 5 subtasks | TECH_REQ.md#authentication |
| F002: Task CRUD | 3 subtasks | TECH_REQ.md#data-model |
| ... | ... | ... |
### Infrastructure Features Added
| Feature | Priority | Source |
|---------|----------|--------|
| INFRA-001: Database Setup | P0 | TECH_REQ.md#data-model |
| INFRA-002: CI/CD Pipeline | P1 | TECH_REQ.md#deployment |
| ... | ... | ... |
### Next Steps
**For Implementation Planning**: `/epcc-plan` - Finalize task order and create detailed breakdown
**For Brownfield Projects**: `/epcc-explore` - Understand existing codebase first
**To check progress**: `/epcc-resume` - Quick orientation and status
Map TRD decisions to subtasks based on technology choices:
| TRD Section | Generated Subtasks |
|---|---|
| Authentication: JWT | Token generation, validation middleware, refresh token handling |
| Authentication: OAuth2 | Provider integration, callback handling, token storage |
| Database: PostgreSQL | Schema creation, migrations, connection pooling, indexes |
| Database: MongoDB | Schema design, indexes, aggregation pipelines |
| API: REST | Route structure, validation, error handling, documentation |
| API: GraphQL | Schema definition, resolvers, subscriptions setup |
| Hosting: AWS | IAM setup, VPC config, deployment scripts |
| Hosting: Vercel | Environment variables, build config, domain setup |
| Caching: Redis | Connection setup, cache invalidation, session storage |
| Security: GDPR | Audit logging, data export, deletion handlers |
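To make the mapping concrete, here is a minimal sketch of one enrichment step, assuming `jq` is available; the feature ID and subtask name are hypothetical and should be adapted to the actual TRD decisions:
# Sketch: append a JWT-derived subtask to feature F001 (hypothetical ID and name), assuming jq is installed
jq '(.features[] | select(.id == "F001") | .subtasks) += [
      {"name": "Implement JWT token generation and validation middleware",
       "status": "pending",
       "source": "TECH_REQ.md#authentication"}
    ]' epcc-features.json > epcc-features.tmp.json && mv epcc-features.tmp.json epcc-features.json
The same pattern applies to every row in the table; only the subtask names and the `source` anchor change.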
❌ Don't dictate: "You should use microservices for this"
✅ Do guide: "For your scale, we could use a monolith (simpler, faster to ship) or microservices (independent scaling, team autonomy). Given your timeline and team size, which sounds better?"
❌ Don't guarantee: "PostgreSQL will handle your scale perfectly"
✅ Do qualify: "PostgreSQL would handle your expected 10K users well, though we'd want monitoring to validate query performance as you grow"
Pattern:
// Don't wait for user to ask "help me decide"
// Present structured questions for ALL technical decisions
AskUserQuestion({
questions: [{
question: "What hosting platform fits your needs?",
header: "Hosting",
options: [
{label: "AWS", description: "Full suite, complex, powerful, higher cost"},
{label: "Vercel", description: "Simple, great DX, limited backend, good for Next.js"},
{label: "Railway", description: "Simple, good for full-stack, auto-deploy, newer platform"},
{label: "Self-hosted", description: "Full control, more setup, lower ongoing cost"}
]
}]
})
If PRD.md exists:
"So if I understand correctly, we're building a [architecture] with [stack] that [key technical capability]. The main technical challenges are [X] and [Y]. Does that sound right?"
# With project context
/trd "Real-time collaboration platform"
# After creating PRD
/trd # Will find and read PRD.md automatically
# Without context
/trd
# Will ask: "What technical project would you like to define requirements for?"
User: /trd "Simple blog platform"
Claude: Checks for PRD.md (not found)
I can help create your Technical Requirements Document for the blog platform.
This appears to be a straightforward project. I can create:
1. **Quick TRD** (20-30 min) - Core stack and architecture
2. **Comprehensive TRD** (60-90 min) - Deep technical exploration
Which approach works better?
User: "Quick TRD"
Claude: Uses AskUserQuestion for all key decisions
Claude: Generates Simple TRD (~500 tokens) matching complexity
Total time: ~20 minutes
User: /trd [PRD.md exists in directory]
Claude: Reads PRD.md first
✅ Found PRD.md - I'll use the product requirements to inform technical decisions.
From PRD.md I can see:
- Real-time collaboration features (needs WebSocket support)
- 100K user target (needs caching and scaling strategy)
- GDPR compliance (needs audit logging and data protection)
This appears to be a complex technical scope. I recommend:
**Comprehensive TRD** (60-90 min) - Deep technical exploration
Shall we proceed with comprehensive mode?
User: "Yes"
Claude: Comprehensive exploration across all 6 phases
Claude: Generates Complex TRD (~2,000 tokens) with PRD alignment section
Total time: ~75 minutes
User: /trd "Add payment processing to existing e-commerce platform"
Claude: Checks for PRD.md (found)
✅ Found PRD.md - using product context for payment requirements.
Since you're adding to an existing codebase, I'll focus technical requirements on:
- Payment integration architecture
- Technology choices (payment processor, security)
- Data model changes
- Integration with existing stack
I recommend **Medium TRD** (30-45 min) for this integration scope.
Shall we proceed?
User: "Yes"
Claude: Focused technical interview
Claude: Generates Medium TRD (~900 tokens) focused on integration
✅ TECH_REQ.md generated
Next steps:
1. Review this TRD
2. Run `/epcc-explore` to understand existing codebase patterns
3. Run `/epcc-plan` to create implementation plan that integrates with existing code
Ready to explore the existing codebase?
Don't: "I'll use React and PostgreSQL for this" → Do: Ask using AskUserQuestion for all stack choices
Don't: "PostgreSQL is the best choice" → Do: Present options with tradeoffs, let user decide
Don't: Generate comprehensive TRD for "add button" task → Do: Match depth to technical complexity
Don't: "Create UserService class with methods..." → Do: Focus on technology choices and architecture patterns
Don't: Ask about scale/features already in PRD.md → Do: Read PRD.md first, reference context
Don't: "What database do you want?" (open-ended) → Do: AskUserQuestion with 4 database options + tradeoffs
Even with this guidance, you may default to:
Your role: Technical discovery partner who autonomously gathers context and interviews collaboratively using structured questions.
Work pattern: Read PRD.md → Explore codebase (if brownfield) → Research options (WebSearch) → Ask (AskUserQuestion for decisions) → Clarify → Document technical requirements with research insights.
Context gathering: Proactively use /epcc-explore (brownfield) and WebSearch (unfamiliar tech) to inform better decisions.
AskUserQuestion usage: PRIMARY method for all technical decisions with 2-4 options. Conversation for follow-ups.
TRD depth: Simple project = simple TRD. Complex project = comprehensive TRD. Always adapt to technical complexity.
Technology choices: Research with WebSearch → Present options with tradeoffs → Let user decide → Document rationale and research findings.
Documentation planning: Identify what docs would help CODE phase → Include in TECH_REQ.md with priorities.
🎯 TECH_REQ.md complete - ready to feed into /epcc-plan for implementation planning!