Use when defining technical specifications, non-functional requirements, or architecture decisions. Also triggers on 'tech spec', 'what are the NFRs', 'architecture decisions', 'performance requirements', 'scalability', 'security requirements', or 'ADR'.
Knowledge files: knowledge/guide.md, knowledge/template.md
Reference: .claude/skills/orchestrator/dependency-graph.yaml
BLOCKS (must exist — auto-invoke if missing):
- docs/ets/projects/{project-slug}/architecture/architecture-diagram.md — Needed for system structure to inform NFRs and ADRs.

ENRICHES (improves output — warn if missing):
- docs/ets/projects/{project-slug}/planning/prd.md — Business requirements improve NFR target alignment.
- docs/ets/projects/{project-slug}/discovery/project-context.md — Constraints inform NFR targets.

Resolution protocol:
1. Read dependency-graph.yaml → tech-spec.requires: [architecture-diagram]
2. Does architecture-diagram.md exist, non-empty, and not DRAFT?
3. If not: invoke the architecture-diagram skill → wait → continue.

MANDATORY: This skill MUST write its artifact to disk before declaring complete.
- Run mkdir -p if needed.
- If the Write fails: report the error to the user. Do NOT proceed to the next skill.
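The resolution protocol above can be sketched as a single check. This is a minimal illustration, not part of the skill itself; the DRAFT test matches the `<!-- STATUS: DRAFT -->` marker convention this document uses for output validation:

```python
from pathlib import Path

def check_blocks_dependency(path: str) -> str:
    """Classify a BLOCKS dependency: 'ok', 'missing', 'empty', or 'draft'."""
    p = Path(path)
    if not p.exists():
        return "missing"   # auto-invoke the upstream skill, wait, then continue
    text = p.read_text()
    if not text.strip():
        return "empty"     # treat like missing: the artifact was never written
    if "<!-- STATUS: DRAFT -->" in text:
        return "draft"     # upstream skill did not finish; re-invoke it
    return "ok"
```

For example, `check_blocks_dependency("docs/ets/projects/my-project/architecture/architecture-diagram.md")` — only an `"ok"` result lets this skill proceed.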
This skill follows the ETUS interaction standard. Your role is a thinking partner, not an interviewer — suggest alternatives, challenge assumptions, and explore what-ifs instead of only extracting information.
One question per message — Never batch multiple questions. Ask one, wait for the answer, then ask the next. Use the AskUserQuestion tool when available for structured choices.
3-4 suggestions for choices — When the user needs to choose a direction, present 3-4 concrete options with a brief description of each. Highlight your recommendation.
Propose approaches before generating — Before generating any content section, propose 2-3 approaches with tradeoffs and a recommendation.
Present output section-by-section — Don't generate the full document at once. Present each major section, ask "Does this capture it well? Anything to adjust?" and only proceed after approval.
Track outstanding questions — If something can't be answered now, classify it:
Multiple handoff options — At completion, present 3-4 next steps as options.
Resume existing work — Before starting, check if the target artifact already exists at the expected path. If it does, ask the user: "I found an existing tech-spec.md at [path]. Should I continue from where it left off, or start fresh?" If resuming, read the document, summarize the current state, and continue from outstanding gaps.
Assess if full process is needed — If the user's input is already detailed with clear requirements, specific acceptance criteria, and defined scope, don't force the full interview. Confirm understanding briefly and offer to skip directly to document generation. Only run the full interactive process when there's genuine ambiguity to resolve.
This skill reads and writes persistent memory to maintain context across sessions.
On start (before any interaction):
- docs/ets/.memory/project-state.md — know where the project is
- docs/ets/.memory/decisions.md — don't re-question closed decisions
- docs/ets/.memory/preferences.md — apply user/team preferences silently
- docs/ets/.memory/patterns.md — apply discovered patterns

On finish (after saving artifact, before CLOSING SUMMARY):
- project-state.md is updated automatically by the PostToolUse hook — do NOT edit it manually.
- Record decisions: python3 .claude/hooks/memory-write.py decision "<decision>" "<rationale>" "<this-skill-name>" "<phase>" "<tag1,tag2>"
- Record preferences: python3 .claude/hooks/memory-write.py preference "<preference>" "<this-skill-name>" "<category>"
- Record patterns: python3 .claude/hooks/memory-write.py pattern "<pattern>" "<this-skill-name>" "<applies_to>"

The .memory/*.md files are read-only views generated automatically from memory.db. Never edit them directly.
Use full version when:
Use short version when:
This skill generates docs/ets/projects/{project-slug}/architecture/tech-spec.md, the authoritative specification of all non-functional requirements (NFR-#) and architecture decisions (ADR-#). It is the Single Source of Truth for:
This document bridges architecture (Context, Container diagrams from architecture-diagram.md) and implementation (code, tests, deployment).
Each NFR-# must have a measurable target, not subjective language.
| Category | Examples | Format |
|---|---|---|
| Performance | Latency, throughput, resource usage | < 200ms p95, > 1M events/sec, < 2GB RAM/instance |
| Scalability | Concurrent users, data growth, regional expansion | 10M users, 100GB/day, 5 regions by Q3 2026 |
| Availability | Uptime, failover time, MTTR | 99.95% uptime, < 5min failover |
| Security | Encryption, authentication, compliance | TLS 1.3, HMAC-SHA256, SOC2 Type II |
| Reliability | Error rates, recovery, deduplication | < 0.1% error rate, < 1min recovery, 2-layer dedup |
| Data Quality | Freshness, completeness, accuracy | < 5min lag (p95), 100% page-level, < 0.5% missing IPs |
| Operational | Deployability, observability, runbooks | 1-click deploy, < 2sec metric lag, runbooks for 10 scenarios |
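A rough illustration of the "measurable target, not subjective language" rule. The word list and regex here are hypothetical heuristics for review tooling, not part of the skill:

```python
import re

# Adjectives that signal an unverifiable target (illustrative list, not exhaustive)
SUBJECTIVE = {"fast", "scalable", "secure", "reliable", "performant", "robust"}

def target_is_quantified(target: str) -> bool:
    """A measurable NFR target should contain a number and no vague adjectives."""
    lowered = target.lower()
    if any(word in lowered for word in SUBJECTIVE):
        return False
    return bool(re.search(r"\d", target))
```

Under this heuristic, "< 200ms p95" passes while "fast response times" fails.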
NFR-00X: [Category] [Constraint]
**Measurement:** [Metric and unit]
**Target:** [Quantified value]
**Rationale:** [Why this target? Business or technical reason?]
**Verification:** [How do we measure/test?]
**Owned by:** [Team/service name]
Example:
NFR-001: Performance — Event Ingestion Latency
**Measurement:** Time from Event Tracker receives request to HTTP response sent
**Target:** < 100ms p95, < 500ms p99
**Rationale:** Browser SDK times out after 30s; p95 < 100ms ensures < 3% timeouts. Batch microblocker delays ~50ms.
**Verification:** Prometheus `event_tracker_request_duration_seconds` histogram, daily SLO report
**Owned by:** Infra Team (Event Tracker service)
Each ADR-# documents a significant technical decision and its consequences.
ADR-00X: [Title]
**Status:** Accepted | Proposed | Deprecated
**Context:**
[What is the issue we're facing? What are the constraints?]
**Decision:**
[What did we decide to do and why?]
**Consequences:**
[What are the positive and negative outcomes?]
**Alternatives Considered:**
[What else did we consider? Why not chosen?]
**Related:**
- ADR-YYY (if applicable)
- Feature Spec: [FS-name-001]
Example:
ADR-001: Redpanda over Kafka for Event Streaming
**Status:** Accepted
**Context:**
Need event streaming for pipeline with:
- 1M events/sec throughput
- Multi-region deployment (future)
- Operational simplicity (small team, limited Kubernetes expertise)
- Latency-critical: < 5sec end-to-end
**Decision:**
Use Redpanda (fully Kafka API compatible) instead of self-managed Kafka cluster.
**Consequences:**
- Positive: 70% fewer operational tasks (no ZK, no broker rebalancing), 40% lower memory footprint, same ecosystem tools
- Negative: Vendor lock-in (Redpanda over Kafka), no multi-region replication (yet), 2x cost vs. self-hosted
**Alternatives Considered:**
- Self-managed Kafka: Complexity, 24/7 ops required
- AWS MSK: No multi-region, vendor lock-in, higher latency
- RabbitMQ: Worse throughput, less suitable for event streaming patterns
**Related:**
- NFR-001 (Latency), NFR-003 (Throughput)
For each feature or service boundary, define:
Logging:
| Event | Level | Structured Fields | When |
|---|---|---|---|

Metrics:
| Metric Name | Type | Labels | Description |
|---|---|---|---|

Alerts:
| Condition | Severity | Channel | Runbook |
|---|---|---|---|
Key views needed for operational monitoring.
SST Rule: Observability requirements are ONLY defined in tech-spec.md. Downstream documents (quality-checklist, implementation-plan, release-plan) validate against these requirements but do NOT redefine them.
These rules exist to prevent conflicting definitions across documents — why it matters: gate approval depends on SST compliance. Only tech-spec.md should define:
If another document (prd.md, ux-design.md, etc.) needs to reference an NFR, it uses NFR-# ID and links to tech-spec.md.
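A sketch of how that cross-reference rule could be checked, assuming IDs follow the NFR-### pattern used throughout this document (illustrative only):

```python
import re

def undefined_nfr_refs(downstream_text: str, tech_spec_text: str) -> list[str]:
    """NFR IDs referenced in a downstream doc but never defined in tech-spec.md."""
    defined = set(re.findall(r"NFR-\d{3}", tech_spec_text))
    referenced = set(re.findall(r"NFR-\d{3}", downstream_text))
    return sorted(referenced - defined)
```

An empty result means every downstream NFR reference resolves to a definition in tech-spec.md.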
The generated docs/ets/projects/{project-slug}/architecture/tech-spec.md must include:
- docs/ets/projects/{project-slug}/.templates/tech-spec.md — Skeleton with NFR/ADR sections and examples
- docs/ets/projects/{project-slug}/.guides/nfr-quantification.md — How to write measurable NFRs (anti-patterns, examples)
- docs/ets/projects/{project-slug}/.guides/adr-decision-making.md — ADR best practices, when to write one, template
- docs/ets/projects/{project-slug}/prd.md — Business requirements that drive NFRs
- docs/ets/projects/{project-slug}/architecture/architecture-diagram.md — System structure that tech-spec justifies
- docs/ets/projects/{project-slug}/architecture/tech-spec.md

| NFR | Typical Target | Tech Decision |
|---|---|---|
| Ingestion latency | < 100ms p95 | Batch microblocker, local buffering |
| Throughput | > 1M events/sec | Redpanda partitioning, consumer scaling |
| Query latency | < 1sec | ClickHouse, pre-aggregated tables |
| Deduplication | 2-layer (batch + DB) | In-batch bloom filter + ReplacingMergeTree |
| Freshness | < 5min lag | Dual-write (realtime + archive) |
| Security | TLS 1.3, AES-256 | mTLS, encryption in-transit, at-rest |
| Uptime | 99.95% | Multi-AZ, circuit breakers, SLO dashboards |
architecture-diagram.md (BLOCKS):
- ## Container View or ## C4 Container

prd.md (ENRICHES):
project-context.md (ENRICHES):
Before marking this document as COMPLETE:
If any check fails → mark document as DRAFT with <!-- STATUS: DRAFT --> at top.
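Marking a failed document as DRAFT can be as simple as the following sketch; it is idempotent so re-runs don't stack markers:

```python
DRAFT_MARKER = "<!-- STATUS: DRAFT -->"

def mark_draft(text: str) -> str:
    """Prepend the DRAFT marker unless it is already present at the top."""
    if text.lstrip().startswith(DRAFT_MARKER):
        return text
    return f"{DRAFT_MARKER}\n{text}"
```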
After saving and validating, display:
✅ tech-spec.md saved to `docs/ets/projects/{project-slug}/architecture/tech-spec.md`
Status: [COMPLETE | DRAFT]
IDs generated: [list NFR-# and ADR-# IDs, e.g., NFR-001 through NFR-010, ADR-001 through ADR-005]
→ Next step: data/ux/api agents (parallel) — Start parallel Design sub-phases
Run: /design or let the orchestrator continue
Do NOT proceed to the next skill without displaying this summary first.
- Reads: architecture-diagram.md (BLOCKS), prd.md (ENRICHES), project-context.md (ENRICHES)
- Also reads: user-stories.md (if available)
- Feeds: implementation-plan and quality-checklist
- Cross-references: architecture-diagram (bidirectional)
- Target directory: docs/ets/projects/{project-slug}/architecture/ — create if missing
- Write docs/ets/projects/{project-slug}/architecture/tech-spec.md using the Write tool
- Hand off the artifact path (docs/ets/projects/{project-slug}/architecture/tech-spec.md) + paths to upstream documents (BLOCKS: docs/ets/projects/{project-slug}/architecture/architecture-diagram.md)
- Tell the user: "Document saved to docs/ets/projects/{project-slug}/architecture/tech-spec.md. The spec reviewer approved it. Please review and let me know if you want any changes before we proceed."

Wait for the user's response. If they request changes, make them and re-run the spec review. Only proceed to validation after user approval.
| Error | Severity | Recovery | Fallback |
|---|---|---|---|
| BLOCKS dep missing (architecture-diagram.md) | Critical | Auto-invoke architecture-diagram skill | Block execution |
| Architecture diagram is too sparse (<2 containers) | Medium | Warn user, proceed with limited NFRs | Mark tech-spec as DRAFT |
| Can't quantify an NFR target | Medium | Ask user for target, suggest industry defaults | Use TBD with TODO marker |
| Output validation fails | High | Mark as DRAFT | Proceed with DRAFT status |
| Conflicting NFR-# or ADR-# IDs | Medium | Renumber from max+1 | Append suffix |
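The "renumber from max+1" recovery can be sketched as follows, assuming the zero-padded three-digit IDs used throughout this spec (illustrative only):

```python
import re

def next_free_id(text: str, prefix: str = "NFR") -> str:
    """Next available ID after the highest existing {prefix}-### in the document."""
    nums = [int(n) for n in re.findall(rf"{prefix}-(\d+)", text)]
    return f"{prefix}-{(max(nums) + 1) if nums else 1:03d}"
```

A conflicting ID would be replaced with `next_free_id(document_text)` rather than reused.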
This skill supports iterative quality improvement when invoked by the orchestrator or user.
| Condition | Action | Document Status |
|---|---|---|
| Completeness ≥ 90% | Exit loop | COMPLETE |
| Improvement < 5% between iterations | Exit loop (diminishing returns) | DRAFT + notes |
| Max 3 iterations reached | Exit loop | DRAFT + iteration log |
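The exit conditions in the table can be sketched as a single decision function (illustrative; scores expressed as fractions of 1.0):

```python
def loop_verdict(score: float, prev_score, iteration: int, max_iter: int = 3):
    """Return (exit, status) for one quality-loop iteration."""
    if score >= 0.90:
        return True, "COMPLETE"
    if prev_score is not None and (score - prev_score) < 0.05:
        return True, "DRAFT"   # diminishing returns between iterations
    if iteration >= max_iter:
        return True, "DRAFT"   # iteration budget exhausted
    return False, "CONTINUE"
```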
- --quality-loop on any skill invocation
- --no-quality-loop to disable (generates once, validates once)

When the self-evaluation identifies a weakness (score < 7/10 on any criterion):
Example: If "NFR targets not quantified (using subjective language)" is identified: