From rtl-agent-team
Defines DSE policies, methodology, comparison matrices, C model transformation rules, self-critique protocols, and gate criteria for iterative hardware design pipelines (Phase 1-3). Pure reference material.
`npx claudepluginhub babyworm/rtl-agent-team --plugin rtl-agent-team`

This skill uses the workspace's default tool permissions.
| Aspect | Standard (p1 + p2 + p3 sequential) | rat-dse |
|---|---|---|
| Algorithm study | Select best, justify | Explore N candidates, quantitative comparison |
| Architecture | Single architecture from requirements | Multiple candidates, trade-off matrix, user selects |
| μArch + BFM | Single-pass μArch design | Iterative μArch with self-critique and re-exploration |
| Ref C model | Build from scratch | Accept functional model as input, transform to architectural model |
| Fixed-point | Identify precision requirements | Simulate effects, precision vs area trade-off curves |
| Iteration | One-shot per phase | Self-critique → re-run → user review → trial comparison |
| Output | Ready for Phase 4 | Pre-implementation package with DSE rationale, ready for Phase 4 |
All exploration results captured in design artifacts (docs/, reviews/) so downstream phases can reference DSE rationale without repeating exploration.
For each major functional block, enumerate 2-4 algorithmic approaches:
| Metric | Candidate A | Candidate B | Candidate C |
|---|---|---|---|
| Computational complexity (ops/input) | | | |
| Memory access pattern (seq/random, R/W ratio) | | | |
| Memory bandwidth estimate | | | |
| HW gate count estimate (order of magnitude) | | | |
| Quality/accuracy impact (PSNR/SSIM if applicable) | | | |
| Parallelization potential (data/pipeline) | | | |
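The memory bandwidth row is derived rather than measured. A minimal sketch of the arithmetic, with a purely hypothetical candidate profile (the struct and field values are illustrative, not from any real algorithm study):

```c
#include <assert.h>

/* Hypothetical per-candidate profile; values are placeholders. */
typedef struct {
    double ops_per_input;          /* computational complexity row */
    double mem_accesses_per_input; /* memory access pattern row */
    double bytes_per_access;
    double inputs_per_second;      /* required throughput */
} candidate_profile_t;

/* Bandwidth (bytes/s) = accesses/input * bytes/access * inputs/s */
static double mem_bandwidth_bps(const candidate_profile_t *c) {
    return c->mem_accesses_per_input * c->bytes_per_access
         * c->inputs_per_second;
}
```

For example, a candidate making 4 accesses of 8 bytes per input at 1M inputs/s needs roughly 32 MB/s.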
For each candidate (2-3 required):
Output: docs/phase-2-architecture/architecture-candidates.md
Two mandatory AskUserQuestion interactions:
Decisions recorded as ADRs:
Decisions recorded as ADRs:
- docs/decisions/ADR-001-algorithm-selection.md
- docs/decisions/ADR-002-architecture-selection.md

When input_mode == "transform" (user-provided functional C model):
- ext_mem_read()/ext_mem_write() abstraction
- explicit state context (context_t)

Output: `refc/*.c` (restructured), `refc/include/*.h`. C11, no clock/reset, DPI-C compatible.
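A minimal, runnable sketch of what a transformed architectural model might look like. Only the `ext_mem_read()`/`ext_mem_write()`/`context_t` names come from the rules above; the flat backing array, the `context_t` fields, and `process_block` are all illustrative assumptions:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Flat backing store so the sketch runs standalone; a real model
 * would route these calls through the project's memory model. */
static uint8_t ext_mem[1 << 16];

void ext_mem_read(uint64_t addr, void *dst, size_t len) {
    memcpy(dst, &ext_mem[addr], len);
}
void ext_mem_write(uint64_t addr, const void *src, size_t len) {
    memcpy(&ext_mem[addr], src, len);
}

/* All persistent state in an explicit context: no globals, no clock. */
typedef struct {
    uint64_t src_base;  /* illustrative fields */
    uint64_t dst_base;
    uint32_t block_idx;
} context_t;

/* One architectural step: fetch a block, transform, write back. */
void process_block(context_t *ctx, size_t block_bytes) {
    uint8_t buf[64];
    assert(block_bytes <= sizeof buf);
    uint64_t off = (uint64_t)ctx->block_idx * block_bytes;
    ext_mem_read(ctx->src_base + off, buf, block_bytes);
    for (size_t i = 0; i < block_bytes; i++)
        buf[i] ^= 0xFF;             /* placeholder transform */
    ext_mem_write(ctx->dst_base + off, buf, block_bytes);
    ctx->block_idx++;
}
```

The point of the restructuring is visible in the signature: all memory traffic goes through the abstraction and all state lives in the context, so the model can later be stepped from a DPI-C harness.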
Quantitative RD evaluation: If the ref C model encoder is buildable and test sequences are available, invoke /rtl-agent-team:codec-rd-eval for BD-PSNR measurements.

Decoder conformance: If a ref C model decoder exists and conformance bitstreams are available, invoke /rtl-agent-team:codec-conformance-eval.
Phase 1→2, Phase 2→3, and Phase 3 completion require BOTH Artifact Gate + Quality Gate.
Quality Gate verdicts: PASS or FAIL + findings[]. Max 2 retries per gate.
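The retry budget reads as one initial attempt plus up to two retries per gate. A sketch of that loop (the callback shape and the example gates are hypothetical, not part of the skill's API):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_GATE_RETRIES 2   /* per the gate policy above */

/* A gate attempt returns PASS (true) or FAIL (false). */
typedef bool (*gate_fn)(int attempt);

/* One initial attempt plus up to MAX_GATE_RETRIES retries; a gate
 * that never passes is reported FAIL for escalation. */
static bool run_gate(gate_fn gate) {
    for (int attempt = 0; attempt <= MAX_GATE_RETRIES; attempt++)
        if (gate(attempt))
            return true;
    return false;
}

/* Illustrative gates. */
static bool pass_on_second_retry(int attempt) { return attempt == 2; }
static bool always_fail(int attempt) { (void)attempt; return false; }
```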
.rat/scratch/phase-{N}/round-{R}-{agent}.md
On gate PASS: consolidate to reviews/, clean scratch.
After Phase 3 Quality Gate PASS + self-critique re-run, present results to user. Do NOT proceed to Phase 4. DSE produces a pre-implementation package for user review.
Artifact Gate: iron-requirements.json + open-requirements.json + io_definition.json + timing_constraints.json + domain-analysis.md exist

Quality Gate: reviews/phase-1-research/research-review.md

Summary Validation: docs/phase-1-research/phase-1-summary.md
Artifact Gate: architecture.md + architecture-candidates.md + `refc/*.c` + iron-requirements.json (P2, REQ-A-*) exist

Quality Gate: reviews/phase-2-architecture/feature-coverage.md + reviews/phase-2-architecture/architecture-review.md

Summary + ADR: phase-2-summary.md + ADRs (including algorithm + architecture selection)
Signal prefixes: i_, o_, io_. Clock: {domain}_clk. Reset: {domain}_rst_n

Artifact Gate: architecture.md + `refc/*.c` + iron-requirements.json (P2, REQ-A-*) exist

Quality Gate: Phase 2→3 gate above must have passed (compliance PASS, all OPEN-1-* resolved)
Artifact Gate: `docs/phase-3-uarch/*.md` + `bfm/src/*.cpp` (or `bfm/src/*.c`) + iron-requirements.json (P3) exist

Quality Gate:
Note: Phase 3 produces C/SystemC BFM, NOT SystemVerilog RTL. BFM is the executable μArch model. DPI bridge template is prepared for future RTL comparison in Phase 4, but no RTL is written in DSE.
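A sketch of what the C side of such a DPI bridge template might export. The function names and the accumulator body are placeholders; the point is the shape — plain C linkage and plain C types, which is what a future `import "DPI-C"` declaration in an SV testbench would bind to:

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder BFM state; the real BFM wraps the uArch model. */
static int32_t bfm_state;

/* DPI-C-compatible entry points: plain C linkage, plain C types. */
void bfm_reset(void) { bfm_state = 0; }

int32_t bfm_step(int32_t in_word) {
    bfm_state += in_word;   /* placeholder behavior */
    return bfm_state;
}
```

In DSE these entry points are exercised only from the C/SystemC harness; the SV-side import declarations stay unused until Phase 4.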
After Phase 3 Quality Gate PASS, the orchestrator performs self-critique BEFORE presenting results to the user:
Critique agent reviews the complete P1→P3 output:
Output: reviews/dse-self-critique.md with findings rated HIGH/MEDIUM/LOW
Re-run: Run Phase 1→3 again incorporating all critique findings
Result: Second pass produces refined pre-implementation package
When the user requests another iteration (not satisfied with results), a new trial is created in a git worktree:
Agent(isolation="worktree") with user feedback

After Trial N completes, compare against the current best trial:
Independent compliance checks: invoke compliance-checker separately on EACH trial's P1→P3 chain
Quantitative comparison table (presented to user):
| Metric | Current Best (Trial K) | New Trial (Trial N) |
|---|---|---|
| Iron requirements count | | |
| Open items remaining | | |
| Ambiguity score (P1) | | |
| Architecture candidates explored | | |
| μArch modules defined | | |
| BFM ↔ RefC match | | |
| Compliance verdict (P1+P2→P3) | | |
| Self-critique HIGH findings | | |
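The BFM ↔ RefC match row implies a bit-exact comparison of the two models' output streams. A minimal sketch of such a check (the function name and mismatch-index convention are illustrative assumptions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Returns the index of the first byte where the BFM and ref C model
 * outputs diverge, or -1 when the streams are bit-exact. */
static long first_mismatch(const uint8_t *bfm, const uint8_t *refc,
                           size_t n) {
    for (size_t i = 0; i < n; i++)
        if (bfm[i] != refc[i])
            return (long)i;
    return -1;
}
```

Reporting the first diverging index, rather than a bare pass/fail, gives the comparison table a concrete datum to cite when a trial fails the match.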
repeat:
AskUserQuestion("Are you satisfied with the results? (yes/no)")
if yes → done (pre-implementation package ready for Phase 4)
if no → collect user feedback
→ create new trial (worktree)
→ run P1→P3 with feedback
→ compare trials
→ user selects better trial