Executes complete TDD cycle—RED (failing tests from spec), GREEN (minimal implementation), REFACTOR (code quality), VERIFY (quality gates)—coordinating sub-agents for feature development.
`npx claudepluginhub mountainunicorn/add --plugin add`

This skill is limited to using the following tools:
Execute a complete Test-Driven Development cycle for a feature from the specification through production-quality code.
Runs the full RED-GREEN-REFACTOR TDD workflow for features from descriptions, task IDs, or specs. Confirms the plan, then automates failing tests (RED), minimal implementation (GREEN), and refactoring.
Orchestrates RED/GREEN/REFACTOR TDD cycles using context-isolated agents for test-first feature implementation.
Implements task specs via TDD (RED-GREEN-REFACTOR cycle), one test at a time from PLAN-*.md files. Collaborative mode pauses per step; auto mode runs autonomously.
The TDD Cycle is the primary "do the work" skill in Agent Driven Development. It orchestrates the full lifecycle:
This skill coordinates sub-agents (test-writer, implementer, reviewer, verify) and maintains traceability between specs, tests, and implementation.
Before beginning, validate:

- `.add/handoff.md` if it exists

Invoke the `/add:test-writer` skill with the spec reference:

- `specs/{feature}.md [--ac AC-001,AC-002]`
- Name tests `test_AC_NNN_description` for traceability

Verify the tests actually fail:

- `npm test` or `python -m pytest`, depending on config

Once the RED phase is complete, invoke the `/add:implementer` skill:

- `specs/{feature}.md [--ac AC-001,AC-002]`

After implementation:

- `npm test` or equivalent

With all tests green, refactor for quality:

- `specs/{feature}.md --scope full`, then `npm test`

Run the full verification suite using the `/add:verify` skill:

- `--level deploy` (all gates for production readiness)

If any gate fails:
Upon successful completion, output:
# TDD Cycle Complete ✓
## Feature
{feature-name} v{spec-version}
## Summary
- Tests Written: {count}
- Tests Passing: {count}
- Code Coverage: {percentage}%
- Refactoring Issues Fixed: {count}
- Quality Gates: {pass/total}
## Acceptance Criteria Status
- AC-001: ✓ Passing
- AC-002: ✓ Passing
... (all ACs listed with status)
## Next Steps
1. Review code in {implementation-path}
2. Merge to main branch
3. Deploy to staging environment
## Artifacts
- Tests: {test-file-paths}
- Implementation: {code-file-paths}
- Plan: docs/plans/{feature}-plan.md
Use TaskCreate and TaskUpdate to report progress through the CLI spinner. Create tasks at the start of each major phase and mark them completed as they finish.
Tasks to create:
| Phase | Subject | activeForm |
|---|---|---|
| Pre-flight | Loading spec and config | Loading spec and config... |
| RED | Writing failing tests | Writing failing tests... |
| GREEN | Implementing code | Implementing code to pass tests... |
| REFACTOR | Refactoring for quality | Refactoring for code quality... |
| VERIFY | Running verification gates | Running verification gates... |
Mark each task in_progress when starting and completed when done. This gives the user real-time visibility into skill execution.
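The in_progress/completed lifecycle above can be sketched as a simple state progression — this is an illustration of the intended flow, not the actual TaskCreate/TaskUpdate API:

```python
# Phases and subjects taken from the table above.
PHASES = [
    ("Pre-flight", "Loading spec and config"),
    ("RED", "Writing failing tests"),
    ("GREEN", "Implementing code"),
    ("REFACTOR", "Refactoring for quality"),
    ("VERIFY", "Running verification gates"),
]

def run_phases(run_phase):
    """Mark each task in_progress, run it, then mark it completed."""
    statuses = {}
    for phase, subject in PHASES:
        statuses[phase] = "in_progress"   # stand-in for a TaskUpdate call
        run_phase(phase, subject)
        statuses[phase] = "completed"     # stand-in for a TaskUpdate call
    return statuses
```

Driving this with a real runner would update the CLI spinner once per phase transition.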
Tests won't compile/run: run `npm install` or the equivalent dependency installation.

Tests still failing after GREEN phase: invoke `/add:implementer` again with a specific AC range.

Quality gate failures: use the `--fix` flag on `/add:verify`.

Performance issues detected:

If the `--parallel` flag is set:
The skill respects these `.add/config.json` settings:

- `test.framework`: The test framework (jest, pytest, vitest, etc.)
- `test.minCoverage`: Minimum code coverage percentage
- `test.convention`: Test file naming convention
- `code.style`: Code style rules and linters
- `ci.gates`: Which gates to run and in what order

After completing this skill, do BOTH:
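A `.add/config.json` using these settings might look like the following sketch — the key names come from the list above, but the exact schema and the values shown are illustrative assumptions:

```json
{
  "test": {
    "framework": "vitest",
    "minCoverage": 85,
    "convention": "test_AC_NNN_description"
  },
  "code": {
    "style": "eslint"
  },
  "ci": {
    "gates": ["lint", "test", "coverage", "build"]
  }
}
```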
Append one observation line to .add/observations.md:
{YYYY-MM-DD HH:MM} | tdd-cycle | {one-line summary of outcome} | {cost or benefit estimate}
If `.add/observations.md` does not exist, create it with a `# Process Observations` header first.
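The append-or-create behavior above can be sketched in Python — the helper name is hypothetical, but the line format and header follow the template given:

```python
from datetime import datetime
from pathlib import Path

def append_observation(summary: str, estimate: str,
                       path: str = ".add/observations.md") -> str:
    """Append one observation line, creating the file with its header if missing."""
    p = Path(path)
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    line = f"{stamp} | tdd-cycle | {summary} | {estimate}\n"
    if not p.exists():
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_text("# Process Observations\n\n")
    with p.open("a") as f:
        f.write(line)
    return line
```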
Write a structured JSON learning entry per the checkpoint trigger in rules/learning.md (section: "After TDD Cycle Completes"). Classify scope, write to the appropriate JSON file (.add/learnings.json or ~/.claude/add/library.json), and regenerate the markdown view.