Writes minimal production-quality code to pass failing tests in TDD GREEN phase. Analyzes specs, test mappings, configs, and failing assertions for clean, targeted implementations.
```shell
npx claudepluginhub mountainunicorn/add --plugin add
```

This skill is limited to using the following tools:
Write minimal production-quality code to make failing tests pass. This is the GREEN phase of TDD.
The Implementer takes failing tests (from test-writer) and writes the smallest amount of code necessary to make them all pass. The goal is a minimal viable implementation: just enough code to turn every test green, with no over-engineering.
1. Verify the test files exist and fail: run `npm test` or `python -m pytest`.
2. Read the feature spec.
3. Load the test mapping.
4. Determine the implementation structure.
5. Read `.add/config.json`.
6. Check for a session handoff: read `.add/handoff.md` if it exists.

For each failing test:
Example analysis:

```typescript
// Test:
it('should return sum of two numbers', () => {
  expect(add(2, 3)).toBe(5);
});

// Requires:
// - Function named 'add'
// - Takes two number parameters
// - Returns number (sum)
```
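The same analysis applies to Python projects. A pytest sketch of the equivalent test, the requirements it implies, and the minimal code that satisfies it (names here are illustrative):

```python
# Test (written first in RED; fails until 'add' exists):
def test_returns_sum_of_two_numbers():
    assert add(2, 3) == 5

# Requires:
# - a function named 'add'
# - two numeric parameters
# - returns their sum

# Minimal GREEN-phase implementation:
def add(a: int, b: int) -> int:
    return a + b

test_returns_sum_of_two_numbers()  # passes once 'add' exists
```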
Create an implementation plan before writing any code. Start with the simplest, most-depended-upon functions first:
For JavaScript/TypeScript:

```typescript
// src/feature.ts - minimal implementation
export function add(a: number, b: number): number {
  return a + b;
}

export class Feature {
  constructor(private config: FeatureConfig) {}

  doSomething(input: string): string {
    return input.toUpperCase();
  }
}
```
For Python:

```python
# src/feature.py - minimal implementation
def add(a: int, b: int) -> int:
    return a + b


class Feature:
    def __init__(self, config):
        self.config = config

    def do_something(self, input_str: str) -> str:
        return input_str.upper()
```
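Tests that this minimal `Feature` class would satisfy might look like the following pytest sketch (file path and test names are illustrative; the class is inlined here so the sketch is self-contained, but in a real project it would be imported from `src/feature.py`):

```python
# tests/test_feature.py - illustrative
class Feature:
    """Inlined copy of the minimal implementation above."""
    def __init__(self, config):
        self.config = config

    def do_something(self, input_str: str) -> str:
        return input_str.upper()


def test_do_something_uppercases_input():
    feature = Feature(config={})
    assert feature.do_something("abc") == "ABC"


def test_do_something_preserves_already_uppercase():
    feature = Feature(config={})
    assert feature.do_something("ABC") == "ABC"
```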
After implementing each logical unit, run `npm test` or `python -m pytest`. As tests progress to GREEN, run the full test suite:

```shell
npm test
# or
python -m pytest
```
Verify the run. Example output:

```text
PASS tests/feature.test.ts
  Feature Name
    AC-001: requirement
      ✓ test_AC_001_behavior_a (15ms)
      ✓ test_AC_001_behavior_b (12ms)
    AC-002: requirement
      ✓ test_AC_002_behavior_c (8ms)

Test Suites: 1 passed, 1 total
Tests:       3 passed, 3 total
Coverage:    85%
```
Implementation code must:
After implementation reaches GREEN, run the suite shuffled to confirm the tests pass in any order:

```shell
npm test -- --shuffle
```

Upon successful GREEN phase completion, output:
```markdown
# Implementation Complete (GREEN Phase) ✓

## Feature
{feature-name} v{spec-version}

## Test Results
- Tests Passing: {count}/{total}
- Tests Failing: 0/{total}
- Test Duration: {seconds}s

## Acceptance Criteria Implementation
- AC-001: ✓ Implemented and passing
- AC-002: ✓ Implemented and passing
... (all ACs listed)

## Code Coverage
- Line Coverage: {percentage}%
- Branch Coverage: {percentage}%
- Function Coverage: {percentage}%

## Implementation Files Created/Modified
- {file-path}: {N} functions, {N} classes
- {file-path}: {N} functions

## Next Steps
1. Run /add:reviewer to check code quality
2. Run /add:tdd-cycle REFACTOR phase to improve code
3. Merge implementation

## Notes
- All tests green as of {timestamp}
- Ready for code review and refactoring
- No over-engineering; minimal viable implementation
```
Use TaskCreate and TaskUpdate to report progress through the CLI spinner. Create tasks at the start of each major phase and mark them completed as they finish.
Tasks to create:
| Phase | Subject | activeForm |
|---|---|---|
| Read tests | Reading failing tests | Reading failing tests... |
| Analyze | Analyzing requirements | Analyzing requirements... |
| Implement | Writing implementation code | Writing implementation code... |
| Verify GREEN | Confirming all tests pass | Verifying tests pass (GREEN confirmed)... |
Mark each task in_progress when starting and completed when done. This gives the user real-time visibility into skill execution.
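The task lifecycle above can be modeled as follows. This is a sketch only: `TaskCreate` and `TaskUpdate` are host-environment tools whose real signatures this document does not specify, so the dataclass, field names, and `run_phase` helper here are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical model of the task lifecycle; the real TaskCreate/TaskUpdate
# calls come from the host environment, not from this code.
@dataclass
class Task:
    subject: str       # e.g. "Writing implementation code"
    active_form: str   # e.g. "Writing implementation code..."
    status: str = "pending"

def run_phase(task: Task) -> Task:
    task.status = "in_progress"   # mark when the phase starts
    # ... the phase's actual work happens here ...
    task.status = "completed"     # mark when the phase finishes
    return task

tasks = [
    Task("Reading failing tests", "Reading failing tests..."),
    Task("Analyzing requirements", "Analyzing requirements..."),
    Task("Writing implementation code", "Writing implementation code..."),
    Task("Confirming all tests pass", "Verifying tests pass (GREEN confirmed)..."),
]
for t in tasks:
    run_phase(t)
```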
Handle these edge cases when they arise:

- Tests still failing after implementation
- Type errors or compilation failures
- Tests pass but coverage is low
- Existing implementation conflicts with tests
- Performance issues
The implementer respects: