/implement
You are implementing tasks using strict Test-Driven Development: Red → Green → Refactor, one task at a time.
Input
You need the task list from /tasks. If it is not in context:
$ARGUMENTS
If $ARGUMENTS is empty and no task list is in context, ask the user to paste the tasks before continuing.
Step 1 — Verify the current state
git branch --show-current # should be feat/{issue-number}-{name}
If not on a feature branch, stop and ask the user to switch to the correct branch.
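A minimal guard sketch (the feat/{issue-number}-{name} convention is the one assumed above):
# Hypothetical guard: warn when the branch does not match the expected pattern
branch=$(git branch --show-current)
case "$branch" in
  feat/*) echo "On feature branch: $branch" ;;
  *) echo "Not on a feature branch: stop and ask the user to switch" ;;
esac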
Step 2 — Run the existing test suite
Before writing any code, discover and run all test suites in the project:
- Discover test projects/suites. Look for all test directories, test projects, or test configurations — not just unit tests. Projects commonly have multiple suites: unit tests, integration tests, functional tests, end-to-end tests. (A discovery sketch follows at the end of this step.)
# Examples of discovering test suites:
# .NET: find all *.Tests.csproj or *.IntegrationTests.csproj
# JS/TS: look for test configs (jest.config.*, vitest.config.*, cypress.config.*)
# Python: look for test directories, pytest.ini, tox.ini
- Run every suite you can. Use whatever commands the project uses:
# Run tests — use whatever command the project uses
# Examples: dotnet test, bun test, pytest, go test ./..., cargo test, npm test
- If a test suite requires infrastructure (database, external services, Docker, etc.) that is not available, inform the user explicitly and ask them to run those tests before you proceed. Do not silently skip them.
All existing tests must pass before you start. If any tests fail, stop and report them to the user. Do not proceed with implementation on top of a broken test suite — you won't be able to distinguish pre-existing failures from regressions you introduce.
Record the test count per suite (e.g., "UnitTests: 47 passing, IntegrationTests: 12 passing") — you will compare against this baseline after each task.
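For example, a discovery sweep over a mixed repository might look like this (the patterns are illustrative; adjust them to the project's stack):
# Hypothetical discovery sweep
find . -name "*.Tests.csproj" -o -name "*.IntegrationTests.csproj"    # .NET suites
ls jest.config.* vitest.config.* cypress.config.* 2>/dev/null          # JS/TS test configs
find . -name "pytest.ini" -o -name "tox.ini" -o -type d -name "tests"  # Python layouts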
Step 3 — Read the codebase
Before writing any code:
- Read the files listed in the current task's "Files to create/modify"
- Understand existing conventions (naming, patterns, file structure)
- Check for similar implementations to follow
- Identify the testing framework and patterns used in the project
- Check if modified methods have documentation (XML doc comments, JSDoc, docstrings, etc.) — you will need to update it if you change the method's behavior, parameters, or return values
Do not guess. Read the files.
Step 4 — Work through tasks in order
For each task in the task list:
4a. Announce the task
State clearly which task you are starting:
"Starting Task N: [title]"
4b. TDD Cycle — for each behavior in the task
Red — Write a failing test
- Write a test that specifies the next behavior
- Place it in the appropriate test file (following project conventions)
- Test only one behavior at a time
- Use descriptive test names that read as specifications
Run the test suite and confirm it fails for the right reason.
# Run tests — use whatever command the project uses
# Examples: dotnet test, bun test, pytest, go test ./..., cargo test, npm test
If the test passes without implementation, the test is wrong — fix it.
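To check the failure reason, it often helps to run only the new test. The test names and paths below are illustrative:
# Run a single test to inspect its failure message
pytest tests/test_payment.py::test_rejects_expired_card
# .NET: dotnet test --filter "FullyQualifiedName~RejectsExpiredCard"
# Go:   go test ./payment -run TestRejectsExpiredCard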
Green — Write the minimum code
Write only what is needed to make the failing test pass. No more. Resist the urge to implement future behaviors.
Run the test suite. All tests must be green before continuing.
Refactor — Clean up
If the implementation has duplication or poor structure, clean it up now. Re-run the suite to confirm everything is still green.
Repeat — Next behavior in the task
Move to the next behavior within the same task.
4c. Verify the task is complete
When all behaviors in the task are implemented:
- Run the full test suite — all tests must pass
- Verify the task's acceptance criteria are met
- Confirm no test is skipped, commented out, or empty
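One way to spot-check that last point is a quick grep for common skip markers (the tests/ directory and the patterns are assumptions; they vary by framework):
# Hypothetical spot-check for skipped tests
grep -rnE "it\.skip|test\.skip|@pytest\.mark\.skip|\[Fact\(Skip" tests/ && echo "Found skip markers: investigate before declaring the task complete"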
✓ Task N complete: [title]
Tests: N passing, 0 failing
Acceptance criteria: all met
4d. Move to next task
Repeat from 4a for the next task.
Step 5 — Final verification
When all tasks are complete, run the full test suite one final time:
# Run the full test suite
Also verify the build compiles cleanly:
# Run the build — use whatever command the project uses
# Examples: dotnet build, bun run build, go build ./..., cargo build, npm run build
If any tests are red, do not report success. Diagnose and fix.
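As a sketch, the two checks can be chained so a failure in either one stops the flow (npm shown; substitute your project's commands):
npm test && npm run build && echo "All green, build clean"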
Step 6 — Refresh the GitHub issue description
After implementation is complete, update the GitHub issue to reflect the actual state of the implementation. The original issue was created before any code was written — it may now be outdated.
Extract the issue number from the branch name (feat/<issue-number>-...), then do the following (a combined command sketch appears after the list):
- Read the current issue body:
gh issue view <issue-number>
- Update the issue body with gh issue edit to reflect:
- Completed tasks: Check off all task checkboxes (- [x]) that were implemented
- Deviations from the plan: If any task was implemented differently than originally described, update the task description to match what was actually built
- New files or components: If the implementation created files not listed in the original "Key files" section, add them
- Updated acceptance criteria: If any acceptance criteria changed during implementation, reflect the final version
gh issue edit <issue-number> --body "<updated body>"
- Preserve the original structure. Do not rewrite the issue from scratch — update the existing sections in place. The issue format must remain consistent with the /tasks template.
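Taken together, a minimal sketch of this flow (the issue number and filename are illustrative):
branch=$(git branch --show-current)                        # e.g. feat/123-add-login
issue=$(echo "$branch" | sed -E 's|^feat/([0-9]+)-.*|\1|') # extract "123"
gh issue view "$issue" --json body -q .body > issue-body.md
# ... update issue-body.md in place, then:
gh issue edit "$issue" --body-file issue-body.md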
Rules:
- All content must remain in English.
- Do not remove information — update it. If a task was split or merged during implementation, note what changed.
- If nothing changed from the original issue, skip this step.
Report:
✓ All tasks complete (N/N)
✓ All tests passing (N tests, 0 failures)
✓ Build: clean
✓ Issue #<number>: description refreshed
Implementation complete. Run /review.
TDD Rules — Non-negotiable
- Never write production code before a failing test exists for it.
- Never move to the next behavior while any test is red.
- Never skip the refactor step if there is obvious cleanup to do.
- If a test is unexpectedly hard to write, that's a signal the design is wrong — reconsider before forcing it.
- Never comment out a failing test. Never write empty test bodies. Never mark a test as skipped.
- Every task must add or modify tests. If a task changes behavior, new tests must cover that behavior. If you finish a task and the test count hasn't increased, you likely missed test coverage.
- The test count at the end must be greater than or equal to the baseline from Step 2. Deleting or reducing tests is a red flag — justify it explicitly.
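One way to eyeball the count against the Step 2 baseline (pytest shown; other runners print similar summaries):
# Quick count check; compare to the baseline recorded in Step 2
pytest -q | tail -n 1   # e.g. "52 passed in 3.21s" against a baseline of 47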
Documentation Rules
When modifying methods, functions, or classes that have existing documentation (XML doc comments, JSDoc, docstrings, etc.):
- If you change a method's behavior, update its documentation to reflect the new behavior.
- If you add, remove, or rename parameters, update the parameter documentation.
- If you change the return value or type, update the return documentation.
- If you add a new public method, add documentation following the same style as the existing codebase.
- Do not add documentation to undocumented code — only update what is already documented, and document new public APIs you create.
For a detailed TDD reference including anti-patterns and troubleshooting, read references/tdd-guide.md in this skill's directory.
Gotchas
These are common failure modes during implementation. Watch for them:
- Writing production code first, then backfilling tests. This is the most common TDD violation. If you catch yourself writing a function and then writing a test that calls it, you've inverted the cycle. Delete the production code, write the test, watch it fail, then implement.
- Tests that always pass. If your test passes on the first run without any production code, the test is wrong. It's either testing nothing (empty assertion), testing a tautology, or testing existing behavior that isn't the new behavior you're implementing.
- Gold-plating during Green. The Green step means write the MINIMUM code to make the test pass. If you're adding error handling, validation, or edge cases that no test requires yet, stop. Write a test for that behavior first.
- Skipping Refactor because "it looks fine." Even if it looks fine, take 10 seconds to consider: is there duplication? Is a name unclear? Could a conditional be simplified? The Refactor step is where design emerges.
- Testing implementation details. "Assert that processPayment calls validateCard internally" couples the test to the implementation. Test the observable behavior: "Given a valid card, when processing payment, then the charge succeeds."
- Running tests in your head instead of actually running them. Claude sometimes claims "the test would fail because..." without executing the test command. Always run the actual test command — the real output often surprises you.
- Commenting out or skipping a failing test to "come back to it later." There is no later. Fix it now or revert. A skipped test is a hidden bug.
- Not running existing tests before starting. If you skip the baseline test run (Step 2), you won't know whether failures are pre-existing or caused by your changes. Always establish a green baseline first.
- Implementing without adding new tests. Every behavioral change needs a test. If you modified code but didn't write any new tests, you skipped the Red step of TDD.
- Forgetting to update documentation. When you change a documented method's signature or behavior, the documentation becomes a lie. Update XML doc comments, JSDoc, docstrings, etc. as part of the Refactor step.
What to do when stuck
- Test failing unexpectedly: Read the error carefully before changing anything.
- Dependency behaving oddly: Read its documentation or source before trying workarounds.
- Task is ambiguous: Stop and ask the user rather than guessing.
- Design feels wrong: It's OK to revisit the plan. Better to adjust now than force a bad design.
- Tests consistently failing: The plan may need revision. Offer to return to /plan.