Use when implementing any feature or bugfix, before writing implementation code. Enforces strict test-first development — no production code without a failing test.
`npx claudepluginhub c-reichert/flowstate --plugin flowstate`

This skill uses the workspace's default tool permissions.
Write the test first. Watch it fail. Write minimal code to pass.
Core principle: If you didn't watch the test fail, you don't know if it tests the right thing.
Violating the letter of the rules is violating the spirit of the rules.
Always follow the cycle below. For any exception, ask your human partner first.
Thinking "skip TDD just this once"? Stop. That's rationalization.
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
Write code before the test? Delete it. Start over.
No exceptions:
Implement fresh from tests. Period.
Every piece of production code follows this cycle. No shortcuts. No reordering.
Write one minimal test that describes the behavior you want.
Requirements:
- The test describes the WHAT, not the HOW.
- The test name states the expected behavior clearly.
- The test exercises real code, not mock behavior.
Bad tests: vague names, multiple assertions testing unrelated things, mocking away the code under test.
MANDATORY. Never skip.
Run the project's test command targeting your new test.
Confirm it fails, and fails for the right reason:
Test passes immediately? You're testing existing behavior. Your test is wrong. Fix it.
Test errors instead of failing? Fix the error. Re-run until it fails correctly — a clean failure for the right reason.
Write the simplest possible code to make the test pass. Nothing more.
If the test asks for one thing, implement one thing.
MANDATORY.
Run the project's test command.
Confirm the new test passes and the rest of the suite stays green:
New test fails? Fix your implementation, not the test.
Other tests broke? Fix them now. Do not proceed with broken tests.
Only after green do you refactor:
Run the project's test command after every change. Stay green. If tests break during refactor, undo and try again.
Do NOT add behavior during refactor. Refactor changes structure, not functionality.
Next failing test. Next behavior. Same cycle. No exceptions.
Write code before the test? Delete it. Start over. No exceptions.
You wrote 200 lines without a test? Delete 200 lines. You spent 4 hours? Those hours are gone regardless. Your choice now:

1. Delete the code and rebuild it test-first.
2. Keep the code and write tests after.

Option 2 is not TDD. It is rationalization wearing a lab coat.
Every excuse below is invalid. Every single one.
| Excuse | Reality |
|---|---|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. Write it. |
| "I'll test after" | Tests passing immediately prove nothing. You lost the proof. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" These are not the same question. |
| "Already manually tested" | Ad-hoc is not systematic. No record. Can't re-run. Can't trust. |
| "Deleting X hours is wasteful" | Sunk cost fallacy. The time is gone. Keeping unverified code is debt. |
| "Keep as reference" | You'll adapt it. That's testing after with extra steps. Delete means delete. |
| "Need to explore first" | Fine. Explore. Then throw away ALL exploration code and start with TDD. |
| "Test is hard = design unclear" | Listen to the test. Hard to test = hard to use. Fix the design. |
| "TDD will slow me down" | TDD is faster than debugging. You'll pay the time either way — upfront or in production. |
| "Manual test is faster" | Manual doesn't prove edge cases. You'll re-test every change forever. |
| "Existing code has no tests" | You're improving it now. Add tests for the code you're touching. No excuse. |
| "This is different because..." | It's not. Delete the code. Start over with TDD. |
"I'll write tests after to verify it works"
Tests written after the code pass immediately, and a test that passes immediately proves nothing: it may pass for the wrong reason, mirror the implementation's quirks, or test nothing at all.
Test-first forces you to see the test fail, proving it actually tests something real.
"Tests after achieve the same goals — it's spirit not ritual"
No. Tests-after are biased by your implementation. You test what you built, not what's required. You verify remembered edge cases, not discovered ones.
Tests-first force edge case discovery BEFORE implementing. Tests-after verify you remembered everything. You didn't.
"Deleting X hours of work is wasteful"
Sunk cost fallacy. The time is already spent. Your only choice now is between verified, test-driven code and unverified code.
The "waste" is keeping code you can't trust.
"TDD is dogmatic, being pragmatic means adapting"
TDD IS pragmatic: you pay for correctness upfront instead of debugging in production.
"Pragmatic" shortcuts = debugging in production = slower.
If you catch yourself reaching for any of the excuses above, stop immediately. They all mean the same thing: Delete the code. Start over with TDD.
No negotiation. No partial credit. Delete and restart.
| Quality | Good | Bad |
|---|---|---|
| Minimal | Tests one thing. "and" in the name? Split it. | `test('validates email and domain and whitespace')` |
| Clear | Name describes expected behavior | `test('test1')`, `test('it works')` |
| Intent | Demonstrates the desired API | Obscures what the code should do |
| Real | Tests actual code paths | Tests mock behavior instead of real behavior |
| Focused | Assertion matches the test name | Asserts unrelated side effects |
| Problem | Solution |
|---|---|
| Don't know how to test | Write the API you wish existed. Write the assertion first. Ask your human partner. |
| Test too complicated | Design too complicated. Simplify the interface. |
| Must mock everything | Code too coupled. Use dependency injection. Reduce coupling. |
| Test setup is huge | Extract test helpers. Still complex? Simplify the design. |
| Can't isolate the unit | Break dependencies. Inject them. Make the boundary explicit. |
Bug found? Write a failing test that reproduces it FIRST. Then follow the TDD cycle. The test proves the fix works AND prevents regression.
Never fix bugs without a test. The test is the proof. Without proof, you're guessing.
Before marking any task complete, confirm ALL of these: a test existed before the code, you watched it fail, it now passes, and the whole suite is green.
Can't check all boxes? You skipped TDD. Start over.
Production code -> test exists and failed first
Otherwise -> not TDD
No exceptions without your human partner's explicit permission.