Use when writing any new code, fixing bugs, or refactoring. Globally enforced: this skill is MANDATORY whenever code is written. Enforces Red-Green-Refactor cycle and the Delete Untested Code rule. Triggers: any code writing task, "implement", "add feature", "fix bug", any step in executing-plans that touches source code.
From superomni. Install: `npx claudepluginhub wilder1222/superomni --plugin superomni`

Supporting files: SKILL.md.tmpl, tdd-enforcement.md, testing-anti-patterns.md
```bash
mkdir -p ~/.omni-skills/sessions
_PROACTIVE=$(~/.claude/skills/superomni/bin/config get proactive 2>/dev/null || echo "true")
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
_TEL_START=$(date +%s)
echo "Branch: $_BRANCH | PROACTIVE: $_PROACTIVE"
```
If PROACTIVE is false: do NOT proactively suggest skills. Only run skills the
user explicitly invokes. If you would have auto-invoked, say:
"I think [skill-name] might help here — want me to run it?" and wait.
Report status using one of these at the end of every skill session:
Pipeline stage order: THINK → PLAN → REVIEW → BUILD → VERIFY → SHIP → REFLECT
REVIEW is the only human gate. All other stages auto-advance on DONE.
| Status | At REVIEW stage | At all other stages |
|---|---|---|
| DONE | STOP — present review summary, wait for user input (Y / N / revision notes) | Auto-advance — print [STAGE] DONE → advancing to [NEXT-STAGE] and immediately invoke next skill |
| DONE_WITH_CONCERNS | STOP — present concerns, wait for user decision | STOP — present concerns, wait for user decision |
| BLOCKED / NEEDS_CONTEXT | STOP — present blocker, wait for user | STOP — present blocker, wait for user |
When auto-advancing, print:

[STAGE] DONE → advancing to [NEXT-STAGE] ([skill-name])

When the user sends a follow-up message after a completed session, before doing anything else:
```bash
ls docs/superomni/specs/spec-*.md docs/superomni/plans/plan-*.md docs/superomni/ .superomni/ 2>/dev/null | head -20
git log --oneline -3 2>/dev/null
```
To find the latest spec or plan:
```bash
_LATEST_SPEC=$(ls docs/superomni/specs/spec-*.md 2>/dev/null | sort | tail -1)
_LATEST_PLAN=$(ls docs/superomni/plans/plan-*.md 2>/dev/null | sort | tail -1)
```
Then resume at the appropriate stage (see the workflow skill for the stage → skill mapping) and announce:

"Continuing in superomni mode — picking up at [stage] using [skill-name]."

When asking the user a question, match the confirmation requirement to the complexity of the response:
| Question type | Confirmation rule |
|---|---|
| Single-choice — user picks one option (A/B/C, 1/2/3, Yes/No) | The user's selection IS the confirmation. Do NOT ask "Are you sure?" or require a second submission. |
| Free-text input — user types a value and presses Enter | The submitted text IS the confirmation. No secondary prompt needed. |
| Multi-choice — user selects multiple items from a list | After the user lists their selections, ask once: "Confirm these selections? (Y to proceed)" before acting. |
| Complex / open-ended discussion — back-and-forth clarification | Collect all input, then present a summary and ask: "Ready to proceed with the above? (Y/N)" before acting. |
Rule: never add a redundant confirmation layer on top of a single-choice or text-input answer.
Custom Input Option Rule: Whenever you present a predefined list of choices (A/B/C, numbered options, etc.), always append a final "Other" option that lets the user describe their own idea:
[last letter/number + 1]) Other — describe your own idea: ___________
When the user selects "Other" and provides their custom text, treat that text as the chosen option and proceed exactly as you would for any other selection. If the custom text is ambiguous, ask one clarifying question before proceeding.
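For instance, a single-choice prompt with the appended option might look like this (the backend choices are hypothetical, only the format matters):

```
Which storage backend should we use?
1) SQLite
2) PostgreSQL
3) Other — describe your own idea: ___________
```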
Load context progressively — only what is needed for the current phase:
| Phase | Load these | Defer these |
|---|---|---|
| Planning | Latest docs/superomni/specs/spec-*.md, constraints, prior decisions | Full codebase, test files |
| Implementation | Latest docs/superomni/plans/plan-*.md, relevant source files | Unrelated modules, docs |
| Review/Debug | diff, failing test output, minimal repro | Full history, specs |
If context pressure is high: summarize prior phases into 3-5 bullet points, then discard raw content.
All skill artifacts are written to docs/superomni/ (relative to project root).
See the Document Output Convention in CLAUDE.md for the full directory map.
Agent failures are harness signals, not reasons to retry the same approach: use the harness-engineering skill to update the harness before retrying.

It is always OK to stop and say "this is too hard for me." Escalation is expected, not penalized.
After completing any skill session, run a 3-question self-check before writing the final status:
If any answer is NO, address it before reporting DONE. If it cannot be addressed, report DONE_WITH_CONCERNS and name the gap.
For a full performance evaluation spanning the entire sprint, use the self-improvement skill.
```bash
_TEL_END=$(date +%s)
_TEL_DUR=$(( _TEL_END - _TEL_START ))
~/.claude/skills/superomni/bin/analytics-log "SKILL_NAME" "$_TEL_DUR" "OUTCOME" 2>/dev/null || true
```
Nothing is sent to external servers. Data is stored only in ~/.omni-skills/analytics/.
Goal: Write code that is correct by construction, with tests as the specification.
If you can write the code, you can write the test first. Exceptions are rare: pure UI layout, one-off scripts, throwaway prototypes. If you're not sure — write the test first anyway.
Code written before its test MUST be deleted. If you wrote implementation code before writing a test for it: delete it, write the failing test, then re-implement. There are no exceptions. This rule prevents gradual test-last erosion.
A test that was never red proves nothing. Before implementing, run the test and confirm it FAILS for the right reason. A test that passes without implementation is either testing the wrong thing or the behavior is already implemented.
```
RED      → Write a failing test for the behavior you want
     ↓ confirm it fails for the RIGHT reason
GREEN    → Write the MINIMUM code to make the test pass
     ↓ confirm it now passes
REFACTOR → Clean up without changing behavior
     ↓ confirm tests still pass after every change
```

Repeat for each behavior unit.
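One full cycle can be sketched as follows (the `slugify` helper is hypothetical; the point is the ordering of the steps):

```python
# RED: this test is written first and fails while slugify() is absent or wrong.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# GREEN: the minimum code that makes the test pass; nothing speculative.
def slugify(text):
    return text.lower().replace(" ", "-")

# REFACTOR: rename, extract, or simplify here, rerunning the test after
# each step; behavior (and therefore the test) must not change.

test_slugify_lowercases_and_hyphenates()  # passes once GREEN is in place
```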
Before starting, verify the test environment is working:
```bash
# Confirm tests can run at all
npm test 2>&1 | head -10 ||
  pytest --collect-only 2>&1 | head -10 ||
  go test ./... -list '.*' 2>&1 | head -10 ||
  bash lib/validate-skills.sh 2>&1 | head -5 ||
  echo "No test runner found — document test approach before proceeding"

# Find existing tests for the area you're about to change
find . -name "*.test.*" -o -name "*.spec.*" -o -name "test_*.py" -o \
  -name "*_test.go" 2>/dev/null | head -20
```
Before writing any implementation code:
1. Identify the behavior, not the implementation.
   - Good: "when calculateTotal() is called with an empty cart, it returns 0"
   - Bad: "test the calculateTotal function"
2. Write the test: one behavior per test, AAA pattern (Arrange / Act / Assert).
3. Run the test: verify it FAILS with the right error (not "test not found").
```bash
# Run just the new test to confirm it fails
npm test -- --testNamePattern="specific test name"
# or
pytest -k "test_name" -v
# or
go test -run TestName ./...
# or (for skill/markdown tests)
bash lib/validate-skills.sh skills/<name>/SKILL.md.tmpl
```
Write the MINIMUM code to make the test pass. Nothing more.
Rules:
```bash
# Run the test to confirm it passes
npm test -- --testNamePattern="specific test name"
```
If you cannot make the test pass in 3 attempts, stop and escalate — report BLOCKED with the 3 hypothesis trace from systematic-debugging.
Now that tests are green, clean up:
Rule: Run the full test suite after every refactoring step. If tests break, the refactor changed behavior — revert and try again.
```bash
# Run full suite after each refactoring step
npm test 2>&1 | tail -5
```
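To make "behavior unchanged" concrete, a refactor amounts to swapping the implementation while the same assertions keep passing (the `sum_prices` helper is a hypothetical example):

```python
# Before the refactor: verbose but correct.
def sum_prices(prices):
    total = 0
    for p in prices:
        total += p
    return total

assert sum_prices([1, 2, 3]) == 6  # green before refactoring

# After the refactor: same behavior, simpler structure.
def sum_prices(prices):
    return sum(prices)

assert sum_prices([1, 2, 3]) == 6  # still green, so behavior was preserved
```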
After completing the feature/fix, document what was tested:
```bash
# Count new tests added
git diff HEAD --name-only | grep -E "\.(test|spec)\." | head -20 || \
  git diff HEAD --name-only | grep -E "(test_|_test\.)" | head -20

# Verify all new tests pass
npm test 2>&1 | tail -10
```
Produce a brief inventory:
```
TDD INVENTORY
────────────────────────────────
Feature/fix: [what was built]
Tests written: [N new tests]
  - test_[name]: [what behavior it covers]
  - test_[name]: [what behavior it covers]
Tests passing: N/N
Iron Law 2 (Delete Untested Code): OBSERVED | N/A [if no untested code was written]
────────────────────────────────
```
See testing-anti-patterns.md for the full list. Key ones:
| Anti-pattern | Problem |
|---|---|
| Testing implementation (not behavior) | Tests break on refactor even when behavior is correct |
| Mocking the thing you're testing | Test proves nothing |
| Tests that always pass | Worse than no tests — gives false confidence |
| No assertions | Syntactically a test, semantically useless |
| Order-dependent tests | Hides bugs, hard to debug |
| Testing private methods directly | Brittle, couples to internals |
| Writing code before the test | Violates Iron Law 2 — delete and rewrite |
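As an illustration of the "no assertions" row (the `total` helper is hypothetical):

```python
def total(prices):
    return sum(prices)

# Anti-pattern: syntactically a test, semantically useless. It passes
# even if total() returns garbage, because nothing is checked.
def test_total_no_assert():
    total([1, 2, 3])

# Fix: assert on observable behavior.
def test_total_returns_sum():
    assert total([1, 2, 3]) == 6
```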
Test these: observable behavior, edge cases, error paths, and boundaries between components.

Don't test these: private methods, implementation details, or third-party library internals.
```python
# Unit test: one function/class in isolation
def test_calculate_total_with_empty_cart():
    cart = Cart()
    assert cart.calculate_total() == 0

# Integration test: multiple real components
def test_checkout_flow_persists_order():
    cart = Cart()
    cart.add_item(Item(id=1, price=9.99))
    order = checkout(cart, user=test_user)
    assert Order.find(order.id).total == 9.99
```
Before adding tests, check what exists:
```bash
# Find existing test files
find . -name "*.test.*" -o -name "*.spec.*" -o -name "test_*.py" | head -20

# Check test coverage (if configured)
npm run test:coverage 2>/dev/null || coverage run -m pytest 2>/dev/null || true
```
Extend existing tests rather than creating parallel test files.
```
TDD REPORT
════════════════════════════════════════
Feature/fix: [description]
Tests written: [N]
Tests passing: [N/N]
Iron Laws:
  Law 1 (Test First): FOLLOWED | VIOLATED (explain)
  Law 2 (Delete Untested): FOLLOWED | N/A
  Law 3 (Red Before Green): FOLLOWED | VIOLATED (explain)
Test inventory:
  [test name] — [behavior covered]
Regressions: [none | list]
Status: DONE | DONE_WITH_CONCERNS | BLOCKED
════════════════════════════════════════
```