From majestic-engineer
Executes TDD workflow: decomposes features/bugs into increments, runs red-green-refactor cycles with pauses for review. Framework-agnostic; configs for Python, TypeScript, Go, Ruby.
```
npx claudepluginhub majesticlabs-dev/majestic-marketplace --plugin majestic-engineer
```

This skill uses the workspace's default tool permissions.
**Never write implementation code before a failing test exists for that behavior.**
Runs the full RED-GREEN-REFACTOR TDD workflow for features from descriptions, task IDs, or specs. Confirms a plan, then automates failing tests (RED), minimal implementation (GREEN), and refactoring.
Enforces strict TDD workflow for feature implementation: write one failing test, minimal code to pass, refactor, repeat. Prevents writing full test suites upfront.
Executes strict TDD workflow with red-green-refactor phases: test specification/design, failing unit tests (RED), coverage thresholds (80% lines), and refactoring triggers. Use for TDD cycles.
Share bugs, ideas, or general feedback.
LANG = /majestic:config tech_stack "unknown"
If LANG == "unknown": detect from project files (package.json → TypeScript, Gemfile → Ruby, go.mod → Go, pyproject.toml → Python)
If LANG == "unknown": AskUserQuestion("What language/framework?")
RUNNER = lookup LANG in [references/language-configs.md]
If LANG in [ruby]: delegate to `rspec-coder` or `minitest-coder` skill for runner details
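The detection step above can be sketched as a marker-file lookup. This is a minimal illustration, not the skill's actual implementation; `detect_language` and the `MARKERS` map are hypothetical names.

```python
import os

# Mirrors the mapping in the workflow: marker file → language.
MARKERS = {
    "package.json": "TypeScript",
    "Gemfile": "Ruby",
    "go.mod": "Go",
    "pyproject.toml": "Python",
}

def detect_language(project_dir):
    for marker, lang in MARKERS.items():
        if os.path.exists(os.path.join(project_dir, marker)):
            return lang
    return "unknown"  # falls through to AskUserQuestion in the workflow
```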
INCREMENTS = decompose feature using patterns from [references/increments.md]
- Read requirements/plan
- Identify pattern (Data Transformation, CRUD, State Machine, Calculation, Integration)
- Break into ordered increments: degenerate → happy path → variations → edge cases → errors
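For illustration, a Calculation-pattern feature such as a hypothetical `order_total` function might decompose into increments like:

```python
# Hypothetical increment list, ordered per the workflow:
# degenerate → happy path → variations → edge cases → errors.
INCREMENTS = [
    {"name": "empty order",    "test_description": "total of an empty order is 0"},
    {"name": "single item",    "test_description": "total equals the item's price"},
    {"name": "multiple items", "test_description": "item prices are summed"},
    {"name": "quantity > 1",   "test_description": "price is multiplied by quantity"},
    {"name": "negative price", "test_description": "raises ValueError"},
]
```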
For each INCREMENT in INCREMENTS:
TaskCreate(subject: INCREMENT.name, description: INCREMENT.test_description)
AskUserQuestion("Review increments. Adjust ordering or scope?", options: ["Looks good", "Modify"])
For each INCREMENT in INCREMENTS:
TaskUpdate(INCREMENT.task_id, status: "in_progress")
# RED
Write failing test to real file (not code block)
RUN = Bash(RUNNER.test_command)
If RUN.status == PASS: STOP — investigate unexpected pass
If RUN.status == FAIL (wrong reason): fix test, rerun
PAUSE → show test + failure output, wait for user
# GREEN
Write minimal implementation code
RUN = Bash(RUNNER.full_suite_command)
If RUN.status == FAIL: show output, PAUSE → ask user to fix or hand off
If RUN.status == PASS: auto-continue (no pause)
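One red-green cycle can be sketched in Python; `order_total` is a hypothetical function used only for illustration.

```python
# RED: the test is written first, to a real test file. At this point it
# fails because order_total does not exist yet, which is the expected
# failure reason.
def test_total_of_empty_order_is_zero():
    assert order_total([]) == 0

# GREEN: the minimal implementation that makes exactly this test pass.
# No speculative item handling yet; later increments will force it.
def order_total(items):
    return 0
```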
# REFACTOR
If improvement opportunities exist:
Refactor implementation and/or test code
RUN = Bash(RUNNER.full_suite_command)
If RUN.status == FAIL: revert refactor, PAUSE → discuss
If RUN.status == PASS: auto-continue (no pause)
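As a sketch of a refactoring trigger: a hypothetical `order_total` that has accumulated duplicated per-item arithmetic across increments can have that arithmetic extracted into a helper, with the full suite rerun to confirm behavior is unchanged.

```python
# Extracted helper: one place for per-item arithmetic.
def line_total(item):
    return item["price"] * item["quantity"]

# Behavior is unchanged by the refactor, so the full suite still passes.
def order_total(items):
    return sum(line_total(item) for item in items)
```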
TaskUpdate(INCREMENT.task_id, status: "completed")
Brief summary of what was implemented
→ immediately begin next increment (no pause between increments)
RUN = Bash(RUNNER.full_suite_command)
Report: increments completed, total tests, pass/fail status
Suggest: remaining work, missed edge cases, integration tests needed
| Situation | Action |
|---|---|
| RED: test fails (expected) | Pause — show test + failure, wait for user |
| RED: test passes unexpectedly | Stop — investigate, don't proceed |
| GREEN: all tests pass | Auto-continue to REFACTOR |
| GREEN: tests fail | Pause — show output, ask user |
| REFACTOR: tests pass | Auto-continue to next increment |
| REFACTOR: tests fail | Revert + Pause — discuss approach |
| Between increments | Auto-continue — no pause |
TASK_TRACKING = /majestic:config task_tracking.enabled false
WORKFLOW_ID = "tdd-{timestamp}"
If TASK_TRACKING:
For each TaskCreate: add metadata {workflow: WORKFLOW_ID, phase: "tdd-loop"}
For each TaskUpdate: wrap with If TASK_TRACKING: TaskUpdate(...)
On Wrap-up: update ledger if LEDGER_ENABLED
LEDGER_ENABLED = /majestic:config task_tracking.ledger false
LEDGER_PATH = /majestic:config task_tracking.ledger_path .agents/workflow-ledger.yml
Order tests from simple to complex: degenerate case → happy path → variations → edge cases → errors.
Each test should:
Avoid:
For language-specific test runner commands, see references/language-configs.md.
For Ruby projects, use the `rspec-coder` or `minitest-coder` skill. A Rails example is available in references/rails-tdd-workflow.md.
When writing tests outside a TDD loop (e.g., adding coverage to existing code), follow these patterns.
| Evidence | Framework |
|---|---|
| spec/ + _spec.rb + .rspec | RSpec |
| test/ + _test.rb + test_helper.rb | Minitest |
| *.test.js + jest.config.js | Jest |
| *.spec.ts in tests/ or e2e/ | Playwright |
| vitest.config.js | Vitest |
Before writing tests, create a plan covering:
Use for complex scenarios with multiple parameters:
| Objective | Inputs | Expected Output | Test Type |
|---|---|---|---|
| Validate creation | valid params | Created, 201 | Happy Path |
| Reject duplicate | existing data | Error, 422 | Sad Path |
| Handle empty | missing field | Validation error | Edge Case |
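A matrix like this maps directly onto a table-driven test. The sketch below keeps it plain Python so it runs standalone (with pytest you would typically use `@pytest.mark.parametrize` instead); `create_user` and its return shape are hypothetical stand-ins for the real API.

```python
from types import SimpleNamespace

EXISTING_EMAILS = {"taken@example.com"}  # toy duplicate store

def create_user(params):
    # Toy implementation so the sketch runs; real tests would hit the app.
    email = params.get("email")
    if not email:
        return SimpleNamespace(status=422)  # validation error
    if email in EXISTING_EMAILS:
        return SimpleNamespace(status=422)  # duplicate
    return SimpleNamespace(status=201)

# One row per line of the planning matrix.
CASES = [
    ({"email": "new@example.com"}, 201),    # happy path: valid creation
    ({"email": "taken@example.com"}, 422),  # sad path: duplicate
    ({}, 422),                              # edge case: missing field
]

def test_create_user_status():
    for params, expected_status in CASES:
        assert create_user(params).status == expected_status
```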
Tests are complete when ALL of these are met:
Coverage: All public methods tested, happy/sad/edge paths covered, auth checks included.
Quality: Tests pass (verified by running), isolated (no shared state), follow AAA pattern (Arrange-Act-Assert), descriptive names.
Framework compliance: Proper matchers, appropriate mocking, follows project patterns.
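The quality criteria above can be illustrated with a single isolated AAA-pattern test; `Cart` is a hypothetical class, defined inline so the example runs.

```python
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_total_sums_prices_of_added_items():
    # Arrange: build a fresh cart (no shared state between tests)
    cart = Cart()
    cart.add("book", 12)
    cart.add("pen", 3)
    # Act: exercise exactly one behavior
    total = cart.total()
    # Assert: one clear, descriptive expectation
    assert total == 15
```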