From klair-legacy

Implements specifications using TDD methodology with quality review: Phase X.0 (tests first), Phase X.1 (implementation), Phase X.2 (review). Use when implementing a spec, writing code from requirements, or following a TDD workflow. Includes test-first development, implementation, and thorough code review before PR.

Install: `npx claudepluginhub ai-builder-team/ai-builder-plugin-marketplace --plugin klair-legacy`
```
Read spec.md + checklist.md + spec-research.md
        │
        ▼
Phase X.0: Test Foundation (all FRs)
        │
        ├─ Write failing tests for each FR
        ├─ Commit with test(FR{N}) prefix
        ├─ GATE: red-phase-test-validator (must PASS)
        │
        ▼
Phase X.1: Implementation (all FRs)
        │
        ├─ Implement to make tests pass
        ├─ Commit with feat(FR{N}) prefix
        │
        ▼
Phase X.2: Review (all 4 steps mandatory)
        │
        ├─ Step 0: spec-implementation-alignment (must PASS)
        ├─ Step 1: pr-review-toolkit:review-pr (6 agents)
        ├─ Step 2: Address feedback, fix issues
        ├─ Step 3: test-optimiser (final pass)
        │
        ▼
Implementation complete
```
During implementation, if you encounter a disconnect that breaks fundamental spec assumptions, stop and escalate.
Do NOT work around it silently. Escalate using this template:

```
IMPLEMENTATION BLOCKED
Spec assumes: [quote the specific requirement]
Reality: [what actually exists or doesn't]
This breaks: [interface contract / data flow / core assumption]
Options:
1. Return to Spec/Research to address
2. Descope this requirement
3. [Other options if apparent]
Need decision to proceed.
```
Stop and escalate when:

| Scenario | Action |
|---|---|
| Spec requires field X, API doesn't have it, can't derive | STOP |
| Spec says "receives props" but no provider exists | STOP |
| Interface contract assumes data source that doesn't exist | STOP |
| Core architectural assumption is invalid | STOP |
Note and proceed when:

| Scenario | Action |
|---|---|
| Missing dependency but can implement without it | Note it, proceed |
| Typo in spec | Fix inline |
| Unclear requirement | Clarify, proceed |
| Minor detail not specified | Make reasonable choice, document |
The deciding question: does this break the interface contracts or fundamental assumptions? If yes, STOP and escalate; if no, note it and proceed.

Then continue based on the user's decision.
Read the spec artifacts:

- `spec.md` for requirements (FRs with success criteria)
- `checklist.md` for the task list
- `spec-research.md` for implementation context

Use these as the source of truth for implementation.
When running as spec-executor (orchestrating sub-agents), analyze FR dependencies before execution.
Read spec-research.md Test Plan section. For each FR, determine:
| FR | Depends On | Can Parallel With |
|---|---|---|
| FR1 | - | FR3 |
| FR2 | FR1 | - |
| FR3 | - | FR1 |
Group FRs into waves based on dependencies:
- Wave 1: FRs with no dependencies (FR1, FR3)
- Wave 2: FRs depending only on Wave 1 (FR2)
- Wave 3: FRs depending on Wave 2 (...)
This wave structure drives parallel sub-agent dispatch.
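The wave grouping above is a topological layering of the FR dependency graph. As a minimal sketch (the `deps` map mirrors the example table; real FR names and dependencies come from spec-research.md):

```python
def group_into_waves(deps):
    """Group items into waves: each wave holds items whose
    dependencies are all satisfied by earlier waves."""
    remaining = dict(deps)
    done = set()
    waves = []
    while remaining:
        # An FR is ready when every dependency is already done.
        wave = sorted(fr for fr, d in remaining.items() if set(d) <= done)
        if not wave:
            raise ValueError(f"Circular dependency among: {sorted(remaining)}")
        waves.append(wave)
        done.update(wave)
        for fr in wave:
            del remaining[fr]
    return waves

# Matches the example table: FR1 and FR3 are independent, FR2 needs FR1.
waves = group_into_waves({"FR1": [], "FR2": ["FR1"], "FR3": []})
print(waves)  # [['FR1', 'FR3'], ['FR2']]
```

The cycle check matters: a circular FR dependency in the spec is itself a blocker worth escalating rather than silently breaking.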
Check checklist.md status:
Goal: Write failing tests that verify each Functional Requirement.
Name tests after the FR they verify, e.g. `test_FR1_accepts_valid_input`.

When orchestrating via spec-executor, dispatch FR test work in parallel:
Wave-Based Dispatch:
Wave 1 - Launch parallel sub-agents for independent FRs:
```
Task(subagent_type="general-purpose",
     description="FR1 tests",
     prompt="Write tests for FR1 per spec. File: {test-file}")
Task(subagent_type="general-purpose",
     description="FR3 tests",
     prompt="Write tests for FR3 per spec. File: {test-file}")
```
Wait for Wave 1 completion
Wave 2 - Launch sub-agents for newly unblocked FRs:
```
Task(subagent_type="general-purpose",
     description="FR2 tests",
     prompt="Write tests for FR2 per spec. FR1 types now available.")
```
Continue until all FR tests complete
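The dispatch steps above reduce to a simple loop: run each wave's tasks concurrently, wait for the whole wave, then unblock the next. A hedged sketch in Python, where `dispatch` is a stand-in for the real `Task(...)` call (not the actual agent API):

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(fr):
    # Stand-in for Task(subagent_type="general-purpose", ...).
    # Here it just records which FR was handled.
    return f"tests written for {fr}"

def run_waves(waves):
    results = {}
    for wave in waves:
        # FRs within a wave are independent, so dispatch them in parallel.
        with ThreadPoolExecutor(max_workers=len(wave)) as pool:
            for fr, result in zip(wave, pool.map(dispatch, wave)):
                results[fr] = result
        # Implicit barrier: leaving the executor context waits
        # for every task in the wave before the next wave starts.
    return results

results = run_waves([["FR1", "FR3"], ["FR2"]])
print(results["FR2"])  # tests written for FR2
```

The barrier between waves is the point of the structure: FR2's tests can assume FR1's types exist only because Wave 1 fully completed first.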
Run red-phase-test-validator on combined test output
One commit per FR test group. Use the standard commit format (see Commit Discipline section):
```
git add {test-file-path}
git commit -m "test(FR1): add input validation tests ..."  # HEREDOC format with Co-Authored-By
```
Stage only test files, not implementation files.
Verify:
⚠️ Do NOT proceed to Phase X.1 until this gate passes.
Run the red-phase-test-validator agent:
```
subagent_type: "red-phase-test-validator"
description: "Validate test design quality"
prompt: "Validate the tests written for this spec:
  Test files: {paths to test files}
  Spec: {spec-path}/spec.md
  Research: {spec-path}/spec-research.md
  Check:
  1. Spec alignment - all FR success criteria have tests
  2. Assertion specificity - tests verify specific behavior
  3. Mock discipline - mocks at boundaries, realistic data
  4. Behavior focus - tests describe WHAT, not HOW
  Report PASS or FAIL with specific issues."
```
If PASS: Proceed to Phase X.1
If FAIL:
Why this matters: Weak tests lead to implementation guided by weak contracts. Tests failing ≠ tests well-designed. This gate catches no-op tests, missing coverage, and poor mock discipline before you implement against them.
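To make "weak contract" concrete, here is a hedged illustration (the `validate_input` function is hypothetical): the first test exercises the code but would pass no matter what the implementation returns; the second pins down a specific FR success criterion and would fail if the implementation were broken.

```python
def validate_input(value):
    # Hypothetical FR1 implementation: accept non-empty strings only.
    return isinstance(value, str) and len(value) > 0

# Pageantry test: runs the code but asserts nothing specific.
def test_FR1_runs():
    result = validate_input("hello")
    assert result is not None  # passes even if validate_input is wrong

# Behavior-focused test: verifies the actual contract.
def test_FR1_accepts_valid_input_rejects_empty():
    assert validate_input("hello") is True
    assert validate_input("") is False
    assert validate_input(123) is False

test_FR1_runs()
test_FR1_accepts_valid_input_rejects_empty()
print("both tests pass")
```

The validator gate exists to reject the first kind of test before Phase X.1, because implementing against it gives false confidence.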
Goal: Implement code to make tests pass.
Same wave-based pattern as Phase X.0:
One commit per FR implementation. Use the standard commit format (see Commit Discipline section):
```
git add {implementation-file-path}
git commit -m "feat(FR1): implement validation logic ..."  # HEREDOC format with Co-Authored-By
```
Stage only implementation files, NOT test files (they were committed in X.0).
Verify:
Next: Phase X.2 is MANDATORY. Do not commit or skip to completion.
⚠️ MANDATORY GATE - Do NOT commit until this phase is complete.
Goal: Quality gate before PR. Catch issues that would be found in human review.
Always run all four steps. No scope-based shortcuts - every change gets full QC.
Run this FIRST. No point reviewing code quality if it's the wrong code.
Run the spec-implementation-alignment agent:
```
subagent_type: "spec-implementation-alignment"
description: "Verify implementation matches spec"
prompt: "Validate implementation alignment:
  - spec.md: {spec-path}/spec.md
  - spec-research.md: {spec-path}/spec-research.md
  - checklist.md: {spec-path}/checklist.md
  Verify:
  1. Each FR success criterion is satisfied by the implementation
  2. Types/signatures match interface contracts
  3. No scope creep (features added beyond spec)
  4. No scope shortfall (requirements quietly dropped)
  Report PASS or FAIL with specific gaps."
```
If PASS: Proceed to Step 1 (code quality)
If FAIL:
If stuck: Escalate to user - may need spec revision.
Run the pr-review-toolkit:review-pr skill for multi-agent analysis:
Use Skill tool:
```
skill: "pr-review-toolkit:review-pr"
```
This runs 6 specialized agents:
| Agent | Focus |
|---|---|
| code-reviewer | CLAUDE.md compliance, style, bugs, quality |
| silent-failure-hunter | Error handling, silent failures, missing logging |
| pr-test-analyzer | Test coverage quality, critical gaps, edge cases |
| comment-analyzer | Comment accuracy, doc completeness |
| type-design-analyzer | Encapsulation, invariants, type quality (1-10 ratings) |
| code-simplifier | Readability, unnecessary complexity, redundancy |
If issues found, fix them and commit with the prefix `fix({spec}): {description}`.

If no significant issues, proceed to Step 3.
Run the test-optimiser agent as a final pass:
```
subagent_type: "test-optimiser"
description: "Optimize test quality"
prompt: "Review the tests for this spec implementation.
  Focus on recently modified test files. Ensure tests provide genuine confidence:
  - Tests would fail if implementation were broken
  - Behavior tested, not implementation details
  - Edge cases and error paths covered
  - No pageantry testing (tests that look good but verify nothing)
  Optimize any weak tests found."
```
If test-optimiser suggests changes, apply them and commit with e.g. `fix({spec}): strengthen test assertions`.

If all passes, the review phase is complete.
After review complete, update documentation and finalize.
Read {spec-path}/checklist.md and verify all tasks marked [x]:
If any tasks unmarked, update them now.
Append to checklist.md:
```
## Session Notes
**{date}**: Implementation complete.
- Phase X.0: {N} test commits
- Phase X.1: {N} implementation commits
- Phase X.2: Review passed, {N} fix commits (if any)
- All tests passing.
```
Read features/{domain}/{feature}/FEATURE.md and update:
Changelog section - Update spec entry:
```
### {NN}-{spec-name}
- **Status**: Complete
- **Date**: {date}
- **Commits**: test(FR1-N), feat(FR1-N), fix (if any)
```
Files Touched section - Add any new files created
```
git add {spec-path}/checklist.md features/{domain}/{feature}/FEATURE.md
git commit -m "$(cat <<'EOF'
docs({NN}-{spec-name}): update checklist and FEATURE.md

- Mark all checklist tasks complete
- Add session notes
- Update FEATURE.md changelog

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
git status  # Verify clean working tree
```
"Implementation complete for {NN}-{spec-name}.
Commits (in TDD order):
1. test(FR1-N): tests for all FRs
2. feat(FR1-N): implementations for all FRs
3. fix({spec}): review fixes (if any)
4. docs({spec}): documentation updates
Documentation Updated:
✓ checklist.md - all tasks marked complete, session notes added
✓ FEATURE.md - changelog updated
All tests passing. Git state clean.
Next step: Proceed to next spec using `spec` then `implement` skills"
Use Edit tool to update checklist.md as you work:
```
# Before
- [ ] Task description

# After
- [x] Task description
```
Add session notes at bottom:
```
## Session Notes
**2025-01-05**: Completed X.0 and X.1. Review found 2 issues, both fixed.
```
Each spec follows this commit sequence:
| Order | Prefix | Phase | Purpose | Created By |
|---|---|---|---|---|
| 1 | spec | Setup | Spec artifacts | spec skill |
| 2 | test | X.0 | Test commits (red phase) | implement skill |
| 3 | feat | X.1 | Implementation commits (green phase) | implement skill |
| 4 | fix | X.2 | Review fix commits (if needed) | implement skill |
| 5 | docs | Completion | Documentation updates | implement skill |
Note: The `spec(...)` commit is created by the `spec` skill before `implement` runs.
Always use HEREDOC format with a `Co-Authored-By` trailer:
```
git add {specific-files}  # Never use git add -A
git commit -m "$(cat <<'EOF'
{prefix}({scope}): {concise description}

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
git status  # Verify commit succeeded
```
- Never use `git add -A` or `git add .`; stage specific files only
- Always run `git status` to verify after each commit