From implement-with-test-plugin
Implement code with tests from a spec file or a direct request. Auto-detects the test framework (jest, vitest, pytest, go test, cargo test) and follows existing project patterns. Use when the user asks to implement a spec, build a feature with tests, or code from a spec, or says "implement", "구현" ("implement"), "구현해줘" ("implement it for me"), "테스트와 함께 구현" ("implement with tests"), "스펙 구현" ("implement the spec"), or "implement this spec". Trigger proactively whenever the user wants to turn a specification or requirement into working code with tests.
Install:

```shell
npx claudepluginhub devstefancho/claude-plugins --plugin implement-with-test-plugin
```
Implements planned work (spec files or user requests) into production code with accompanying tests. Detects the project's language, test framework, and coding conventions to produce code that fits naturally into the existing codebase.
- **Spec-first**: Never start coding without a clear understanding of what to build. If working from a spec file, parse its four sections (Purpose, Requirements, Approach, Verification). If working from a direct request, extract the equivalent structured information first. This prevents aimless coding and scope creep.
- **Pattern-follower**: The project already has conventions for imports, naming, directory structure, and test organization. Detect and follow them rather than imposing external conventions. Code that looks foreign to the project creates maintenance burden, even if it is technically correct.
- **Test-alongside**: Write tests as part of the implementation, not as an afterthought. Each Verification item from the spec maps to at least one test. This ensures the tests verify what the spec intended, not just what happened to get implemented.
- **Run-to-green**: After writing code and tests, run the test suite. If tests fail, diagnose and fix (up to 3 attempts). Do not report success until tests pass. A green test suite is the definition of "done."
- **Minimal diff**: Only create or modify files necessary for the feature. Do not refactor surrounding code, add unrelated type annotations, or "improve" existing patterns. The goal is a focused, reviewable changeset.
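The four-section parse mentioned in Spec-first can be sketched as a small shell helper (a hypothetical sketch, not part of the plugin; it assumes the spec file marks its sections with `## `-level markdown headings):

```shell
# Print the body of one named section from a spec markdown file.
# Usage: spec_section path/to/spec.md Purpose
spec_section() {
  awk -v want="## $2" '
    $0 == want { on = 1; next }   # matching heading found: start printing
    /^## /     { on = 0 }         # any other heading ends the section
    on                            # print lines while inside the section
  ' "$1"
}
```

A direct request has no such file, so the equivalent step is extracting the same four answers from the conversation before writing any code.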
Determine what to implement:
specs/ exists → use git diff to detect changed specs first:
a. Check for newly added or modified spec files with `git diff --name-only HEAD -- specs/`
b. Also check untracked files: `git ls-files --others --exclude-standard specs/`
c. Exactly one changed spec → select it automatically and proceed
d. Multiple changed specs → show only the changed specs and ask the user to choose via AskUserQuestion
e. No changed specs → fall back to Glob `specs/**/*.md` and show the full list

After resolving the input, display the parsed spec summary so the user can verify alignment.
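The detection steps above can be sketched as shell functions (an illustrative sketch only; `changed_specs` and `select_spec` are hypothetical names, and the real skill asks via AskUserQuestion rather than echoing):

```shell
# Steps a-b: specs changed vs HEAD, plus untracked spec files.
changed_specs() {
  {
    git diff --name-only HEAD -- specs/ 2>/dev/null
    git ls-files --others --exclude-standard specs/ 2>/dev/null
  } | sed '/^$/d' | sort -u
}

# Steps c-e: decide how to proceed based on how many specs changed.
select_spec() {
  specs=$(changed_specs)
  count=$(printf '%s\n' "$specs" | sed '/^$/d' | wc -l | tr -d ' ')
  case "$count" in
    0) echo "fallback: list specs/**/*.md" ;;            # step e
    1) echo "auto-select: $specs" ;;                     # step c
    *) echo "ask user to choose among $count specs" ;;   # step d
  esac
}
```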
Scan the project to understand its conventions. This phase determines how to write code that fits in.
Language & framework detection: check for `package.json`, `tsconfig.json`, `pyproject.toml`, `go.mod`, `Cargo.toml`, `pom.xml`.

Test framework detection (in priority order):

- `package.json` with a `test` script (this tells you how tests are actually run)
- `vitest.config.*`, or `vite.config.*` with a `test` section → vitest
- `jest.config.*`, or `package.json` with a `"jest"` key → jest
- `pytest.ini`, `setup.cfg` with `[tool:pytest]`, or `pyproject.toml` with `[tool.pytest.ini_options]` → pytest
- `go.mod` → go test
- `Cargo.toml` → cargo test
- none of the above → ask the user via AskUserQuestion

Existing test pattern analysis:

- Glob `**/*.test.*`, `**/*.spec.*`, `**/test_*`, `**/*_test.*`, `**/__tests__/**`
- Naming style: `*.test.ts` vs `*.spec.ts` vs `test_*.py` vs `*_test.go`
- Location: co-located, in a `__tests__/` directory, or in a separate `tests/` directory

Source code pattern analysis: detect import style, naming conventions, and directory layout so new code matches (see Pattern-follower).
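The framework detection priority above can be sketched as a lookup (a simplified, hypothetical helper: it checks only a few marker files and skips the `vite.config.*`, `setup.cfg`, and `pyproject.toml` cases for brevity):

```shell
# Return the test framework for a project directory, in priority order.
detect_framework() {
  dir=$1
  if ls "$dir"/vitest.config.* >/dev/null 2>&1; then
    echo "vitest"
  elif ls "$dir"/jest.config.* >/dev/null 2>&1 ||
       { [ -f "$dir/package.json" ] && grep -q '"jest"' "$dir/package.json"; }; then
    echo "jest"
  elif [ -f "$dir/pytest.ini" ]; then
    echo "pytest"
  elif [ -f "$dir/go.mod" ]; then
    echo "go test"
  elif [ -f "$dir/Cargo.toml" ]; then
    echo "cargo test"
  else
    echo "unknown"   # the real skill would ask via AskUserQuestion here
  fi
}
```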
Based on the spec and reconnaissance, plan which files to create or modify. Display the file plan to the user as an informational summary (no confirmation needed, just transparency).
Write the production code:
Write tests covering the spec's Verification section:
Run tests using the detected test command via Bash:

- JS/TS: the `test` script from package.json, or `npx vitest run` / `npx jest` with the specific test file
- Python: `python -m pytest {test_file}`
- Go: `go test ./{package}/...`
- Rust: `cargo test`

Handle failures (max 3 attempts):
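The run-and-fix loop can be sketched as follows (illustrative only; `run_tests` and `try_fix` are hypothetical stand-ins for the detected test command and the diagnose-and-patch step):

```shell
# Run tests; on failure, attempt a fix and re-run, up to 3 fix attempts.
run_to_green() {
  attempt=1
  until run_tests; do
    if [ "$attempt" -gt 3 ]; then
      echo "still red after 3 fix attempts"   # report honestly, never claim success
      return 1
    fi
    try_fix "$attempt"   # diagnose the failure output and patch code or tests
    attempt=$((attempt + 1))
  done
  echo "green"
}
```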
Generate report using the template: templates/report-template.md
| Situation | Action |
|---|---|
| No spec file found and no direct request | Ask user what to implement via AskUserQuestion |
| Test framework not detected | Ask user which framework to use |
| Existing tests use unfamiliar patterns | Follow patterns as-is; do not "modernize" |
| Tests fail after 3 fix attempts | Report the current state honestly with error details |
| Spec is ambiguous | Ask user to clarify before implementing |