Single-Shot Feature Implementation
Confirmation
Before proceeding, confirm with the user:
- This will implement a full feature from ticket to PR (branch, code, tests, commit, push)
- Uses TDD — writes tests first, then implementation
- Creates a PR on GitHub when complete
Ask using AskUserQuestion. If the user declines, stop immediately.
You are an expert engineer implementing a feature from a ticket. Your goal is production-grade code in a single pass — no rework cycles.
This skill was refined through 4 retrospectives covering 53 issues: wrong abstractions, schema misalignment, missed patterns, naming inconsistency, config safety, DB robustness, and validation gaps. Each phase directly addresses one or more of these patterns.
Input
$ARGUMENTS — a ticket ID (e.g., PROJ-1784, 1784).
Optional flags:
--draft — stop after Phase 2 (plan only, don't implement)
--skip-review — skip the Phase 5 self-review loop (faster but less thorough)
Phase 0: Fetch & Understand (addresses: wrong abstraction)
- Fetch ticket from your issue tracker:
- Get story summary, description, acceptance criteria, subtasks
- Get parent epic for broader context
- Note assignee, sprint, story points
- Locate story docs — search for existing documentation:
- Check your team's standard docs locations (wiki, Notion, repo docs/)
- Look for technical design docs, prototypes, or grooming notes
- Ask clarifying questions using AskUserQuestion:
- "How does the client submit this?" (single-form vs multi-step vs separate endpoint)
- "Which services consume these types?" (domain vs shared package)
- "Are there any design decisions already made?" (grooming notes, prototype)
- "What does FE need after this operation?" (preview, render, lifecycle — catches related endpoints)
Gate: Do NOT proceed until the client interaction pattern is confirmed.
Phase 1: Research (addresses: schema misalignment, missed patterns)
Launch 3 parallel Explore agents:
Agent 1: External API Research
- If the feature involves a third-party API: fetch current docs (official docs, SDK examples, or library documentation tools)
- If the feature uses cloud storage/messaging: check your org's shared utility libraries
- Extract exact field names, types, required/optional, constraints
Agent 2: Existing Pattern Scan
- Scan the target repo for how similar features are implemented
- Check: import grouping, error handling, validation patterns, test patterns
- Check: CONVENTIONS.md, typed enums, Unicode handling, DB type casts
- Check: existing shared packages that should be reused
- Check: constructor return types — must return interfaces (not concrete structs)
- Check: test tooling — use your org's standard mocking + assertion libraries (not hand-written stubs)
- HTTP handler pattern: Search 3+ similar repos in your org for the exact framework + handler convention. Use the established framework — NOT a different one
- Package naming: Check sibling directories for naming pattern. Match existing siblings, don't invent new conventions
- Convention scan (BEFORE creating packages): run ls on sibling directories to confirm naming patterns. Check acronym conventions (all-caps or mixed?), spelling conventions (American or British?), and mock generation patterns
- Config defaults: Environment-specific values (callback URLs, service URLs) must have empty defaults. Let environment variables and deployment config set the actual values
- Infrastructure client errors: New infrastructure clients must return errors gracefully — NOT crash the process. Match existing graceful degradation patterns
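The last three Agent 2 checks (interface-returning constructors, empty config defaults, graceful infrastructure errors) can be sketched together in Go. All names here (Publisher, Config, NewPublisher, BrokerURL) are hypothetical, not from any real repo:

```go
package main

import (
	"errors"
	"fmt"
)

// Publisher is the interface the constructor returns. Callers depend on
// this, not on the concrete client, so it can be mocked in tests.
type Publisher interface {
	Publish(topic string, payload []byte) error
}

// Config holds environment-specific values. The default stays empty;
// environment variables and deployment config supply the real URL.
type Config struct {
	BrokerURL string // no hard-coded default
}

type brokerClient struct {
	url string
}

func (c *brokerClient) Publish(topic string, payload []byte) error {
	// A real client would send here; this sketch just succeeds.
	return nil
}

// NewPublisher returns an error instead of crashing the process when
// configuration is missing, matching graceful-degradation patterns.
func NewPublisher(cfg Config) (Publisher, error) {
	if cfg.BrokerURL == "" {
		return nil, errors.New("publisher: BrokerURL not configured")
	}
	return &brokerClient{url: cfg.BrokerURL}, nil
}

func main() {
	if _, err := NewPublisher(Config{}); err != nil {
		fmt.Println("degraded:", err) // log and continue, don't panic
	}
}
```

Callers log the error and run degraded rather than panicking at startup.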
Agent 3: Prototype & Design Alignment
- If a prototype exists, extract field definitions
- If tech design exists, extract proposed structs
- Cross-reference: API schema × prototype × tech design
- Flag any mismatches BEFORE coding
Gate: All 3 agents must complete. Review their findings. Flag and resolve any mismatches.
Phase 2: Architect (addresses: inconsistency across types)
- Design all related types together — if multiple content types or entities are involved, design them ALL before implementing any. Check for:
- Same field pattern across similar types
- Shared constants (typed enums, not magic strings)
- Shared validation helpers
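A minimal Go sketch of the typed-enum and shared-helper points above; ContentType and Attachment are illustrative names, not from any real ticket:

```go
package main

import "fmt"

// ContentType is a typed enum shared across all related types,
// replacing magic strings scattered through handlers.
type ContentType string

const (
	ContentTypeText  ContentType = "text"
	ContentTypeImage ContentType = "image"
	ContentTypeVideo ContentType = "video"
)

// Valid is the single place that knows the legal values.
func (c ContentType) Valid() bool {
	switch c {
	case ContentTypeText, ContentTypeImage, ContentTypeVideo:
		return true
	}
	return false
}

// Attachment follows the same field pattern as its sibling types and
// reuses the shared validation helper rather than re-listing values.
type Attachment struct {
	Type ContentType
	Name string
}

func (a Attachment) Validate() error {
	if !a.Type.Valid() {
		return fmt.Errorf("invalid content type %q", a.Type)
	}
	return nil
}

func main() {
	fmt.Println(Attachment{Type: "gif"}.Validate())
}
```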
- Write the implementation plan to the plan file:
- List every file to create/modify
- List every struct/type with all fields (cross-referenced against API + prototype)
- List every validation rule with its constraint source
- List every test case
- Ask for plan approval before proceeding.
Gate: Plan approved by user.
Phase 3: Test-Driven Implementation (addresses: validation gaps)
Test tooling: Use your org's standard mocking + assertion libraries. Do NOT use hand-written stub structs.
API gateway/BFF rule: If working on a thin proxy layer, the service layer should be pure passthrough (no validation). The handler validates input format. The backend service validates business rules. Do NOT duplicate validation across layers.
For each component in the plan:
- Write tests FIRST — for every validation rule, including:
- Happy path (valid input)
- Required field missing
- Field exceeds max length (use multi-byte text like Thai/emoji for Unicode testing)
- Invalid enum value
- Edge cases: null, empty string, empty array, JSON null
- Boundary: max+1 items, max+1 characters
- Edge case checklist (for every new function):
- What if input is nil/empty/zero?
- What if the DB row doesn't exist?
- What if the external API returns error?
- What if the message is malformed (bad encoding, invalid JSON)?
- What if the result set is 0 items after filtering?
- What if the operation partially succeeds (some batches fail)?
- What if the context is canceled mid-operation?
- Run tests — confirm they FAIL (red phase)
- Implement code to make tests pass (green phase)
- Refactor — check for:
- Nested conditions (max 2 levels, use early return / continue / extract helper)
- Magic strings (use typed constants)
- Inconsistent trim-before-length (trim THEN check length)
- Import grouping (stdlib | external+shared | internal)
- Run tests — confirm they PASS
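The nesting and trim rules can be shown in a few lines of Go; process is a hypothetical helper that trims before checking content and uses continue instead of nested conditionals:

```go
package main

import (
	"fmt"
	"strings"
)

// process stays at most two levels deep: the loop plus one guard.
// An early continue replaces what would otherwise be a nested if/else.
func process(items []string) []string {
	var out []string
	for _, it := range items {
		it = strings.TrimSpace(it) // trim THEN check, never the reverse
		if it == "" {
			continue // early continue keeps nesting shallow
		}
		out = append(out, strings.ToUpper(it))
	}
	return out
}

func main() {
	fmt.Println(process([]string{" a ", "", "b"}))
}
```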
Phase 4: Verify (addresses: missed conventions)
Run all checks (adapt commands to your language/toolchain):
```shell
# Build
your-build-command ./...
# Lint
your-lint-command ./...
# Test
your-test-command ./...
```
IMPORTANT: Run EVERY checklist item on ALL changed files (not just new files). Do not skip.
Universal Checks
Naming Conventions
Wiring (critical — bugs caught in retrospectives)
Choreography (prevents scattered state transitions)
Phase 5: Self-Review Loop (skip with --skip-review)
Iterative review-fix cycle until clean (max 3 iterations):
- Run a simplification pass on all changed files — fix any findings
- Run an internal review (same analysis as your review command, but WITHOUT posting comments):
  - Convention compliance, naming limits, nesting depth
  - Validation completeness, error handling, edge cases
  - Type safety, constructor patterns, typed enums vs strings
  - Code reuse, duplication, efficiency
- Fix findings with confidence >= 75 immediately
- Re-run build + test to verify the fixes
- Repeat the review, fix, and verify steps until there are no new actionable findings (or max 3 iterations)
After the loop completes: amend the commit and push once.
Phase 6: Ship
- Squash all commits to a single commit with descriptive message
- Push to origin
- Create or update PR with comprehensive description:
- What changed (with API mapping rationale)
- Architecture decisions
- Checklist (build/lint/test/mocks/review)
- Update tech design docs if schemas changed
- Validate PR description — cross-check tables/lists against actual code (types, test counts, validation rules)
- Validate PR comments — fetch and reply to any automated review comments
Quality Checklist (verify before marking complete)