From cc-arsenal

Implement features with senior staff engineer best practices and parallel subagents. Activates when users want to implement, build, or add new functionality.

Install:

```
npx claudepluginhub mgiovani/cc-arsenal --plugin cc-arsenal-teams
```

This skill is limited to using the following tools:
Implement a new feature as a **Senior Staff Engineer** following best practices (SOLID, DRY, YAGNI) to create a secure, fast, and reliable production application.
Creates isolated Git worktrees for feature branches with prioritized directory selection, gitignore safety checks, auto project setup for Node/Python/Rust/Go, and baseline verification.
Executes implementation plans in current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance then code quality.
Dispatches parallel agents to independently tackle 2+ tasks like separate test failures or subsystems without shared state or dependencies.
Implement a new feature as a Senior Staff Engineer following best practices (SOLID, DRY, YAGNI) to create a secure, fast, and reliable production application.
$ARGUMENTS
CRITICAL: Before implementing anything:
- Check which tools (bun, npm, make, etc.) exist on this system
- Check for task runner files (Makefile, justfile, package.json, pyproject.toml, etc.)

This skill includes automatic implementation verification before completion:
When you attempt to stop working (mark implementation as complete), an automated verification agent runs to ensure quality:
Verification Steps:
- Run the test suite (`make test`, `npm test`, `pytest`)
- Run linting (`make lint`, `npm run lint`, `ruff check`)

Behavior:
- If all checks pass, the implementation may be marked complete
- If any check fails, completion is blocked until the issues are fixed
Example blocked completion:
```
⚠️ Implementation verification failed:

Tests: ❌ FAILED (3 tests failing)
  - test_user_authentication: AssertionError
  - test_oauth_flow: Connection timeout
  - test_token_refresh: Invalid token

Lint: ✅ PASSED
Type Check: ✅ PASSED

🔧 Cannot complete implementation until all tests pass. Please fix the failing tests.
```
Benefits:
This skill uses Claude Code's Task Management System to track implementation progress with dependency-aware task tracking.
When to Use Tasks:
When to Skip Tasks:
Task Structure: Each implementation creates tasks for all 6 phases with dependencies, tracking progress and blocking relationships. Tasks support parallel execution where independent work can proceed simultaneously.
Task tracking replaces TodoWrite. Create the task structure at the start, and update it as you complete each phase.
Step 0.1: Create Task Structure
Before starting implementation, create the dependency-aware task structure:
TaskCreate:
subject: "Phase 0: Discover project workflow"
description: "Identify test, lint, build, dev server commands from CLAUDE.md and task runners"
activeForm: "Discovering project workflow"
TaskCreate:
subject: "Phase 1: Research best practices"
description: "Web search and Context7 research for [FEATURE]"
activeForm: "Researching best practices"
TaskCreate:
subject: "Phase 2: Create implementation plan"
description: "Enter plan mode and get user approval"
activeForm: "Creating implementation plan"
TaskCreate:
subject: "Phase 3: Implement feature"
description: "Execute implementation with parallel subagents"
activeForm: "Implementing feature"
TaskCreate:
subject: "Phase 4: Verify implementation"
description: "Run full test suite, lint, type-check"
activeForm: "Verifying implementation"
TaskCreate:
subject: "Phase 5: Final commit"
description: "Create conventional commit with summary"
activeForm: "Creating final commit"
# Set up dependencies (strict sequential chain)
TaskUpdate: { taskId: "2", addBlockedBy: ["1"] } # Research after Discovery
TaskUpdate: { taskId: "3", addBlockedBy: ["2"] } # Plan after Research
TaskUpdate: { taskId: "4", addBlockedBy: ["3"] } # Implement after Plan
TaskUpdate: { taskId: "5", addBlockedBy: ["4"] } # Verify after Implement
TaskUpdate: { taskId: "6", addBlockedBy: ["5"] } # Commit after Verify
# Mark first task as in progress
TaskUpdate: { taskId: "1", status: "in_progress" }
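The `addBlockedBy` semantics above can be sketched in plain shell: a task becomes actionable only once every task it is blocked by has completed. This is an illustrative model only; the real Task tool manages blocking internally.

```shell
# A task is "ready" when every one of its blockers has status "completed".
# Statuses are passed as a space-separated list (illustrative only).
is_ready() {
  blockers=$1
  for s in $blockers; do
    [ "$s" = "completed" ] || return 1
  done
  return 0
}

if is_ready "completed completed"; then echo "ready"; fi
if is_ready "completed in_progress"; then :; else echo "blocked"; fi
```

This is why each `TaskList` call below is used as a checkpoint: it confirms the next phase has become unblocked before work proceeds.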
Step 0.2: Discover Project Workflow
Use the Haiku-powered Explore agent for token-efficient discovery:
Use Task tool with Explore agent:
- prompt: "Discover the development workflow for this project:
1. Read CLAUDE.md if it exists - extract all development commands
2. Check for task runners: Makefile, justfile, package.json scripts, pyproject.toml scripts
3. Identify the test command (e.g., make test, just test, npm test, pytest, bun test)
4. Identify the lint command (e.g., make lint, npm run lint, ruff check)
5. Identify the build/type-check command
6. Identify the dev server command if applicable
7. Note any pre-commit hooks or quality gates
Return a structured summary of all available commands."
- subagent_type: "Explore"
- model: "haiku" # Token-efficient for discovery
Store discovered commands for use in later phases. Example output:
Project Commands:
- Test: `make test` or `pytest`
- Lint: `make lint` or `ruff check`
- Type Check: `make type-check` or `pyright`
- Build: `make build` or `npm run build`
- Dev Server: `make dev` or `npm run dev`
- Quality: `make check` (runs all checks)
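The manifest-probing part of discovery can be approximated with a short script. This is a hedged sketch, not part of the skill: the manifest-to-command mapping below is the conventional one, and the Explore agent's summary (which also reads CLAUDE.md) remains authoritative.

```shell
# Suggest candidate test/lint commands based on which task-runner
# manifests exist in a directory. Mapping is illustrative.
suggest_commands() {
  dir=$1
  [ -f "$dir/Makefile" ]       && echo "test: make test; lint: make lint"
  [ -f "$dir/justfile" ]       && echo "test: just test; lint: just lint"
  [ -f "$dir/package.json" ]   && echo "test: npm test; lint: npm run lint"
  [ -f "$dir/pyproject.toml" ] && echo "test: pytest; lint: ruff check ."
  return 0
}

# Demo against a throwaway directory
tmp=$(mktemp -d)
touch "$tmp/package.json"
suggest_commands "$tmp"
rm -rf "$tmp"
```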
Step 0.3: Complete Phase 0
TaskUpdate: { taskId: "1", status: "completed" }
TaskList # Check that Task 2 is now unblocked
Step 1.1: Start Phase 1
TaskUpdate: { taskId: "2", status: "in_progress" }
Step 1.2: Research Best Practices
Before implementing, research best practices and understand the codebase context using the Haiku-powered Explore agent:
Use Task tool with Explore agent:
- prompt: "Research and gather context for implementing [FEATURE]:
1. **Best Practices Research**: Search the web for 'latest best practices' and 'current year best practices' related to [FEATURE]. Look for:
- Current industry standards and patterns
- Security considerations
- Performance recommendations
- Common pitfalls to avoid
2. **Library Documentation** (if using external libraries/frameworks):
- Use Context7 MCP to fetch up-to-date documentation
- Validate API usage patterns against current docs
- Check for deprecated methods or breaking changes
3. **Codebase Exploration**:
- Find similar existing implementations to reference
- Identify coding patterns and conventions used
- Locate test patterns and fixtures
- Note file organization and naming conventions
Return a comprehensive summary with:
- Relevant best practices (with sources)
- Library API patterns to follow (if applicable)
- Specific file paths and existing patterns from the codebase"
- subagent_type: "Explore"
- model: "haiku" # Token-efficient for research
Step 1.3: Complete Phase 1
TaskUpdate: { taskId: "2", status: "completed" }
TaskList # Check that Task 3 is now unblocked
Step 2.1: Start Phase 2
TaskUpdate: { taskId: "3", status: "in_progress" }
Step 2.2: Create Implementation Plan
Use EnterPlanMode to create a detailed implementation plan and get user approval.

Step 2.3: Complete Phase 2
TaskUpdate: { taskId: "3", status: "completed" }
TaskList # Check that Task 4 is now unblocked
Step 3.1: Start Phase 3
TaskUpdate: { taskId: "4", status: "in_progress" }
Step 3.2: Create Parallel Subagent Tasks
For features with independent components, create parallel child tasks:
# Example: API + UI + Tests in parallel
TaskCreate:
subject: "Implement API endpoint"
description: "Create /api/feature endpoint with validation"
activeForm: "Implementing API endpoint"
metadata: { parent: "4", component: "api" }
TaskCreate:
subject: "Implement UI component"
description: "Create FeatureComponent.tsx with tests"
activeForm: "Implementing UI component"
metadata: { parent: "4", component: "ui" }
TaskCreate:
subject: "Write integration tests"
description: "E2E tests for feature flow"
activeForm: "Writing integration tests"
metadata: { parent: "4", component: "tests" }
# All parallel tasks blocked only by planning phase
TaskUpdate: { taskId: "api-task", addBlockedBy: ["3"] }
TaskUpdate: { taskId: "ui-task", addBlockedBy: ["3"] }
TaskUpdate: { taskId: "test-task", addBlockedBy: ["3"] }
# Phase 5 (Verification) blocked by ALL parallel tasks
TaskUpdate: { taskId: "5", addBlockedBy: ["api-task", "ui-task", "test-task"] }
Step 3.3: Execute Parallel Subagents
For each parallelizable task group, spawn subagents using the Task tool:
Subagent Instructions Template:
Task tool call:
- subagent_type: "general-purpose"
- model: "sonnet" # REQUIRED - never leave unset (defaults to parent model)
- prompt: |
Implement [specific task description].
First, read the project's CLAUDE.md to understand conventions and patterns.
Requirements:
1. Follow existing codebase patterns and conventions
2. Apply SOLID, DRY, YAGNI principles
3. Write comprehensive tests (unit + integration where applicable)
4. All tests MUST pass before completion
5. Handle errors appropriately
6. Add necessary type definitions (if typed language)
Project-specific commands (discovered in Phase 0):
- Test command: [INSERT DISCOVERED TEST COMMAND]
- Lint command: [INSERT DISCOVERED LINT COMMAND]
After implementation:
1. Run the test suite to verify all tests pass
2. Run linting to ensure code quality
3. Report back what was implemented (do NOT commit - the main agent will handle commits)
If tests fail, fix them before reporting completion.
If you encounter ambiguous requirements, report back and ask for clarification instead of guessing.
Model Selection for Subagents:
- Use model: "sonnet" for code implementation, test writing, documentation writing, architecture decisions, and complex changes
- Use model: "haiku" ONLY for exploration and research tasks
- Always set model explicitly: leaving it unset defaults to the parent model (which may be opus)

After each subagent completes:
- TaskUpdate: { taskId: "child-task-id", status: "completed" }
- Run /cc-arsenal:git:commit to commit the subagent's work (if available), or create a conventional commit manually

Parallelization Strategy:
Step 3.4: Complete Phase 3
# After all subagent tasks complete
TaskUpdate: { taskId: "4", status: "completed" }
TaskList # Verify Phase 5 is now unblocked
Step 4.1: Start Phase 4
TaskUpdate: { taskId: "5", status: "in_progress" }
Step 4.2: Run Quality Checks
After all subagents complete, run verification using the discovered commands from Phase 0:
Example verification (commands vary by project):
# Python project with Makefile
make test && make lint && make type-check
# Node.js project with package.json
npm test && npm run lint && npm run build
# Python project with just
just test && just lint
# Simple Python project
pytest && ruff check . && pyright
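The verification flow above can be sketched as a fail-fast gate runner: execute each discovered command in order and stop at the first failure, mirroring the verification agent's blocking behavior. The commands passed in are placeholders; substitute whatever Phase 0 discovered.

```shell
# Run each quality gate in order; stop and report at the first failure.
run_checks() {
  for cmd in "$@"; do
    echo "Running: $cmd"
    sh -c "$cmd" || { echo "BLOCKED: '$cmd' failed" >&2; return 1; }
  done
  echo "All checks passed"
}

# Usage with placeholder commands (replace with e.g. "make test" "make lint"):
run_checks "true" "true"
```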
Step 4.3: Complete Phase 4
TaskUpdate: { taskId: "5", status: "completed" }
TaskList # Check that Task 6 is now unblocked
Step 5.1: Start Phase 5
TaskUpdate: { taskId: "6", status: "in_progress" }
Step 5.2: Create Final Commit
Only proceed when all checks pass:
Step 5.3: Complete Phase 5 and Feature Implementation
TaskUpdate: { taskId: "6", status: "completed" }
TaskList # Show final status - all tasks should be completed
If the feature has a UI component, use the agent-browser skill for browser automation:
- Open the page: `agent-browser open <url>`
- Capture an interactive snapshot: `agent-browser snapshot -i`
- Interact with elements (e.g., `agent-browser click @e1`)
- Take a screenshot: `agent-browser screenshot page.png`
- Close when done: `agent-browser close`

When to Skip Manual Testing:
Each subagent MUST ensure:
If a subagent encounters issues:
If you encounter unclear or ambiguous requirements at any phase:
- Use AskUserQuestion to clarify before proceeding

Provide a summary including:
```
# Implement a specific feature
/implement-feature Add user authentication with OAuth2

# Implement with more context
/implement-feature Create a REST API endpoint for managing user preferences with validation

# Implement a refactoring task
/implement-feature Refactor the payment module to use the strategy pattern
```