Creates dated implementation plans in docs/plans/ for features, bug fixes, refactors, and migrations. Gathers context interactively, asks focused questions, and proposes approaches.
npx claudepluginhub amplicode/spring-skills --plugin spring-tools
This skill uses the workspace's default tool permissions.
create an implementation plan in `docs/plans/yyyymmdd-<task-name>.md` with interactive context gathering.
before asking questions, understand what the user is working on:
parse user's command arguments to identify intent:
gather relevant context quickly: use the spring-explore skill directly, NOT an Explore agent. keep discovery under 30 seconds:
CRITICAL: do NOT launch an Explore agent. the goal is a quick scan, not exhaustive analysis. if more context is needed, ask the user in step 1.
show the discovered context, then ask questions one at a time using the AskUserQuestion tool:
"based on your request, i found: [context summary]"
ask questions one at a time (do not overwhelm with multiple questions):
plan purpose: use AskUserQuestion - "what is the main goal?"
scope: use AskUserQuestion - "which components/files are involved?"
constraints: use AskUserQuestion - "any specific requirements or limitations?"
testing approach: use AskUserQuestion - "do you prefer TDD, a regular approach, or no tests?"
plan title: use AskUserQuestion - "short descriptive title?"
after all questions answered, synthesize responses into plan context.
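each question above can be issued as a single AskUserQuestion call. a sketch for the testing-approach question, following the same payload shape used later in this skill (labels and descriptions are illustrative):

```json
{
  "questions": [{
    "question": "Do you prefer TDD, a regular approach, or no tests?",
    "header": "Testing",
    "options": [
      {"label": "TDD", "description": "Write tests first, then implement"},
      {"label": "Regular", "description": "Implement first, then add tests"},
      {"label": "No tests", "description": "Skip automated tests for this plan"}
    ],
    "multiSelect": false
  }]
}
```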
once the problem is understood, propose implementation approaches:
example format:
i see three approaches:
**Option A: [name]** (recommended)
- how it works: ...
- pros: ...
- cons: ...
**Option B: [name]**
- how it works: ...
- pros: ...
- cons: ...
which direction appeals to you?
use AskUserQuestion tool to let user select preferred approach before creating the plan.
skip this step if:
check docs/plans/ for existing files, then create docs/plans/yyyymmdd-<task-name>.md (use current date):
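the dated filename can be derived with a quick shell sketch before writing the template below (the task name is hypothetical):

```shell
# sketch: derive the dated plan path; the task name is illustrative
task="add-user-auth"
plan="docs/plans/$(date +%Y%m%d)-${task}.md"
mkdir -p docs/plans
ls docs/plans/    # review existing plans before creating a new one
echo "$plan"
```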
# [Plan Title]
## Overview
- clear description of the feature/change being implemented
- problem it solves and key benefits
- how it integrates with existing system
## Context (from discovery)
- files/components involved: [list from step 0]
- related patterns found: [patterns discovered]
- dependencies identified: [dependencies]
## Development Approach
- **testing approach**: [TDD / Regular - from user preference in planning]
- complete each task fully before moving to the next
- make small, focused changes
- **CRITICAL: every task MUST include new/updated tests** for code changes in that task
- tests are not optional - they are a required part of the checklist
- write unit tests for new functions/methods
- write unit tests for modified functions/methods
- add new test cases for new code paths
- update existing test cases if behavior changes
- tests cover both success and error scenarios
- **CRITICAL: all tests must pass before starting next task** - no exceptions
- **CRITICAL: update this plan file when scope changes during implementation**
- run tests after each change
- maintain backward compatibility
## Testing Strategy
- **unit tests**: required for every task (see Development Approach above)
- **e2e tests**: if project has UI-based e2e tests (Playwright, Cypress, etc.):
- UI changes → add/update e2e tests in same task as UI code
- backend changes supporting UI → add/update e2e tests in same task
- treat e2e tests with same rigor as unit tests (must pass before next task)
- store e2e tests alongside unit tests (or in designated e2e directory)
- example: if task implements new form field, add e2e test checking form submission
## Progress Tracking
- mark completed items with `[x]` immediately when done
- add newly discovered tasks with ➕ prefix
- document issues/blockers with ⚠️ prefix
- update plan if implementation deviates from original scope
- keep plan in sync with actual work done
## Solution Overview
- high-level approach and architecture chosen
- key design decisions and rationale
- how it fits into the existing system
## Technical Details
- data structures and changes
- parameters and formats
- processing flow
## What Goes Where
- **Implementation Steps** (`[ ]` checkboxes): tasks achievable within this codebase - code changes, tests, documentation updates
- **Post-Completion** (no checkboxes): items requiring external action - manual testing, changes in consuming projects, deployment configs, third-party verifications
## Implementation Steps
<!--
Task structure guidelines:
- Each task = ONE logical unit (one function, one endpoint, one component)
- Use specific descriptive names, not generic "[Core Logic]" or "[Implementation]"
- Each task MUST have a **Files:** block listing files to Create/Modify (before checkboxes)
- Aim for ~5 checkboxes per task (more is OK if logically atomic)
- **CRITICAL: Each task MUST end with writing/updating tests before moving to next**
- tests are not optional - they are a required deliverable of every task
- write tests for all NEW code added in this task
- write tests for all MODIFIED code in this task
- include both success and error scenarios in tests
- list tests as SEPARATE checklist items, not bundled with implementation
Example (NOTICE: Files block + tests as separate checklist items):
### Task 1: Add password hashing utility
**Files:**
- Create: `src/auth/hash`
- Create: `src/auth/hash_test`
- [ ] create `src/auth/hash` with HashPassword and VerifyPassword functions
- [ ] implement bcrypt-based hashing with configurable cost
- [ ] write tests for HashPassword (success + error cases)
- [ ] write tests for VerifyPassword (success + error cases)
- [ ] run tests - must pass before task 2
### Task 2: Add user registration endpoint
**Files:**
- Create: `src/api/users`
- Modify: `src/api/router`
- Create: `src/api/users_test`
- [ ] create `POST /api/users` handler in `src/api/users`
- [ ] add input validation (email format, password strength)
- [ ] integrate with password hashing utility
- [ ] write tests for handler success case with table-driven cases
- [ ] write tests for handler error cases (invalid input, missing fields)
- [ ] run tests - must pass before task 3
-->
### Task 1: [specific name - what this task accomplishes]
**Files:**
- Create: `exact/path/to/new_file`
- Modify: `exact/path/to/existing`
- [ ] [specific action with file reference - code implementation]
- [ ] [specific action with file reference - code implementation]
- [ ] write tests for new/changed functionality (success cases)
- [ ] write tests for error/edge cases
- [ ] run tests - must pass before next task
### Task N-1: Verify acceptance criteria
- [ ] verify all requirements from Overview are implemented
- [ ] verify edge cases are handled
- [ ] run full test suite: `<project test command>`
- [ ] run e2e tests if project has them: `<project e2e test command>`
- [ ] verify test coverage meets project standard
### Task N: [Final] Update documentation
- [ ] update README.md if needed
- [ ] update CLAUDE.md if new patterns discovered
- [ ] move this plan to `docs/plans/completed/`
## Post-Completion
*Items requiring manual intervention or external systems - no checkboxes, informational only*
**Manual verification** (if applicable):
- manual UI/UX testing scenarios
- performance testing under load
- security review considerations
**External system updates** (if applicable):
- consuming projects that need updates after this library change
- configuration changes in deployment systems
- third-party service integrations to verify
after writing the plan file, scan the plan for tasks that involve JPA entities. also check whether the project uses JPA.
if the plan touches domain entities AND the project is a JPA project: use the spring-data-jpa skill.
if the plan does NOT touch domain entities or the project is not a JPA project: skip this step entirely.
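a minimal sketch of the JPA check, assuming a Maven or Gradle build; the dependency string and build file names are common defaults, not guaranteed for every project:

```shell
# sketch: heuristic JPA detection; the dependency string and build
# files are common defaults, not guaranteed for every project
is_jpa_project() {
  grep -qs "spring-boot-starter-data-jpa" pom.xml build.gradle build.gradle.kts
}
```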
after creating the file, tell user: "created plan: docs/plans/yyyymmdd-<task-name>.md"
then use AskUserQuestion:
{
"questions": [{
"question": "Plan created. What's next?",
"header": "Next step",
"options": [
{"label": "Interactive review", "description": "Open plan in IDE for review and feedback"},
{"label": "Implement", "description": "Commit plan and start implementing"},
{"label": "Done", "description": "Commit plan, no further action"}
],
"multiSelect": false
}]
}
if the user selects "Interactive review": call mcp__ide__getDiagnostics with the plan file's uri (absolute path prefixed with file://). this causes the IDE to navigate to the file. then tell the user: "Plan is open in the IDE — review it and tell me what to change." wait for feedback in the conversation, apply any requested changes, and ask again with the same options (minus "Interactive review").

rules during implementation:
complete each task fully before moving to the next: implement, test, then mark completed items with `[x]` in the plan file.
if tests are used and fail: fix them before moving on - all tests must pass before the next task starts.
plan tracking during implementation: keep the plan file in sync with the actual work (see Progress Tracking above).
on completion: move the plan file to docs/plans/completed/ (create the directory with mkdir -p docs/plans/completed if needed).
this ensures each task is solid before building on top of it.
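the completion move can be sketched in shell (the plan filename is hypothetical):

```shell
# sketch: archive a finished plan; the filename is illustrative
plan="docs/plans/20250101-example-task.md"
mkdir -p docs/plans/completed
if [ -f "$plan" ]; then
  mv "$plan" docs/plans/completed/
fi
```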