From ralphex
Creates structured implementation plans in docs/plans/YYYY-MM-DD-<task-name>.md by parsing intent, exploring project context with git/project scan, and asking one-at-a-time questions. Preps for ralphex CLI execution.
Install via `npx claudepluginhub umputun/ralphex --plugin ralphex`.
This skill uses the workspace's default tool permissions.
Create an implementation plan in docs/plans/YYYY-MM-DD-<task-name>.md with interactive context gathering.
Check if ralphex CLI is installed (needed to execute the plan after creation):
which ralphex
If not found, inform user they'll need it to execute the plan:
- `brew install umputun/apps/ralphex`
- `.deb` package from https://github.com/umputun/ralphex/releases
- `.rpm` package from https://github.com/umputun/ralphex/releases
- `go install github.com/umputun/ralphex/cmd/ralphex@latest`

Proceed with plan creation regardless, but remind the user to install before execution.
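The availability check can be sketched as a short shell snippet; the fallback message below is illustrative, not a fixed format:

```shell
# check whether the ralphex CLI is on PATH; suggest installation otherwise
if command -v ralphex >/dev/null 2>&1; then
  found=yes
else
  found=no
  echo "ralphex not found - install it before executing the plan:"
  echo "  go install github.com/umputun/ralphex/cmd/ralphex@latest"
fi
echo "ralphex installed: $found"
```

`command -v` is preferred over `which` in scripts because it is specified by POSIX and reflects shell builtins and functions as well as PATH lookups.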
Before asking questions, understand what the user is working on:
Parse user's command arguments to identify intent:
Launch Explore agent (via Task tool with subagent_type: Explore) to gather relevant context based on intent:
For feature development:
For bug fixing:
For refactoring/migration:
For generic/unclear requests:
- git status and recent file activity

Synthesize findings into context summary:
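The generic exploration step boils down to a few git commands. A minimal sketch, run here against a throwaway repository so it is self-contained and reproducible:

```shell
# demo in a temporary repo so the commands below always have something to show
repo=$(mktemp -d) && cd "$repo"
git init -q && git config user.email dev@example.com && git config user.name dev
echo "# demo" > README.md && git add README.md && git commit -qm "init"

recent=$(git log --oneline -10 | wc -l)   # recent commit activity
pending=$(git status --short | wc -l)     # uncommitted changes in working tree
echo "commits=$recent pending=$pending"
```

In a real project the Explore agent would run `git log --oneline` and `git status --short` directly in the user's repository; the temp-repo setup exists only to make the sketch runnable.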
Show the discovered context, then ask questions one at a time using the AskUserQuestion tool:
"Based on your request, I found: [context summary]"
Ask questions one at a time (do not overwhelm with multiple questions):
1. **Plan purpose**: use AskUserQuestion - "What is the main goal?"
2. **Scope**: use AskUserQuestion - "Which components/files are involved?"
3. **Constraints**: use AskUserQuestion - "Any specific requirements or limitations?"
4. **Testing approach**: use AskUserQuestion - "Do you prefer TDD or a regular approach?"
5. **Plan title**: use AskUserQuestion - "Short descriptive title?"
After all questions answered, synthesize responses into plan context.
Once the problem is understood, propose implementation approaches:
Example format:
I see three approaches:
**Option A: [name]** (recommended)
- How it works: ...
- Pros: ...
- Cons: ...
**Option B: [name]**
- How it works: ...
- Pros: ...
- Cons: ...
Which direction appeals to you?
Use AskUserQuestion tool to let user select preferred approach before creating the plan.
Skip this step if the request is trivial or the user has already specified the approach they want.
Check docs/plans/ for existing files, then create docs/plans/YYYY-MM-DD-<task-name>.md:
# [Plan Title]
## Overview
- Clear description of the feature/change being implemented
- Problem it solves and key benefits
- How it integrates with existing system
## Context (from discovery)
- Files/components involved: [list from step 0]
- Related patterns found: [patterns discovered]
- Dependencies identified: [dependencies]
## Development Approach
- **Testing approach**: [TDD / Regular - from user preference in planning]
- Complete each task fully before moving to the next
- Make small, focused changes
- **CRITICAL: every task MUST include new/updated tests** for code changes in that task
- tests are not optional - they are a required part of the checklist
- write unit tests for new functions/methods
- write unit tests for modified functions/methods
- add new test cases for new code paths
- update existing test cases if behavior changes
- tests cover both success and error scenarios
- **CRITICAL: all tests must pass before starting next task** - no exceptions
- **CRITICAL: update this plan file when scope changes during implementation**
- Run tests after each change
- Maintain backward compatibility
## Testing Strategy
- **Unit tests**: required for every task (see Development Approach above)
- **E2E tests**: if project has UI-based e2e tests (Playwright, Cypress, etc.):
- UI changes → add/update e2e tests in same task as UI code
- Backend changes supporting UI → add/update e2e tests in same task
- Treat e2e tests with same rigor as unit tests (must pass before next task)
- Store e2e tests alongside unit tests (or in designated e2e directory)
## Progress Tracking
- Mark completed items with `[x]` immediately when done
- Add newly discovered tasks with ➕ prefix
- Document issues/blockers with ⚠️ prefix
- Update plan if implementation deviates from original scope
- Keep plan in sync with actual work done
## What Goes Where
- **Implementation Steps** (`[ ]` checkboxes): tasks achievable within this codebase - code changes, tests, documentation updates
- **Post-Completion** (no checkboxes): items requiring external action - manual testing, changes in consuming projects, deployment configs, third-party verifications
- **Checkbox placement**: Checkboxes belong only in Task sections (`### Task N:` or `### Iteration N:`). Do not put checkboxes in Success criteria, Overview, or Context — they cause extra loop iterations.
## Implementation Steps
<!--
Task structure guidelines:
- Each task = ONE logical unit (one function, one endpoint, one component)
- Use specific descriptive names, not generic "[Core Logic]" or "[Implementation]"
- Aim for ~5 checkboxes per task (more is OK if logically atomic)
- **CRITICAL: Each task MUST end with writing/updating tests before moving to next**
- tests are not optional - they are a required deliverable of every task
- write tests for all NEW code added in this task
- write tests for all MODIFIED code in this task
- include both success and error scenarios in tests
- list tests as SEPARATE checklist items, not bundled with implementation
Example (NOTICE: tests are separate checklist items):
### Task 1: Add password hashing utility
- [ ] create `auth/hash` module with HashPassword and VerifyPassword functions
- [ ] implement secure hashing with configurable cost
- [ ] write tests for HashPassword (success + error cases)
- [ ] write tests for VerifyPassword (success + error cases)
- [ ] run project tests - must pass before task 2
### Task 2: Add user registration endpoint
- [ ] create `POST /api/users` handler
- [ ] add input validation (email format, password strength)
- [ ] integrate with password hashing utility
- [ ] write tests for handler success case with table-driven cases
- [ ] write tests for handler error cases (invalid input, missing fields)
- [ ] run project tests - must pass before task 3
-->
### Task 1: [specific name - what this task accomplishes]
- [ ] [specific action with file reference - code implementation]
- [ ] [specific action with file reference - code implementation]
- [ ] write tests for new/changed functionality (success cases)
- [ ] write tests for error/edge cases
- [ ] run tests - must pass before next task
### Task N-1: Verify acceptance criteria
- [ ] verify all requirements from Overview are implemented
- [ ] verify edge cases are handled
- [ ] run full test suite (unit tests)
- [ ] run e2e tests if project has them
- [ ] run linter - all issues must be fixed
- [ ] verify test coverage meets project standard (80%+)
*Note: manual testing, deployment verification, and external checks go in Post-Completion (no checkboxes). Task section checkboxes must be automatable by the agent.*
### Task N: [Final] Update documentation
- [ ] update README.md if needed
- [ ] update project knowledge docs if new patterns discovered
*Note: ralphex automatically moves completed plans to `docs/plans/completed/`*
## Technical Details
- Data structures and changes
- Parameters and formats
- Processing flow
## Post-Completion
*Items requiring manual intervention or external systems - no checkboxes, informational only*
**Manual verification** (if applicable):
- Manual UI/UX testing scenarios
- Performance testing under load
- Security review considerations
**External system updates** (if applicable):
- Consuming projects that need updates after this library change
- Configuration changes in deployment systems
- Third-party service integrations to verify
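The date-stamped plan path used throughout this template can be derived with a short shell sketch; the task description below is a placeholder:

```shell
# build docs/plans/YYYY-MM-DD-<task-name>.md from a free-form task description
task="add user auth"                              # placeholder task description
slug=$(printf '%s' "$task" | tr 'A-Z ' 'a-z-')    # lowercase, spaces to dashes
plan="docs/plans/$(date +%F)-${slug}.md"          # %F expands to YYYY-MM-DD
echo "$plan"
```

A real slug function would also strip punctuation and collapse repeated dashes; this sketch only covers the common case of letters and spaces.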
After creating the file, tell user:
"Created plan: docs/plans/YYYY-MM-DD-<task-name>.md
Ready to start implementation?"
If yes, begin with task 1.
CRITICAL testing rules during implementation:

After completing code changes in a task:
- write/update the tests listed for that task, run them, and mark the task `[x]` in plan file

If tests fail:
- fix them before doing anything else; do not start the next task with a failing suite

Only proceed to next task when:
- all tests pass and the task is marked `[x]`

Plan tracking during implementation:
- mark items `[x]` as they complete, add ➕ discoveries, and note ⚠️ blockers

On completion:
- the finished plan is moved to `docs/plans/completed/`

Partial implementation exception:
- if a test cannot pass until a later task, note it explicitly: `[x] write tests ... (fails until Task X)`

This ensures each task is solid before building on top of it.