From ultraship
Writes detailed implementation plans from specs for multi-step tasks before coding, with file structure maps, TDD bite-sized steps, and markdown tracking format.
Install: `npx claudepluginhub houseofmvps/ultraship --plugin ultraship`

This skill uses the workspace's default tool permissions.
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer who knows almost nothing about our toolset or problem domain, and do not assume they know good test design.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Context: This should be run in a dedicated worktree (created by brainstorming skill).
Save plans to: docs/ultraship/plans/YYYY-MM-DD-<feature-name>.md
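The naming convention above can be sketched in a few lines (the `plan_path` helper and the feature name are illustrative, not part of the skill):

```python
from datetime import date

def plan_path(feature_name: str) -> str:
    # Follows the convention docs/ultraship/plans/YYYY-MM-DD-<feature-name>.md
    return f"docs/ultraship/plans/{date.today():%Y-%m-%d}-{feature_name}.md"

# e.g. plan_path("user-auth") -> "docs/ultraship/plans/<today>-user-auth.md"
```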
If the spec covers multiple independent subsystems, it should have been broken into sub-project specs during brainstorming. If it wasn't, suggest breaking this into separate plans — one per subsystem. Each plan should produce working, testable software on its own.
Before defining tasks, map out which files will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.
This structure informs the task decomposition. Each task should produce self-contained changes that make sense independently.
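As an illustration, such a map can be as small as a dict of paths to responsibilities (all paths and responsibilities here are hypothetical):

```python
# Hypothetical file map for an auth feature; each file owns exactly one responsibility.
FILE_MAP = {
    "src/auth/tokens.py": "create and verify signed session tokens",
    "src/auth/middleware.py": "attach the current user to each request",
    "tests/auth/test_tokens.py": "unit tests for token creation and verification",
}

# A well-decomposed task touches a small, self-contained slice of this map.
for path, responsibility in FILE_MAP.items():
    print(f"{path}: {responsibility}")
```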
Each step is one action (2-5 minutes):
Every plan MUST start with this header:
# [Feature Name] Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use ultraship:subagent-driven-development (recommended) or ultraship:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
- [ ] **Step 1: Write the failing test**
```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```
- [ ] **Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
- [ ] **Step 3: Write minimal implementation**
```python
def function(input):
    return expected
```
- [ ] **Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
- [ ] **Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
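Filled in, Steps 1 and 3 might look like the following (the `slugify` helper is a made-up example, not part of the skill):

```python
import re

# Step 3: minimal implementation (defined first here so the file runs top to bottom)
def slugify(title: str) -> str:
    """Lowercase, collapse runs of non-alphanumerics to single hyphens, trim hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Step 1: the failing test, written before the implementation exists
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

test_slugify_basic()
```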
Every plan should include a brief risk section at the top:
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| [e.g., API schema changes mid-implementation] | Medium | High | Pin API version, add integration test |
| [e.g., Migration breaks existing data] | Low | Critical | Backup before migration, test with prod-like data |
| [e.g., Third-party rate limit hit during testing] | Medium | Low | Use mock in tests, real API only in integration |
Only include risks that are specific to this plan. Don't pad with generic risks. If there are no meaningful risks, omit the section.
If Task 3 depends on Task 1's database schema, say so explicitly. The execution agent has no memory of task ordering intent — it needs to see:
- "Task 3: Add session handling (depends on Task 1: `users` table created there)"
- "Task 5: Regenerate the dev schema (destructive: runs `drizzle-kit drop`)"

After writing the complete plan:
Review loop guidance:
After saving the plan, offer execution choice:
"Plan complete and saved to docs/ultraship/plans/<filename>.md. Two execution options:
1. Subagent-Driven (recommended) - I dispatch a fresh subagent per task, review between tasks, fast iteration
2. Inline Execution - Execute tasks in this session using executing-plans, batch execution with checkpoints
Which approach?"
If Subagent-Driven chosen:
If Inline Execution chosen: