Use when you have a spec or requirements for a multi-step task, before touching code
```bash
npx claudepluginhub artemis-xyz/theclauu --plugin theclauu
```

This skill uses the workspace's default tool permissions.
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Context: This should be run in a dedicated worktree (created by brainstorming skill).
Save plans to: docs/theclauu/plans/YYYY-MM-DD-<feature-name>.md
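The naming convention above can be sketched as a small helper (the `slugify` behavior and the example feature name are illustrative, not part of the skill):

```python
from datetime import date
import re

def plan_path(feature_name: str) -> str:
    """Build a plan path following docs/theclauu/plans/YYYY-MM-DD-<feature-name>.md."""
    # Lowercase the feature name and collapse runs of non-alphanumerics into hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", feature_name.lower()).strip("-")
    return f"docs/theclauu/plans/{date.today().isoformat()}-{slug}.md"
```

For example, `plan_path("User Auth")` yields something like `docs/theclauu/plans/2025-01-15-user-auth.md`, with the date portion depending on the current day.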
If the spec covers multiple independent subsystems, it should have been broken into sub-project specs during brainstorming. If it wasn't, suggest breaking this into separate plans — one per subsystem. Each plan should produce working, testable software on its own.
Before defining tasks, map out which files will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.
This structure informs the task decomposition. Each task should produce self-contained changes that make sense independently.
Each step is one action (2-5 minutes):
Every plan MUST start with this header:
# [Feature Name] Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use subagent-driven-development (recommended) or implement-plan to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
- [ ] **Step 1: Write the failing test**
  ```python
  def test_specific_behavior():
      result = function(input)
      assert result == expected
  ```
- [ ] **Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
- [ ] **Step 3: Write minimal implementation**
  ```python
  def function(input):
      return expected
  ```
- [ ] **Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
- [ ] **Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
Every step must contain the actual content an engineer needs. Placeholder steps are plan failures — never write them.
After writing the complete plan, look at the spec with fresh eyes and check the plan against it. This is a checklist you run yourself — not a subagent dispatch.
1. Spec coverage: Skim each section/requirement in the spec. Can you point to a task that implements it? List any gaps.
2. Placeholder scan: Search your plan for red flags — any of the patterns from the "No Placeholders" section above. Fix them.
3. Type consistency: Do the types, method signatures, and property names you used in later tasks match what you defined in earlier tasks? A function called `clearLayers()` in Task 3 but `clearFullLayers()` in Task 7 is a bug.
If you find issues, fix them inline. No need to re-review — just fix and move on. If you find a spec requirement with no task, add the task.
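The placeholder scan in step 2 can be mechanized with a minimal sketch. The red-flag phrases below are illustrative stand-ins, since the actual pattern list lives in the "No Placeholders" section:

```python
import re

# Illustrative red-flag patterns; substitute the skill's actual "No Placeholders" list.
RED_FLAGS = [
    r"\bTBD\b",
    r"\bTODO\b",
    r"implement (?:the|this) (?:function|feature) later",
    r"add tests as needed",
    r"similar to (?:the )?above",
]

def scan_plan(plan_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs for every line matching a red-flag pattern."""
    hits = []
    for lineno, line in enumerate(plan_text.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in RED_FLAGS):
            hits.append((lineno, line.strip()))
    return hits
```

Running this over the saved plan file gives a quick list of lines to rewrite before handoff.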
After the plan self-review passes, before execution handoff, offer the user a structured plan review through one or more lenses. These are absorbed from gstack's `plan-*-review` skills and should be invoked when the plan touches the corresponding concern. Use AskUserQuestion to let the user pick which lens(es) to run.
When to invoke: user is questioning scope, feels the plan could think bigger, or the plan is for a product-facing feature. Rethink the problem, find the 10-star product, challenge premises.
Four modes — pick based on signal:
Key questions to ask the user:
Update the plan with user's chosen expansions/reductions before execution.
When to invoke: the plan touches new subsystems, introduces cross-cutting concerns, or involves non-trivial data flow. Lock in architecture before coding.
Checklist:
Surface each issue with a specific fix. Update the plan inline.
When to invoke: the plan includes UI changes, new components, or user-facing workflows.
Score each dimension 0–10, then explain what would make it a 10:
| Dimension | Ask |
|---|---|
| Visual hierarchy | Can the user identify the primary action in <2 seconds? |
| Typography | Is the font stack + sizing consistent with the rest of the app? |
| Color | Does the palette communicate state (success/warning/error) unambiguously? |
| Spacing | Is the rhythm consistent? No one-off margins? |
| Motion | Do transitions give feedback without being distracting? |
| Accessibility | Keyboard nav, focus states, color contrast, screen reader labels? |
| Responsive | Works at 320px / 768px / 1280px / 1920px without horizontal scroll? |
| Empty/loading/error states | All three states have explicit designs? |
For each dimension scoring <7, add a task to the plan that raises it.
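The scoring rule above can be sketched as follows. The dimension names mirror the table; the generated task wording is illustrative:

```python
# Dimensions from the design-review table above.
DESIGN_DIMENSIONS = [
    "Visual hierarchy", "Typography", "Color", "Spacing",
    "Motion", "Accessibility", "Responsive", "Empty/loading/error states",
]

def tasks_for_scores(scores: dict[str, int], threshold: int = 7) -> list[str]:
    """For each dimension scoring below the threshold, emit a follow-up plan task."""
    return [
        f"Task: raise '{dim}' from {scores.get(dim, 0)}/10 (see review notes for what a 10 looks like)"
        for dim in DESIGN_DIMENSIONS
        if scores.get(dim, 0) < threshold
    ]
```

A dimension missing from the scores dict is treated as 0 here, which errs on the side of generating a task.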
When to invoke: the plan is for APIs, CLIs, SDKs, libraries, or developer docs.
Three modes:
Personas to trace the plan through:
- Runs `npm install`, stops at "working hello world"
- Runs `cd <project>`, stops at "know what to do next"

For each persona, identify friction points and add tasks to eliminate them.
When to invoke: user says "run all reviews," "auto review," or wants a full review gauntlet without manual lens selection.
Flow:
Do NOT auto-apply taste decisions. Surface them for user sign-off.
After saving the plan, offer execution choice:
"Plan complete and saved to docs/theclauu/plans/<filename>.md. Two execution options:
1. Subagent-Driven (recommended) - I dispatch a fresh subagent per task, review between tasks, fast iteration
2. Inline Execution - Execute tasks in this session using executing-plans, batch execution with checkpoints
Which approach?"
If Subagent-Driven chosen:
If Inline Execution chosen: