Guide for writing effective prompts for the Lisa technique. Use this skill when the user wants to start a Lisa loop, prepare files for Lisa, write a PROMPT.md, or needs help structuring autonomous iterative tasks. Invoke with /lisa-prep to prepare groundwork files, then /lisa-loop to execute.
Creates preparation files (specs/, PLAN.md, PROMPT.md) for autonomous iterative coding. Use this to structure complex tasks before starting a Lisa loop with /lisa-loop.
Install:

```
/plugin marketplace add Arakiss/lisa
/plugin install lisa@lisa-marketplace
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
References: references/prep-templates.md, references/prompt-templates.md

Lisa prompts are simple BECAUSE the complexity lives in the preparation files.
```
Phase 1: PREPARE          Phase 2: EXECUTE
┌─────────────────┐       ┌─────────────────┐
│   /lisa-prep    │ ──▶   │   /lisa-loop    │
│                 │       │                 │
│ Creates:        │       │ Simple prompt   │
│ - specs/        │       │ references the  │
│ - PLAN.md       │       │ prepared files  │
│ - PROMPT.md     │       │                 │
└─────────────────┘       └─────────────────┘
```
Before starting a Lisa loop, create the groundwork files. When the user invokes /lisa-prep:
Before creating files, ask the user how they want to handle Lisa artifacts:
Question to ask:
"Do you want Lisa artifacts (PROMPT.md, IMPLEMENTATION_PLAN.md, specs/) to be committed or ignored in git?"
| Option | When to use |
|---|---|
| Commit | Documentation matters, team project, want to preserve design decisions |
| Ignore | Personal project, temporary task, files will become obsolete |
If the user chooses to ignore, add to .gitignore:

```
# Lisa loop artifacts
PROMPT.md
IMPLEMENTATION_PLAN.md
specs/
```
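If you script this step, the append can be made idempotent; a minimal sketch (the guard is an assumption, not part of Lisa):

```bash
# Append the Lisa entries to .gitignore only if they are not already there.
grep -qxF 'PROMPT.md' .gitignore 2>/dev/null || cat >> .gitignore <<'EOF'
# Lisa loop artifacts
PROMPT.md
IMPLEMENTATION_PLAN.md
specs/
EOF
```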
Then explore the existing project structure, patterns, and conventions so the prepared files match them.
Create specs/ with clear, concise requirement files:

```
specs/
├── overview.md     # What the project does (1 paragraph)
├── features.md     # Feature list with acceptance criteria
├── tech-stack.md   # Technologies and why
└── conventions.md  # Code patterns to follow
```
Each file should be SHORT. Claude doesn't need essays, just facts.
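A minimal sketch for scaffolding these files by hand, assuming the filenames above (in practice /lisa-prep creates them for you):

```bash
# Scaffold the specs/ skeleton with a heading in each file.
mkdir -p specs
for f in overview features tech-stack conventions; do
  printf '# %s\n' "$f" > "specs/$f.md"
done
```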
A simple checklist of tasks:

```markdown
# Implementation Plan

## Phase 1: Foundation
- [ ] Setup project structure
- [ ] Configure database
- [ ] Add authentication

## Phase 2: Core Features
- [ ] Feature A
- [ ] Feature B
- [ ] Feature C

## Phase 3: Polish
- [ ] Error handling
- [ ] Tests
- [ ] Documentation
```
Tasks should be small, specific, and independently verifiable, so each iteration can complete one and commit it.
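Because the plan uses checkbox syntax, remaining work is easy to count mechanically; a quick sketch (the pattern assumes the exact `- [ ]` format above):

```bash
# Count tasks still open in the plan.
grep -c -- '- \[ \]' IMPLEMENTATION_PLAN.md
```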
Generate the simple prompt that references the prepared files:

```markdown
Build [PROJECT_NAME].

Read specs/ for requirements.
Read IMPLEMENTATION_PLAN.md for tasks.

Pick the first uncompleted task, implement it, test it, mark done.
Commit after every completed task.

When all tasks are done, output: <promise>DONE</promise>
```
That's ~50 words. The complexity is in the files, not the prompt.
Once preparation is complete, start the loop:

```
/lisa-loop --max-iterations 30 PROMPT.md
```

The script will auto-detect `<promise>DONE</promise>` from PROMPT.md. You can also set it explicitly:

```
/lisa-loop --max-iterations 30 --completion-promise "DONE" PROMPT.md
```
Lisa will run the prompt each iteration, check the output for the completion promise, and stop when it appears or when --max-iterations is reached.
The most important lesson from production use: simple prompts beat complex ones.
From the RepoMirror hackathon: Claude already knows how to code. Tell it WHAT to do, not HOW. The entire loop was just:

```bash
while true; do
  cat PROMPT.md | claude --continue
done
```
The basic PROMPT.md structure:

```markdown
# Mission
[One sentence describing the goal]

# Success Criteria
[How to know when done - be specific]

# Rules
- Commit after every file change
- Output <promise>DONE</promise> when complete
```
Porting a library:

```markdown
# Mission
Port the Python library in /src to TypeScript in /ts-src.

# Success Criteria
All .py files have equivalent .ts files with passing type checks.

# Rules
- Commit after every file edit
- Run `tsc --noEmit` to verify
- Output <promise>DONE</promise> when all files ported and types pass
```
Adding a feature:

```markdown
# Mission
Add user authentication to the Express API.

# Success Criteria
- POST /auth/register creates users
- POST /auth/login returns JWT
- Protected routes require valid token
- All tests pass

# Rules
- Commit after each endpoint
- Write tests for each endpoint
- Output <promise>DONE</promise> when tests pass
```
Fixing type errors:

```markdown
# Mission
Fix all TypeScript errors in the codebase.

# Success Criteria
`npm run typecheck` exits with 0 errors.

# Rules
- Fix one file at a time
- Commit after each fix
- Output <promise>DONE</promise> when typecheck passes
```
Lisa loops until it sees <promise>DONE</promise> in Claude's output.
Auto-detection: If your PROMPT.md contains `<promise>...</promise>` tags, the setup script will automatically extract and configure the promise. No need to pass --completion-promise separately.
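A sketch of what that auto-detection amounts to (the setup script's actual internals are not shown here; this sed call is an illustrative assumption):

```bash
# Extract the promise text from the first <promise>...</promise> tag in PROMPT.md.
promise=$(sed -n 's|.*<promise>\(.*\)</promise>.*|\1|p' PROMPT.md | head -n 1)
echo "Loop will stop when output contains: <promise>$promise</promise>"
```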
Always include this rule in your prompt:
```
Output <promise>DONE</promise> when [specific condition]
```
Be SPECIFIC about the condition. Not "when done" but "when tests pass" or "when all files are ported".
Instead of relying on Claude to declare completion, you can configure Lisa to stop based on external command output. This is useful for integrating with task trackers, CI systems, or any tool that reports status.
```
/lisa-loop PROMPT.md --stop-command "beads list --count" --stop-when "0"
```
Lisa will execute `beads list --count` after each iteration. When the output equals `0`, the loop stops.
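Conceptually, the check after each iteration is a plain string comparison; a hedged sketch (Lisa's internals are assumed here, not documented):

```bash
# Run the stop command and compare its output to the --stop-when value.
output=$(beads list --count)
if [ "$output" = "0" ]; then
  echo "Stop condition met, ending loop"
fi
```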
More stop-condition examples:

```bash
# Stop when task tracker has no remaining tickets
--stop-command "beads list --count" --stop-when "0"

# Stop when TypeScript has no errors
--stop-command "tsc --noEmit 2>&1 | grep -c error || echo 0" --stop-when "0"

# Stop when all tests pass
--stop-command "npm test > /dev/null 2>&1 && echo pass || echo fail" --stop-when "pass"
```
You can also embed stop conditions in your prompt:
```
<stop-command>beads list --count</stop-command>
<stop-when>0</stop-when>
```
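Extracting those tags is straightforward; a minimal sketch of the parsing (an assumption, the real script may differ):

```bash
# Read embedded stop configuration from PROMPT.md, if present.
stop_command=$(sed -n 's|.*<stop-command>\(.*\)</stop-command>.*|\1|p' PROMPT.md)
stop_when=$(sed -n 's|.*<stop-when>\(.*\)</stop-when>.*|\1|p' PROMPT.md)
```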
Each iteration, Lisa checks both completion signals: the external stop command (if configured) and the completion promise in Claude's output.
This methodology is inspired by the workflow of Thariq, a Claude Code team member at Anthropic.
With basic prompts, Claude might output `<promise>DONE</promise>` prematurely because its own judgment is the only check: it can forget requirements across iterations, skip verification, and mistake a subjective sense of "done" for actual completion.

Result: the loop exits early and the work is incomplete.

Instead of trusting Claude's judgment alone, we make the spec file the source of truth and require Claude to verify every requirement against it before it may output the promise.
Traditional Approach:

```
┌────────────────────────────────────────────────────────────┐
│ Claude works → Claude decides "I'm done" → Outputs promise │
│                                                            │
│ Problem: Claude's judgment is the ONLY check               │
└────────────────────────────────────────────────────────────┘
```

Spec-Based Approach:

```
┌────────────────────────────────────────────────────────────┐
│ Claude works → Re-reads spec → Checks EACH requirement →   │
│ Only promises if ALL requirements verified → Outputs       │
│                                                            │
│ Advantage: Spec is the source of truth, not Claude's       │
│ subjective sense of "done"                                 │
└────────────────────────────────────────────────────────────┘
```
Before writing code, have Claude interview you to create a detailed spec.
The Interview Prompt:
```
Read @SPEC.md (if exists) and interview me in detail using the
AskUserQuestionTool about literally anything: technical implementation,
UI & UX, concerns, tradeoffs, etc. but make sure the questions are not obvious.
Be very in-depth and continue interviewing me continually until
it's complete, then write the spec to the file.
```
Why interview first: non-obvious questions surface requirements, tradeoffs, and edge cases before any code is written, and the answers become the detailed spec the loop verifies against.
Your specs/ should have testable requirements, not vague descriptions:
```markdown
# specs/features.md

## Authentication (AUTH)

### AUTH-1: User Registration
- [ ] POST /auth/register accepts { email, password, name }
- [ ] Password must be >= 8 characters
- [ ] Email must be unique (409 if exists)
- [ ] Returns { user: { id, email, name }, token: string }
- [ ] Password is hashed with bcrypt (cost factor 12)

### AUTH-2: User Login
- [ ] POST /auth/login accepts { email, password }
- [ ] Returns 401 for invalid credentials
- [ ] Returns { user, token } on success
- [ ] Token expires in 7 days

### AUTH-3: Protected Routes
- [ ] Middleware validates Bearer token
- [ ] Returns 401 if token missing/invalid
- [ ] Attaches user to request object
```
Notice: Each requirement is a checkbox. This becomes your verification list.
The verification-first PROMPT.md:

```markdown
# Mission
Implement the authentication system as specified in specs/.

# Source of Truth
The ONLY definition of "done" is specs/features.md.
Your subjective sense of completion is IRRELEVANT.

# Process Per Iteration
1. Read specs/features.md completely
2. Find first unchecked requirement
3. Implement it with tests
4. Mark it [x] in specs/features.md
5. Commit with requirement ID (e.g., "feat(auth): implement AUTH-1")

# Mandatory Verification (NEVER SKIP)
Before outputting <promise>DONE</promise>, you MUST:
1. Re-read specs/features.md from disk (not from memory)
2. For EACH requirement:
   - Verify the code exists
   - Verify tests exist and pass
   - If test doesn't exist, WRITE IT and run it
3. Count: X requirements checked, Y unchecked
4. If Y > 0:
   - List unchecked requirements
   - Continue working on them
   - DO NOT output the promise
5. If Y == 0 AND all tests pass:
   - Output <promise>ALL REQUIREMENTS VERIFIED</promise>

# Critical Rules
- NEVER output the promise if ANY requirement is unchecked
- NEVER mark a requirement [x] without verifying it works
- NEVER trust your memory - always re-read the spec file
- If unsure about a requirement, implement it conservatively and note it

# Completion
<promise>ALL REQUIREMENTS VERIFIED</promise>
```
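The counting step in rule 3 can also be done mechanically rather than from memory; a minimal sketch, assuming the `- [ ]` / `- [x]` checkbox format from specs/features.md:

```bash
# Count verified vs. outstanding requirements straight from the spec file.
checked=$(grep -c -- '- \[x\]' specs/features.md)
unchecked=$(grep -c -- '- \[ \]' specs/features.md)
echo "$checked requirements checked, $unchecked unchecked"
```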
| Rule | Prevents |
|---|---|
| "Re-read specs from disk" | Claude forgetting requirements across iterations |
| "For EACH requirement verify" | Partial implementations slipping through |
| "Count X checked, Y unchecked" | Systematic tracking instead of gut feeling |
| "If Y > 0, continue working" | Premature completion |
| "NEVER trust memory" | Hallucinated completions |
| "Mark [x] only when verified" | False progress tracking |
Step 1: Interview Phase
```
User: /lisa-prep

Claude: [Uses AskUserQuestionTool]
  - "What authentication method: JWT, sessions, or OAuth?"
  - "Should tokens be refreshable? What's the expiry?"
  - "What user fields beyond email/password?"
  - "Any rate limiting requirements?"
  - "What's the password policy?"

[Creates detailed specs/features.md]
```
Step 2: Specs Created
```
specs/
├── overview.md       # 1 paragraph summary
├── features.md       # Checkboxed requirements (THE CONTRACT)
├── tech-stack.md     # Chosen technologies
├── api-contracts.md  # Request/response shapes
└── edge-cases.md     # Error handling requirements
```
Step 3: Execution
```
/lisa-loop PROMPT.md --max-iterations 50
```
Step 4: Each Iteration
```
Iteration 1: Reads spec → AUTH-1 unchecked → Implements → Marks [x] → Commits
Iteration 2: Reads spec → AUTH-2 unchecked → Implements → Marks [x] → Commits
...
Iteration N: Reads spec → All [x] → Runs all tests → Pass → <promise>...</promise>
```
| Failure Mode | How Prompt Prevents It |
|---|---|
| Claude says "done" without checking | "MUST re-read specs from disk" |
| Partial implementation accepted | "For EACH requirement verify" |
| Requirements forgotten over iterations | Spec file is persistent, re-read every time |
| Tests not written | "If test doesn't exist, WRITE IT" |
| False confidence | "Your subjective sense is IRRELEVANT" |
Use it when: the task spans many iterations, covers many discrete requirements, or a premature "done" would be costly.

Skip it when: the task is small and a simple prompt with a specific completion condition is enough.
For maximum robustness, combine spec verification with external checks:
```markdown
# In PROMPT.md
...verification rules...

# External Verification
<stop-command>npm test 2>&1 | grep -q "0 failing" && echo PASS || echo FAIL</stop-command>
<stop-when>PASS</stop-when>
```
Now Lisa requires BOTH: Claude's verified-spec promise AND a passing external check before the loop exits.
```
# BAD - Don't do this
When implementing the authentication system, first analyze the existing
codebase structure. Look for patterns in how other features are implemented.
Consider using JWT tokens with RS256 algorithm. Make sure to handle edge
cases like expired tokens, invalid signatures, and missing headers...
[500 more words]
```
Claude knows all this. You're just confusing it.
```
# BAD
Output <promise>DONE</promise> when you're finished.
```
Finished with what? Be specific.
```
# BAD - Missing commits
Port the library to TypeScript.
```
Without "commit after every file", you lose the incremental progress that makes Lisa work.
A phased prompt:

```markdown
# Mission
Build a CLI todo app.

# Phases (do in order)
1. Setup: Initialize project with package.json and TypeScript
2. Core: Implement add, list, complete, delete commands
3. Storage: Persist todos to ~/.todos.json
4. Polish: Add colors and help text

# Rules
- Complete each phase before moving to next
- Commit after each phase
- Output <promise>DONE</promise> when all phases complete and working
```
A prompt with built-in verification:

```markdown
# Mission
Refactor the database layer to use connection pooling.

# Success Criteria
- All tests pass
- No direct db.connect() calls remain
- Pool configured in config.ts

# Verification
Run these after each change:
- `npm test`
- `grep -r "db.connect" src/` should return nothing

# Rules
- Commit after each file refactored
- Output <promise>DONE</promise> when verification passes
```
Use the /lisa-loop command:
/lisa-loop "Your prompt here" --max-iterations 20 --completion-promise "DONE"
Or create a PROMPT.md file and run manually:
```bash
while :; do cat PROMPT.md | claude --continue; done
```
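If you run the loop by hand, you can approximate Lisa's promise detection and iteration cap yourself; a hedged sketch (the `claude --continue` invocation comes from the examples above, the rest is illustrative):

```bash
# Manual loop with an iteration cap and a completion-promise check.
max_iterations=20
for i in $(seq 1 "$max_iterations"); do
  output=$(cat PROMPT.md | claude --continue)
  printf '%s\n' "$output"
  if printf '%s' "$output" | grep -q '<promise>DONE</promise>'; then
    echo "Promise detected after $i iteration(s)"
    break
  fi
done
```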
Lisa generates artifacts. Whether to commit them depends on your project:
| Commit when... | Ignore when... |
|---|---|
| Team project needs context | Personal/solo project |
| Specs document architectural decisions | Task is temporary/one-off |
| You want historical record of "why" | Files will become obsolete |
| Onboarding future developers | Keeping repo clean matters |
| File | Purpose |
|---|---|
| PROMPT.md | Execution script for the loop |
| IMPLEMENTATION_PLAN.md | Task checklist with phases |
| specs/ | Requirements and design specs |
To ignore them, add to .gitignore:

```
# Lisa loop artifacts
PROMPT.md
IMPLEMENTATION_PLAN.md
specs/
```
If you committed files but later decide to ignore:
```bash
git rm --cached PROMPT.md IMPLEMENTATION_PLAN.md specs/*
# Then add to .gitignore
git commit -m "chore: remove Lisa artifacts from tracking"
```

The files remain on disk but are no longer tracked by git.
If Lisa isn't making progress, check your prompt against this checklist:
| Element | Required | Example |
|---|---|---|
| Mission | Yes | "Port library to TypeScript" |
| Success Criteria | Yes | "All tests pass" |
| Commit rule | Recommended | "Commit after every file" |
| Completion promise | Yes | `<promise>DONE</promise>` |
| Detailed instructions | NO | Claude already knows |