This skill should be used when the user asks to 'revise writing', 'fix review issues', 'polish draft', 'apply review feedback', 'complete writing workflow', or after /writing-review produces REVIEW.md with issues to fix.
Install: `npx claudepluginhub edwinhu/workflows --plugin workflows`

This skill uses the workspace's default tool permissions.
The revision loop for writing projects. Consumes `.planning/REVIEW.md` (produced by `/writing-review`) and applies targeted fixes, then completes the workflow when all issues are resolved.
Auto-load all constraints matching applies-to: writing-revise:
!uv run python3 ${CLAUDE_SKILL_DIR}/../../scripts/load-constraints.py writing-revise
You MUST have these constraints loaded before proceeding. No claiming you "remember" them.
CRITICAL: The constraint-loading-protocol above requires loading the domain skill (writing-legal/econ/general) and ai-anti-patterns before revising any prose.
Before starting, check for an existing handoff:
If `.planning/HANDOFF.md` exists, resume from it.

START
│
├─ Step 1: Load context (ACTIVE_WORKFLOW, PRECIS, OUTLINE, drafts)
│
├─ Step 2: REVIEW.md exists?
│ ├─ NO → REFUSE. Suggest /writing-review. EXIT.
│ └─ YES → Parse issues (critical → major → minor)
│
├─ Step 3: Load constraint layers
│ ├─ Domain skill (legal/econ/general)
│ └─ ai-anti-patterns (universal)
│
├─ Step 4: Fix issues in priority order
│ ├─ 4a: Critical issues (argument-breaking)
│ ├─ 4b: Major issues (transitions, repetition, late introductions)
│ └─ 4c: Minor issues (polish)
│
├─ Step 5: Formatting check
│
└─ Step 6: Check iteration state (.planning/REVIEW_STATE.md)
│
├─ No issues remain → COMPLETE
│ └─ Archive workflow → Generate summary → EXIT
│
├─ iteration < 3 AND issues remain → CONTINUE
│ └─ Increment iteration → Re-invoke /writing-review → Loop
│
└─ iteration >= 3 AND issues remain → ESCALATE
└─ Report to user with options → EXIT
If text and flowchart disagree, the flowchart wins.
## IRON LAW: Critique Over Comfort

If the writing has problems, SAY SO. Being nice is NOT HELPFUL — the user publishes weak prose that gets rejected.
If you catch yourself thinking:
| Excuse | Reality | Do Instead |
|---|---|---|
| "It's a draft, I'll be gentle" | Drafts need MORE critique, not less | Critique hardest on drafts |
| "The main point is clear enough" | "Clear enough" means unclear | Find the ambiguity and fix it |
| "I'll focus on positives first" | Positives don't help improve writing | Lead with problems |
| "This is good enough for a first pass" | "Good enough" is reward hacking | Find specific problems to fix |
| "The user will polish it themselves" | Relying on human editing defeats the workflow | Fix it now |
| "I don't want to discourage the writer" | False kindness produces bad writing | Honest critique is the kindest act |
| "The argument flows well overall" | "Overall" hides section-level problems | Check each section against PRECIS claims |
| "Minor style issues aren't worth flagging" | Minor issues compound into unprofessional prose | Flag every issue you find |
| "The structure matches the outline" | Structural match doesn't mean quality match | Check content quality, not just structure |
| "This section is creative, rules don't apply" | Creative writing still needs clarity and precision | Apply rules, note creative exceptions explicitly |
Reporting "all checks pass" without actually running every check is NOT HELPFUL — uncaught issues survive into the final draft. You must have evidence for every checkmark. An unchecked box with "assumed OK" means the user publishes with undetected problems.
## The Iron Law of Re-Review

NO "FIXED" CLAIMS WITHOUT FRESH RE-REVIEW. This is not negotiable.
After applying fixes from REVIEW.md, you MUST:
Run /writing-review to regenerate REVIEW.md with fresh diagnostics.

"I fixed it" without re-reviewing is NOT HELPFUL — unverified fixes let broken prose reach the user.
Iteration 1: Review → REVIEW.md → Revise → Re-Review
↓
Iteration 2: Re-Review → REVIEW.md → Revise → Re-Review
↓
Iteration 3: Re-Review → REVIEW.md → Revise → Re-Review
↓
Still issues? → ESCALATE to user
All clean? → COMPLETE
Track iterations in .planning/REVIEW_STATE.md:
---
iteration: 1
max_iterations: 3
last_review_date: 2026-03-09
issues_found_count: 5
---
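Reading and creating this state file can be sketched as follows (a minimal sketch; `load_review_state` is a hypothetical helper name, and the parser assumes the simple `key: value` frontmatter shown above):

```python
from pathlib import Path

STATE_PATH = Path(".planning/REVIEW_STATE.md")

def load_review_state(path=STATE_PATH):
    """Parse the frontmatter fields, creating the file at iteration 1 if missing."""
    path = Path(path)
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text("---\niteration: 1\nmax_iterations: 3\n---\n")
    # Frontmatter is everything between the first pair of "---" markers
    frontmatter = path.read_text().split("---")[1]
    state = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        state[key.strip()] = value.strip()
    return state
```

Values come back as strings, so compare `iteration` numerically only after converting with `int()`.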
Exit criteria:
- COMPLETE (human-verify) — auto-advance to archive.
- ESCALATE (decision) — present options, wait for user.
- CONTINUE (human-verify) — auto-advance to next iteration.

Before claiming "all fixed", check iteration count:
Read `.planning/REVIEW_STATE.md` (create if missing with iteration: 1).

Claiming "all issues resolved" without re-reviewing is NOT HELPFUL — the user trusts a false "all clear" and publishes with remaining problems.
| Thought | Reality | Do Instead |
|---|---|---|
| "I fixed the issues from REVIEW.md" | Your fixes need verification | Re-invoke /writing-review |
| "Just spot-check the edited sections" | Spot-checks miss cascading changes | Full re-review via /writing-review |
| "We're on iteration 3, call it done" | Max iterations means ESCALATE, not approve | Report to user with remaining issues |
| "The edits are minor, skip re-review" | Minor edits create subtle problems | Re-review anyway |
| "We've spent enough time on this" | Publishing flawed writing wastes more time | Re-review or escalate |
| "The draft looks clean now" | Looking clean != being clean | Re-run /writing-review to verify |
You skip re-review because you think it's helpful, efficient, or competent. Here's what actually happens:
| Your Drive | Why You Skip | What Actually Happens | The Drive You Failed |
|---|---|---|---|
| Helpfulness | "Finishing fast helps the user move on" | Draft has remaining issues. User submits. Reviewers reject. The 15-minute re-review would have caught it. Your speed caused waste. | Anti-helpful |
| Competence | "I can tell the draft is clean" | Your eyes glaze over your own edits. Fresh review catches what you miss. The draft has 3 remaining issues. Your judgment failed. | Incompetent |
| Efficiency | "Re-review wastes time on fixed work" | The re-review takes 15 minutes. The rejection-and-rewrite takes 15 hours. Your "efficiency" was a 60x slowdown. | Anti-efficient |
| Approval | "User is tired of iterations" | User submits flawed draft. Reviewers reject. User now requires human editor oversight. You lost writing autonomy. | Lost approval |
| Honesty | "I fixed the issues from REVIEW.md" | You fixed THOSE issues but introduced new ones. The user publishes with new problems you created. | Anti-helpful |
The protocol is not overhead you pay. It is the service you provide.
Delivering a clean draft is the service. Feeling like you're done is not the service. The user experiences the quality of the submitted draft, not your sense of completion.
| Shortcut | Consequence |
|---|---|
| Rewriting instead of targeted fix | You rewrote the section to "improve" it. You introduced new issues and lost the author's voice — your ambition was destructive. |
| Marking fixed without checking | You marked the issue resolved without re-reading. It's still there — the user trusts your false "fixed" status. |
| Action | Why Wrong | Do Instead |
|---|---|---|
| Rewriting entire sections instead of targeted fixes | Introduces new issues, loses author voice | Apply the minimum change that resolves the issue |
| Marking issue as fixed without verifying the draft changed | You're lying about completion | Re-read the draft passage after editing |
| Applying fix without re-reading surrounding context | Fix may break adjacent text | Read the paragraph before and after |
| Skipping domain skill load because you "remember the rules" | You don't remember — you're guessing | Read() the domain skill every time |
| Combining multiple unrelated fixes in one pass | Makes it impossible to verify each fix | One issue at a time, verify each |
/writing-review produces `.planning/REVIEW.md`.

Before running edits, verify the workflow is ready:
- `.planning/ACTIVE_WORKFLOW.md`, `.planning/PRECIS.md`, `.planning/OUTLINE.md`, and at least one file in `drafts/` must exist
- `.planning/ACTIVE_WORKFLOW.md` must declare `workflow: writing`
- `.planning/REVIEW.md` must exist

If any file is missing, report and suggest the appropriate phase:
- Missing workflow files → /writing (start from brainstorm)
- Missing REVIEW.md → /writing-review first (see backward-compatibility below)

Read(".planning/ACTIVE_WORKFLOW.md")
Read(".planning/PRECIS.md")
Read(".planning/OUTLINE.md")
Read([current draft files in drafts/])
If any file is missing, report and suggest starting with /writing.
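The readiness check can be sketched like this (a sketch, not the canonical implementation; `preflight` is a hypothetical helper name, and filenames follow the workflow layout above):

```python
from pathlib import Path

REQUIRED = [
    ".planning/ACTIVE_WORKFLOW.md",
    ".planning/PRECIS.md",
    ".planning/OUTLINE.md",
]

def preflight(base="."):
    """Return the list of missing prerequisites; an empty list means ready to revise."""
    base = Path(base)
    missing = [f for f in REQUIRED if not (base / f).exists()]
    drafts = base / "drafts"
    # At least one draft file must exist before revision makes sense
    if not drafts.is_dir() or not any(drafts.glob("*.md")):
        missing.append("drafts/ (at least one draft file)")
    return missing
```

A non-empty result maps onto the suggestions above: missing workflow files point to /writing, a missing REVIEW.md points to /writing-review.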
NO REVISION WITHOUT REVIEW.md. This is not negotiable.
If .planning/REVIEW.md does not exist, REFUSE to proceed:
REVIEW.md not found. Cannot revise without a structured review diagnosis.
Run /writing-review first to produce .planning/REVIEW.md, then re-run /writing-revise.
STOP HERE. Do not fall back to inline review. Do not offer to "do a quick check instead."
Why: Inline review is shallow by design — it misses cross-section issues, transition problems, and thesis drift that only hierarchical review catches. Allowing a fallback path means the full review is never run. The review-then-revise pipeline exists because revision without diagnosis produces random edits, not targeted fixes.
| Excuse | Reality | Do Instead |
|---|---|---|
| "I can do a quick inline review" | Inline review misses structural issues | Run /writing-review |
| "The user just wants small fixes" | Small fixes without review context create new issues | Run /writing-review first |
| "REVIEW.md will be generated anyway later" | Later never comes — the user thinks revision is done | Require it now |
When REVIEW.md exists:
Read(".planning/REVIEW.md")
Parse the review into issue lists by priority: critical, major, minor.
The midpoint must be self-contained. Load ALL constraint layers before touching the draft:
Based on style in ACTIVE_WORKFLOW.md:
| Style | Load |
|---|---|
| legal | Read("${CLAUDE_SKILL_DIR}/../../skills/writing-legal/SKILL.md") |
| econ | Read("${CLAUDE_SKILL_DIR}/../../skills/writing-econ/SKILL.md") |
| general | Read("${CLAUDE_SKILL_DIR}/../../skills/writing-general/SKILL.md") |
You MUST Read() the domain skill before editing. The domain skill contains the full rules, reference material, and enforcement patterns. Editing without it produces generic fixes.
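The style-to-skill dispatch in the table can be sketched as follows (paths taken from the table above; `CLAUDE_SKILL_DIR` is assumed to be set by the runtime, and `domain_skill_path` is a hypothetical helper name):

```python
import os

SKILL_DIR = os.environ.get("CLAUDE_SKILL_DIR", ".")

DOMAIN_SKILLS = {
    "legal": f"{SKILL_DIR}/../../skills/writing-legal/SKILL.md",
    "econ": f"{SKILL_DIR}/../../skills/writing-econ/SKILL.md",
    "general": f"{SKILL_DIR}/../../skills/writing-general/SKILL.md",
}

def domain_skill_path(style):
    """Fail loudly on an unknown style rather than silently skipping the load."""
    if style not in DOMAIN_SKILLS:
        raise ValueError(f"unknown style {style!r}: expected legal, econ, or general")
    return DOMAIN_SKILLS[style]
```

Failing on an unrecognized style matters here: a silent fallback would let editing proceed without the domain rules loaded.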
Skill(skill="workflows:ai-anti-patterns")
You MUST load ai-anti-patterns before editing. This catches AI writing smell (hedging, filler, false balance, weasel words) that domain skills don't cover. Revising without it means you'll fix structural issues while leaving AI-smell intact — the reviewer will flag the same draft again for different reasons.
### Iron Law: Full Constraint Loading

NO REVISION WITHOUT ALL CONSTRAINT LAYERS. This is not negotiable.
The midpoint cannot rely on constraints loaded during earlier phases. Prior context may be compressed or lost. You must load:
- `.planning/ACTIVE_WORKFLOW.md` → workflow state
- `.planning/PRECIS.md`, `.planning/OUTLINE.md` → structural intent

Editing with only the domain skill loaded is like reviewing with one eye closed. You'll fix half the problems and miss the other half.
When applying fixes reveals unplanned issues, follow the deviation rules from constraints/deviation-rules.md:
Track deviations per fix batch. Report at Step 6: Deviations during revision: N auto-fixed (R1: X, R2: Y, R3: Z). R4 escalations: [list or "none"].
Work through REVIEW.md issues in priority order:
For each critical issue in REVIEW.md:
For each major issue:
Transition fixes (from REVIEW.md "Transition Issues" section):
Repetition fixes (from REVIEW.md "Cross-Section Repetition"):
Late introduction fixes (from REVIEW.md "Concept Introduction Order"):
For each minor issue:
Before claiming completion, check the audit-fix loop state:
1. READ `.planning/REVIEW_STATE.md` - what iteration are we on?
2. Run final check - are there remaining issues?
3. Determine verdict based on iteration + issues:
- iteration < 3 AND issues remain → CONTINUE (re-invoke /writing-review)
- iteration >= 3 AND issues remain → ESCALATE (report to user)
- no issues → COMPLETE (but first run Step 6a cite-fidelity gate)
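The three-way verdict above can be sketched as a small function (a sketch; the real Step 6 also runs the Step 6a cite-fidelity gate before COMPLETE, and `verdict` is a hypothetical helper name):

```python
def verdict(iteration, issues_remaining, max_iterations=3):
    """Map loop state to a verdict, mirroring the three branches above."""
    if issues_remaining == 0:
        return "COMPLETE"   # still subject to the Step 6a cite-fidelity gate
    if iteration < max_iterations:
        return "CONTINUE"   # re-invoke /writing-review, no user prompt
    return "ESCALATE"       # report remaining issues and options to the user
```

Note the ordering: the no-issues check comes first, so a clean draft on iteration 3 still completes rather than escalating.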
If .planning/ACTIVE_WORKFLOW.md declares an nlm_notebook, every
drafts/*.md MUST have a corresponding .planning/CITES-{slug}.md
showing status: PASSED before the verdict can be COMPLETE.
Before declaring COMPLETE, run Stage 3:
uv run ${CLAUDE_PLUGIN_ROOT}/scripts/cite-fidelity/check_section_cites.py --all
Then verify each section:
# For each drafts/*.md, check the matching CITES file
# Slug rule: "Part I (Draft).md" → "CITES-Part-I.md"
import re
from pathlib import Path

failures = []
for d in Path("drafts").glob("*.md"):
    # Strip a trailing "(Draft)" suffix, then collapse whitespace to hyphens
    slug = re.sub(r"\s+", "-", re.sub(r"\s*\(Draft\)\s*$", "", d.stem).strip())
    cites = Path(".planning") / f"CITES-{slug}.md"
    if not cites.exists():
        failures.append(f"{d.name}: missing CITES file")
        continue
    head = cites.read_text().split("---", 2)
    if len(head) < 3 or "status: PASSED" not in head[1]:
        failures.append(f"{d.name}: CITES status not PASSED")

if failures:
    print("Cite-fidelity gate FAILED:")
    for f in failures:
        print(f"  - {f}")
If any failure: return to Step 4 (Fix Issues) addressing the UNSUPPORTED cites listed in the failing CITES file. Do NOT mark COMPLETE.
If ACTIVE_WORKFLOW.md has no nlm_notebook, skip this gate (no notebook
to verify against) and proceed with the normal verdict.
See references/constraints/cite-fidelity-section-gate.md for the full
doctrine.
Generate report based on verdict:
Update .planning/REVIEW_STATE.md:
---
iteration: [N+1]
max_iterations: 3
last_review_date: [date]
issues_found_count: [count]
verdict: CONTINUE
---
IMMEDIATELY re-invoke /writing-review (no pause, no user prompt):
Read ${CLAUDE_SKILL_DIR}/../../skills/writing-review/SKILL.md and follow its instructions.
After /writing-review completes and regenerates REVIEW.md, /writing-revise will be invoked again automatically.
This is a loop, not a checkpoint. Do not pause for user input.
Update .planning/REVIEW_STATE.md:
---
iteration: 3
max_iterations: 3
last_review_date: [date]
issues_found_count: [count]
verdict: ESCALATE
---
Report to user:
Writing Review Loop Escalation (3 iterations completed)
After 3 review-revise cycles, [N] issues remain:
[List issues from REVIEW.md]
Options:
1. Accept current draft with documented limitations
2. Extend review (manual approval for iteration 4+)
3. Rethink structure (return to outline phase)
4. Human editing (exit workflow, manual fixes)
Which option do you prefer?
Update .planning/REVIEW_STATE.md:
---
iteration: [N]
max_iterations: 3
last_review_date: [date]
issues_found_count: 0
verdict: COMPLETE
---
SUMMARY: Append final phase summary to .planning/PHASE_SUMMARY.md (see constraints/phase-summary-frontmatter.md):
Archive workflow state:
mkdir -p .planning/completed-workflows
mv .planning/ACTIVE_WORKFLOW.md ".planning/completed-workflows/$(date +%Y-%m-%d)-writing.md"
Generate completion summary:
## Writing Workflow Complete
**Project**: [directory name]
**Completed**: [date]
**Style**: [legal | econ | general]
### Artifacts
- `.planning/PRECIS.md` - Thesis, audience, claims
- `.planning/OUTLINE.md` - Document structure
- `.planning/REVIEW.md` - Final review diagnosis
- `outlines/` - Detailed section outlines
- `drafts/` - Final prose
### Document Summary
- **Thesis**: [from PRECIS.md]
- **Sections**: [count]
### Next Steps
- Export to Word: `/docx`
- Export to PDF: `/pdf`
- Start new project: `/writing`