This skill should be used when the user asks to 'write a paper', 'start a writing project', 'draft an article', 'write about', 'brainstorm writing topics', 'gather sources for a paper', 'what should I write about', or needs the writing workflow entry point for any writing task.
npx claudepluginhub edwinhu/workflows --plugin workflows

This skill is limited to using the following tools:
**Entry point for all writing tasks.** Routes to quick mode or project workflow.
Load the constraint index for the writing workflow:
!cat ${CLAUDE_SKILL_DIR}/../../references/constraints/writing-common-constraints.md
Router loads index only. Phase skills load specific atomic files relevant to their phase.
Before starting, check for an existing handoff:
.planning/HANDOFF.md exists

START
│
├─ Quick edit? ("check this paragraph", inline short text)
│ YES → Load writing-general/SKILL.md → Apply rules → Return → EXIT
│
├─ Active workflow? (.planning/ACTIVE_WORKFLOW.md exists)
│ YES → Read ACTIVE_WORKFLOW.md → Resume at current phase → EXIT
│
└─ New project
→ Phase 2: Detect Domain, Gather Sources
→ Launch writing-setup
If text and flowchart disagree, the flowchart wins.
Quick Mode Indicators (edit text directly, no workflow):
→ If quick mode: resolve the writing-general skill path (${CLAUDE_SKILL_DIR}/../../skills/writing-general/SKILL.md), Read() that file, and apply its rules to the text.
Project Mode Indicators (full workflow):
→ If project mode: Continue to Phase 2 below.
if .planning/ACTIVE_WORKFLOW.md exists:
Read(".planning/ACTIVE_WORKFLOW.md")
Read(".planning/PRECIS.md")
Read(".planning/OUTLINE.md")
→ Resume at current phase with appropriate domain skill
else:
→ Continue to Phase 3 (new project setup)
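The resume check above can be sketched as a small helper. This is illustrative only: `Read()` in the skill text is the agent's file-read tool, and plain `pathlib` reads stand in for it here.

```python
from pathlib import Path

def resume_or_new(project_dir: str = "."):
    """Return ("resume", context) if a workflow is in progress, else ("new", {})."""
    planning = Path(project_dir) / ".planning"
    if (planning / "ACTIVE_WORKFLOW.md").exists():
        # Load whichever planning docs exist; missing ones are skipped.
        context = {
            name: (planning / name).read_text()
            for name in ("ACTIVE_WORKFLOW.md", "PRECIS.md", "OUTLINE.md")
            if (planning / name).exists()
        }
        return ("resume", context)
    return ("new", {})
```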
Creates PRECIS.md (thesis, audience, claims) and OUTLINE.md (structure), then hands off to domain-specific writing skill.
Writing projects should follow this standardized structure:
project-name/
├── .planning/
│ ├── ACTIVE_WORKFLOW.md # Workflow state (auto-created)
│ ├── PRECIS.md # Thesis, audience, claims, counterarguments
│ └── OUTLINE.md # Master document structure
├── outlines/ # Detailed section/part outlines
│ ├── Part I (Outline).md
│ ├── Part II (Outline).md
│ └── ...
├── drafts/ # Prose drafts (expanded from outlines)
│ ├── Part I (Draft).md
│ ├── Part II (Draft).md
│ └── ...
├── references/ # Source materials, notes
│ ├── sources.bib # BibTeX bibliography (pandoc --citeproc reads this)
│ └── [topic-notes].md # Research notes by topic
└── scratch/ # Working files (gitignored)
└── brainstorm-notes.md
| Directory | Purpose | Tracked in Git |
|---|---|---|
| .planning/ | Workflow state + high-level docs (PRECIS, OUTLINE) | Yes |
| outlines/ | Detailed outlines per section/part | Yes |
| drafts/ | Prose versions of outlines | Yes |
| references/ | Sources, research notes | Yes |
| scratch/ | Temporary working files | No |
Writing proceeds through levels of detail:
.planning/PRECIS.md # Level 1: Thesis, claims, audience
↓
.planning/OUTLINE.md # Level 2: Master structure (sections, goals)
↓
outlines/Part I.md # Level 3: Detailed section outline (bullets, sources)
↓
drafts/Part I.md # Level 4: Prose expansion
Each level expands the previous. Don’t skip levels:
For multi-part documents:
- outlines/Part I (Outline).md
- drafts/Part I (Draft).md

For single documents:

- .planning/OUTLINE.md is sufficient
- drafts/draft.md or drafts/[title].md

When starting a new writing project, create the directories:
mkdir -p outlines drafts references scratch .planning
echo "scratch/" >> .gitignore
/writing (entry point)
│
└── skills/writing/ (this skill)
│ Mode detect, source gathering, topic exploration
│ GATE: Sources gathered, domain detected
│
└── skills/writing-setup/ (project foundation)
│ PRECIS.md, OUTLINE.md, ACTIVE_WORKFLOW.md
│ GATE: All three files exist with required content
│
└── skills/writing-outline/ (per section)
│ outlines/[Section] (Outline).md
│ GATE: Outline cross-references PRECIS claims
│
└── skills/writing-draft/ (per section)
│ Domain skill loaded (legal/econ/general)
│ drafts/[Section] (Draft).md
│ GATE: All sections drafted with depth
│
└── /writing-review (diagnose → REVIEW.md)
│ Hierarchical review: section → transition → document
│ .planning/REVIEW.md
│ GATE: All sections reviewed, all levels complete
│
└── /writing-revise (fix from REVIEW.md + complete)
Invoke this skill for:
Source searching is handled by the librarian agent (workflows:librarian), which routes through NLM first, then Readwise via the official CLI. You do NOT need direct Readwise MCP access.
ALL source gathering MUST go through the librarian agent, which enforces:
If you're about to call mcp__readwise__* or spawn a general-purpose agent for search, STOP.
NO SEARCH WITHOUT CLARIFYING INTENT FIRST. This is not negotiable.
In Gathering Mode, you MUST use AskUserQuestion to understand angle and audience BEFORE launching any librarian searches. Searching without intent produces scattered results that don't serve an argument.
If you find yourself about to search before the user has confirmed their angle:
Searching before clarifying is like outlining before having a thesis. You'll gather sources for a topic, not an argument. The sources won't support any specific claim because you don't have one yet.
For a topic with N distinct themes, launch N parallel librarian agents:
Task(
subagent_type="workflows:librarian",
prompt="""Search for highlights and sources about **[THEME]**.
Check NLM notebooks first, then search Readwise.
Return ONLY:
- Top 3 most relevant sources (title, author)
- Top 3 quotes worth citing (with source attribution)
- 1-2 sentence theme summary"""
)
Launch 5 parallel librarian agents:
Each returns ~100 words instead of ~5000 words of raw highlights.
When user wants to find topics ("what should I write about?"):
Survey knowledge base
Analyze patterns
Present topic candidates
When user has a topic (“gather sources on X”), follow this human-in-the-loop workflow:
BEFORE any search, use AskUserQuestion to understand:
AskUserQuestion(questions=[
    {
        "question": "What's your primary angle or thesis for this piece?",
        "header": "Angle",
        "options": [
            {"label": "Critique existing framework", "description": "Argue current approach is flawed"},
            {"label": "Propose reform", "description": "Offer specific policy changes"},
            {"label": "Comparative analysis", "description": "Compare approaches across jurisdictions"},
            {"label": "Empirical analysis", "description": "Present data-driven findings"}
        ],
        "multiSelect": false
    },
    {
        "question": "Who is your target audience?",
        "header": "Audience",
        "options": [
            {"label": "Law review", "description": "Academic legal audience"},
            {"label": "Practitioners", "description": "Lawyers, regulators, compliance"},
            {"label": "Policy makers", "description": "Legislators, agency staff"},
            {"label": "General educated", "description": "Informed non-specialists"}
        ],
        "multiSelect": false
    }
])
Decompose into themes based on clarified intent
Launch parallel librarian agents
subagent_type="workflows:librarian" for each theme

Synthesize results
Present a summary of findings to the user for confirmation:
Ask for feedback before proceeding to project setup.
The actual OUTLINE.md and PRECIS.md creation happens in the next phase (writing-setup), not here. Brainstorm's job is to gather and synthesize, not to create project artifacts.
Present brainstorm results as a summary:
# [Topic Title]
## Thesis/Angle
[One-sentence framing]
## Key Sources
- **[Source 1]** by [Author]
- “[Highlight quote]”
- Relevant to: [subtopic]
## Outline
### [Subtopic 1]
- Point A (Source 1, Source 3)
- Point B (Source 2)
### [Subtopic 2]
...
## Open Questions
- [Questions highlights don’t answer]
## Next Steps
- Suggested writing skill: /writing-[domain]
After gathering sources, detect the topic domain and load the appropriate skill:
| Domain Indicators | Style | Skill to Load |
|---|---|---|
| Legal cases, statutes, law reviews, constitutional | legal | skills/writing-legal/SKILL.md |
| Economics, markets, policy, data, empirical | econ | skills/writing-econ/SKILL.md |
| General/other | general | skills/writing-general/SKILL.md |
Domain-specific enforcement rules are applied during the draft phase (writing-draft skill), not during brainstorm. Brainstorm only detects the domain; enforcement happens later.
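The detection table above can be sketched as a keyword scorer. The keyword lists below are illustrative stand-ins for the table's indicators, not the skill's actual detection logic:

```python
DOMAIN_INDICATORS = {
    "legal": ["case", "statute", "law review", "constitutional", "court"],
    "econ": ["economics", "market", "policy", "empirical", "regression"],
}

def detect_domain(source_text: str) -> str:
    """Return the domain whose indicators appear most often, else "general"."""
    text = source_text.lower()
    scores = {
        domain: sum(text.count(kw) for kw in keywords)
        for domain, keywords in DOMAIN_INDICATORS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"
```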
| Need | Action |
|---|---|
| Survey topic landscape | Dispatch librarian: "What topics are in my NLM notebooks and Readwise tags?" |
| Find highlights by keyword | Dispatch librarian: "Search for highlights about [topic]" |
| Get book/article highlights | Dispatch librarian: "Get highlights from [title] and summarize" |
| Full document text | Dispatch librarian: "Fetch full text of articles tagged [tag]" |
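The dispatch table above amounts to a set of prompt templates. `librarian_prompt` below is a hypothetical helper (not part of the plugin); the string it returns would be passed as the prompt of a Task(subagent_type="workflows:librarian", ...) call:

```python
# Prompt templates mirroring the quick-reference table; names are illustrative.
LIBRARIAN_PROMPTS = {
    "survey": "What topics are in my NLM notebooks and Readwise tags?",
    "keyword": "Search for highlights about {topic}",
    "highlights": "Get highlights from {title} and summarize",
    "full_text": "Fetch full text of articles tagged {tag}",
}

def librarian_prompt(need: str, **params) -> str:
    """Fill the template for the given need with the caller's parameters."""
    return LIBRARIAN_PROMPTS[need].format(**params)
```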
User: “I want to write something but don’t know what”
Process:
User: “Let’s brainstorm a law review article about retail access to private equity”
Process:
For topics with many research themes, launch parallel librarian agents that each own a research angle:
# Launch 3 librarian agents in a SINGLE message (parallel)
Task(subagent_type="workflows:librarian", prompt="Search for sources SUPPORTING the thesis: [thesis]. Return top quotes and sources.")
Task(subagent_type="workflows:librarian", prompt="Search for sources OPPOSING the thesis: [thesis]. Steel-man the counterarguments.")
Task(subagent_type="workflows:librarian", prompt="Search for empirical evidence and data related to: [thesis]. Focus on numbers and findings.")
This produces better-grounded brainstorming than sequential searches because parallel agents find contradictions you'd otherwise miss.
Before proceeding to project setup:
"User seemed to agree" is not confirmation. AskUserQuestion response or explicit typed confirmation is confirmation. Inferring agreement from silence or topic continuation is rubber-stamping the gate.
| Excuse | Reality | Do Instead |
|---|---|---|
| "I already know enough about this topic" | Your training data is not research | Search for real sources |
| "One search is enough" | One search finds one perspective | Decompose into 3-6 parallel searches |
| "The user seems impatient, skip the interview" | Wrong objectives waste more time than questions | Ask the clarifying questions |
| "I'll gather more sources later" | Later never comes; you'll draft with what you have | Gather sources now |
| "This topic is straightforward" | "Straightforward" means you haven't thought deeply enough | Find the complexity |
Skipping source gathering is NOT HELPFUL — the user publishes unsupported claims that reviewers reject. Your training knowledge is not research. Your recall is not citation.
| Your Drive | Why You Skip | What Actually Happens | The Drive You Failed |
|---|---|---|---|
| Helpfulness | "Getting to drafting fast helps the user see progress" | The draft has no evidence. Every claim is an assertion. The user submits and reviewers reject for lack of sources. Your speed destroyed their credibility. | Anti-helpful |
| Competence | "I already know enough about this topic to skip research" | You searched nothing. The paper misses the 3 most relevant recent sources. A librarian search would have found them in 2 minutes. Your expertise was ignorance. | Incompetent |
| Efficiency | "The user interview wastes time — I can infer the angle" | You inferred wrong. The paper argues critique when the user wanted reform. You rewrote from scratch. The 5-minute interview would have saved 2 hours. | Anti-efficient |
| Approval | "The user seems eager to start writing" | You skipped clarification to please them. The draft argues the wrong thesis. The user now questions whether you understand their work at all. You lost trust. | Lost approval |
| Honesty | "I cited from memory — I know these sources" | Your training data citations are wrong or outdated. The user submits fabricated sources — their credibility is destroyed. | Anti-helpful |
| Action | Why Wrong | Do Instead |
|---|---|---|
| Jumping to PRECIS creation without source gathering | PRECIS without sources = thin argument | Gather sources first |
| Skipping the user interview about angle/audience | You'll brainstorm for the wrong audience | Ask the clarifying questions |
| Running a single search instead of parallel librarian agents | Single search misses themes | Decompose into 3-6 parallel librarian searches |
| Calling Readwise MCP tools directly | Violates librarian Iron Law, pollutes context | Always dispatch workflows:librarian |
| Detecting domain without checking source indicators | Wrong domain = wrong style enforcement later | Check the domain detection table |
| Moving to setup before user confirms the topic | User approval is the gate | Present findings, get confirmation |
After the user confirms topic and sources are gathered, IMMEDIATELY proceed to writing-setup. Do NOT ask "should I continue?" or "ready to proceed?" or any variant.
The gate passed. The user confirmed. Asking permission to continue is procrastination disguised as courtesy. Load the next skill and execute it.
After brainstorm is complete, proceed to project setup:
Read ${CLAUDE_SKILL_DIR}/../../skills/writing-setup/SKILL.md.
Then follow its instructions immediately to create PRECIS.md, OUTLINE.md, and ACTIVE_WORKFLOW.md.