Install: `npx claudepluginhub flashwade03/damascus-for-claude-code`
Damascus
Forge documents through iterative multi-LLM review
Installation · Usage · Configuration · 한국어 · 日本語
Like Damascus steel, documents become stronger through repeated forging.
Damascus is a Claude Code plugin that refines documents through an iterative review loop powered by multiple LLMs. Write an implementation plan or technical document, have it reviewed by Claude, Gemini, and OpenAI in parallel, then refine until approved.
```
/forge      [-n max] [-o path] <task description>
/forge-team [-n max] [-o path] <task description>
```
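A concrete invocation might look like this (the flag values and task text are made-up examples; per the iteration loop described later, `-n` caps the number of review iterations and `-o` sets where the document is written):

```
/forge -n 3 -o docs/plan.md Add rate limiting to the public API
```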
When Claude's plan mode doesn't get it right, you start over. New context, new exploration, new attempt — everything from the previous try is gone. Do this three times and you've spent three full plan mode runs, but the result learned nothing from prior failures.
This is rolling dice until you get a six.
Damascus takes a different approach: feedback accumulates, context is preserved, and each iteration builds on the last. The first draft's weaknesses become the second draft's input. Reviewers catch what the author missed, and the author addresses it — within the same context, not from scratch.
The result isn't random. It converges.
Development costs 3–5× more than planning. Without Damascus, you pay that cost every attempt.
Without Damascus — re-roll until it works

```
Attempt 1: Plan [====] → Develop [================] → ✗ flawed
Attempt 2: Plan [====] → Develop [================] → ✗ still flawed
Attempt 3: Plan [====] → Develop [================] → ✓ acceptable

Total tokens:  ~300K (plan) + ~900K (develop) = ~1.2M
Prior context: 0% — each attempt starts from scratch
```
With Damascus — iterate on the cheap side, develop once

```
Iteration 1: Draft  [====] → Review [==] → refine
Iteration 2: Refine [==]   → Review [==] → refine
Iteration 3: Refine [==]   → Review [==] → ✓ approved
Development: Develop [================] → ✓ done

Total tokens:  ~340K (plan + reviews) + ~300K (develop) = ~640K
Prior context: 100% — every iteration builds on the last
```
Either ~1.2M tokens on three development runs (two of them thrown away), or ~640K on three refined plan iterations and one clean development. The iteration happens where it's cheap.
Fewer tokens isn't just cheaper — it means higher information density in the context window. Re-rolling spends tokens re-exploring the same codebase from scratch. Damascus spends tokens on feedback that refines what's already known.
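The arithmetic behind the comparison is simple enough to check directly. The per-run figures below are the rough numbers from the comparison above (~100K per plan-mode run, ~300K per development run), used purely illustratively:

```python
# Rough token budgets in thousands, taken from the comparison above.
PLAN_K = 100    # one plan-mode run   (~300K across 3 attempts)
DEV_K = 300     # one development run (~900K across 3 attempts)
FORGE_K = 340   # Damascus: draft + three review/refine rounds
ATTEMPTS = 3

without = ATTEMPTS * (PLAN_K + DEV_K)  # re-roll: pay plan + develop every attempt
with_damascus = FORGE_K + DEV_K        # iterate on plans, develop once

print(without, with_damascus)                      # 1200 640
print(f"saved {1 - with_damascus / without:.0%}")  # saved 47%
```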
/forge — Sequential (v3)

```
      ┌─────────────┐
      │   Author    │  Draft the document
      └──────┬──────┘
             │
      ┌──────▼──────┐
      │    Save     │  Write to file
      └──────┬──────┘
             │
 ┌───────────┼───────────┐
 ▼           ▼           ▼
Claude     Gemini     OpenAI      Review in parallel
 └───────────┼───────────┘
             ▼
      ┌─────────────┐
      │    Judge    │──── Approved ──▶ Done
      └──────┬──────┘
             │ Needs work
             └──▶ Back to Author (up to N iterations)
```
Each iteration folds in feedback from all reviewers, strengthening the document like layers of Damascus steel. The authoring agent is resumed across iterations — it remembers every file it read, every pattern it discovered, and refines surgically instead of re-exploring from scratch.
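The sequential loop can be sketched in Python. Everything here is illustrative: `draft`, `save`, `review`, `judge`, and `refine` are hypothetical stand-ins for the Author, Save, Reviewer, and Judge steps, and the real plugin runs the three reviews in parallel and resumes the same authoring agent rather than calling plain functions:

```python
# Illustrative sketch of the /forge control flow; the role functions are
# hypothetical stand-ins, not the plugin's actual implementation.

def draft(task):
    return f"plan v1 for: {task}"          # Author: initial draft

def save(doc):
    pass                                    # plugin writes the doc to the -o path

def review(model, doc):
    # Stub reviewer: approve once the plan has been refined to v3.
    return "ok" if "v3" in doc else "needs work"

def judge(reviews):
    return "approved" if all(r == "ok" for r in reviews) else "needs revision"

def refine(doc, reviews):
    # Stub refinement: bump the version, standing in for folding reviewer
    # feedback into the next draft.
    n = int(doc.split("v")[1].split(" ")[0])
    return doc.replace(f"v{n}", f"v{n + 1}")

def forge(task, max_iters=3, reviewers=("claude", "gemini", "openai")):
    doc = draft(task)                                  # Author
    for _ in range(max_iters):
        save(doc)                                      # Save
        reviews = [review(r, doc) for r in reviewers]  # parallel in the plugin
        if judge(reviews) == "approved":               # Judge
            return doc                                 # Approved → done
        doc = refine(doc, reviews)                     # back to Author
    return doc                                         # best effort after N iterations

print(forge("add rate limiting"))  # → plan v3 for: add rate limiting
```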
/forge-team — Agent Teams (v4)

```
Lead ──▶ Planner ──▶ Explorers   (parallel codebase investigation)
              ◄──── findings
Planner ──▶ Lead                 (plan via ExitPlanMode)
Lead ──▶ Scribe                  (polish & write)
Lead ──▶ Reviewers               (parallel: Claude + Gemini + OpenAI)
         ◄──── reviews
Lead: verdict ── APPROVED ──▶ Shutdown
        │ NEEDS_REVISION
        └──▶ Planner (revise, up to N rounds)
```
Agent Teams mode uses Claude Code's Agent Teams to run multiple specialized teammates in parallel. Every teammate stays alive across rounds — full context is preserved without resume.
| Role | Count | Responsibility |
|---|---|---|
| Lead | 1 | Orchestrates rounds, determines verdict |
| Explorer | 1–3 | Investigates specific codebase areas, reports to Planner |
| Planner | 1 | Manages explorers, synthesizes findings into a plan |
| Scribe | 1 | Only agent that writes files (document + review) |
| Reviewer | 1–3 | Independent review (Claude, Gemini, OpenAI) |
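The role split can be captured as a small data sketch — a hypothetical structure, not the plugin's actual configuration schema — which makes the one invariant from the table explicit: the Scribe is the only role that writes files:

```python
# Hypothetical roster mirroring the table above; not Damascus's real config format.
team = {
    "lead":     {"count": (1, 1), "writes_files": False},  # orchestrates, verdicts
    "explorer": {"count": (1, 3), "writes_files": False},  # codebase investigation
    "planner":  {"count": (1, 1), "writes_files": False},  # synthesizes findings
    "scribe":   {"count": (1, 1), "writes_files": True},   # document + review output
    "reviewer": {"count": (1, 3), "writes_files": False},  # Claude / Gemini / OpenAI
}

writers = [role for role, spec in team.items() if spec["writes_files"]]
print(writers)  # → ['scribe']
```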
Both modes produce reviewed, multi-LLM-approved documents. The difference is depth: