Verification-first manifest workflows for Claude Code
Manifest-driven workflows for structured task execution with verification gates.
Stop iterating with the model after implementation. Define what you'd accept, run two commands, ship it.
# Claude Code (primary)
/plugin marketplace add doodledood/manifest-dev
/plugin install manifest-dev@manifest-dev-marketplace
# Gemini CLI — everything (skills, agents, hooks)
curl -fsSL https://raw.githubusercontent.com/doodledood/manifest-dev/main/dist/gemini/install.sh | bash
# OpenCode — everything (skills, agents, commands, plugin)
curl -fsSL https://raw.githubusercontent.com/doodledood/manifest-dev/main/dist/opencode/install.sh | bash
# Codex CLI — everything (skills, TOML stubs, rules, config)
curl -fsSL https://raw.githubusercontent.com/doodledood/manifest-dev/main/dist/codex/install.sh | bash
Then use it:
# Define what to build, then execute
/define <what you want to build>
/do <manifest-path>
# Or go end-to-end autonomously:
/auto <what you want to build>
# Tend a PR through review:
/tend-pr <manifest-path-or-pr-url>
# Optional: figure something out before acting
/figure-out <topic or problem>
/define interviews you and builds a manifest. /do executes it. /auto chains both (define autonomously, auto-approve, execute) in a single command. /figure-out is optional: a truth-convergent thinking partner for when figuring it out IS the goal, or before /define when the problem space is foggy.
Control interview depth with --interview minimal|autonomous|thorough (default: thorough). Thorough asks everything. Minimal asks only about scope and high-impact items. Autonomous builds the manifest without asking and presents it for approval.
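A manifest pairs what you're building with the acceptance criteria you'd hold it to. The plugin defines its own schema; this is purely a hypothetical sketch of the shape, not the plugin's actual format:

```markdown
# Manifest: Login page (hypothetical example, not the real schema)

## Deliverable: Login form

Acceptance criteria:
- Invalid credentials show an error without clearing the password field.
- The form cannot be submitted twice.
- All criteria have an automated check attached.
```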
If you use zsh and want easy upgrade commands for the non-Claude distributions, add this to ~/.zshrc:
alias upgrade-manifest-dev-codex='curl -fsSL https://raw.githubusercontent.com/doodledood/manifest-dev/main/dist/codex/install.sh | bash'
alias upgrade-manifest-dev-gemini='curl -fsSL https://raw.githubusercontent.com/doodledood/manifest-dev/main/dist/gemini/install.sh | bash'
alias upgrade-manifest-dev-opencode='curl -fsSL https://raw.githubusercontent.com/doodledood/manifest-dev/main/dist/opencode/install.sh | bash'
alias upgrade-manifest-dev-all='upgrade-manifest-dev-codex && upgrade-manifest-dev-gemini && upgrade-manifest-dev-opencode'
Then run source ~/.zshrc once. Future updates are just upgrade-manifest-dev-codex, upgrade-manifest-dev-gemini, upgrade-manifest-dev-opencode, or upgrade-manifest-dev-all for all three.
Uninstall uses the same entrypoints:
curl -fsSL https://raw.githubusercontent.com/doodledood/manifest-dev/main/dist/gemini/install.sh | bash -s -- uninstall
curl -fsSL https://raw.githubusercontent.com/doodledood/manifest-dev/main/dist/opencode/install.sh | bash -s -- uninstall
curl -fsSL https://raw.githubusercontent.com/doodledood/manifest-dev/main/dist/codex/install.sh | bash -s -- uninstall
Instead of telling the AI how to build something, you tell it what you'd accept.
Say you need a login page. The old way: "use React Hook Form, validate with Zod, show inline errors, disable the button while submitting." You've made every design decision upfront. The manifest way: "invalid credentials show an error without clearing the password field" and "the form can't be submitted twice." You define the bar. The AI picks how to clear it. Automated verification confirms it did.
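A criterion like "the form can't be submitted twice" is checkable without prescribing the implementation. As an illustrative Python sketch (this toy model and its guard are hypothetical, not code the plugin generates or requires):

```python
class LoginForm:
    """Toy form model: an in-flight guard is one way to clear the bar."""

    def __init__(self):
        self._submitting = False
        self.submissions = 0

    def submit(self) -> bool:
        if self._submitting:
            return False  # second click ignored: criterion holds
        self._submitting = True
        self.submissions += 1
        return True


# The acceptance check only cares about observable behavior,
# not whether the guard is a flag, a disabled button, or a debounce.
form = LoginForm()
assert form.submit() is True
assert form.submit() is False
assert form.submissions == 1
```

Swap in a debounce or a disabled button and the same check still decides pass or fail; that is the point of specifying the what instead of the how.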
flowchart TD
A["/define 'task'"] --> B["Interview"]
B --> C["Manifest file"]
C --> D["/do manifest.md"]
D --> E{"For each Deliverable"}
E --> F["Satisfy ACs"]
F --> G["/verify"]
G -->|failures| H["Fix specific criterion"]
H --> G
G -->|all pass| I["/done"]
E -->|risk detected| J["Consult trade-offs, adjust approach"]
J -->|ACs achievable| E
J -->|stuck| K["/escalate"]
/define interviews you to surface what you actually want. The stuff you'd reject in a PR but wouldn't think to specify upfront. Then /do implements toward those acceptance criteria, flexible on how but not on what.
After each deliverable, /verify runs automated checks against every criterion. Failing checks say exactly what's wrong. The AI fixes what failed, only what failed, and the loop continues until everything passes or a blocker needs your attention.
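Conceptually the verify-fix cycle is a loop that re-checks every criterion, repairs only the failures, and stops when everything is green or progress stalls. A rough sketch under that assumption (names and structure are illustrative, not the plugin's implementation):

```python
def verify_loop(criteria, check, fix, max_rounds=10):
    """Re-check all criteria, fix only what failed, repeat until green."""
    for _ in range(max_rounds):
        failures = [c for c in criteria if not check(c)]
        if not failures:
            return True  # every acceptance criterion passes: /done
        for criterion in failures:
            fix(criterion)  # touch only the failing criterion
    return False  # stuck after max_rounds: escalate to a human


# Toy run: two failing checks get fixed, then verification goes green.
states = {"lint": False, "tests": False}
assert verify_loop(
    list(states),
    check=lambda c: states[c],
    fix=lambda c: states.__setitem__(c, True),
) is True
assert all(states.values())
```

The bounded round count mirrors the escalation path above: an unfixable criterion eventually surfaces as a blocker instead of looping forever.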
Your first pass lands closer to done. Issues get caught by verification before you see them, and the fix loop handles cleanup without your involvement. Every acceptance criterion has been verified, and you know what was checked.