SpecOps - spec-driven development workflow for AI coding assistants
Spec-driven development workflow - transforms ideas into structured specifications (requirements, design, tasks) before implementation.
You describe a feature to your AI coding assistant. It starts writing code immediately. No requirements. No design. No task breakdown. You spend the next hour correcting assumptions it made in the first minute.
The problem isn't the AI. It's that nobody told it to think first.
SpecOps adds a structured thinking step to AI coding. One command triggers a four-phase workflow: requirements, design, tasks, then implementation.
Specs are git-tracked, survive across sessions, and work natively with Claude Code, Cursor, OpenAI Codex, GitHub Copilot, and Google Antigravity.
Claude Code (plugin marketplace):
/plugin marketplace add sanmak/specops
/plugin install specops@specops-marketplace
/reload-plugins
One-line install (any platform):
# Inspect the script before running it: https://github.com/sanmak/specops/blob/main/scripts/remote-install.sh
bash <(curl -fsSL https://raw.githubusercontent.com/sanmak/specops/main/scripts/remote-install.sh)
Or clone and run:
git clone https://github.com/sanmak/specops.git && cd specops && bash setup.sh
Try it:
/specops Add user authentication with OAuth
Platform-specific install details: QUICKSTART.md | Full command reference: docs/COMMANDS.md
Without SpecOps:
You: "Add OAuth authentication"
Agent: *writes auth.ts, picks JWT without asking, hardcodes Google,
skips rate limiting, creates 6 files*
You: "No, I needed GitHub too, and..." (30 min of corrections)
With SpecOps:
You: "/specops Add OAuth authentication"
Agent:
requirements.md -> 4 user stories, 12 acceptance criteria (EARS notation)
design.md -> JWT vs sessions trade-off, provider abstraction layer
tasks.md -> 8 ordered tasks with dependencies and effort estimates
Then implements each task against verified criteria.
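To give a sense of what the EARS-notation acceptance criteria look like, here is an illustrative requirements.md excerpt. The requirement text and criteria are invented for this sketch, not output from an actual SpecOps run:

```markdown
## Requirement 2: Provider selection

**User story:** As a user, I want to sign in with my existing Google or GitHub
account so that I don't have to create a new password.

**Acceptance criteria (EARS):**
- WHEN the user selects "Sign in with GitHub" THE SYSTEM SHALL redirect to the
  GitHub OAuth authorization endpoint with the configured client ID.
- WHEN the OAuth callback returns an error parameter THE SYSTEM SHALL display a
  retry prompt and SHALL NOT create a session.
```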
| Problem | How SpecOps handles it |
|---|---|
| AI starts coding without understanding the domain | 7 vertical templates: backend, frontend, infra, data pipelines, library/SDK, fullstack, builder |
| Specs lost when you close the session | Git-tracked spec files with cross-session context recovery |
| Agent forgets decisions from yesterday | Local memory layer, loaded automatically every session |
| No way to review specs before coding starts | Built-in team review workflow with configurable approval gates |
| Agent hallucinates vague acceptance criteria | EARS notation for precise requirements: WHEN [event] THE SYSTEM SHALL [behavior] |
| Specs drift from codebase after implementation | 5 automated drift checks with audit and reconcile commands |
| AI adds packages without checking maintenance or license | Dependency introduction gate: 5-criteria evaluation (scope, maintenance, size, security, license) before any install |
| Agent marks its own work as "done" without scrutiny | Adversarial evaluation: separate evaluator scores specs and implementations against hard thresholds |
| Production reveals things specs missed | Production learnings layer: capture discoveries, link to specs, surface in future work |
| Locked into one AI coding tool | One source of truth, 5 platform outputs |
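As an illustration of the dependency introduction gate, an evaluation record covering the five criteria might read like the following. The package choice and every verdict here are hypothetical, invented for this example:

```markdown
### Dependency evaluation: passport-oauth2

- Scope: covers only the OAuth2 flows this spec needs? yes
- Maintenance: active commits and responsive issue tracker? yes
- Size: acceptable install footprint and transitive dependency count? yes
- Security: no known unpatched advisories? yes
- License: compatible with the project (MIT)? yes

Verdict: approved for install
```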
Every feature of SpecOps was specified, designed, and implemented using the SpecOps workflow. All specs are public in .specops/. The friction log captures 42 lessons learned that shaped the tool.
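The per-feature spec files named earlier (requirements.md, design.md, tasks.md) suggest a layout along these lines for .specops/. The directory name used for the feature folder is an assumption for illustration, not taken from the repository:

```text
.specops/
  add-oauth-authentication/      # hypothetical feature folder name
    requirements.md              # user stories + EARS acceptance criteria
    design.md                    # trade-offs and architecture decisions
    tasks.md                     # ordered tasks with dependencies and estimates
```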
Large features that span multiple bounded contexts are automatically detected and split into coordinated specs.