Spec-driven development workflow where specs are the source of truth. Use this skill whenever starting new features, planning implementation, writing specs, reviewing code against specs, or when the user mentions "spec", "specs/", "spec-first", or asks how something should work. Also use when creating a new project that will use this development methodology, or when onboarding someone to a spec-driven codebase.
Install with:

```
npx claudepluginhub jarvusinnovations/agent-skills --plugin jarvus-skills
```

This skill uses the workspace's default tool permissions.
Specs declare the complete desired state of the software. Implementation follows spec. All work begins with a spec update.
This is not documentation-driven development (where docs describe what was built). This is specification-driven development — the spec describes what should exist, and the implementation is brought into conformance with it. The spec leads; the code follows.
When agents or developers implement features, they make hundreds of micro-decisions. Without a spec, each decision is a guess that may or may not match the user's intent. With a spec, those decisions are already made — the implementer's job is execution, not invention. This is especially powerful for AI-assisted development where multiple agents may work on different parts of the same system.
1. Spec change → propose what should be true
2. Accept → reviewer agrees on desired state
3. Implement → bring code into conformance
4. Verify → compare running software to spec
Code without a corresponding spec is unspecified behavior — it may exist for practical reasons, but nothing guarantees it. Spec without corresponding code is a known gap — track it.
Specs declare what must be true, not how to implement it.
Right level — declarative state + rules:
"Each row shows: from_stop_name, to_stop_name, pathway_mode label, completion fraction (populated field count / applicable field count). Sort: incomplete first, then alphabetical by from_stop_name. A pathway is 'on this level' when both its from_stop and to_stop share the same level_id."
This tells an implementer what must be true without dictating how. It's testable — you can look at the screen and verify conformance.
Too vague — feature narratives:
"The task list shows pathways grouped by level with completion tracking."
An agent reading this still has to make hundreds of decisions. Which fields? What sort order? What happens when data is missing?
Too detailed — implementation pseudocode:
"Query pathways WHERE from_stop.level_id == to_stop.level_id, LEFT JOIN field_notes, ORDER BY field_complete ASC, render each as a
<li>with..."
This is just writing the code twice. The spec rots the moment implementation diverges.
Organize specs by what they describe, not by when they were written:
```
specs/
├── README.md           # Workflow docs, directory layout, format conventions
├── architecture.md     # Tech stack, project structure, foundational decisions
├── data-model.md       # Schema, field definitions, relationships
├── api/                # One file per endpoint or endpoint group
│   ├── conventions.md  # Auth, versioning, error envelope, content types
│   └── <endpoint>.md
├── screens/            # One file per screen/route
│   └── <screen-name>.md
└── behaviors/          # Cross-cutting rules that span multiple screens
    └── <behavior>.md
```
screens/ — one file per screen/route. What the user sees and can do at that URL.
behaviors/ — rules that span multiple screens. When a screen spec says "completion fraction", the completion behavior spec defines how it's calculated.
api/ — the contract between client and server. Both sides implement to these specs.
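For orientation, here is a minimal sketch of what specs/README.md might contain, assuming the layout above; the wording is illustrative, not prescribed by this skill:

```markdown
# Specs

Specs declare the desired state of the software; implementation follows spec.
All work begins with a spec update: spec change → accept → implement → verify.

Layout:
- architecture.md: tech stack, project structure, foundational decisions
- data-model.md: schema, field definitions, relationships
- api/: client/server contracts, one file per endpoint or endpoint group
- screens/: one file per screen/route
- behaviors/: cross-cutting rules referenced by screen and API specs

Spec files state what must be true, not how to implement it.
```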
# Screen: <Name>
## Route
The path/URL for this screen.
## Data Requirements
What data this screen needs and where it comes from.
## Display Rules
Declarative description of what appears and under what conditions.
This is what a reviewer checks the implementation against.
## Actions
What the user can do and what each action causes.
## Navigation
Where you can go from here, where you came from.
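As a worked illustration, here is how this template might be filled in for the pathway example used earlier in this document; the screen name, route, and navigation targets are hypothetical:

```markdown
# Screen: Level Detail

## Route
/levels/:level_id

## Data Requirements
Pathways on this level (both from_stop and to_stop share the level_id), plus
any field notes recorded for each pathway.

## Display Rules
Each row shows: from_stop_name, to_stop_name, pathway_mode label, completion
fraction (populated field count / applicable field count).
Sort: incomplete first, then alphabetical by from_stop_name.
The completion fraction is defined in behaviors/completion.md.

## Actions
Selecting a row opens that pathway's editor for recording field notes.

## Navigation
Reached from the station overview; back returns to the station overview.
```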
# Behavior: <Name>
## Rule
The invariant or rule, stated declaratively.
## Applies To
Which screens or components this behavior affects.
## Details
Edge cases, calculations, timing, error handling.
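A hypothetical behaviors/completion.md following this template, defining the completion fraction that the screen spec above refers to; the field-applicability rules shown are illustrative assumptions:

```markdown
# Behavior: Completion

## Rule
A pathway's completion fraction is populated applicable fields divided by
applicable fields. A pathway is complete when the fraction equals 1.

## Applies To
Any screen that shows a completion fraction or sorts by completeness,
including screens/level-detail.md.

## Details
Fields that do not apply to a pathway's pathway_mode are excluded from both
counts. Empty strings count as unpopulated. If no fields are applicable,
treat the pathway as complete rather than dividing by zero.
```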
# API: <Name>
## Endpoint
Method, path, auth requirements.
## Request
Parameters, body shape with field types.
## Response
Success body shape with field types. Error cases.
## Notes
Caching, idempotency, offline implications.
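And a hypothetical API spec for an endpoint that could back the screen above; the path, parameters, and response fields are assumptions for illustration, not part of this skill:

```markdown
# API: List Pathways by Level

## Endpoint
GET /api/levels/:level_id/pathways, authenticated session required.

## Request
Path parameter level_id (string). Optional query parameter
include_complete=false to omit pathways whose completion fraction is 1.

## Response
200: JSON array of objects with id (string), from_stop_name (string),
to_stop_name (string), pathway_mode (integer), completion_fraction (number,
0 to 1). 404 if level_id does not exist.

## Notes
Responses may be cached per level; clients re-fetch after saving a field note.
GET is idempotent and safe to retry when connectivity returns.
```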
When implementing a feature or fixing a bug:
Compare the implementation against the spec, not against your own ideas of how it should work. The spec is the acceptance criteria.
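For instance, a review finding against the hypothetical screen spec sketched above might read:

```markdown
specs/screens/level-detail.md says "Sort: incomplete first, then alphabetical
by from_stop_name", but the list renders in query order. Either bring the sort
into conformance or propose a spec change before merging.
```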
When setting up a new project for this workflow, create:

- specs/ directory at the project root
- specs/README.md documenting the workflow and directory layout
- specs/architecture.md with foundational tech decisions

The spec drift auditor is a specialized agent that does an exhaustive comparison of your specs/ directory against the actual implementation, producing tables of gaps, undocumented implementations, and conflicts. To set it up in a project:
1. Copy the agent definition from this skill's references/spec-drift-auditor.md into your project at .claude/agents/spec-drift-auditor.md. Customize the "Methodology" phases to match your project's structure — for example, update Phase 3 ("Inventory the Implementation") to list the specific directories and key files in your codebase (source directories, migration paths, frontend code, infrastructure files, etc.).
2. Copy the command definition from this skill's references/audit-spec-drift.md into your project at .claude/commands/audit-spec-drift.md. This gives users a /audit-spec-drift slash command that launches the auditor agent.
3. Reference in CLAUDE.md — add a note to the project's CLAUDE.md mentioning the auditor is available, e.g.:

   ```markdown
   ## Spec Drift Auditing

   Run `/audit-spec-drift` to launch a comprehensive audit comparing specs/ against the implementation.
   ```
The reference files are located at:
- references/spec-drift-auditor.md — the agent definition (goes in .claude/agents/)
- references/audit-spec-drift.md — the command definition (goes in .claude/commands/)

Specs rot when they diverge from reality. Prevent this by: