From shipshitdev-library
Enforces a spec-first workflow for non-trivial builds: frames the problem and creates spec, todo, and decisions files before coding, so decisions are explicit and outcomes are verifiable.
Install: `npx claudepluginhub shipshitdev/skills`. This skill uses the workspace's default tool permissions.
A structured workflow for LLM-assisted coding that delays implementation until decisions are explicit.
- **Delay implementation until tradeoffs are explicit.** Use conversation to clarify constraints, compare options, and surface risks. Only then write code.
- **Treat the model like a junior engineer with infinite typing speed.** Provide structure: clear interfaces, small tasks, explicit acceptance criteria. Code is cheap; understanding and correctness are scarce.
- **Specs beat prompts.** For anything non-trivial, create a durable artifact (a spec file) that can be re-fed, diffed, and reused across sessions.
- **Generated code is disposable; tests are not.** Assume rewrites. Design for easy replacement: small modules, minimal coupling, clean seams, strong tests.
- **The model is over-confident; reality is the judge.** Everything important gets verified by execution: tests, linters, typecheckers, reproducible builds.
Stage A: Framing. Goal: decide before you implement.
Prompts that work:
Output: Decision notes for .agents/DECISIONS/[feature-name].md
Stage B: Spec. Goal: turn decisions into unambiguous requirements.
File: .agents/SPECS/[feature-name].md
```markdown
# [Feature Name] Spec

## Purpose
One paragraph: what this is for.

## Non-Goals
Explicitly state what you are NOT building.

## Interfaces
Inputs/outputs, data types, file formats, API endpoints, CLI commands.

## Key Decisions
Libraries, architecture, persistence choices, constraints.

## Edge Cases and Failure Modes
Timeouts, retries, partial failures, invalid input, concurrency, idempotency.

## Acceptance Criteria
Bullet list of testable statements. Avoid "should be fast."
Prefer: "processes 1k items under 2s on M1 Mac."

## Test Plan
Unit/integration boundaries, fixtures, golden files, what must be mocked.
```
Stage C: TODO. Goal: a stepwise checklist where each step has a verification command.
File: .agents/TODOS/[feature-name].md
```markdown
# [Feature Name] TODO

- [ ] Add project scaffolding (build/run/test commands)
  Verify: `npm run build && npm test`
- [ ] Implement module X with interface Y
  Verify: `npm test -- --grep "module X"`
- [ ] Add tests for edge cases A/B/C
  Verify: `npm test -- --grep "edge cases"`
- [ ] Wire integration
  Verify: `npm run integration`
- [ ] Add docs
  Verify: `npm run docs && open docs/index.html`
```
Each item must be independently checkable. This prevents "looks right" progress.
Stage D: Execution. Goal: small diffs, frequent verification, controlled context.
Rules:
For large codebases:
Goal: Force the model to try to break its own work.
Prompts:
Goal: Keep the system easy to delete and rewrite.
Heuristics:
Keep in the .agents/ folder (not project root):
```
.agents/
├── SPECS/
│   └── [feature-name].md     # what/why/constraints
├── TODOS/
│   └── [feature-name].md     # steps + verification commands
└── DECISIONS/
    └── [feature-name].md     # tradeoffs, rejected options, assumptions
```
Naming: Use the feature/task name as the filename (e.g., `user-auth.md`, `api-refactor.md`).
Why the .agents/ folder:
Before running autonomous/agentic execution, verify:
| Dimension | Question | If No... |
|---|---|---|
| Intent | Do you have acceptance criteria and a test harness? | Don't run agent |
| Memory | Do you have durable artifacts (spec/todo) so it can resume? | It will thrash |
| Planning | Can it produce/update a plan with checkpoints? | It will improvise badly |
| Authority | Is what it can do restricted (edit, test, commit)? | Too risky |
| Control Flow | Does it decide next step based on tool output? | It's just generating blobs |
| Tools | Does it have minimum necessary tooling and nothing extra? | Attack surface too large |
Approve at meaningful checkpoints (end of todo item, after test suite passes), not every micro-step.
Authoritarian (for correctness):

```
Edit these files: [paths]
Interface: [exact signatures]
Acceptance criteria: [list]
Required tests: [list]
Don't change anything else.
```
Options and tradeoffs (for design):

```
Give me 3 options and a recommendation.
Make the recommendation conditional on constraints A/B/C.
```
Context discipline (for large codebases):

```
Only use the files I provided.
If you need more context, ask for a specific file and explain why.
```
Make it provable:

```
Add a test that fails on the buggy version and passes on the correct one.
```
When this skill activates, produce:
```
SPEC-FIRST WORKFLOW

STAGE A - FRAMING:
[3 approaches with tradeoffs]
[Recommendation]

STAGE B - SPEC:
[Draft spec.md content]

STAGE C - TODO:
[Draft todo.md with verification commands]

Ready to proceed to Stage D (execution)?
```