From vini-workflow
Creates production-grade PRD for greenfield projects via discovery interview, battle-tested architecture defaults, adversarial analysis, and sprint-ready specs output.
npx claudepluginhub vinicius91carvalho/.claude --plugin vini-workflow
This skill uses the workspace's default tool permissions.
This skill takes a project idea and produces a complete, sprint-ready PRD through structured discovery and adversarial analysis. It bridges the gap between "I have an idea" and "here's exactly what to build and how."
/plan generates PRDs for tasks within an existing project (features, bugs, refactors).
/create-project generates PRDs for entirely new projects — it covers market strategy,
tech stack selection, architecture decisions, security model, data design, and
implementation roadmap. The output is compatible with the existing sprint system so
/plan-build-test can execute it.
Phase 0: Discovery Interview (ask before generating anything)
Phase 1: Parallel Deep Analysis (5 virtual analysis tracks)
Phase 2: Cross-Agent Consolidation (consistency, gaps, feasibility)
Phase 3: PRD Generation (single markdown file)
Phase 4: Sprint Extraction (compatible with /plan-build-test)
Before generating anything, ask the user ALL discovery questions in a single message. Group them clearly. Wait for answers before proceeding.
Read ${CLAUDE_PLUGIN_ROOT}/skills/create-project/references/discovery-interview.md for the full
question set, which covers four areas.
For any question where the user says "recommend" or doesn't have a preference, apply the
architecture defaults. Read ${CLAUDE_PLUGIN_ROOT}/skills/create-project/references/architecture-defaults.md
for the default decisions and their rationale.
Important: These defaults are battle-tested recommendations, not mandates. The user can override any decision. When applying a default, briefly explain why it's the recommendation so the user can make an informed choice. If the user specifies a different tech or pattern, respect that completely — the defaults exist to reduce decision fatigue, not to force a stack.
When the user says "recommend" for the tech stack:
Some tools benefit from being installed at the system level (not per-project). When the architecture defaults suggest tools like pnpm, Docker, or LocalStack, tell the user: "These tools work best installed globally on your system. Want me to check if they're available and help set them up?" Only proceed with system setup if the user agrees.
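A minimal sketch of that availability check, assuming a POSIX shell (the tool names here just mirror the examples above and would come from the architecture defaults in practice):

```shell
# Hypothetical sketch: report which of the suggested tools are on PATH.
# Tool names (pnpm, docker) mirror the examples above; adjust per project.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}

for tool in pnpm docker; do
  check_tool "$tool"
done
```

If anything reports missing, that is the point to ask the user before touching system-level setup.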
After receiving answers, run five analysis tracks. These are "virtual agents" — structured reasoning passes, each owning a domain. They challenge each other's assumptions, and only surviving decisions make it to the final PRD.
Input: Product & Market answers.
Output: Vision, Market Data, Benchmark Analysis, Competitor Map, Positioning.
Input: Technical answers + architecture defaults.
For each major architectural decision, run the adversarial ADR process:
Tiebreaker when speed and scale conflict:
Minimum ADRs: Compute model, application architecture, database, auth/security, observability, testing strategy. Add AI/agent architecture if the project uses AI/LLM.
Input: Constraints, multi-tenancy model, ADR outputs.
Output: Credential Management, Threat Model table, Compliance Mapping, Worst-Case Analysis.
Input: ADR outputs, security constraints, integrations.
Output: Project Structure, Module Descriptions, Data Model, Port Interfaces.
Input: All previous outputs, timeline, team.
Output: Roadmap, MVP Definition of Done, Testing Pyramid, DX Plan.
After all tracks complete, run a consolidation pass: check cross-track consistency, surface gaps, and validate feasibility before anything reaches the PRD.
Generate the final PRD. Read ${CLAUDE_PLUGIN_ROOT}/skills/create-project/references/prd-output-template.md
for the complete output format.
Location: docs/tasks/project/feature/YYYY-MM-DD_HHmm-project-name/spec.md
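One way to derive that location, assuming a POSIX shell; `my-app` is an illustrative slug, not part of the spec (the real name comes from the discovery interview):

```shell
# Hypothetical sketch: build the timestamped PRD directory and spec path.
# "my-app" is a placeholder slug; the real name comes from the interview.
STAMP="$(date +%Y-%m-%d_%H%M)"
PRD_DIR="docs/tasks/project/feature/${STAMP}-my-app"
mkdir -p "${PRD_DIR}"
SPEC_PATH="${PRD_DIR}/spec.md"
echo "${SPEC_PATH}"
```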
The PRD includes the sections defined in that template.
After writing spec.md, run the sprint extraction protocol.
Full protocol (sprint spec format, file boundary rules, progress.json schema, INVARIANTS.md
format, validate-sprint-boundaries.sh invocation, Build Candidate tagging):
${CLAUDE_PLUGIN_ROOT}/skills/plan/sprint-extraction-protocol.md
Summary: create sprints/NN-title.md per sprint (self-contained, file boundaries declared)
→ create progress.json → create INVARIANTS.md → run
bash ${CLAUDE_PLUGIN_ROOT}/hooks/scripts/validate-sprint-boundaries.sh <prd-dir> → tag Build Candidate.
Maximum 5 sprints per PRD. This ensures full compatibility with /plan-build-test.
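The summary above can be sketched as a shell sequence. File contents here are placeholders (the real sprint spec format and progress.json schema live in the protocol file), and the validation call is shown commented out because it depends on where the plugin is installed:

```shell
# Hypothetical sketch of the sprint-extraction file layout.
# Directory name and file contents are placeholders, not the real spec format.
PRD_DIR="docs/tasks/project/feature/2024-01-01_0900-my-app"
mkdir -p "${PRD_DIR}/sprints"

printf '# Sprint 01: Scaffolding\n' > "${PRD_DIR}/sprints/01-scaffolding.md"
printf '{"sprints":[{"id":"01-scaffolding","status":"pending"}]}\n' > "${PRD_DIR}/progress.json"
printf '# Invariants\n' > "${PRD_DIR}/INVARIANTS.md"

# Boundary validation (path depends on the plugin installation):
# bash "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/validate-sprint-boundaries.sh" "${PRD_DIR}"
```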
The PRD is NOT complete until it passes evaluation: run the Spec Self-Evaluator (same as /plan) by spawning a separate haiku agent to evaluate. The spec must score at least 11/14 with zero cross-section contradictions.
Tell the user:
"PRD saved at [directory-path]/. Sprint specs extracted to sprints/.
INVARIANTS.md created. Build Candidate tagged.
Run /plan-build-test to start building, or review and adjust first.
If you want to scaffold the project directory structure now, I can do that too."
Do NOT execute. This skill produces the plan only.