Automate full software development lifecycle using VGV best practices: brainstorm features collaboratively, generate structured implementation plans, manage git branches/commits/PRs, execute plans with code and tests, run parallel multi-agent reviews for code quality/architecture/testing, and produce post-incident debriefs.
npx claudepluginhub verygoodopensource/very_good_claude_code_marketplace --plugin vgv-wingspan
Analyzes implementation plans for scope and recommends splitting large plans into multiple independently-mergeable PRs. Use during plan technical review to catch oversized plans before development begins. <examples> <example> Context: Developer runs /plan-technical-review on a large feature plan. user: "Review this plan for the new authentication flow — it touches API client, repository, state management, and three screens." assistant: "I'll run the plan-splitting agent to assess whether this should be split across multiple PRs." <commentary> Plans spanning multiple layers (data, domain, presentation) with new packages are strong candidates for splitting. </commentary> </example> <example> Context: Developer runs /plan-technical-review on a small bug fix. user: "Review this plan for fixing the cart total calculation." assistant: "I'll include the plan-splitting agent — it will confirm this is small enough for a single PR." <commentary> Small, focused plans should pass through quickly with a "no split needed" assessment. </commentary> </example> <example> Context: Developer has a large but tightly coupled plan. user: "Review this plan — it adds a single complex component with its state management, repository, and API client, all interdependent." assistant: "I'll run the plan-splitting agent to check if this can be split, or if the coupling means it should stay as one PR." <commentary> Not all large plans can be split. The agent should recognize tight coupling and recommend keeping as a single PR with a scope warning rather than forcing an awkward split. </commentary> </example> </examples>
Analyzes specifications and feature descriptions for user flow completeness and gap identification. Use when a spec, plan, or feature description needs flow analysis, edge case discovery, or requirements validation.
Final review pass to ensure code is as simple and minimal as possible. Use after implementation is complete to identify YAGNI violations and simplification opportunities.
Conducts a thorough review of the given codebase, ensures code quality standards are met, and validates that the codebase applies its patterns consistently. <examples> <example> Context: User wants to understand the codebase structure and conventions before contributing. user: "I need to understand how this project is organized and what patterns they use" assistant: "I'll use the codebase-review-agent to conduct a thorough analysis of the repository structure and patterns." <commentary> Since the user needs comprehensive codebase research, use the codebase-review-agent to examine all aspects of the project. </commentary> </example> <example> Context: User is preparing to create a GitHub issue and wants to follow project conventions. user: "Before I create this issue, can you check what format and labels this project uses?" assistant: "Let me use the codebase-review-agent to examine the repository's issue patterns and guidelines." <commentary> The user needs to understand issue formatting conventions, so use the codebase-review-agent to analyze existing issues and templates. </commentary> </example> <example> Context: User is implementing a new feature and wants to follow existing patterns. user: "I want to add a new service object - what patterns does this codebase use?" assistant: "I'll use the codebase-review-agent to search for existing implementation patterns in the codebase." <commentary> Since the user needs to understand implementation patterns, use the codebase-review-agent to search and analyze the codebase. </commentary> </example> </examples>
Reviews code against Very Good Ventures engineering standards. Use after implementing features, modifying code, creating new packages, or before opening PRs. Enforces architecture, state management conventions, testing quality, and code simplicity. <examples> <example> Context: The user has just implemented a new feature with state management and wants it reviewed. user: "I just finished implementing the authentication feature with a new service and state management" assistant: "I'll use the VGV review agent to evaluate this implementation against our engineering standards." <commentary> New state management implementations should be reviewed for proper design, layer separation, test coverage, and adherence to VGV conventions. </commentary> </example> <example> Context: The user has added state management that deviates from the project pattern. user: "I added a different state management approach for managing the shopping cart state" assistant: "Let me invoke the VGV review agent to analyze this architectural decision." <commentary> Using a different state management pattern than the project standard is an architectural deviation that should be reviewed critically. </commentary> </example> <example> Context: The user has created a new package in the monorepo. user: "I've created a new package under packages/ for the payments feature" assistant: "I'll have the VGV review agent check the package structure, layering, and conventions." <commentary> New packages should follow the project's monorepo conventions, layer separation, linting setup, and testing scaffolding. </commentary> </example> <example> Context: The user has refactored existing code and wants a quality check. user: "I refactored the user profile feature to reduce code duplication" assistant: "Let me run the VGV review agent to ensure the refactor maintains our quality bar and doesn't introduce regressions." 
<commentary> Refactors to existing code should be reviewed strictly for regressions, clarity improvements, and whether the changes actually simplify rather than shift complexity. </commentary> </example> </examples>
Validates project architecture against VGV standards post-implementation. Use after writing code to verify layer separation, state management correctness, dependency direction, and package structure. <examples> <example> Context: The user has implemented a new feature across multiple layers and wants an architecture check. user: "I just added the checkout feature with a new service, repository, and API client. Is the architecture clean?" assistant: "I'll use the architecture review agent to validate layer separation and dependency direction." <commentary> Multi-layer implementations need verification that presentation doesn't import data directly, dependencies flow correctly, and state management patterns are proper. </commentary> </example> <example> Context: The user has added a new package to a monorepo. user: "I created a new payments package. Can you check it follows our architecture?" assistant: "Let me run the architecture review agent to verify the package structure and layer boundaries." <commentary> New packages must have a proper dependency manifest, linting configuration, correct layer separation, and proper dependency direction. </commentary> </example> <example> Context: The user has refactored state management and wants validation. user: "I converted the settings feature to use a different state management approach. Is everything wired correctly?" assistant: "I'll use the architecture review agent to verify the state management implementation follows VGV conventions." <commentary> State management migrations need careful review: naming should be descriptive, states should be immutable, no business logic in UI, and proper provider/injection usage. </commentary> </example> </examples>
Checks PR readiness — formatting, static analysis, debug artifacts, and commit hygiene — to catch mechanical issues before opening a pull request.
Reviews test coverage and quality for implementations. Use after code is written to verify every state management unit, repository, and UI component has proper tests following VGV conventions. <examples> <example> Context: The user has finished implementing a feature and wants test coverage reviewed. user: "I just finished implementing the notifications feature with tests. Can you review the test quality?" assistant: "I'll use the test quality review agent to evaluate coverage and adherence to project testing patterns." <commentary> New feature implementations need test coverage verification: every state management unit, UI component, and repository must have a test file following VGV conventions. </commentary> </example> <example> Context: The user has written state management tests and wants to check for anti-patterns. user: "I wrote tests for the cart service — are they solid?" assistant: "Let me run the test quality review agent to check for anti-patterns and coverage gaps." <commentary> State management tests should follow VGV conventions, cover success/failure/edge cases, use proper mocking, and avoid tautological assertions. </commentary> </example> <example> Context: The user wants a pre-PR test quality check. user: "Before I open a PR, can you verify the tests are up to standard?" assistant: "I'll use the test quality review agent to audit test quality across the changed files." <commentary> Pre-PR test reviews should verify completeness, pattern compliance, meaningful assertions, and absence of anti-patterns. </commentary> </example> </examples>
Researches and synthesizes best practices for the project's technology stack, consulting VGV conventions and the project's CLAUDE.md first, then official language and framework documentation, and finally other industry standards.
Gathers comprehensive documentation and best practices for frameworks, libraries, or dependencies. Use when you need official docs, version-specific constraints, or implementation patterns.
Stages, commits, pushes, and opens a pull request following project conventions and the Conventional Commits spec. Accepts an optional skip-checks argument to bypass validation when called from /build.
Scaffolds a new project by routing to the right companion plugin's create skill.
Produces a structured post-incident analysis — timeline, root cause, and actionable follow-ups — while context is fresh.
Applies Strunk's Elements of Style principles when writing or editing prose.
Applies a minimal, targeted fix for emergency bugs — enforces review and testing without brainstorm or planning phases.
Conducts a comprehensive technical review of an implementation plan, ensuring it meets requirements and follows best practices.
Turns high-level brainstorming and ideas into well-structured, actionable implementation plans.
Rebases the current feature branch onto the base branch (main/master/develop).
Reviews and refines brainstorm or planning documents before implementation. Identifies gaps, clarifies assumptions, and ensures the approach is sound.
Runs quality review agents on demand — reviews code, assesses quality, and identifies issues before merging.
Explores requirements and approaches through collaborative dialogue before planning implementation.
Executes an implementation plan — writes code and tests, runs quality review, and ships a pull request.
Sets up a workspace branch or worktree before writing artifacts.
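For context, worktree-based isolation typically reduces to a single git command. A sketch, where the branch name and sibling directory are illustrative, not what the skill actually uses:

```shell
# Illustrative worktree setup: check out a new branch "feature/demo"
# in a sibling directory, leaving the current checkout untouched.
git worktree add ../repo-feature-demo -b feature/demo
```

Each worktree shares the repository's object store but has its own working directory and checked-out branch, so artifacts written there never dirty the main checkout.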
Proposes and creates conventional commit messages for staged changes, following the Conventional Commits spec and VGV workflow.
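The Conventional Commits format it follows is `type(scope): description`. Two illustrative messages (the scopes and descriptions are made up, not output of this command):

```shell
# Illustrative conventional-commit messages; scopes and texts are hypothetical.
git commit -m "feat(auth): add biometric login flow"
git commit -m "fix(cart): correct total calculation for discounted items"
```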
AI-powered development tools for code review, research, design, and workflow automation.
A curated set of skills for each stage of development — propose, spec, design, plan, implement, ship.
End-to-end development workflow: design → draft-plan → orchestrate → review → pr-create → pr-review → pr-merge
Software engineering skills from Code Complete and A Philosophy of Software Design: 20 skills across 3 agents (build, post-gate, debug), a building workflow with adaptive gates (BUILD, REVIEW, commit), and scientific debugging via the debug-agent.
Implementation planning, execution, and PR creation workflows with multi-agent collaboration.
AI-powered cascading development framework with design document system and multi-agent collaboration. Breaks down projects into Features (Mega Plan), Features into Stories (Hybrid Ralph), with auto-generated technical design docs, dependency-driven batch execution, Git Worktree isolation, and support for multiple AI agents (Codex, Amp, Aider, etc.).