By ryanthedev
Execute a gated software engineering workflow powered by 20+ skills and agents: plan complex tasks, build with TDD and design checklists, pass automated reviews and sanity gates, debug via the scientific method, refactor safely, and commit via git worktrees only after verification.
npx claudepluginhub ryanthedev/code-foundations

Execute plans through gated phases with subagent dispatch.
Guide systematic debugging
Validate technical feasibility with minimum code before full implementation
Code review with checklist-driven checks
Setup tree-sitter CLI and grammars for AST-powered code review
Brainstorm and plan features
Discovery, design, and TDD implementation in one pass. Scopes phase work, makes design decisions, and implements via test-driven development.
Distilled from 99 checks via a 7-agent blind consensus study. These are the checks that multiple independent agents identified as most critical for catching production bugs.
Investigate and fix bugs using the scientific debugging method: predict, log, run, resolve. Returns root-cause analysis and a fix.
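The predict-log-run-resolve loop can be sketched as follows. This is a minimal illustration only; the buggy function and the suspected defect are hypothetical, not part of the skill itself.

```python
# Hypothetical buggy function: intended to average a list. The suspected
# defect is integer division (//) truncating the result.
def average(values):
    return sum(values) // len(values)  # suspect: // should be /

# 1. Predict: if truncation is the bug, average([1, 2]) returns 1, not 1.5.
prediction = 1

# 2. Log + run: execute the experiment and record the observed value.
observed = average([1, 2])
print(f"predicted={prediction} observed={observed}")

# 3. Resolve: the observation matches the prediction, confirming the root
#    cause, so apply the fix (true division) and re-run to verify.
def average_fixed(values):
    return sum(values) / len(values)

assert average_fixed([1, 2]) == 1.5
```

The point of the loop is that each run tests one explicit hypothesis, so a matching observation confirms a root cause rather than merely making a symptom disappear.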
Review building-phase implementation against plan requirements and test coverage. Checks requirement fulfillment (done-when items), test coverage of those done-when items, dead code, correctness verification, and defensive programming. Returns PASS or FAIL with specific findings.
Focused on design, correctness, performance, and structure. Security/input validation deferred to full PR review.
Use when designing modules, APIs, or classes before implementation.
Use when reviewing code, assessing interfaces, during PR review, or evaluating 'is this too complex?' Triggers on: code review, design review, module complexity, interface assessment, PR review, structural analysis.
Use when code is too complex, has scattered error handling, configuration explosion, or callers doing module work. Triggers on: too complex, simplify, scattered errors, configuration proliferation, verbose error handling.
Use after implementing code. Triggers on: is it done, ready to commit, verify correctness, did I miss anything, pre-commit check.
Use when designing system architecture, drawing boundaries between business logic and infrastructure, or when changes touch many unrelated files. Triggers on: architecture design, dependency direction, separating business rules from database/UI/frameworks.
Use when code has deep nesting (3+ levels), complex conditionals, loop design questions, high cyclomatic complexity (McCabe >10), or callback hell. Symptoms: arrow-shaped code, repeated conditions, confusing loop exits, lengthy if-else chains.
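The arrow-shaped pattern and its usual fix, guard clauses, can be sketched as below. The order-shipping rules are hypothetical, chosen only to show the shape change.

```python
# Arrow-shaped code: three nested conditions push the real work deep to
# the right and force the reader to carry every branch in their head.
def ship_order_nested(order):
    if order is not None:
        if order["items"]:
            if order["paid"]:
                return "shipped"
            else:
                return "awaiting payment"
        else:
            return "empty order"
    else:
        return "no order"

# Guard clauses: reject each failure case early and return, leaving the
# happy path unindented at the bottom. Behavior is identical.
def ship_order_flat(order):
    if order is None:
        return "no order"
    if not order["items"]:
        return "empty order"
    if not order["paid"]:
        return "awaiting payment"
    return "shipped"
```

Flattening this way reduces cyclomatic depth without changing behavior, which is exactly the kind of refactor this skill targets.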
Use when auditing defensive code, designing barricades, choosing assertion vs error handling, or deciding correctness vs robustness strategy. Triggers on: empty catch blocks, missing input validation, assertions with side effects, wrong exception abstraction level, garbage in garbage out mentality, deadline pressure to skip validation, trusted source rationalization.
Use when designing routines, stuck on where to start coding, caught in compile-debug loops, or code works but you don't understand why. Triggers on: starting a new coding task.
Use when planning QA, choosing review methods, designing tests, or when debugging fails. Triggers on: defects found late, tests pass but production bugs, coverage disputes, ineffective reviews, excessive time spent debugging.
Use when modifying existing code, improving structure without changing behavior, or deciding between refactor, rewrite, or fix-first.
Use when designing routines or classes, reviewing class interfaces, choosing between inheritance and containment, or evaluating routine cohesion. Also triggers when inheritance is used without LSP verification, or when design issues persist despite passing tests.
Decompose user intent through structured brainstorming. Detects underspecification, ambiguity, and false premises through hypothesis-driven questioning. Use when a request is unclear, could have multiple valid interpretations, or critical details are missing.
Use when reviewing code clarity, writing comments, checking documentation accuracy, or auditing AI-facing docs. Triggers on: naming, comments, documentation, README, CLAUDE.md.
Generate or update docs/code-standards.md by scanning codebase conventions. Produces example-rich standards that help LLMs write consistent code. Use when starting a planning or building task. Triggers on 'code standards', 'codebase scan', 'scan conventions'.
Use when code is too slow, has performance issues, timeouts, OOM errors, high CPU/memory, or doesn't scale. Triggers on: profiler hot spots, latency complaints, needs optimization, critical path analysis.
Use when facing untested legacy code, test harness problems, dependency issues, or time pressure. Triggers on: legacy code, no tests, can't test, afraid to change, need to modify untested code.
Standard/Full planning pipeline for whiteboarding. Steps: discover, classify, explore, detail, save, check, confirm, handoff. Use when dispatched from whiteboarding command for Medium/Complex tasks. Triggers on 'planning pipeline', 'standard track', 'full track'.
A curated set of skills for each stage of development — propose, spec, design, plan, implement, ship.