By baseinfinity
Enforce structured SDLC workflows in AI-assisted coding: scan codebase to detect and generate TDD/CI/docs setups, guide planning/documentation/implementation/testing/self-review/deployment, monitor CI runs, apply self-updates, and submit repo feedback via GitHub issues.
npx claudepluginhub baseinfinity/claude-sdlc-wizard --plugin sdlc-wizard
Submit feedback, bug reports, feature requests, or share SDLC patterns you've discovered. Privacy-first: always asks before scanning.
Full SDLC workflow for implementing features, fixing bugs, refactoring, testing, releasing, publishing, and deploying. Use this skill whenever you are implementing, fixing, refactoring, testing, adding features, building new code, or releasing, publishing, or deploying.
Setup wizard — scans codebase, builds confidence per data point, only asks what it can't figure out, generates SDLC files. Use for first-time setup or re-running setup.
Smart update for SDLC wizard — shows changelog, compares files, lets you selectively adopt changes while preserving customizations.
AI-First SDLC — zero-debt development with validators, enforcement, and workflows
Modifies files; the hook triggers on file write and edit operations.
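In Claude Code, hooks that fire on file write and edit operations are registered in `.claude/settings.json`. A minimal sketch of such a configuration, assuming a hypothetical `check-sdlc.sh` validation script (the script path is illustrative, not part of this plugin):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/check-sdlc.sh"
          }
        ]
      }
    ]
  }
}
```

The `matcher` is a regex over tool names, so `Write|Edit` runs the command after either tool completes.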
Persona-driven AI development team: orchestrator, team agents, review agents, skills, slash commands, and advisory hooks for Claude Code
AI-powered development workflow automation: phase-based planning, implementation orchestration, preflight code quality checks with security scanning, a ship-it workflow, and a development principles generator for CLAUDE.md
PROJECT.md-first autonomous development with hybrid auto-fix documentation: 8-agent pipeline, auto-orchestration, and docs that auto-update on commit (true vibe coding). Knowledge base system with 90% faster repeat research. Strict mode enforces SDLC best practices automatically. Works with any Python, JavaScript, TypeScript, or Go project.
23 agent skills for systematic software development. Covers design, planning, TDD, code review, debugging, quality gates, and adversarial testing. Every skill is eval-tested with measured A/B deltas using Anthropic's skill evaluation framework.
Analyze and enforce best practices for AI coding agent projects. Assess codebase readiness across 8 pillars with /readiness, then scaffold enforcement with /setup: TDD, secret scanning, file size limits, auto-generated docs, and git hooks.