Complete SDLC workflow with integrated code review, Diátaxis documentation, analysis, observability, and git lifecycle. 11 lifecycle commands with branch-aware git operations, 31 review commands, 7 aggregate reviews, 11 skills (4 analysis + 7 Diátaxis doc writing/review), and setup-wide-logging command.
npx claudepluginhub jayteealao/agent-skills --plugin sdlc-workflow
Comprehensive code review covering all 31 review dimensions — correctness, security, architecture, infrastructure, quality, and UX — in a single thorough pass
Architecture review covering design quality, performance, scalability, and API contracts in a single pass
Infrastructure review covering deployment config, CI/CD, release management, migrations, logging, and observability in a single pass
Pre-merge code review covering correctness, testing, security, refactor safety, and maintainability
Quick code review covering correctness, style consistency, developer experience, UX copy, and overengineering in a single pass
Security-focused review covering vulnerabilities, privacy, infra security, data integrity, and supply chain in a single pass
UX review covering accessibility, frontend accessibility, frontend performance, and UX copy in a single pass
Review UI changes for keyboard and assistive technology usability, avoiding ARIA misuse
Review API contracts for stability, correctness, and usability for consumers
Review code for architectural issues including boundaries, dependencies, and layering
Review backend code for race conditions, atomicity violations, locking issues, and idempotency bugs
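As a sketch of the kind of bug this review hunts for (hypothetical code, not part of the plugin), a lost-update race on a shared counter and the locked fix:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    # the lock makes each read-modify-write atomic, so no update is lost;
    # without it, two threads can read the same value and both write back
    # value + 1, silently dropping one increment
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- deterministic only because of the lock
```

The same shape of bug appears as non-idempotent retry handlers and check-then-act sequences against a database, which is what the command looks for.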
Review CI/CD pipelines for security, correctness, and deployment safety
Review code for missed reuse, quality issues, and inefficiencies — the three simplification lenses
Review code for logic flaws, broken invariants, edge-case failures, and correctness issues
Review code for changes that increase cloud infrastructure costs
Review data integrity: ensure stored data remains correct over time, across failures, retries, and concurrent writes
Review documentation completeness and accuracy for behavior/config/API changes
Review developer experience: make the project easier to build, run, debug, and contribute to
Review frontend code for accessibility issues in modern SPAs (React, Vue, Angular)
Review frontend changes for bundle size, rendering efficiency, and user-perceived latency
Review infrastructure code for security issues in IAM, networking, secrets, and configuration
Review infrastructure and deployment config for safety, least privilege, and operational clarity
Review logging for secrets exposure, PII leaks, wide-event patterns, and query-optimized observability
Review code for long-term readability, ease of change, and reduced change amplification
Review database migrations for safety, compatibility, and operability in production
Review observability completeness: logs, metrics, tracing, error reporting, alertability, and runbook hooks
Review code for unnecessary complexity, abstractions, and YAGNI violations
Review code for algorithmic and system-level performance issues
Review data handling for PII collection, storage, transmission, and privacy compliance
Hunt semantic drift in refactors to ensure behavior equivalence and prevent subtle bugs
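A minimal illustration of semantic drift (hypothetical code): refactoring `xs = xs + ys` to `xs += ys` looks equivalent but changes aliasing behavior, which is exactly the class of subtle bug this hunt targets:

```python
def extend_copy(xs, ys):
    xs = xs + ys   # rebinds the local name to a brand-new list
    return xs

def extend_inplace(xs, ys):
    xs += ys       # "simplified" refactor: mutates the caller's list in place
    return xs

a = [1, 2]
extend_copy(a, [3])
print(a)  # [1, 2] -- caller's list untouched

b = [1, 2]
extend_inplace(b, [3])
print(b)  # [1, 2, 3] -- behavior drifted: the caller's list was mutated
```

Both functions return the same value, so tests that only check the return value pass; only a caller holding the original list observes the difference.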
Review changes for safe shipping with clear versioning, rollout, migration, and rollback plans
Review code for reliability, failure modes, and operational safety under partial outages
Review code for scalability issues under higher load, larger datasets, and more tenants
Review code for vulnerabilities, insecure defaults, and missing security controls
Enforce consistency with existing codebase style and language idioms to reduce cognitive load
Review dependency and build integrity risks, lockfiles, build scripts, and artifact provenance
Review test quality, coverage, and reliability to ensure changes are well-verified
Review user-facing text for clarity, consistency, actionability, and helpful error recovery
Set up wide-event logging with tail sampling to replace scattered logs with canonical log lines
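A rough sketch of the pattern (hypothetical field names, not the command's actual output): each request accumulates context into one wide event and emits a single canonical log line at the end, instead of scattering log statements through the handler:

```python
import json
import time

def handle_request(user_id, path):
    event = {"user_id": user_id, "path": path}   # one dict per request
    start = time.monotonic()
    try:
        # business logic enriches the same event at each step instead of
        # emitting separate log lines
        event["cache_hit"] = False
        event["rows_scanned"] = 42
        event["status"] = 200
        return event
    finally:
        event["duration_ms"] = round((time.monotonic() - start) * 1000, 2)
        # one canonical, query-optimized line per request
        print(json.dumps(event, sort_keys=True))

evt = handle_request("u123", "/api/items")
```

Tail sampling then keeps every error event and only a fraction of successful ones; this sketch shows just the canonical-line shape.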
Turn the completed and reviewed work into a PR-ready handoff package with reviewer and QA context.
Implement one selected planned slice. Writes a per-slice implementation record with cross-links to the slice definition and plan.
Convert a rough request into a clear intake brief, create the workflow folder, capture the first product-owner answers, and establish the canonical slug.
Read the workflow index and tell the user the exact next command to run, carrying forward the correct slug and slice.
Create or review-and-fix implementation plans. First invocation creates plans. Re-invocation auto-reviews against current codebase and artifacts, fixes issues found. Supports single slice, all slices (parallel), or explicit feedback.
Extract reusable lessons and turn them into concrete improvements to prompts, hooks, repo instructions, tests, and automation.
Intelligent review dispatch. Reads workflow artifacts and diff, selects relevant review commands, spawns one parallel sonnet sub-agent per command (each writes its findings to file), then aggregates, deduplicates, and triages findings via AskUserQuestion into a unified review verdict. Re-run with "triage" to revisit deferred findings.
Turn the intake brief into a compact implementable mini-spec with explicit acceptance criteria and edge cases.
Assess release readiness, ask mandatory rollout questions, and define rollout plus rollback.
Break a shaped work item into thin, independently verifiable vertical slices. Writes a master index and one file per slice.
Verify that the selected slice meets acceptance criteria and is ready for review.
Use when a documentation request is ambiguous, involves planning a docs structure, or a page seems to mix multiple purposes. Classifies content into Diátaxis quadrants (tutorial, how-to, reference, explanation), proposes a documentation map, and produces a writing plan with ordering. Triggers on phrases like "plan my docs", "what docs do I need", "help me organise my documentation", "docs architecture", "I need to write docs for my project", or when a user asks for a README and a full docs set together.
Use when the user asks to review, audit, improve, classify, or reorganise existing documentation — for a single page or a whole docs set. Evaluates docs against Diátaxis principles: type fit, boundary discipline, user fit, structure, and quality. Returns concrete prioritised fixes and, where needed, recommends splitting overloaded pages. Triggers on phrases like "review my docs", "audit the documentation", "what's wrong with this guide", "improve this README", "tell me what's wrong".
This skill should be used when analyzing errors, stack traces, and logs to identify root causes and implement fixes.
Use when the user asks for conceptual guides, architecture overviews, design rationale, trade-off discussions, background on how a subsystem works, historical context, or "why is it built this way?" documentation. Creates understanding-oriented content that builds mental models — the why, not the how. Do not use for direct task execution (use how-to) or factual lookup (use reference).
Use when the user asks for a how-to guide, step-by-step instructions for a specific goal, troubleshooting guide, configuration guide, deployment steps, migration guide, or operational runbook. Creates goal-oriented guides for competent users who know what they want to achieve. The reader already understands the basics — this is about getting work done. Do not use for beginner onboarding (use tutorial) or conceptual background (use explanation).
Use when the user asks for a README, a GitHub front page, an open source library landing page, or a documentation homepage for a repository. Writes the README as a front door that orients readers and routes them to the right deeper docs — not a tutorial, not a full reference manual, not a conceptual essay. Use a different Diátaxis skill when the request is specifically for a step-by-step lesson, a task guide, API details, or conceptual background.
This skill provides patterns for safe, systematic refactoring including extract, rename, move, and simplification operations with proper testing and rollback strategies.
Use when the user asks for API docs, CLI command reference, configuration reference, parameter tables, schema documentation, error code lists, or version compatibility matrices. Creates neutral, structured, scannable technical reference — factual description of the machinery for lookup during active work. Do not use for onboarding, task guides, or conceptual justification.
This skill provides patterns and best practices for generating and organizing tests. It covers unit testing, integration testing, test data factories, and coverage strategies across multiple languages and frameworks.
Use when the user asks for a tutorial, beginner walkthrough, getting-started lesson, first-project guide, or onboarding material for new users. Creates learning-oriented step-by-step lessons where readers learn by doing something meaningful. The goal is skill and confidence, not task completion. Do not use for advanced tasks, troubleshooting, or exhaustive product coverage — those are how-to guides.
Design and implement wide-event logging with tail sampling for context-rich, queryable observability
Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner: agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use
20 modular skills for idiomatic Go — each under 225 lines, backed by 48 reference files, 8 automation scripts (all with --json, --limit, --force), and 4 asset templates. Covers error handling, naming, testing, concurrency, interfaces, generics, documentation, logging, performance, and more. Activates automatically with progressive disclosure and conditional cross-references.
Tools to maintain and improve CLAUDE.md files: audit quality, capture session learnings, and keep project memory current.
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
Manus-style persistent markdown files for planning, progress tracking, and knowledge storage. Works with Claude Code, Kiro, Clawd CLI, Gemini CLI, Cursor, Continue, and 11+ AI coding assistants.
AI-supervised issue tracker for coding workflows. Manage tasks, discover work, and maintain context with simple CLI commands.