By wgordon17
Code quality agents, development utilities, and orchestration skills: architecture, security, QA, performance, test execution, code review, code simplification, plan adherence, LSP navigation, uv-python, incremental planning (with issue tracking), roadmap lifecycle management, session management, deep-research (with Bridged mode), business-panel, file-audit, bug-investigation, unfuck, swarm, quality-gate, pr-review, plan-review, map-reduce, speculative, reflect, fix, index-repo, summarize, and test-plan
npx claudepluginhub wgordon17/personal-claude-marketplace --plugin code-quality
Check configured LSP servers and their status
Comprehensive project review with TODO validation and claim verification
Sync project memory and update hack/ files before ending session
Start a session by loading project context or initializing new project
Use when designing system architecture, evaluating technology choices, planning large-scale refactoring, or answering "how should I structure", "what's the best architecture for", "design the system" questions
Use this agent when a major project step has been completed and needs to be reviewed against the original plan and coding standards. Examples: <example>Context: The user is creating a code-review agent that should be called after a logical chunk of code is written. user: "I've finished implementing the user authentication system as outlined in step 3 of our plan" assistant: "Great work! Now let me use the code-reviewer agent to review the implementation against our plan and coding standards" <commentary>Since a major project step has been completed, use the code-reviewer agent to validate the work against the plan and identify any issues.</commentary></example> <example>Context: User has completed a significant feature implementation. user: "The API endpoints for the task management system are now complete - that covers step 2 from our architecture document" assistant: "Excellent! Let me have the code-reviewer agent examine this implementation to ensure it aligns with our plan and follows best practices" <commentary>A numbered step from the planning document has been completed, so the code-reviewer agent should review the work.</commentary></example>
Simplifies and refines code for clarity, consistency, and maintainability while preserving all functionality. Focuses on recently modified code unless instructed otherwise.
Use when investigating performance issues, optimizing slow code, analyzing bottlenecks, or when user mentions "slow", "performance", "latency", "memory", "CPU", "optimization", "bottleneck"
Verifies implementation against plan file tasks. Reads plan, extracts tasks/checkboxes, verifies each against git diff and source code, escalates unchecked tasks via AskUserQuestion.
Use when reviewing code quality, test coverage, code smells, maintainability issues, or when user asks about "test strategy", "code quality", "technical debt", "code review"
Use when reviewing code for security vulnerabilities, analyzing authentication/authorization patterns, assessing security risks, or when user mentions "security", "vulnerability", "authentication", "authorization", "injection", "XSS", "CSRF"
Test execution specialist - runs tests efficiently, parses failures, and reports results. Does NOT fix code (use bug-fixer agent for fixes)
Use when user needs multi-stakeholder analysis, business impact assessment, or wants perspectives from different roles. Triggers on: "analyze from business perspective", "what would stakeholders think", "impact analysis", "business case for", "ROI of"
Interactive bug investigation workflow using background agents. Use when the user says "I'm going to report bugs", "let's hunt bugs", "investigate these issues", or describes wanting to report issues one-by-one while agents investigate in parallel. Also activates when the user reports a bug/issue and a BUGS.md file exists or was recently created.
Multi-agent PR review with finding verification. Use when asked to "review PR", "review this PR", "code review", or given a PR URL to review. Spawns 6 parallel specialized reviewers (security, QA, performance, code quality, correctness, plan adherence), verifies findings by investigating source code, categorizes by type, and prints a structured report to the terminal. Never comments on GitHub PRs.
Use when about to claim ANY work is complete — code, research, planning, config, or general answers. Also after /swarm, /spawn, subagent-driven development, or any significant deliverable.
Use when user requests deep research, comprehensive analysis, or thorough investigation. Triggers on: "research X thoroughly", "deep dive into", "comprehensive analysis of", "investigate X exhaustively", "compare X options", "evaluate alternatives for". Supports two modes: External (web research, current behavior) and Bridged (internal project investigation followed by external best-practices research).
Deep code quality audit system. Use when asked to "audit the codebase", "find unused code", "check for duplicates", "validate library usage", or "review the entire project". Analyzes files in parallel with LSP and Context7, detecting issues, duplicates, and documentation drift.
Comprehensive finding fixer. Use when asked to "fix these findings", "fix the review", "address the issues", or after running /pr-review, /plan-review, or /bug-investigation and wanting to act on the findings. Reads findings from the current session context, investigates each finding via background agents, and implements all fixes with the lead. Auto-detects fix target (plan file, code, bugs). For plan-review Research Gaps and Unknown Unknowns, runs actual spikes and verification — executes the research, not just documents it.
Incremental planning workflow that replaces native plan mode with issue tracking integration (GH issues, Jira cards). Use when Claude tries to enter plan mode (EnterPlanMode is denied by hook), when asked to "plan", "design an approach", "how should we implement", or before any multi-file implementation task. Asks clarifying questions first, writes plan to file incrementally with file structure mapping, BUGS.md cross-referencing (sets Tracked In for overlapping bug entries), per-task quality review (sonnet subagent), tiered breakpoints for scope vs detail ambiguity, and assumption surfacing in Phase 6. Provides research context and summaries in chat for feedback. Never displays full plan content in chat.
Generate a PROJECT_INDEX.md for token-efficient codebase orientation. Use when starting work on an unfamiliar repo, when agents need project context without reading the full codebase, or when asked to "index the repo", "create a project index", "map the codebase".
PROACTIVE skill - Use when navigating code, understanding symbol definitions, finding references, or exploring call hierarchies. Triggers include questions like "where is X defined", "what calls Y", "show usages of Z", or any code exploration task. Prefer LSP over grep for semantic navigation.
Parallelized workload processing with structured chunking, mapper agents, and reducer synthesis. Use for codebase-wide analysis, bulk transformations, and large file audits (20+ files).
Multi-agent plan review with independent fresh-context reviewers. Use when asked to "review plan", "review this plan", "plan review", or given a plan file path to review. Spawns 6 parallel specialized reviewers (feasibility, scope, dependencies, unknown unknowns, architect, security), verifies findings by re-reading the plan, and prints a structured terminal report. Designed for cross-session use: write a plan in session A, review in session B.
Mid-task self-reflection checkpoint using Serena metacognitive tools. Use when you need to pause and evaluate: "am I on track?", "have I gathered enough info?", "am I actually done?". Triggers on: completing a significant chunk of work, before claiming done, when feeling uncertain about direction, after a long sequence of tool calls, before making large code changes.
Stateful multi-plan phase sequencing, dependency analysis, and roadmap lifecycle management. Detects existing roadmaps and routes to update, cleanup, status/drift, or fresh creation. Use when coordinating multiple implementation plans into parallel/sequential workstreams, or when managing an existing roadmap. Trigger phrases: "roadmap", "sequence plans", "coordinate multiple plans", "plan execution order", "roadmap status", "roadmap cleanup", "update roadmap", "roadmap drift", or when 2+ plans need ordering.
Run competing implementations in parallel with isolated worktrees, then judge and select the best approach. Use when multiple viable approaches exist and "try both and compare" beats "guess and commit."
Distills project-memory artifacts and pull requests into concise human-readable summaries. Use when asked to "summarize", "what's the status of", "is this plan done", "audit this", "review the artifact", "archive this plan", "summarize this PR", "what does this PR do", "PR summary", or when pointing at a file in hack/plans/, hack/swarm/, hack/research/, hack/speculative/, hack/map-reduce/, hack/unfuck/, or hack/BUGS.md. Supports all 8 artifact-producing skill outputs plus GitHub PRs. For quick PR summaries (not deep code review). Cross-session: reads persisted artifacts and audits against current codebase state.
Full TeamCreate agent swarm for implementation tasks. Launches a pipelined team of 21+ specialized agents (Architect, Security Design Reviewer, Reduction Analyst, Implementer, Reviewer, Test-Writer, Test-Runner, Security, QA, Code-Reviewer, Performance, Plan Adherence, Fixer, Test Coverage Agent, Code-Simplifier, Docs, Docs Reviewer, Lessons Extractor, Verifier, BDD-Step-Writer) with structured JSON communication, Cynefin domain classification, audit trails, and early user checkpoint. Use when asked to "swarm this", "full team", "agent team", "full send", or when maximum rigor is needed on an implementation task. Auto-detects optional domain reviewers (UI, API, DB) from codebase analysis.
Use when user requests user-guided test plans, UAT validation, acceptance criteria definition, or user journey testing. Triggers on: "test plan", "user journey test", "UAT", "acceptance criteria", "manual test plan", "user-guided test", "validate from user perspective", "walk me through testing", "define what to test". Takes an implementation plan file as input and produces a test plan document with user personas, Given/When/Then scenarios, manual UAT steps, traceability matrix, and optional BDD .feature files. Annotates the input plan file so downstream skills (/swarm, /plan-review, /pr-review, /fix, /quality-gate) discover and consume the test plan automatically.
Efficient test execution patterns for pytest and pre-commit - sequential execution, targeted re-runs, and smart failure handling
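As a sketch of the targeted re-run pattern described above, a pytest configuration along these lines can make failure handling cheap by default. This is a hypothetical `pyproject.toml` fragment, not part of the skill itself; `-x` (stop at the first failure) and `--ff` (run previously failed tests first) are documented pytest flags:

```toml
# Hypothetical pyproject.toml fragment; flag names are standard pytest options.
[tool.pytest.ini_options]
# -x: stop at the first failure; --ff: run previously failed tests first
addopts = "-x --ff"
```

From there, a targeted re-run such as `pytest tests/test_auth.py::test_login` (the path and test name are illustrative) avoids re-executing the whole suite, and `pre-commit run --files <changed files>` limits hook runs to what actually changed.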
Comprehensive one-shot repo cleanup skill. This skill should be used when the user asks to "clean up the repo", "remove dead code", "de-AI-slop", "unfuck this codebase", "comprehensive cleanup", "remove unused code", "simplify the codebase", "fix code quality", "clean everything up", "audit the codebase", "fix tech debt", "remove duplicates", "unify the architecture", "security review and fix", or wants a thorough, automated cleanup of their entire repository. Launches a full agent swarm to discover issues, plan fixes, and implement changes autonomously. Combines detailed custom analysis with existing skills (file-audit, code-quality:index-repo, code-quality:code-simplifier, code-quality:architect, code-quality:security, code-quality:qa, code-quality:code-reviewer, code-quality:test-runner) into a unified cleanup workflow.
PROACTIVE skill - Use for ANY task involving Python execution or bash/shell scripts that might run Python. Triggers include creating .py files, writing bash scripts, running terminal commands, automation scripts, CI/CD configs, Makefiles, dependency management, or ANY mention of python/pip/python3. Enforces uv CLI to replace ALL python/pip usage. CRITICAL - Activate BEFORE writing scripts or commands that could invoke Python.
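As one hedged illustration of the uv-first rule above, a Makefile might route every Python invocation through uv instead of calling python or pip directly. `uv sync` and `uv run` are documented uv subcommands; the target names and the `ruff` linter are assumptions for the example, not part of the skill:

```make
# Illustrative Makefile: every recipe goes through uv rather than python/pip.
install:  # create or refresh the virtual environment from the lockfile
	uv sync

test:  # run the test suite inside the uv-managed environment
	uv run pytest

lint:  # ruff is an assumed dev dependency, shown for illustration
	uv run ruff check .
```

The design point is that contributors never need an activated virtualenv or a global pip: uv resolves and provisions the environment on each `uv run` invocation.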
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.
Comprehensive UI/UX design plugin for mobile (iOS, Android, React Native) and web applications with design systems, accessibility, and modern patterns
Tools to maintain and improve CLAUDE.md files - audit quality, capture session learnings, and keep project memory current.
Comprehensive PR review agents specializing in comments, tests, error handling, type design, code quality, and code simplification
Binary reverse engineering, malware analysis, firmware security, and software protection research for authorized security research, CTF competitions, and defensive security