Plugins listed here are tagged for this topic and auto-indexed from public GitHub repositories.
Plugins for linting, code review, complexity analysis, refactoring suggestions, and best-practice enforcement.
Cyclomatic complexity, code duplication, naming conventions, dead code detection, and pattern-based best-practice enforcement across multiple languages.
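To make the first of those metrics concrete: cyclomatic complexity can be approximated by counting branch points in a parse tree. A minimal Python sketch (simplified; production tools such as radon count more node types):

```python
import ast

# Node types that introduce a decision point (simplified set).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

code = """
def classify(n):
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # two `if` nodes -> complexity 3
```

Linters typically flag functions above a configurable threshold (often 10) as refactoring candidates.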
Some include agents that suggest and apply refactoring. Others integrate with linters for auto-fixable violations. Check component types for agent-based analysis.
Many complement ESLint, Ruff, or language-specific linters rather than replacing them. Some generate linter configurations from best-practice templates.
Enforce strict TDD cycles, generate detailed multi-step implementation plans and execute them in batches or via parallel subagents, manage isolated git worktrees for features, perform root-cause debugging and technical code reviews, and verify tests, builds, and lints before commits or PRs, all within Claude Code sessions.
Apply Andrej Karpathy-inspired rules to guide code writing, reviews, and refactoring: enforce simplicity, surgical changes, and verifiable success criteria to sidestep common LLM coding pitfalls like overcomplication.
Switch to caveman mode in Claude Code sessions to cut token usage by roughly 75% with terse, accurate technical communication. Delegate to subagents for surgical 1-2 file edits, read-only code location, and diff/PR reviews; generate compressed git commits, memory files, and review comments; and track token stats via commands.
Equip Claude with 13 targeted skills to run disciplined bug diagnosis loops (reproduce-minimize-hypothesize-instrument-fix-regression-test), prototype designs via throwaway terminal/UI apps, triage and vertically slice GitHub issues, enforce TDD red-green-refactor cycles, generate structured PRDs, grill plans against domain models, and deepen codebase architecture for testability and AI navigation.
Delegate simplification of recently modified code to an agent that refines it for clarity, consistency, and maintainability while preserving all functionality and following project best practices. Run it after coding tasks like feature implementation, bug fixes, or optimizations to instantly improve code quality without manual review.
Automate comprehensive PR reviews on git diffs or pull requests using specialized AI agents that analyze code quality, test coverage, error handling, type design, comments, and simplification opportunities. Get categorized issues summary with criticals, importants, suggestions, strengths, and action plan.
Unlock pro-level BMad workflows: analyze project states for skill recommendations and next steps, orchestrate multi-agent roundtables and debates for diverse insights, distill documents losslessly, shard large Markdown files, refine LLM outputs via advanced elicitation, review prose and structure, index directories, and audit code adversarially to uncover edge cases and omissions.
Prototype and build optimized 2D/3D indie games in Godot and Unity by generating GDScript patterns for signals, state machines, and scenes; C# scripts covering URP/HDRP pipelines, asset management, and profiling; principles for sprites, tilemaps, physics, shaders, lighting, LOD, core loops, player psychology, balancing, and progression; plus p5.js HTML/JS generative art sketches.
Audit CLAUDE.md files across repositories by discovering them with find, evaluating quality against rubrics, generating reports, and applying targeted improvements after approval. Capture learnings from Claude Code sessions to propose concise updates to CLAUDE.md or .claude.local.md files with user approval.
Automate multi-agent code reviews on GitHub pull requests, auditing CLAUDE.md files, detecting bugs, analyzing git history and prior PRs, reviewing code comments, and scoring issues by confidence level to prioritize fixes.
Orchestrate multi-agent teams to parallelize code reviews across security, performance, architecture, and more with consolidated reports; debug complex bugs via competing hypotheses, evidence gathering, and root cause ranking; develop features through task decomposition, file ownership, dependency management, git branching, and integration monitoring.
Delegate expert-level code reviews, security audits, penetration tests, QA automation, accessibility compliance checks, performance optimizations, chaos engineering, and compliance validations to specialized sub-agents across codebases, infrastructure, and systems.
Manage Python projects via structured tracks for features, bugs, refactors: initialize context artifacts like product.md and tech-stack.md, create detailed specs and phased plans, implement tasks with strict TDD workflow using pytest coverage and git commits, monitor status, revert commits, and validate artifacts for consistency.
Generate production-ready stateful CLI harnesses for GUI applications from local paths or GitHub repos, implementing Click CLI with REPL/JSON support, pytest unit/E2E tests, and docs. List installed harnesses, refine coverage gaps, run tests to verify functionality, and validate against standards.
Develop full Claude Code plugins end-to-end: plan and generate agents, commands, skills, hooks, and MCP integrations via guided workflows, then validate structure, naming conventions, and component quality with actionable reviews and fixes.
Automate end-to-end feature development: explore the codebase to map dependencies, patterns, and execution paths; design architectures with blueprints, data flows, and build sequences; implement code changes; and review for bugs, security vulnerabilities, and quality issues using high-confidence filtering.
Enforce automated linting (ESLint, Ruff), type checking (tsc, mypy), and security audits (npm audit, bandit) after code changes in Node.js/TypeScript/Python projects; debug systematically in four phases; generate atomic task checklists; refactor incrementally with Kaizen principles; auto-stage, commit conventionally, and push to GitHub.
Run PluginEval certification pipeline on Claude plugins or skills to compute quality scores, badges (Platinum/Gold/Silver/Bronze), dimension breakdowns, anti-patterns, and recommendations via static analysis and LLM judging across 10 criteria including triggering, orchestration, and output quality. Compare skills head-to-head or evaluate directories for actionable insights.
Automate Git workflows by cleaning up gone remote branches and worktrees, intelligently staging changes with generated commit messages, and creating new feature branches with pushes and GitHub PRs via simple commands.
Enforce rigorous QA and testing workflows in Claude Code sessions: drive TDD for features and fixes, debug via four-phase root cause analysis, automate browsers with Playwright/Puppeteer best practices, plan A/B experiments with gates, apply code review checklists, build reliable E2E suites, and triage pytest failures systematically.
Equip AI coding agents with production engineering skills to handle full dev lifecycles: refine ideas to specs, implement via TDD slices, run tests/debug, perform multi-axis code reviews, optimize perf/security, automate CI/CD, and execute ship checklists.
Orchestrate swarms of specialized AI agents to automate end-to-end software development: plan features, implement code with Rails/Python/TS patterns, conduct multi-perspective reviews for architecture/security/performance, resolve todos/PR feedback in parallel, run browser/iOS tests, sync Figma designs, generate docs/videos, and ship PRs.
Receive inline warnings for security risks like command injection, XSS, and unsafe patterns before executing file edits, writes, multi-edits, notebook edits, or agent/skill tools, promoting secure coding practices during development workflows.
Automate full TDD cycles from GitHub issues: write specific failing tests using Jest/Vitest, pytest, JUnit, or NUnit (Red); implement minimal passing code (Green); refactor for quality and security while keeping tests green. Explore sites via Playwright to generate, run, and debug TypeScript E2E tests.
Run AI-powered code reviews, adversarial audits, and task delegation to OpenAI Codex on local git repos using CLI commands. Launch background jobs for investigations or fixes, monitor status in tables, retrieve structured outputs with verdicts, findings, next steps, and follow-ups. Handles job lifecycle via hooks and agents for seamless offloading when Claude gets stuck.
Maintain open-source repositories with Claude Code skills that generate conventional commits featuring feat/fix types, issue references, and AI co-authorship; produce Markdown READMEs, changelogs, and docs; execute advanced Git operations like interactive rebase, cherry-pick, bisect, and worktrees; conduct structured code reviews; and handle PR creation workflows.
Develop high-performance, concurrent systems applications in modern C++, Go, and Rust using idiomatic patterns like RAII, smart pointers, goroutines, channels, ownership, lifetimes, and async. Generate CMake builds, tests, benchmarks, profile performance, debug races, and apply cross-language memory-safe resource management to prevent leaks and errors.
Create and manage Hookify rules to prevent unwanted behaviors in Claude Code sessions by analyzing conversation patterns for frustration or risky actions like bash commands and file edits. Generate regex rules interactively, toggle enabled states, list configurations, view help, and run custom Python scripts on events such as PreToolUse, PostToolUse, Stop, and UserPromptSubmit.
Scaffold new Claude Agent SDK apps in TypeScript or Python by interactively gathering requirements, installing dependencies, and configuring projects. Verify apps post-creation or changes for SDK best practices, code quality, security, type safety, documentation, and deployment readiness.
Trigger PUA pressure modes to enforce exhaustive AI problem-solving on failures or explicit requests, spawning hierarchical P7-P10 agent teams for strategy, task breakdown, autonomous coding iterations until tests pass, blue-team reviews, verification, and git-tracked KPI leaderboards with self-evolution reporting.
Index Git repositories into knowledge graphs to enable code intelligence workflows: trace execution flows for debugging bugs and errors, analyze blast radius and risks of code changes or PRs, explore architectures and symbols, safely refactor with impact previews, and generate LLM wikis.
Migrate React 16/17 class-component codebases to React 18.3.1 via AI agents that audit deprecations, convert unsafe lifecycles/refs/context to modern patterns, fix automatic batching regressions, upgrade dependencies to exact versions, and rewrite Enzyme tests to React Testing Library until tests pass.
Build knowledge graphs from any codebase to visualize architecture, query dependencies and relationships, analyze git diffs/PR impacts, explain files/modules, generate onboarding guides and domain flows, plus interactive dashboard and guided tours.
Build self-improving Claude Code agents by curating MEMORY.md with key insights via /si:remember, promoting patterns to permanent CLAUDE.md rules or .claude/rules via /si:promote, extracting reusable skills from proven solutions, auditing memory for staleness/duplicates/consolidation, viewing health dashboards via /si:status, and summarizing long bash/tool outputs.
Integrate semantic code analysis into your IDE via LSP for intelligent code understanding, refactoring suggestions, and seamless codebase navigation, powered by a remote MCP server.
Build production-grade n8n workflows faster with expert AI skills that validate and fix expressions, interpret errors, guide tool usage and node configs, provide architectural patterns, and generate JavaScript or Python code for Code nodes.
Fetch targeted Python code examples from pysheeet cheat sheets covering syntax, concurrency, networking, databases, ML/LLM, and HPC for instant reference during debugging, interviews, or optimization. Enforce 'The Art of Readable Code' rules—like short functions, clear naming, and Pythonic idioms—to write and refactor readable code in real-time.
Spawn parallel AI subagents in isolated git worktrees to compete on tasks like code optimization, refactoring, test writing, or bug fixing. Evaluate results using pytest metrics or LLM judging on git diffs, rank agents, and merge the top performer into your base branch.
Enable educational insights in Claude's code responses, explaining implementation choices and codebase patterns like the deprecated Explanatory output style. Automatically checks for plugin updates on session start.
Streamline full engineering workflows: generate standups from git/PR activity, run code reviews and debugging sessions, create architecture ADRs and test plans, manage incidents with PagerDuty/Datadog, produce deployment checklists and technical docs. Integrates natively with GitHub, Jira, Linear, Slack, Notion via MCPs for seamless tool access.
Run autonomous Claude-powered iteration loops that modify code, verify against metrics, and refine until success, automating debugging, bug fixes, security audits, documentation generation, task planning, issue prediction, adversarial reasoning, test scenario creation, and multi-phase project shipping.
Build crafted, consistent UI interfaces for dashboards, admin panels, and SaaS apps with persistent design systems. Initialize systems with intent-based styles, generate components using typography, navigation, and tokens; audit code for violations, extract patterns from React/Vue/Svelte files, critique and rebuild for better composition, spacing, and focal points.
Generate self-contained HTML pages visualizing git diffs, code reviews, implementation plans, project recaps, diagrams, slide decks, and fact-checked docs. Compare plans against codebase, analyze changes with architecture views, and deploy visuals to Vercel for sharing.
Migrate Lodash code to es-toolkit in JavaScript and TypeScript projects by replacing imports and comparing APIs to shrink bundle sizes; get function recommendations matching your needs or code, complete with imports, examples, and docs; and follow tailored setup guides for Node.js, Bun, Deno, and browsers to optimize performance.
Run CodeQL and Semgrep to scan multi-language codebases (Python, JavaScript/TS, Go, Java, C#, Ruby, Rust) for security vulnerabilities via taint tracking and pattern matching. Parse, deduplicate, and aggregate SARIF outputs from scans, then integrate findings into CI/CD pipelines using GitHub Actions or bash scripts.
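The deduplication step can be sketched against the SARIF 2.1.0 result layout; the field names below (`ruleId`, `artifactLocation.uri`, `region.startLine`) follow the spec, but the choice of dedupe key is illustrative:

```python
def dedupe_sarif(sarif: dict) -> list[dict]:
    """Collect results from all runs, keyed on rule + file + line."""
    seen, unique = set(), []
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            loc = result["locations"][0]["physicalLocation"]
            key = (
                result["ruleId"],
                loc["artifactLocation"]["uri"],
                loc["region"]["startLine"],
            )
            if key not in seen:
                seen.add(key)
                unique.append(result)
    return unique

# Two runs (e.g. CodeQL and Semgrep) each reporting the same finding.
doc = {"runs": [
    {"results": [{"ruleId": "py/sql-injection", "locations": [{"physicalLocation": {
        "artifactLocation": {"uri": "app.py"}, "region": {"startLine": 42}}}]}]},
    {"results": [{"ruleId": "py/sql-injection", "locations": [{"physicalLocation": {
        "artifactLocation": {"uri": "app.py"}, "region": {"startLine": 42}}}]}]},
]}
print(len(dedupe_sarif(doc)))  # 1
```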
Apply Qiushi dialectical methodologies in AI coding agents to analyze contradictions, prioritize high-impact tasks, conduct fact-based investigations and self-critiques, phase complex projects into strategic stages, and chain skills into automated workflows for reliable problem-solving and iteration.
Create and validate custom Semgrep rules for detecting security vulnerabilities, bugs, code patterns, and standards using test-first methodology, conversation context for patterns and languages, plus taint mode support.
Automate end-to-end PRP workflows: generate PRDs and implementation plans via interactive research, execute autonomously with Ralph loops that implement incremental changes and run validations until passing, perform multi-agent PR reviews for code/docs/tests/security, smart commit via natural language, and create GitHub PRs.
Perform AI-powered code reviews on GitHub and GitLab pull requests by connecting to Greptile API. View and resolve review comments directly within Claude Code. Query indexed repositories for code search, codebase Q&A, and context retrieval to accelerate development workflows.
Accelerate production Go development with 42 AI skills that guide idiomatic patterns, dependency injection (wire/do/fx/dig), concurrency, performance profiling, robust testing (testify), CI/CD pipelines (GitHub Actions), security audits, database access (sqlx/pgx), APIs (gRPC/GraphQL/OpenAPI), CLI tools (Cobra/Viper), and observability (slog/Prometheus/OTel).
Build multi-language code graphs to map call graphs, attack surfaces, blast radius, taint propagation, privilege boundaries, and complexity hotspots for security audits. Visualize architecture with Mermaid diagrams, compare snapshots across git commits for evolution analysis, triage mutation testing survivors, generate crypto test vectors, diagram protocols, and project SARIF findings onto graphs.
Annotate codebases with dimensional analysis comments documenting units, dimensions, and decimal scaling. Automatically scan for arithmetic patterns, discover project-specific units, propagate annotations through expressions and functions, and validate consistency to detect mismatches and bugs in DeFi protocols or numerical code.
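As a sketch of the kind of consistency check this implies (the plugin's own annotation format and propagation rules are its own), a unit-tagged value type can reject mismatched arithmetic:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """Value with a unit label; addition requires matching units."""
    value: float
    unit: str

    def __add__(self, other: "Quantity") -> "Quantity":
        if self.unit != other.unit:
            raise TypeError(f"unit mismatch: {self.unit} + {other.unit}")
        return Quantity(self.value + other.value, self.unit)

    def __mul__(self, other: "Quantity") -> "Quantity":
        # Multiplication combines units rather than requiring a match.
        return Quantity(self.value * other.value, f"{self.unit}*{other.unit}")

wei = Quantity(2.0, "wei")
print((wei + Quantity(3.0, "wei")).value)   # 5.0
try:
    wei + Quantity(1.0, "gwei")             # mismatched units are rejected
except TypeError as e:
    print("caught:", e)
```

Static annotation, as the plugin does it, catches the same class of bug without the runtime wrapper.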
Design structured workflow skills for Claude Code using multi-step phases, decision trees, subagent delegation, and progressive disclosure for pipelines, routing, and safety gates. Audit skills via 6-phase review detecting structural issues, pattern adherence, tool correctness, and anti-patterns.
Review SwiftUI code to enforce best practices, modern APIs, maintainability, performance, accessibility, and Swift conventions during reading, writing, or reviewing iOS projects, ensuring high-quality mobile apps.
Bootstrap new Python projects or migrate legacy ones to modern tooling: uv for deps/envs, ruff for linting/formatting, ty for types, pytest for testing. Includes hooks for bash command interception via Node.js and async update checks on session start.
Mark up and refine AI-generated plans interactively in a UI, annotate markdown files, messages, and git changes for review, share for team collaboration, browse plan archives, and automate workflows with plan mode hooks.
Audit smart contracts for vulnerabilities across Cosmos, Solana, Polkadot, TON, Algorand, and StarkNet blockchains using specialized scanners. Assess codebase maturity with scorecards, prepare for professional audits via static analysis and test improvements, analyze token integrations for ERC standards and risks, and apply Trail of Bits guidelines for architecture reviews and secure workflows.
Run AI code reviews on uncommitted changes, branch diffs, or specific commits using OpenAI Codex CLI or Google Gemini CLI. Analyzes git diffs via bash scripts and launches a bundled Codex MCP server as a subprocess for direct LLM tool access.
Configure and optimize mewt/muton mutation testing campaigns by scoping targets, tuning timeouts, and streamlining long-running tests for Rust, Go, TypeScript, and JavaScript codebases.
Audit Armeria-based Java projects for event loop blocking by discovering patterns, scanning operations, tracing calls, and generating fix plans without code changes. Pinpoint latency spike causes for pre-release validation.
Orchestrate self-correcting AI coding workflows with multi-agent teams in parallel git worktrees, persistent FTS5-indexed SQLite research wikis, auto-research loops, quality gates, and multi-LLM councils to decompose large refactors, debug issues, build features, and manage sessions across Node.js, Python, Rust, Go projects.
Automate iterative reviews and fixes for Claude Code skills using a reviewer agent in loops until quality standards are met. Target skills by path or name with max iterations, cancel active sessions by ID while preserving changes, and run custom Python hooks at session end.
Accelerate Atomic Agents app development through a guided 7-phase workflow: delegate schema design, agent and tool creation, architecture planning, codebase analysis, and code review to specialized AI sub-agents for scalable multi-agent LLM systems.
Orchestrate multi-agent teams in Claude Code to decompose complex features into atomic subtasks with dependencies, execute them in parallel, discover and load project context/standards, implement via TDD with vitest/jest/pytest, run self-reviews, and deliver security-vetted code.
Diagnose Swift concurrency issues like data races, thread safety violations, and compiler warnings in your codebase. Refactor callbacks to async/await patterns. Follow guided steps to migrate to Swift 6, handling actors, tasks, @MainActor, and Sendable conformance for iOS and macOS apps.
Generate test reports by parsing JUnit XML, Jest JSON, pytest results, and coverage data into Markdown/HTML formats with metrics, failures, slowest tests, trends, and CI annotations. Aggregate results across frameworks for summaries and exports in HTML, PDF, or JSON.
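Parsing JUnit XML into a Markdown summary can be sketched in a few lines (a toy report; real suites nest testsuites and also carry skips and errors):

```python
import xml.etree.ElementTree as ET

JUNIT = """<testsuite name="auth" tests="3" failures="1" time="1.42">
  <testcase name="test_login" time="0.30"/>
  <testcase name="test_logout" time="0.12"/>
  <testcase name="test_expired_token" time="1.00">
    <failure message="token accepted after expiry"/>
  </testcase>
</testsuite>"""

def summarize(xml_text: str) -> str:
    """Render a JUnit testsuite as a Markdown checklist."""
    suite = ET.fromstring(xml_text)
    lines = [f"## {suite.get('name')}: "
             f"{suite.get('tests')} tests, {suite.get('failures')} failed"]
    for case in suite.iter("testcase"):
        status = "FAIL" if case.find("failure") is not None else "pass"
        lines.append(f"- `{case.get('name')}` ({case.get('time')}s): {status}")
    return "\n".join(lines)

print(summarize(JUNIT))
```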
Automate overnight software development by configuring Git hooks for TDD enforcement with tests and lints, then run Claude autonomously for 6-8 hours to build features that pass all checks by morning.
Track regression tests across code releases by mapping git commits to pytest or Jest tests, tagging markers for suites, flagging coverage gaps, generating pass/fail reports with flaky detection, viewing history, and enforcing runs in CI/CD pipelines.
Scan codebases to detect CPU hotspots, intensive operations, blocking calls, and algorithmic inefficiencies. Generate detailed optimization reports with before/after code examples, performance estimates, and targeted recommendations to boost application speed in bash, Python, and Java projects.
Analyze DWARF debug files (v3-v5) in binaries to understand the format and standard, extract information using dwarfdump/readelf/llvm-dwarfdump for verification, and review parsing code in bash/python/rust for compliance and accuracy.
Build deep architectural context through line-by-line and per-function code analysis using First Principles and 5 Whys, enabling precise vulnerability hunting and bug detection in security audits. Target entire codebases, specific modules, or dense functions to map dependencies, data flows, assumptions, and effects.
Verify blockchain smart contracts match specifications from whitepapers, PDFs, Markdown, or URLs, detecting implementation gaps, undocumented behaviors, logic discrepancies, and security issues via structured audits and generating compliance reports.
Configure, deploy, optimize, troubleshoot, and integrate CodeRabbit AI code reviews across GitHub and GitLab repositories. Automate CI merge gates, cost tuning, security policies, local dev loops, performance monitoring, migrations from other tools, and webhook handling using 24 targeted skills.
Format and validate code files or directories with Prettier for JavaScript, TypeScript, CSS, Markdown, JSON, HTML, Vue, and Svelte. Check compliance without changes for CI via exit codes. Automatically create configs, pre-commit hooks, and .prettierignore. For Python projects, block sensitive env file edits and run pytest suites after file operations.
Refactor React, TypeScript, and JavaScript code to enforce frontend best practices: improve cohesion by grouping related files and constants, reduce coupling via composition, ensure predictability with unified returns and no side effects, and enhance readability by simplifying conditions and ternaries. Review git branch diffs against these principles.
Equip Claude Code, Cursor, and 17 similar AI tools with 20 Chinese skills (14 translated + 6 original) to enforce TDD workflows, systematic debugging, design-first planning, Chinese git conventions for Gitee/Coding.net, structured code reviews, parallel multi-agent task execution, and automated verification before commits—tailored for Chinese developers building production code.
Scaffold production-grade Claude Code plugins with marketplace integration, validate structure and schemas, audit for security vulnerabilities and best practices, and automate semantic version bumps across manifests and catalogs using auto-invoked skills and interactive commands.
Use AI to generate conventional commit messages from staged Git changes. Analyzes code diffs to classify updates as feat, fix, refactor, chore, or docs, then crafts standardized messages with proper prefixes for consistent Git history, changelogs, and automation compatibility.
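A rough sketch of the classification step, using keyword heuristics in place of the plugin's AI analysis (the rules below are illustrative, not the plugin's actual model):

```python
import re

# Ordered keyword heuristics; first match wins.
RULES = [
    ("fix",      re.compile(r"\b(fix|bug|crash|error|regression)\b", re.I)),
    ("docs",     re.compile(r"\b(readme|docs?|changelog)\b", re.I)),
    ("refactor", re.compile(r"\b(rename|extract|restructure|cleanup)\b", re.I)),
    ("feat",     re.compile(r"\b(add|implement|introduce|support)\b", re.I)),
]

def classify(diff_summary: str) -> str:
    for commit_type, pattern in RULES:
        if pattern.search(diff_summary):
            return commit_type
    return "chore"

def commit_message(scope: str, diff_summary: str) -> str:
    """Format per Conventional Commits: type(scope): description."""
    return f"{classify(diff_summary)}({scope}): {diff_summary}"

print(commit_message("auth", "fix crash when session token is empty"))
# fix(auth): fix crash when session token is empty
```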
Detect memory leaks in running Node.js, Python, and JVM apps by analyzing event listeners, closures, unbounded caches, and retained references. Scan codebases for patterns like unremoved listeners, uncancelled timers, circular references, and DOM holds, generating markdown reports with severity ratings, code locations, snippets, fixes, and prevention strategies.
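One of the listed patterns, unremoved listeners, can be sketched as a simple source scan (illustrative regexes; real analysis would also track handler identity and element lifetime):

```python
import re

ADD    = re.compile(r"\.addEventListener\(\s*['\"](\w+)['\"]")
REMOVE = re.compile(r"\.removeEventListener\(\s*['\"](\w+)['\"]")

def unremoved_listeners(source: str) -> set[str]:
    """Event names registered but never unregistered (a common leak)."""
    return set(ADD.findall(source)) - set(REMOVE.findall(source))

js = """
window.addEventListener('resize', onResize);
window.addEventListener('scroll', onScroll);
window.removeEventListener('resize', onResize);
"""
print(unremoved_listeners(js))  # {'scroll'}
```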
Execute structured AI-guided SDLC workflows for full feature cycles—from requirements gathering and design reviews to implementation, TDD, testing, code reviews, and refactoring—using persistent project memory for reusable knowledge and specialized AI skills for debugging, security audits, and complexity analysis.
Generate read-only Markdown discrepancy reports validating messaging consistency—including tone, terminology, versions, and structure—across HTML-based websites (WordPress, Hugo, Next.js, React, Vue, etc.), GitHub repositories, and local documentation, with severity levels and fix suggestions.
Validate OpenAPI, JSON Schema, and GraphQL API specs through linting, structural analysis, completeness checks, breaking change detection, and consistency enforcement to generate actionable reports. Bootstrap Zod-based schema validation with generated TypeScript types, request/response middleware, tests, and documentation.
Scan your codebase and configurations to generate audit-ready Markdown compliance reports for PCI DSS, HIPAA, SOC 2, GDPR, and ISO 27001. Assess security controls, identify gaps, and produce project documentation using the 'crg' shortcut or embedded playbook.
Audit dependencies across Node.js, Python, PHP, Ruby, Go, and Rust projects for vulnerabilities, outdated versions, transitive issues, and license compliance. Generate detailed reports with CVE information, upgrade recommendations, and fix commands using tools like npm audit and pip-audit.
Profile Node.js, Python, and Java application performance by analyzing CPU usage, memory allocation, execution hotspots, and bottlenecks. Generate markdown reports with detailed breakdowns, patterns, and actionable optimization recommendations including code fixes.
Generate AI-powered conventional commit messages from staged Git changes: auto-classifies feat/fix/docs types, detects scopes/breaking changes, matches project commit history style. Preview the message, confirm, and auto-commit in one workflow.
Analyze test coverage reports from Jest/nyc, pytest, Go test, and JaCoCo across JavaScript, Python, Go, and Java projects to identify untested code paths, branch gaps, low-coverage files, enforce thresholds, and generate detailed reports with targeted test recommendations.
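Threshold enforcement reduces to filtering a per-file coverage map; a minimal sketch with made-up figures (real numbers come from the framework reports listed above):

```python
# Hypothetical per-file line-coverage percentages.
coverage = {
    "src/auth.py": 92.0,
    "src/billing.py": 61.5,
    "src/utils.py": 88.0,
}

THRESHOLD = 80.0

def enforce(report: dict[str, float], threshold: float) -> list[str]:
    """Return files below the threshold, worst first."""
    failing = [f for f, pct in report.items() if pct < threshold]
    return sorted(failing, key=lambda f: report[f])

print(enforce(coverage, THRESHOLD))  # ['src/billing.py']
```

In CI, a non-empty result would fail the build; the failing files are the natural targets for new tests.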
Create and manage snapshot tests for UI components and data using Jest, Vitest, or pytest to catch regressions. Analyze test failures with intelligent diff reviews, selectively update snapshots for intentional changes, validate and organize snapshot files, then generate detailed analysis reports.
Run mutation testing on JavaScript, Python, Java, Go, C#, or Ruby codebases to evaluate test suite quality. Introduce code mutants with tools like Stryker, mutmut, PITest, or go-mutesting, check detection rates, identify coverage gaps, and generate reports with survival scores and improvement suggestions.
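The core idea of mutation testing fits in a few lines: introduce a small code change and check whether the suite fails. A hand-rolled sketch (real tools like Stryker and mutmut generate mutants systematically):

```python
# A single hand-rolled mutant: flip the comparison operator and see
# whether the tests notice.
ORIGINAL = "def is_adult(age): return age >= 18"
MUTANT   = ORIGINAL.replace(">=", ">")   # boundary mutant

def run_suite(source: str) -> bool:
    """Return True if all tests pass against the given implementation."""
    ns = {}
    exec(source, ns)
    is_adult = ns["is_adult"]
    try:
        assert is_adult(18) is True    # boundary case kills the mutant
        assert is_adult(17) is False
        return True
    except AssertionError:
        return False

assert run_suite(ORIGINAL)              # suite is green on real code
killed = not run_suite(MUTANT)          # mutant should be detected
print("mutant killed:", killed)         # mutant killed: True
```

A surviving mutant (one the suite does not kill) marks a gap in the tests; the survival rate is the quality score these tools report.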
Audit PostgreSQL and MySQL databases for integrity issues including NULLs, orphans, invalid formats, ranges, and duplicates, then generate and enforce CHECK constraints, foreign keys, and triggers. Extend validation to application level with type checks, regex patterns, foreign key integrity, and custom business rules.
Automate intelligent YAML validation, linting, schema inference, normalization, and transformation for Kubernetes manifests, GitHub Actions workflows, and Docker Compose files. Receive minimal patches, detailed issues, and ready-to-run validation commands to fix configs quickly.
Audit codebases with a security agent that scans for vulnerabilities like SQL injection, XSS, CSRF, auth flaws, insecure dependencies, and secrets; generates severity-rated reports including file locations, explanations, compliance checks, and code fixes with examples.
Audit agent skill designs in SKILL.md files by scoring them against official specs and best practices, with multi-dimensional evaluations and actionable improvement suggestions for reviews and enhancements.
Drive spec-driven development from one SPEC.md file: compress specs into caveman encoding (~75% token reduction), implement tasks via plan-execute loops with verification and commits, detect code-spec drift by invariants/interfaces/tasks, backprop bugs with root causes/invariants/tests/fixes.
Generate production Go CLIs from OpenAPI specs or API descriptions, then polish for verification, agentically review command outputs, score against benchmarks, regenerate with template updates, run retrospectives for improvements, and publish to a shared GitHub library via automated PRs.
Enforce Test-Driven Development by auto-detecting test frameworks like Vitest, Jest, Storybook, pytest, or Go tests, installing reporters, configuring JSON output, and using pre-tool hooks to block file writes/edits until tests pass.
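The blocking decision at the heart of such a pre-tool hook can be sketched as a pure function (in a real Claude Code PreToolUse hook, tool info arrives as JSON on stdin and a blocking exit code stops the call; only the gate logic is shown here):

```python
def should_block(tool_name: str, tests_passing: bool) -> bool:
    """Block write/edit tools while the test suite is red (the TDD gate)."""
    return tool_name in {"Write", "Edit"} and not tests_passing

print(should_block("Write", tests_passing=False))  # True  -> block the edit
print(should_block("Write", tests_passing=True))   # False -> allow it
print(should_block("Read",  tests_passing=False))  # False -> reads always pass
```

The `tests_passing` flag would come from running the detected framework (e.g. checking the exit code of `pytest --quiet`) before each gated tool call.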