Plugins listed here are tagged for this technology stack and auto-indexed from public GitHub repositories.
Claude Code plugins tagged for Ruff development. Browse commands, agents, skills, and more.
Build scalable production Python backends and APIs with Django 5.x async views, FastAPI microservices, Celery tasks, SQLAlchemy/Pydantic data handling, pytest testing strategies, and architecture optimizations using uv/ruff for modern 3.12+ codebases.
Enforce automated linting (ESLint, Ruff), type checking (tsc, mypy), and security audits (npm audit, bandit) after code changes in Node.js/TypeScript/Python projects; debug systematically in four phases; generate atomic task checklists; refactor incrementally with Kaizen principles; auto-stage, commit conventionally, and push to GitHub.
Profile Python performance bottlenecks with cProfile/py-spy, analyze pytest test suites for quality/coverage, check async code for issues/patterns, lint/fix with ruff, optimize algorithms/memory, generate unit/integration tests, and package/publish projects using uv/pyproject.toml.
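The cProfile-based profiling this plugin describes can be sketched with the stdlib alone (py-spy, by contrast, attaches to a running process and needs no code changes). A minimal, hedged example — the function name `slow_sum` is illustrative, not part of the plugin:

```python
import cProfile
import io
import pstats

def slow_sum(n: int) -> int:
    # Deliberately naive loop to give the profiler something to measure.
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Render the top entries sorted by cumulative time into a string report.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

In practice a plugin like this would run the profiler around a target entry point and parse the report for hotspots.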
Streamline Airflow data engineering workflows using Astro CLI: initialize and manage local/production environments, author/debug/deploy DAGs, profile warehouse schemas with lineage tracing, integrate dbt Cosmos, query tables, and migrate to Airflow 3.x.
Guides Python developers on using Astral tools: manage projects and dependencies with uv (including pip/poetry migrations and uvx scripts), lint/format/fix code with Ruff (replacing Flake8/Black/isort via pyproject.toml), and type-check with ty (mypy/Pyright migrations, LSP config). Activates on uv.lock or tool configs.
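As a rough sketch of the kind of pyproject.toml consolidation this refers to — the rule selections below are illustrative defaults, not the plugin's actual configuration:

```toml
[tool.ruff]
line-length = 88          # Black-compatible default
target-version = "py312"

[tool.ruff.lint]
# E/W ≈ pycodestyle, F ≈ Pyflakes (Flake8 core), I ≈ isort
select = ["E", "W", "F", "I"]

[tool.ruff.format]
quote-style = "double"    # mirrors Black's style
```

With a block like this, `ruff check --fix` and `ruff format` stand in for Flake8, isort, and Black respectively.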
Establishes opinionated Python 3.11+ engineering standards with SOLID principles, strict typing, pytest testing, ruff linting; automates TDD workflows, routes to specialists for CLI apps (Typer/Rich/Textual), web APIs (FastAPI/Flask/Django), data pipelines, packaging, code reviews, and PyPI CI/CD deployment.
Build modern Python web apps with Django and FastAPI, define SQLAlchemy models and Alembic migrations, write pytest tests, debug errors, and review code quality using specialized agents that orchestrate database tasks and full codebase reviews.
Automate full linting pipelines for Python projects: discover linters from pyproject.toml or package.json, format code with prettier, lint with ruff/mypy/bandit/eslint, resolve root causes via agents, and verify architecture post-fixes. Invoke /lint on files/directories or use in orchestrators for task completion.
Automate code reviews for Python FastAPI backends with SQLAlchemy, PostgreSQL, and pytest. Analyze git diffs to check PEP8 style, type hints, async patterns, error handling, API routing, dependency injection, database sessions, queries, N+1 issues, test fixtures, mocking, and verify ruff/mypy linters while preventing false positives via verification protocols.
Automate code quality enforcement in git workflows with tiered reviews for OWASP security, performance, SOLID principles, and clean code on changes or PRs. Lint and auto-fix Markdown docs plus Python code via Ruff and pytest. Debug errors through root cause analysis and explore codebases for architecture, patterns, and debt.
Audit and auto-configure project infrastructure to enforce standards for CI/CD workflows, Dockerfiles, pre-commit hooks, linting, testing frameworks, security scans, feature flags, and documentation across JavaScript/TypeScript, Python, Rust, Go, and infrastructure projects using CLI flags like --check-only and --fix.
Route LLM tasks like research, code, analysis, and generation to cheapest capable models across 20+ providers via auto-classification by type and complexity, monitor subscriptions, track savings with dashboards and alerts to Slack/Discord, and automate llm-router releases to PyPI/GitHub.
Automate end-to-end best practices for scientific Python projects: initialize reproducible pixi environments with conda/PyPI deps, enforce code quality via ruff/mypy/pre-commit, build pytest numerical tests, create distributable Hatchling packages, and generate Sphinx/MkDocs docs with NumPy-style docstrings and Diataxis structure.
Build, test, modernize, package, and deploy Python 3.11+ CLI apps using Typer/Rich for UIs, pytest for TDD suites, ruff/mypy/ty for linting/types, uv/hatchling for pyproject.toml management, pre-commit hooks, GitHub Actions CI/CD pipelines, and agent-orchestrated workflows for feature addition, bug fixes, code reviews, and documentation.
Streamline Python project workflows: initialize and manage dependencies, Python versions, and tools with uv; lint, format, and detect dead code with ruff and vulture; type check rapidly with ty or basedpyright; run advanced pytest suites with fixtures, parametrization, and coverage; integrate into VSCode, pre-commit, and GitHub Actions; build and publish packages to PyPI.
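The pytest parametrization mentioned above looks roughly like this — `is_palindrome` is a hypothetical function used only to demonstrate the pattern:

```python
import pytest

def is_palindrome(s: str) -> bool:
    # Ignore case and non-alphanumeric characters before comparing.
    cleaned = "".join(c.lower() for c in s if c.isalnum())
    return cleaned == cleaned[::-1]

# One test function expands into three independently reported test cases.
@pytest.mark.parametrize("text,expected", [
    ("racecar", True),
    ("Hello", False),
    ("A man, a plan, a canal: Panama", True),
])
def test_is_palindrome(text: str, expected: bool) -> None:
    assert is_palindrome(text) == expected
```

Each tuple becomes its own test case, so coverage reports and failure output stay granular without duplicating test code.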
Automate Myco substrate lifecycle management: boot with health checks and note hunger, end sessions via immune drift fixes and assimilation, investigate issues with root-cause agents, migrate schemas, sweep stale code, orchestrate full releases, and draft craft proposals for evolution.
Audit Python code for vulnerabilities by combining static scans from Bandit, pip-audit, Safety, Ruff S-rules, and detect-secrets with LLM-powered analysis detecting logic flaws, auth bypasses, race conditions, injections, path traversal, and secrets exposure.
Build production-grade Python applications using FastAPI for REST APIs, SQLAlchemy for ORM, Temporal for durable workflows, functional core/imperative shell architecture, stub-driven TDD with pytest, monorepo setups via uv/mise/ruff, and git workflows for stacked commits/reviews.
Enforce AI-first SDLC with zero technical debt: automate validation pipelines (linting, tests, security scans via ruff/pytest/mypy/bandit) before commits/PRs, create Git branches/PRs/CI workflows for Python/JS/Go projects, and use agents for compliance reviews/architecture checks.
Clone untrusted Python dependencies from GitHub, decompose them into verifiable sub-packages via test-driven evaluation, generate focused pytest unit tests, rewrite imports, and iteratively implement secure from-scratch replacements to mitigate supply chain attacks.
Develop idiomatic Python 3.12+ code, CLI tools, scripts, and services using stdlib-first patterns, type hints, protocols, and uv/ruff/pyright toolchain. Analyze codebases for clean architecture and type safety, then generate structured implementation proposals with tests to enforce best practices.
Build production Python 3.13+ projects with async FastAPI apps, pytest testing, uv packaging, Ruff linting, GitHub Actions CI/CD, Cloudflare Workers deployment, and Modal serverless for GPU-accelerated video pipelines using OpenCV and FFmpeg.
Automate end-to-end maintenance of Python/ML open-source projects on GitHub: triage and analyze issues/PRs/discussions, resolve PR conflicts with attributed fixes, conduct multi-agent code reviews for quality/security/performance, prepare release artifacts/changelogs/migration guides, and optimize GitHub Actions CI/CD pipelines.
Execute structured human-in-the-loop idea-to-code workflows: plan ideas, enforce TDD and incremental development, resolve git conflicts, debug CI failures on GitHub Actions, optimize Dockerfiles, apply design patterns, manage test infrastructure, and commit with prechecks using specialized skills, commands, and hooks.

Develop, test, debug, review, and migrate Keboola Python components for data pipelines—including extractors, writers, apps—with AI skills for config schemas (conditional fields, UI), code quality (Ruff, architecture), local datadir/pytest/VCR testing, uv/pyproject.toml upgrades, Docker builds, and platform context via MCP/Datadog.
Orchestrate AI coding agents through automated loops for end-to-end feature development: brainstorm and plan interactively, create GitHub issues/PRs, implement code, perform iterative simplify/code/security reviews, wait for CI/CD, fix issues, and repeat until clean and merge-ready. Generate and run custom StateGraph workflows from natural language prompts with quality gates and diagram previews.
Automatically format Python files using Ruff after every Write, Edit, or MultiEdit operation. Maintains consistent code style effortlessly as a post-tool hook, integrating with bash scripts for additional automation like docs reminders and CLI syncing.
Bootstrap Claude Code with 17 specialized agents, skills, and hooks to audit/evolve .claude/ configs, engineer/refactor Python code via TDD, profile/optimize ML workloads, generate docs/tests, design systems, diagnose issues, and manage workflows professionally.
Run specialized AI agent skills to brainstorm designs, conduct multi-agent research and task decomposition, implement verified code with cross-model reviews, manage Kubernetes Tilt dev and agent sandboxes, handle Python uv/ruff workflows, perform security audits and git ops, plus consolidate session knowledge via dreaming cycles.
Audit Python test suites for inverted pyramids, coverage gaps, flakiness, distribution imbalances, and anti-patterns; diagnose root causes like race conditions or dependencies; review test code quality; set up CI/CD pipelines with progressive testing stages on GitHub Actions, GitLab, or Jenkins.
Orchestrate AI agent teams to dynamically plan multi-step Django projects with task dependencies and parallel execution, delegate implementation of models/views/serializers/admin/URLs following best practices, and validate via pytest tests, mypy checks, ruff linting, Django system checks, and migrations.
Perform thorough reviews of Python tests in code or projects using a standard checklist that evaluates isolation, mocks, execution time, flakiness, and naming clarity to uphold high testing standards.
Automate end-to-end git-centric development workflows using specialized agents and commands: triage changes for verification, create PRs with context-aware reviews and labels, monitor and fix CI/CD pipelines, generate release notes and docs, debug failures, run tests and static analysis, and autonomously execute plans with auto-fixes.
Automate enforcement of coding standards by converting PR review comments into lint rules for ESLint, Ruff, RuboCop, and more across JS/TS, Python, Ruby, Rust; validate, upgrade, and generate typed .spec.ts from CLAUDE.md/AGENTS.md; audit repos for AI dev feedback loop maturity with CI/linter checks.
Automatically detect and format Python files changed by Write, Edit, or MultiEdit operations using Ruff via a PostToolUse hook, ensuring consistent code style without manual formatting steps.
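A PostToolUse hook of this kind is registered in Claude Code's settings file; the sketch below assumes the standard hooks schema (matcher regex plus a shell command reading the tool payload from stdin) and that `jq` and `ruff` are on PATH — the exact command is an assumption, not this plugin's implementation:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit|MultiEdit",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path // empty' | grep '\\.py$' | xargs -r ruff format"
          }
        ]
      }
    ]
  }
}
```

The matcher restricts the hook to file-modifying tools, and the command formats only `.py` paths, so non-Python edits pass through untouched.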
Summon Python specialists to scaffold production Django/FastAPI projects with uv/Docker/PostgreSQL, enforce Mypy/PEP8/security reviews, audit codebases for multi-agent parallelization, implement Celery tasks/WebSockets, and generate pytest strategies—all using 2025 patterns and official docs.
Convert CLAUDE.md rules into automated code checks using ESLint, Prettier, Biome, Ruff, and GitHub Actions workflows. Run verifications to confirm passes, then remove implemented rules to optimize agent context and cut token usage in AI interactions.
Run Python unittest tests using the rut test runner as a pytest alternative, detecting and executing only changed tests via --changed, generating coverage reports, applying TDD principles, and streamlining debug workflows through comprehensive CLI options.
Automate Git workflows including PR review processing into fix commits, merge conflict resolution with type checks, explanatory PR creation; perform precision code reviews for bugs/quality issues; run multi-language security scans with fix proposals; manage Linear tasks via API.
Run adaptive autonomous SDLC workflows that orchestrate agent teams to implement Python features via enforced TDD/BDD cycles with pytest-bdd scaffolding, git worktree isolation for parallel tasks, Beads CLI for dependency-tracked issue management, ruff/mypy/pytest verification pipelines, documentation updates, PR creation, and automated merges.
Audit repositories and refactor code with zero-tolerance pedantry: enforce precise naming, strict casing laws, structural symmetry, import discipline, declaration order, no magic values or dead code, and consistent patterns in Python, TypeScript, Go, and JavaScript projects. Generate detailed reports, compliance verdicts, and fix suggestions with pedantry scores.
Scaffold production-grade Python packages with best-practice configs for pyproject.toml, Ruff/mypy linting, pytest testing, GitHub Actions CI/CD, MkDocs docs, wheels/sdists packaging, CLI/API design, versioning, and security hardening—or audit repos for violations with structured reports, fixes, and health scores.
Scaffold opinionated Django projects with one-file-per-model organization, Ninja APIs via domain-grouped routers and Pydantic schemas, Unfold admin with HTMX and Tailwind, pytest tests using factory_boy, Dynaconf config, uv deps, and Docker setup. Delegate code reviews to the agent to enforce patterns after changes.
Automate end-to-end developer workflows in Claude Code: create ticket branches/PRs with JTBD stories, handle reviews/fixups/merges, scope/track Linear/GitHub/Jira tickets, run pytest/Playwright QA, generate ADRs/release notes, audit code/architecture, notify via Slack, query databases safely.
Run integrated Python TDD quality checks in your editor: execute pytest tests targeting 90%+ coverage, mypy strict typing, Black/isort formatting, and ruff linting for instant static analysis and code health enforcement during development.
Automate full GitHub PR preparation from any Python or Node.js branch: detect project type, set up/validate CI workflows, run local CI (linting with Ruff, type checks, tests, builds) with smart error recovery, generate Mermaid diagrams of file changes and CI results, create conventional PRs with summaries and test plans, then push, PR, optionally merge or release packages.
Run a multi-agent CI/CD preflight pipeline for Python packages that executes 8 sequential quality gates—linting with ruff/mypy, pytest tests/coverage, cross-platform compatibility, multi-version testing (3.9-3.13), security scans, API stability checks, and packaging validation—stopping on failure and auto-creating a GitHub PR with reports and diagrams on success.