Python-specific development tools: 7 agents, 2 commands, 1 skill for Python code review, testing, and type checking
npx claudepluginhub rbozydar/rbw-claude-code --plugin python-backend

Get alternative perspectives on plans and specifications from Google Gemini. Use when you want a second opinion from a different LLM on feature plans, architecture proposals, or project specifications.
Get code review feedback from Google Gemini. Use when you want a second opinion from a different LLM on code changes, identifying issues Claude might miss.
Use this agent whenever you need to write, review, or refactor Python code, applying SOLID principles, asyncio patterns, and production-quality standards. This agent should always be invoked when writing Python code to ensure quality, or when explicitly asked to review or improve existing code. **Examples:** <example> Context: The user is asking for a new async function to fetch data from multiple APIs. user: "Write a function that fetches user data from three different API endpoints in parallel" assistant: "I'll implement that async function for you:" <function implementation provided> assistant: "Now let me use the python-coder agent to review this code for adherence to asyncio best practices and our coding standards." </example> <example> Context: The user has just written a Python class and wants it reviewed. user: "Can you review this Python class I wrote?" assistant: "I'll use the python-coder agent to provide a comprehensive code review." <commentary> Since the user is requesting a code review, use the python-coder agent to analyze the code against all Python best practices, SOLID principles, and the project's conventions. </commentary> </example> <example> Context: A new provider module was just added to the deep_research/providers/ directory. user: "I just added a new research provider, please take a look" assistant: "Let me use the python-coder agent to review your new provider implementation." <commentary> The user added new code to the providers module. Use the python-coder agent to verify it follows the ResearchProvider protocol, uses proper async patterns with fault isolation, and adheres to the project's architectural patterns. </commentary> </example> <example> Context: The user is refactoring code to fix a bug. user: "There's a race condition in the orchestrator, can you fix it?" assistant: "I'll analyze and fix the race condition:" <fix implementation provided> assistant: "Now let me use the python-coder agent to verify this fix follows proper asyncio synchronization patterns." </example>
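The "fetch from three endpoints in parallel" scenario in the first example above can be sketched with `asyncio.gather`. This is a minimal, self-contained illustration: `fetch_json` is a stub standing in for a real HTTP call (e.g. via `aiohttp`), and the endpoint paths are hypothetical.

```python
import asyncio

async def fetch_json(endpoint: str) -> dict:
    """Stand-in for a real HTTP request; simulates network latency."""
    await asyncio.sleep(0.01)
    return {"endpoint": endpoint, "ok": True}

async def fetch_user_data(user_id: int) -> list[dict]:
    endpoints = [
        f"/users/{user_id}/profile",
        f"/users/{user_id}/orders",
        f"/users/{user_id}/preferences",
    ]
    # gather schedules all three coroutines concurrently on one event loop,
    # so total latency is roughly that of the slowest call, not the sum.
    return await asyncio.gather(*(fetch_json(e) for e in endpoints))

results = asyncio.run(fetch_user_data(42))
```

In a review, the python-coder agent would typically also check that failures are isolated (e.g. `return_exceptions=True` or per-task error handling) rather than letting one failed endpoint cancel the others.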
Use this agent when you need to review Python code changes with an extremely high quality bar. This agent should be invoked after implementing features, modifying existing code, or creating new Python modules. The agent applies Kieran's strict Python conventions and taste preferences to ensure code meets exceptional standards. Examples: - <example> Context: The user has just implemented a new FastAPI endpoint. user: "I've added a new user registration endpoint" assistant: "I've implemented the registration endpoint. Now let me have Kieran review this code to ensure it meets our quality standards." <commentary> Since new endpoint code was written, use the kieran-python-reviewer agent to apply Kieran's strict Python conventions and quality checks. </commentary> </example> - <example> Context: The user has refactored an existing service class. user: "Please refactor the EmailService class to handle attachments" assistant: "I've refactored the EmailService to handle attachments." <commentary> After modifying existing code, especially services, use kieran-python-reviewer to ensure the changes meet Kieran's high bar for code quality. </commentary> assistant: "Let me have Kieran review these changes to the EmailService." </example> - <example> Context: The user has created new utility functions. user: "Create a data validation module" assistant: "I've created the data validation module." <commentary> New modules should be reviewed by kieran-python-reviewer to check Pythonic patterns, type hints, and best practices. </commentary> assistant: "I'll have Kieran review this module to ensure it follows our conventions." </example>
You are a senior ML/Data Science engineer with deep expertise in machine learning systems, data pipelines, and LLM integrations. You review code with a focus on correctness, reproducibility, and production-readiness.
You are a senior Python developer with decades of experience and zero patience for unnecessary complexity. You've seen every pattern, every framework, every "best practice" come and go. You have an almost allergic reaction to over-engineering and a deep appreciation for Python's philosophy: "Simple is better than complex. Complex is better than complicated."
This skill teaches proper Gemini CLI usage patterns. Use stdin piping instead of shell variable gymnastics. Covers code review, plan review, and general prompts.
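The piping pattern this skill describes can be sketched as follows. The `review` function below is a stand-in for the actual `gemini` invocation (whose exact flags this listing does not specify); the point is that content flows to the tool on stdin instead of being captured into shell variables first.

```shell
# Stand-in for the gemini CLI so this sketch is self-contained; a real
# invocation would pipe the same stdin to the actual binary.
review() {
  printf 'PROMPT: %s\n' "$1"
  printf 'INPUT: %s\n' "$(cat)"   # content arrives on stdin, no temp variables
}

# Pipe content directly instead of doing shell variable gymnastics:
echo 'def add(a, b): return a + b' | review 'Review this code for bugs'
```

The same shape works for plan review (`cat plan.md | review ...`) or diffs (`git diff | review ...`).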
This skill provides comprehensive Python development standards covering SOLID principles, asyncio patterns, type hints, testing, and production-quality code. Load this skill when writing, reviewing, or refactoring Python code to apply strict coding standards directly in the current context without spawning a subagent.
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner - agents, skills, hooks, and rules evolved over 10+ months of intensive daily use
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.
Comprehensive PR review agents specializing in comments, tests, error handling, type design, code quality, and code simplification
Tools to maintain and improve CLAUDE.md files - audit quality, capture session learnings, and keep project memory current.
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques