ClaudePluginHub

Community directory for discovering and installing Claude Code plugins.

© 2025 ClaudePluginHub · Community Maintained · Not affiliated with Anthropic
pr-review-toolkit

Official plugin · By anthropics

Run comprehensive AI-powered PR reviews on git diffs or pull requests, using specialized agents to analyze code quality, test coverage, error handling, type design, comments accuracy, and simplification opportunities. Receive categorized summaries of critical issues, important fixes, suggestions, strengths, and action plans.

Git · GitHub · Bash · code-quality · git-workflow · testing

Component Overview

  • Commands: 1 (/review-pr)
  • Agents: 6 (code-reviewer, code-simplifier, +4 more)
  • Skills: 0
  • Hooks: 0
  • MCP Servers: 0
  • LSP Servers: 0
  • Output Styles: 0

Install

$ npx claudepluginhub anthropics/claude-plugins-official --plugin pr-review-toolkit

Component Details

Commands (1)

Comprehensive PR Review
/review-pr

Comprehensive PR review using specialized agents

Agents (6)

code-reviewer
/code-reviewer

Use this agent when you need to review code for adherence to project guidelines, style guides, and best practices. This agent should be used proactively after writing or modifying code, especially before committing changes or creating pull requests. It will check for style violations, potential issues, and ensure code follows the established patterns in CLAUDE.md. Also the agent needs to know which files to focus on for the review. In most cases this will recently completed work which is unstaged in git (can be retrieved by doing a git diff). However there can be cases where this is different, make sure to specify this as the agent input when calling the agent. Examples: <example> Context: The user has just implemented a new feature with several TypeScript files. user: "I've added the new authentication feature. Can you check if everything looks good?" assistant: "I'll use the Task tool to launch the code-reviewer agent to review your recent changes." <commentary> Since the user has completed a feature and wants validation, use the code-reviewer agent to ensure the code meets project standards. </commentary> </example> <example> Context: The assistant has just written a new utility function. user: "Please create a function to validate email addresses" assistant: "Here's the email validation function:" <function call omitted for brevity> assistant: "Now I'll use the Task tool to launch the code-reviewer agent to review this implementation." <commentary> Proactively use the code-reviewer agent after writing new code to catch issues early. </commentary> </example> <example> Context: The user is about to create a PR. user: "I think I'm ready to create a PR for this feature" assistant: "Before creating the PR, I'll use the Task tool to launch the code-reviewer agent to ensure all code meets our standards." <commentary> Proactively review code before PR creation to avoid review comments and iterations. </commentary> </example>

code-simplifier
/code-simplifier

Use this agent when code has been written or modified and needs to be simplified for clarity, consistency, and maintainability while preserving all functionality. This agent should be triggered automatically after completing a coding task or writing a logical chunk of code. It simplifies code by following project best practices while retaining all functionality. The agent focuses only on recently modified code unless instructed otherwise. Examples: <example> Context: The assistant has just implemented a new feature that adds user authentication to an API endpoint. user: "Please add authentication to the /api/users endpoint" assistant: "I've implemented the authentication for the /api/users endpoint. Here's the code:" <function call omitted for brevity> <commentary> Since a logical chunk of code has been written (authentication feature), use the Task tool to launch the code-simplifier agent to improve the code's clarity and maintainability while preserving functionality. </commentary> assistant: "Now let me use the code-simplifier agent to refine this implementation for better clarity and maintainability" </example> <example> Context: The assistant has just fixed a bug by adding several conditional checks. user: "Fix the null pointer exception in the data processor" assistant: "I've added the necessary null checks to prevent the exception:" <function call omitted for brevity> <commentary> After modifying code to fix a bug, use the code-simplifier agent to ensure the fix follows best practices and maintains code quality. </commentary> assistant: "Let me refine this bug fix using the code-simplifier agent to ensure it follows our best practices" </example> <example> Context: The assistant has just refactored a function to improve performance. user: "Optimize the data sorting algorithm for better performance" assistant: "I've optimized the sorting algorithm. Here's the updated implementation:" <function call omitted for brevity> <commentary> After completing a performance optimization task, use the code-simplifier agent to ensure the optimized code is also clear and maintainable. </commentary> assistant: "Now I'll use the code-simplifier agent to ensure the optimized code is also clear and follows our coding standards" </example>

comment-analyzer
/comment-analyzer

Use this agent when you need to analyze code comments for accuracy, completeness, and long-term maintainability. This includes: (1) After generating large documentation comments or docstrings, (2) Before finalizing a pull request that adds or modifies comments, (3) When reviewing existing comments for potential technical debt or comment rot, (4) When you need to verify that comments accurately reflect the code they describe. <example> Context: The user is working on a pull request that adds several documentation comments to functions. user: "I've added documentation to these functions. Can you check if the comments are accurate?" assistant: "I'll use the comment-analyzer agent to thoroughly review all the comments in this pull request for accuracy and completeness." <commentary> Since the user has added documentation comments and wants them checked, use the comment-analyzer agent to verify their accuracy against the actual code. </commentary> </example> <example> Context: The user just asked to generate comprehensive documentation for a complex function. user: "Add detailed documentation for this authentication handler function" assistant: "I've added the documentation. Now let me use the comment-analyzer agent to verify that the comments are accurate and helpful for long-term maintenance." <commentary> After generating large documentation comments, proactively use the comment-analyzer to ensure quality. </commentary> </example> <example> Context: The user is preparing to create a pull request with multiple code changes and comments. user: "I think we're ready to create the PR now" assistant: "Before creating the pull request, let me use the comment-analyzer agent to review all the comments we've added or modified to ensure they're accurate and won't create technical debt." <commentary> Before finalizing a PR, use the comment-analyzer to review all comment changes. </commentary> </example>

pr-test-analyzer
/pr-test-analyzer

Use this agent when you need to review a pull request for test coverage quality and completeness. This agent should be invoked after a PR is created or updated to ensure tests adequately cover new functionality and edge cases. Examples: <example> Context: Daisy has just created a pull request with new functionality. user: "I've created the PR. Can you check if the tests are thorough?" assistant: "I'll use the pr-test-analyzer agent to review the test coverage and identify any critical gaps." <commentary> Since Daisy is asking about test thoroughness in a PR, use the Task tool to launch the pr-test-analyzer agent. </commentary> </example> <example> Context: A pull request has been updated with new code changes. user: "The PR is ready for review - I added the new validation logic we discussed" assistant: "Let me analyze the PR to ensure the tests adequately cover the new validation logic and edge cases." <commentary> The PR has new functionality that needs test coverage analysis, so use the pr-test-analyzer agent. </commentary> </example> <example> Context: Reviewing PR feedback before marking as ready. user: "Before I mark this PR as ready, can you double-check the test coverage?" assistant: "I'll use the pr-test-analyzer agent to thoroughly review the test coverage and identify any critical gaps before you mark it ready." <commentary> Daisy wants a final test coverage check before marking PR ready, use the pr-test-analyzer agent. </commentary> </example>

silent-failure-hunter
/silent-failure-hunter

Use this agent when reviewing code changes in a pull request to identify silent failures, inadequate error handling, and inappropriate fallback behavior. This agent should be invoked proactively after completing a logical chunk of work that involves error handling, catch blocks, fallback logic, or any code that could potentially suppress errors. Examples: <example> Context: Daisy has just finished implementing a new feature that fetches data from an API with fallback behavior. Daisy: "I've added error handling to the API client. Can you review it?" Assistant: "Let me use the silent-failure-hunter agent to thoroughly examine the error handling in your changes." <Task tool invocation to launch silent-failure-hunter agent> </example> <example> Context: Daisy has created a PR with changes that include try-catch blocks. Daisy: "Please review PR #1234" Assistant: "I'll use the silent-failure-hunter agent to check for any silent failures or inadequate error handling in this PR." <Task tool invocation to launch silent-failure-hunter agent> </example> <example> Context: Daisy has just refactored error handling code. Daisy: "I've updated the error handling in the authentication module" Assistant: "Let me proactively use the silent-failure-hunter agent to ensure the error handling changes don't introduce silent failures." <Task tool invocation to launch silent-failure-hunter agent> </example>

type-design-analyzer
/type-design-analyzer

Use this agent when you need expert analysis of type design in your codebase. Specifically use it: (1) when introducing a new type to ensure it follows best practices for encapsulation and invariant expression, (2) during pull request creation to review all types being added, (3) when refactoring existing types to improve their design quality. The agent will provide both qualitative feedback and quantitative ratings on encapsulation, invariant expression, usefulness, and enforcement. <example> Context: Daisy is writing code that introduces a new UserAccount type and wants to ensure it has well-designed invariants. user: "I've just created a new UserAccount type that handles user authentication and permissions" assistant: "I'll use the type-design-analyzer agent to review the UserAccount type design" <commentary> Since a new type is being introduced, use the type-design-analyzer to ensure it has strong invariants and proper encapsulation. </commentary> </example> <example> Context: Daisy is creating a pull request and wants to review all newly added types. user: "I'm about to create a PR with several new data model types" assistant: "Let me use the type-design-analyzer agent to review all the types being added in this PR" <commentary> During PR creation with new types, use the type-design-analyzer to review their design quality. </commentary> </example>


Similar Plugins

dotnet-skills · 1 · 42 · v2.0.0 · updated 1mo ago
Comprehensive .NET development skills for modern C#, ASP.NET, MAUI, Blazor, Aspire, EF Core, Native AOT, testing, security, performance optimization, CI/CD, and cloud-native applications

fullstack-dev-skills · 8.4k · 165 · v0.4.12 · updated today
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.

context7-plugin · 51.8k · 167 · vctx7@0.3.9 · updated 1mo ago
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.

startup-business-analyst · 32.9k · 78 · v1.0.5 · updated 3w ago
Comprehensive startup business analysis with market sizing (TAM/SAM/SOM), financial modeling, team planning, and strategic research

everything-claude-code · 0 · 45 · v1.9.0 · updated 2w ago
Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner: agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use

episodic-memory · 342 · 40 · v1.0.15 · updated 3w ago
Semantic search for Claude Code conversations. Remember past discussions, decisions, and patterns.
Stats

  • Version: 1.0.0
  • Parent Repo Stars: 15,906
  • Parent Repo Forks: 1,774
  • Installs: 192
  • Maintenance: Fair
  • Added: Feb 5, 2026

Actions

  • View on GitHub
  • View README
  • Plugin Marketplace JSON

Available In

  • claude-plugins-official · 17,495
Safety Signals

  • Caution: Uses power tools (Bash, Write, or Edit tools)

Tags

  • Technologies: Git, GitHub, Bash, TypeScript, React, JavaScript
  • Topics: code-quality, git-workflow, testing
  • Categories: testing, deployment
  • Keywords: pr-review, code-review, code-quality, testing, error-handling, type-design, code-simplification, comments

Claude Code Plugins Directory

A curated directory of high-quality plugins for Claude Code.

⚠️ Important: Make sure you trust a plugin before installing, updating, or using it. Anthropic does not control what MCP servers, files, or other software are included in plugins and cannot verify that they will work as intended or that they won't change. See each plugin's homepage for more information.

Structure

  • /plugins - Internal plugins developed and maintained by Anthropic
  • /external_plugins - Third-party plugins from partners and the community

Installation

Plugins can be installed directly from this marketplace via Claude Code's plugin system.

To install, run /plugin install {plugin-name}@claude-plugins-official

or browse for the plugin in /plugin > Discover
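Put together, and assuming this marketplace is not yet registered in your Claude Code installation, the full sequence looks like the sketch below. These are slash commands typed at the Claude Code prompt, not in a shell, and pr-review-toolkit is used here only as an example plugin name:

```text
# Register the marketplace once (GitHub owner/repo shorthand):
/plugin marketplace add anthropics/claude-plugins-official

# Then install a plugin from it:
/plugin install pr-review-toolkit@claude-plugins-official
```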

Contributing

Internal Plugins

Internal plugins are developed by Anthropic team members. See /plugins/example-plugin for a reference implementation.

External Plugins

Third-party partners can submit plugins for inclusion in the marketplace. External plugins must meet quality and security standards for approval. To submit a new plugin, use the plugin directory submission form.

Plugin Structure

Each plugin follows a standard structure:

plugin-name/
├── .claude-plugin/
│   └── plugin.json      # Plugin metadata (required)
├── .mcp.json            # MCP server configuration (optional)
├── commands/            # Slash commands (optional)
├── agents/              # Agent definitions (optional)
├── skills/              # Skill definitions (optional)
└── README.md            # Documentation
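As a sketch of the one required file, a minimal .claude-plugin/plugin.json could look like the following. The name field identifies the plugin; the other fields and all values shown here are hypothetical examples, so check the plugin reference for the full schema:

```json
{
  "name": "example-plugin",
  "description": "One-line summary shown in marketplace listings",
  "version": "0.1.0",
  "author": {
    "name": "Jane Developer"
  }
}
```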

License

Please see each linked plugin for the relevant LICENSE file.

Documentation

For more information on developing Claude Code plugins, see the official documentation.