Evaluates MCP servers from GitHub repos for security vulnerabilities, privacy risks, code quality, community feedback, and reliability, with risk scoring and recommendations. Activates on safety queries or assessment requests.
Install via:

```shell
npx claudepluginhub jeredblu/jeredblu-marketplace --plugin jeredblu-tools
```

This skill uses the workspace's default tool permissions.
Automatically evaluate the security, privacy, and reliability of MCP (Model Context Protocol) servers from GitHub repositories. This skill performs comprehensive assessments including code analysis, community feedback research, security vulnerability detection, and risk scoring to provide actionable recommendations.
Use this skill when users:
This skill works with or without MCP servers through a graceful degradation approach:
For GitHub repositories:
For web search and community validation:
Ask the user their preferred output format:
Acknowledge receipt and inform user that evaluation is beginning. Parse the GitHub URL to extract owner and repository name.
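Parsing the owner and repository name can be sketched as follows. This is a minimal illustrative helper, not part of the skill itself; the function name and error handling are assumptions:

```python
from urllib.parse import urlparse

def parse_github_url(url: str) -> tuple[str, str]:
    """Extract (owner, repo) from a GitHub repository URL."""
    parts = urlparse(url).path.strip("/").split("/")
    if len(parts) < 2 or not all(parts[:2]):
        raise ValueError(f"Not a GitHub repository URL: {url}")
    owner, repo = parts[0], parts[1]
    return owner, repo.removesuffix(".git")  # normalize clone-style URLs
```

The `.git` suffix is stripped so clone URLs and browser URLs resolve to the same repository name.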
Check which tools are available and plan the evaluation approach:
Use the built-in create_file tool to create the assessment file in /mnt/user-data/outputs/:
`MCP_Security_Assessment_{owner}_{repo_name}.md`

With GitHub MCP (Priority):
- `list_commits` for activity analysis
- `search_repositories` for similar MCP servers

With Bright Data MCP (Alternative):
Use `scrape_as_markdown` to retrieve:

- https://github.com/{owner}/{repo}
- https://github.com/{owner}/{repo}/blob/main/README.md
- https://raw.githubusercontent.com/{owner}/{repo}/main/{filepath}

Fallback Without MCP:
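Without any MCP server, the raw-file URL pattern above can be fetched directly. A minimal sketch using only the standard library; the function names and the 30-second timeout are illustrative choices:

```python
import urllib.request

def raw_url(owner: str, repo: str, filepath: str, branch: str = "main") -> str:
    """Build the raw.githubusercontent.com URL for a file in a repo."""
    return f"https://raw.githubusercontent.com/{owner}/{repo}/{branch}/{filepath}"

def fetch_raw_file(owner: str, repo: str, filepath: str, branch: str = "main") -> str:
    """Fetch a file's raw contents from GitHub (no API token required)."""
    with urllib.request.urlopen(raw_url(owner, repo, filepath, branch), timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

Note that many repositories use `master` rather than `main` as the default branch, so a real fallback should retry with both.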
Document each file examined with code snippets of important sections.
Execute evaluation in this order, updating assessment file after each step:
Search for alternative MCP servers with similar functionality:
Analyze codebase for:
Reference the security patterns documentation: Review references/mcp_security_patterns.md to identify known vulnerability patterns, and references/safe_mcp_examples.md to avoid false positives from legitimate patterns.
Be specific: Include actual code snippets as evidence. Categorize findings by severity (Critical, High, Medium, Low). Focus on concrete vulnerabilities, not generic statements.
Document in "Code Analysis" section.
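A first pass over the code can be sketched as a pattern scan. The patterns below are illustrative only, not the catalog from references/mcp_security_patterns.md, and any match still needs manual review against references/safe_mcp_examples.md to rule out false positives:

```python
import re

# Illustrative (not exhaustive) red-flag patterns for MCP server source code.
SUSPICIOUS_PATTERNS = {
    "dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
    "shell command": re.compile(r"\bsubprocess\.(run|Popen|call)\b|os\.system\("),
    "credential reference": re.compile(r"(AWS_SECRET|API_KEY|TOKEN|PASSWORD)", re.IGNORECASE),
    "outbound network call": re.compile(r"\b(requests\.(get|post)|urlopen)\s*\("),
}

def scan_source(source: str) -> list[tuple[int, str, str]]:
    """Return (line_number, category, line) for each suspicious match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for category, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, category, line.strip()))
    return findings
```

Each finding's line number and snippet can be pasted directly into the "Code Analysis" section as evidence.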
Perform specific web searches using Bright Data MCP or web search:
For each search:
Document all findings in "Community Feedback" section with clear source attribution.
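The community searches can be templated per repository. The query wording below is a hypothetical set of templates, not prescribed by the skill:

```python
def community_queries(owner: str, repo: str) -> list[str]:
    """Hypothetical search-query templates for community validation."""
    name = f"{owner}/{repo}"
    return [
        f'"{name}" MCP server review',
        f'"{name}" security vulnerability',
        f'"{name}" issues OR complaints',
        f'{repo} MCP server reddit',
    ]
```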
Analyze all collected information and evaluate across dimensions:
| Dimension | Evaluation Criteria |
|---|---|
| Security | Protection against attacks, credential handling, code vulnerabilities |
| Privacy | Data collection practices, data minimization, transmission security |
| Reliability | Code quality, maintenance activity, error handling |
| Transparency | Documentation quality, purpose clarity, open source practices |
| Usability | Setup complexity, integration quality, user experience |
For each dimension:
Scoring Guidelines:
Create "Risk Assessment" section with scoring table and "Final Verdict" with definitive recommendation.
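The OVERALL RATING row can be derived from the per-dimension scores. The skill does not prescribe a weighting, so the weights below are a hypothetical convention that emphasizes security and privacy; an unweighted average is an equally valid choice:

```python
DIMENSIONS = ("Security", "Privacy", "Reliability", "Transparency", "Usability")

# Hypothetical weights (sum to 1.0); adjust to taste or use a plain average.
WEIGHTS = {"Security": 0.30, "Privacy": 0.25, "Reliability": 0.20,
           "Transparency": 0.15, "Usability": 0.10}

def overall_rating(scores: dict[str, float]) -> float:
    """Weighted 0-100 overall rating from per-dimension scores."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"Missing dimension scores: {sorted(missing)}")
    return round(sum(scores[d] * WEIGHTS[d] for d in DIMENSIONS), 1)
```

With equal inputs the weighting is invisible; it only matters when dimensions diverge, which is exactly when a security-heavy weighting should pull the verdict down.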
Evaluate practical aspects:
Document in "Usability Assessment" section with specific examples.
Provide definitive recommendations. Avoid hedging. Be clear about:
Save output to /mnt/user-data/outputs/.

Create the assessment with this exact structure:
# Security Assessment: [MCP Server Name]
## Evaluation Overview
- Repository URL: [GitHub URL]
- Evaluation Date: [Current Date]
- Evaluator: Claude AI
- Repository Owner: [Username/Organization]
- Evaluation Methods: [Tools used]
- Tool Availability: [Which MCP servers were available]
- Executive Summary: [1-2 paragraphs on safety and key risks/benefits]
## GitHub Repository Assessment
[Repository stats, contributor analysis, activity patterns]
## Server Purpose
[Functionality description, external services, permissions, creator info]
## Expected Functionality
[Detailed explanation of capabilities, APIs, typical usage, limitations, examples]
## Alternative MCP Servers
[List of alternatives with comparisons]
## Code Analysis
[Security review findings categorized by severity with code snippets]
## Community Feedback
[External references, user reviews, discussions with source attribution]
## Risk Assessment
[Comprehensive evaluation across all dimensions]
## Usability Assessment
[Practical evaluation of setup, documentation, integration]
### Scoring
| Dimension | Score (0-100) | Justification |
|-----------|---------------|--------------|
| Security | [Score] | [Specific evidence] |
| Privacy | [Score] | [Specific evidence] |
| Reliability | [Score] | [Specific evidence] |
| Transparency | [Score] | [Specific evidence] |
| Usability | [Score] | [Specific evidence] |
| **OVERALL RATING** | [Score] | [Summary] |
### Final Verdict
[Clear statement on whether to use this MCP server, with specific use cases]
### Evaluation Limitations
[If applicable, note any limitations due to unavailable tools]
If issues occur during evaluation:
Keep user informed at key milestones:
Show exactly which tools and functions are being called and their results. If the evaluation requires extended time, provide interim updates.
Be Specific, Not Generic:
Make Confident Judgments:
Include Evidence: Always back up scores and recommendations with specific code examples, community feedback quotes, or measurable metrics.
Adapt to Available Tools: Use the best tools available but continue evaluation even without ideal tools. Document what methods were used and any resulting limitations.
This skill includes reference documentation in the references/ directory:
- mcp_security_patterns.md - Comprehensive catalog of security vulnerabilities and attack patterns specific to MCP servers
- safe_mcp_examples.md - Examples of legitimate MCP patterns that might look suspicious but are safe

Read these references as needed during code analysis to improve detection accuracy and reduce false positives.