Scores MCP servers, APIs, or CLIs for agent-readiness using Clarvia AEO across accessibility, structuring, compatibility, and trust. Searches 15k+ tools for top options.
From antigravity-awesome-skills:

```
npx claudepluginhub sickn33/antigravity-awesome-skills --plugin antigravity-awesome-skills
```

This skill uses the workspace's default tool permissions.
Before adding any MCP server, API, or CLI tool to your agent workflow, use Clarvia to score its agent-readiness. Clarvia evaluates 15,400+ AI tools across four AEO dimensions: API accessibility, data structuring, agent compatibility, and trust signals.
Add the Clarvia MCP server to your config:

```json
{
  "mcpServers": {
    "clarvia": {
      "command": "npx",
      "args": ["-y", "clarvia-mcp-server"]
    }
  }
}
```
Ask Claude to score any tool by URL or name:

```
Score https://github.com/example/my-mcp-server for agent-readiness
```

Clarvia returns a 0-100 AEO score with a breakdown across the four dimensions.

```
Find the top-rated database MCP servers using Clarvia
```

Returns ranked results from 15,400+ indexed tools.

```
Compare supabase-mcp vs firebase-mcp using Clarvia
```

Returns a side-by-side score breakdown with a recommendation.

```
Show me the top 10 MCP servers for authentication using Clarvia
```

```
Before I add this MCP server to my config, score it:
https://github.com/example/new-tool
Use the clarvia aeo_score tool and tell me if it's agent-ready.
```

```
I need an MCP server for web scraping. Use Clarvia to find the
top-rated options and compare the top 3.
```
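The comparison prompts above return a side-by-side breakdown. As a rough illustration of what such a comparison does, here is a local sketch that diffs two per-dimension breakdowns; the dictionary shape and the example numbers are assumptions for illustration, not Clarvia's actual response schema:

```python
# Sketch: compare two AEO breakdowns side by side.
# The dict shape and numbers below are illustrative assumptions,
# not Clarvia's actual output format.

DIMENSIONS = ["accessibility", "structuring", "compatibility", "trust"]

def compare_breakdowns(name_a, scores_a, name_b, scores_b):
    """Return (winner, per-dimension deltas) for two 0-100 breakdowns."""
    deltas = {d: scores_a[d] - scores_b[d] for d in DIMENSIONS}
    avg_a = sum(scores_a[d] for d in DIMENSIONS) / len(DIMENSIONS)
    avg_b = sum(scores_b[d] for d in DIMENSIONS) / len(DIMENSIONS)
    winner = name_a if avg_a >= avg_b else name_b
    return winner, deltas

winner, deltas = compare_breakdowns(
    "supabase-mcp",
    {"accessibility": 90, "structuring": 85, "compatibility": 80, "trust": 75},
    "firebase-mcp",
    {"accessibility": 85, "structuring": 80, "compatibility": 85, "trust": 70},
)
```

A negative delta flags the dimension where the losing tool is actually stronger, which is the detail a single aggregate score hides.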
Add to your CI pipeline using the GitHub Action:

```yaml
- uses: clarvia-project/clarvia-action@v1
  with:
    url: https://your-api.com
    fail-under: 70
```
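The fail-under input fails the job when the AEO score drops below the threshold. The gating logic itself is simple; a minimal sketch, assuming you already have the numeric score (the `gate` function is illustrative, not part of the Clarvia action):

```python
def gate(score: int, fail_under: int = 70) -> bool:
    """Return True when an AEO score meets the threshold (mirrors fail-under)."""
    return score >= fail_under

# An API scoring 72 passes the default threshold of 70:
passed = gate(72)
```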
| Score | Rating | Meaning |
|---|---|---|
| 90-100 | Agent Native | Built specifically for agent use |
| 70-89 | Agent Friendly | Works well, minor gaps |
| 50-69 | Agent Compatible | Works but needs improvement |
| 30-49 | Agent Partial | Significant limitations |
| 0-29 | Not Agent Ready | Avoid for agentic workflows |
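The bands above map onto a small lookup. A sketch, with band names and cutoffs taken directly from the table:

```python
def aeo_rating(score: int) -> str:
    """Map a 0-100 AEO score to its rating band from the table above."""
    if score >= 90:
        return "Agent Native"
    if score >= 70:
        return "Agent Friendly"
    if score >= 50:
        return "Agent Compatible"
    if score >= 30:
        return "Agent Partial"
    return "Not Agent Ready"
```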
Problem: Clarvia returns "not found" for a tool
Solution: Scan the URL directly with aeo_score; Clarvia will score it on demand.

Problem: A score seems low for a tool you trust
Solution: Use get_score_breakdown to see which dimensions are weak and decide whether they matter for your use case.
@mcp-builder - Build a new MCP server that scores well on AEO
@agent-evaluation - Broader agent quality evaluation framework