# PromptElo

Know your prompt quality with chess-style Elo ratings.
A Claude Code extension that analyzes and scores user prompts with chess-style Elo ratings.
## Installation

```bash
# Add the marketplace
claude plugin marketplace add bicro/PromptElo

# Install the plugin
claude plugin install promptelo@promptelo
```
That's it! Elo badges will appear automatically on every prompt.
## Usage

After installation, every prompt you submit will show an Elo badge:

```
[PromptElo: 1847 ⭐ | Top 15% Novelty 🌟]
```
Run the skill for a comprehensive breakdown:

```
/prompt-elo
```

This opens a visual HTML report with radar charts.
### Rating Tiers

| Rating | Tier | Description |
|---|---|---|
| 2200+ | 🏆 Grandmaster | Exceptional prompts |
| 2000-2199 | ⭐ Master | Outstanding quality |
| 1800-1999 | 🌟 Expert | High quality |
| 1500-1799 | ✨ Advanced | Above average |
| 1200-1499 | 📝 Intermediate | Average |
| 0-1199 | 📋 Beginner | Room for improvement |
### Scoring Criteria

| Criterion | Weight | What It Measures |
|---|---|---|
| Clarity | 25% | Clear intent, specific verbs, good structure |
| Specificity | 25% | Technical details, file/function names, code snippets |
| Context | 20% | Background info, constraints, error messages |
| Creativity | 15% | Novel framing, exploratory questions |
| Novelty | 15% | Uniqueness compared to all prompts (via embeddings) |
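For intuition, here is a minimal sketch of how such a weighted composite can be computed. The weights match the table above, but the linear mapping to a 0-2400 Elo scale is an assumption for illustration; the plugin's actual formula lives in `client/scorer.py`.

```python
# Illustrative only: the weights come from the table above, but the linear
# 0-2400 Elo mapping is an assumed stand-in for the real formula in
# client/scorer.py.

WEIGHTS = {
    "clarity": 0.25,
    "specificity": 0.25,
    "context": 0.20,
    "creativity": 0.15,
    "novelty": 0.15,
}

def composite_elo(scores: dict[str, float]) -> int:
    """Combine per-criterion scores in [0, 1] into an Elo-style rating."""
    weighted = sum(w * scores.get(name, 0.0) for name, w in WEIGHTS.items())
    return round(weighted * 2400)  # all-perfect scores map to 2400 (Grandmaster)

# Example: scores like these would yield the badge shown earlier
print(composite_elo({"clarity": 0.9, "specificity": 0.85, "context": 0.7,
                     "creativity": 0.6, "novelty": 0.68}))  # -> 1847
```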
## Self-Hosting

If you want to run your own embedding server:

```bash
cd server

# Copy the environment template
cp .env.example .env

# Edit .env with your values (OpenAI API key and database URL)

# Run with Docker Compose
docker-compose up -d
```
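Given the OpenAI API key and database URL the template asks for, the contents of `.env` will look roughly like this; the variable names below are assumptions, so check `server/.env.example` for the real keys:

```
# Assumed variable names - consult server/.env.example for the real keys
OPENAI_API_KEY=sk-...
DATABASE_URL=postgresql://promptelo:changeme@db:5432/promptelo
```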
Then configure the client to use your server:

```bash
# Set an environment variable
export PROMPTELO_SERVER_URL="https://your-server.com"

# Or edit the config file
echo '{"server_url": "https://your-server.com"}' > ~/.promptelo/config.json
```
## Configuration

The configuration file lives at `~/.promptelo/config.json`:

```json
{
  "server_url": "https://promptelo-api.example.com",
  "user_id": null,
  "timeout": 5.0
}
```
Environment variables (take precedence over the config file):

- `PROMPTELO_SERVER_URL` - Server URL
- `PROMPTELO_USER_ID` - User ID for personal stats
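A minimal sketch of this precedence rule, assuming the documented keys and defaults; the plugin's actual loader is `client/config.py` and may differ in detail:

```python
# Sketch of the documented precedence: env vars override config-file values.
# The real loader is client/config.py; this is an illustration, not its API.
import json
import os
from pathlib import Path

CONFIG_PATH = Path.home() / ".promptelo" / "config.json"
DEFAULTS = {"server_url": None, "user_id": None, "timeout": 5.0}

def load_config() -> dict:
    config = dict(DEFAULTS)
    if CONFIG_PATH.exists():
        config.update(json.loads(CONFIG_PATH.read_text()))
    # Environment variables take precedence over the config file.
    for env_var, key in [("PROMPTELO_SERVER_URL", "server_url"),
                         ("PROMPTELO_USER_ID", "user_id")]:
        if os.environ.get(env_var):
            config[key] = os.environ[env_var]
    return config
```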
## API

The community server exposes these endpoints:

### Scoring endpoint

Score a prompt for novelty.
Request:
```json
{
  "prompt": "Your prompt text",
  "user_id": "optional-user-id"
}
```
Response:
```json
{
  "novelty": {
    "novelty_score": 0.73,
    "percentile": 68.5,
    "similar_count": 12,
    "is_novel": false
  },
  "total_prompts": 12847,
  "timestamp": "2024-01-15T10:30:00Z"
}
```
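For example, scoring a prompt from Python could look like the sketch below. The `/score` route name is a hypothetical stand-in; the actual routes are defined in `server/main.py`.

```python
# Hypothetical client call: the /score path is assumed, and the request and
# response shapes follow the examples above; check server/main.py for the
# actual route names.
import json
import urllib.request

def score_prompt(server_url: str, prompt: str, user_id: str | None = None) -> dict:
    payload = {"prompt": prompt}
    if user_id:
        payload["user_id"] = user_id
    req = urllib.request.Request(
        f"{server_url}/score",  # assumed route
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5.0) as resp:
        return json.load(resp)

result = score_prompt("https://promptelo-api.example.com",
                      "Refactor utils.py to remove dead code")
print(result["novelty"]["novelty_score"])
```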
### Other endpoints

The server also provides an endpoint that returns global statistics, plus a health check endpoint.
## Privacy

PromptElo is designed with privacy in mind: the user ID is optional (`null` by default), so prompts can be scored for novelty without being tied to an identity.
## Project Structure

```
promptelo/
├── .claude-plugin/          # Plugin configuration
│   ├── plugin.json
│   └── marketplace.json
├── hooks/
│   └── hooks.json           # UserPromptSubmit hook
├── skills/
│   └── prompt-elo/
│       ├── SKILL.md         # Detailed analysis skill
│       └── templates/
│           └── report.html  # Visual report template
├── client/                  # Scoring client
│   ├── scorer.py            # Main scoring logic
│   ├── api.py               # Server API client
│   ├── config.py            # Configuration
│   └── report_generator.py  # HTML report generator
├── server/                  # Community server
│   ├── main.py              # FastAPI application
│   ├── embeddings.py        # OpenAI integration
│   ├── database.py          # PostgreSQL + pgvector
│   ├── models.py            # Pydantic models
│   ├── requirements.txt
│   ├── Dockerfile
│   └── docker-compose.yml
├── scripts/
│   └── install.sh           # Installation script
└── README.md
```
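The automatic badge comes from the `UserPromptSubmit` hook registered in `hooks/hooks.json`. As a rough sketch of what such a registration can look like in Claude Code's hook format (the command and structure here are assumptions; the repository's actual file may differ):

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "python ${CLAUDE_PLUGIN_ROOT}/client/scorer.py"
          }
        ]
      }
    ]
  }
}
```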
## Testing

```bash
# Client tests
cd client
python -m pytest

# Server tests
cd ../server
python -m pytest
```
## Contributing

Contributions are welcome! Please open an issue or submit a pull request.
## License

MIT License - see LICENSE for details.