By jack-michaud
Hooks to enforce explicit skill evaluation before implementation
```
npx claudepluginhub jack-michaud/faire --plugin skills-forced-eval
```

faire is a Claude Plugin Marketplace of tested and evaluated software development plugins that help Claude create high-quality software.
> [!IMPORTANT]
> Anything post-1.0 is tested and evaluated. Anything 0.x is in active development.
See this section for more details on the testing and evaluation strategy.
Add the faire marketplace to your Claude Code configuration:

```
/plugin marketplace add jack-michaud/faire
```

Then install the faire plugin:

```
/plugin install faire@faire
```

Or browse and install interactively:

```
/plugin
```
Configure your `.claude/settings.json` to automatically add the marketplace:

```json
{
  "extraKnownMarketplaces": {
    "faire": {
      "source": {
        "source": "github",
        "repo": "jack-michaud/faire"
      }
    }
  }
}
```
> [!NOTE]
> "this is a gamechanger, trust me bro"

Creating evaluations to systematically measure the performance of AI systems is how we stay objective about the real impact of AI tools. I'm inspired by people and teams like:
- Cognition (which created Devin) (their eval for devin)
- ARC Prize Foundation
- spences10 (who created svelte-claude-skills with an eval for hooks)
These people and teams are data-driven and transparent about the quality of AI systems.
I will fill this out as I become more opinionated about this.
I'm currently working on a python service writing skill with an eval here.
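To make the idea of "benchmarking skill performance with variance analysis" concrete, here is a minimal sketch of summarizing repeated eval runs. The function name and result shape are hypothetical illustrations, not part of the faire plugin's actual API: the point is simply that running a suite multiple times and looking at the spread of pass rates tells you how stable a skill's behavior is, not just how good its best run was.

```python
import statistics

def summarize_runs(results: list[list[bool]]) -> dict:
    """Summarize repeated eval runs (hypothetical helper, not faire's API).

    `results` holds one list of pass/fail booleans per run of the suite.
    """
    # Per-run pass rate: fraction of eval cases that passed in that run
    rates = [sum(run) / len(run) for run in results]
    return {
        "per_run_pass_rate": rates,
        "mean_pass_rate": statistics.mean(rates),
        # Variance across runs: low variance means the skill behaves consistently
        "variance": statistics.pvariance(rates),
    }

# Three hypothetical runs of a four-case eval suite
runs = [
    [True, True, False, True],
    [True, True, True, True],
    [True, False, False, True],
]
summary = summarize_runs(runs)
print(summary["mean_pass_rate"])  # 0.75
```

A skill with a 0.75 mean pass rate and near-zero variance is a different engineering problem than one with the same mean but wild run-to-run swings, which is why variance is worth tracking alongside the average.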
MIT License - see LICENSE file for details
Forces skill evaluation before every response
Share bugs, ideas, or general feedback.
Skill evaluation and benchmarking - test skill effectiveness with behavioral eval cases, grade results, and track quality improvements
Meta-skills infrastructure for Claude Code plugin ecosystem - skill authoring, hook development, modular design patterns, and evaluation frameworks
Create new skills, improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, update or optimize an existing skill, run evals to test a skill, or benchmark skill performance with variance analysis.
Create, test, measure, and iteratively improve Claude Code skills with category-aware design, gotchas-driven development, progressive disclosure coaching, and automated description optimization.