A marketplace of software engineering skills for test-driven development, code review, debugging, and best practices
faire is a Claude Plugin Marketplace of tested and evaluated software development plugins that help Claude create high-quality software.
[!IMPORTANT] Plugins versioned 1.0 or later are tested and evaluated. Anything at 0.x is still in active development.
See this section for more details on the testing and evaluation strategy.
Add the faire marketplace to your Claude Code configuration:
/plugin marketplace add jack-michaud/faire
Then install the faire plugin:
/plugin install faire@faire
Or browse and install interactively:
/plugin
Configure your .claude/settings.json to automatically add the marketplace:
{
  "extraKnownMarketplaces": {
    "faire": {
      "source": {
        "source": "github",
        "repo": "jack-michaud/faire"
      }
    }
  }
}
[!NOTE] "this is a gamechanger, trust me bro" Creating evaluations that systematically measure the performance of AI systems is how we stay objective about the real impact of AI tools. I'm inspired by people and teams like:
- Cognition (creators of Devin) and their eval for Devin
- ARC Prize Foundation
- spences10 (who created svelte-claude-skills with an eval for hooks)
These people and teams are data-driven and transparent about the quality of AI systems.
I will fill this out as I become more opinionated about this.
I'm currently working on a Python service-writing skill with an eval here.
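As a rough illustration of the idea, an eval can be as simple as running a set of tasks through the system under test and scoring outputs with programmatic checks. This is a minimal hypothetical sketch (not faire's actual harness); `EvalCase`, `run_eval`, and the stubbed `fake_generate` are all names invented for this example:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    """One task with a programmatic pass/fail check."""
    prompt: str
    check: Callable[[str], bool]  # returns True if the output passes

def run_eval(cases: list[EvalCase], generate: Callable[[str], str]) -> float:
    """Run every case through the system under test and return the pass rate."""
    passed = sum(1 for case in cases if case.check(generate(case.prompt)))
    return passed / len(cases)

# Stub standing in for a real model/agent invocation.
def fake_generate(prompt: str) -> str:
    return "def add(a, b):\n    return a + b"

cases = [
    EvalCase("Write an add function", lambda out: "return a + b" in out),
    EvalCase("Write a subtract function", lambda out: "a - b" in out),
]

print(run_eval(cases, fake_generate))  # 1 of 2 checks pass -> 0.5
```

The point is that pass rates come from assertions, not vibes, so regressions in a skill show up as a number moving.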
MIT License - see LICENSE file for details