By jack-michaud
A collection of development skills that help you follow best practices for test-driven development, code review, debugging, systematic problem-solving, and more.
```
npx claudepluginhub jack-michaud/faire --plugin jack-software
```

Add a procedure to the appropriate skill, or create a new one. Works from any project.
Comprehensive security and quality review of uncommitted changes.
Generate and run end-to-end tests with Playwright. Creates test journeys, runs tests, captures screenshots/videos/traces, and uploads artifacts.
Restate requirements, assess risks, and create a step-by-step implementation plan. WAIT for user confirmation before touching any code.
Create a procedure skill - a step-by-step process document that teaches Claude how to perform a specific workflow. Use when the user says "make a procedure skill".
Use this procedure when you want to confirm that the `/ship-skill` command works correctly — including workspace creation, content writing, version bumping, committing, and pushing.
Ship a procedure, skill, or command to the faire marketplace or local project. Handles workspace isolation, validation, version bumping, commit, and push.
Enforce a test-driven development workflow. Scaffold interfaces, generate tests FIRST, then implement minimal code to pass. Ensure 80%+ coverage. (See the test-first sketch after this list.)
Analyze test coverage and generate missing tests.
Save a successful workflow into a skill's reference files.
Delegate specialized reviews to subagents and synthesize their findings. Use when reviewing code.
Identify and flag unnecessary test artifacts and temporary files in pull requests. Use when reviewing pull requests to ensure only production-relevant files are committed.
Review AI prompts and instructions for conciseness and clarity. Use when reviewing skills, CLAUDE.md, slash commands, or any LLM prompt content.
Review Python code ensuring strict type safety with Python 3.12+ conventions. Use when reviewing, writing, or refactoring Python code. (See the typing sketch after this list.)
Review Python code for adherence to SOLID principles (SRP, OCP, LSP, ISP, DIP). Use when reviewing Python code for maintainability, testability, and design quality. (See the Protocol sketch after this list.)
Any time committing changes is mentioned, use this commit flow.
Step-by-step procedure for building a Claude Code channel integration. Use when creating a new channel plugin that connects Claude Code to external systems via MCP, named pipes, webhooks, or chat platforms.
Build event-driven hooks in Claude Code for validation, setup, and automation. Use when you need to validate inputs, check environment state, or automate tasks at specific lifecycle events. (See the hook sketch after this list.)
Build custom slash commands in Claude Code with YAML frontmatter, permissions, and best practices. Use when creating automation workflows, project-specific commands, or standardizing repetitive tasks.
Use when merging a stack of stacked PRs with jj. Triggered by phrases like "merge the PR stack", "merge the stacked PRs", "land these PRs", or "merge the stack into main".
Modal.com knowledge.
My planning skill, with additional instructions. Always use when entering plan mode.
Use when writing Python code. Can also be used for code review.
Use when addressing PR review comments on stacked jj branches. Triggered by phrases like "address review comments", "fix PR feedback", "respond to review on stacked branch", or "update the PR with reviewer suggestions".
Use when writing a class with encapsulated logic that interfaces with an external system (logging, APIs, etc.). (See the client sketch below.)
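The TDD skill above generates tests before any implementation exists. A minimal sketch of that ordering with pytest, using a hypothetical `slugify` function that is not part of the plugin:

```python
# test_slugify.py -- written FIRST; `slugify` does not exist yet.
import pytest

from slugs import slugify  # hypothetical module under test


def test_lowercases_and_hyphenates() -> None:
    assert slugify("Hello World") == "hello-world"


def test_rejects_empty_input() -> None:
    with pytest.raises(ValueError):
        slugify("")
```

Only once these tests fail for the right reason is the minimal implementation written:

```python
# slugs.py -- the least code that makes the tests above pass.
def slugify(text: str) -> str:
    if not text:
        raise ValueError("text must be non-empty")
    return "-".join(text.lower().split())
```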
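The Python type-safety skill targets 3.12+ conventions. A sketch of the style it reviews for (the names are illustrative): PEP 695 `type` aliases, inline generic parameters, and `X | None` unions:

```python
# Python 3.12+ typing conventions (illustrative names).
type UserId = int  # PEP 695 type alias


class Repository[T]:  # inline generic parameter, no TypeVar boilerplate
    def __init__(self) -> None:
        self._items: dict[UserId, T] = {}

    def get(self, user_id: UserId) -> T | None:  # union syntax, not Optional[T]
        return self._items.get(user_id)
```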
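The SOLID review skill checks, among other things, for dependency inversion. A hypothetical sketch: the service depends on a `Protocol` rather than a concrete transport, so tests can inject a fake and new channels need no changes to the service:

```python
from typing import Protocol


class Notifier(Protocol):
    """Abstraction the high-level service depends on (DIP)."""

    def send(self, message: str) -> None: ...


class OrderService:
    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier  # any Notifier works: email, SMS, a test fake

    def place_order(self, order_id: str) -> None:
        self._notifier.send(f"order {order_id} placed")
```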
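The hooks skill builds event-driven validation. A sketch of a PreToolUse hook script, assuming the documented hook contract (the tool call arrives as JSON on stdin; exit code 2 blocks the call and returns stderr to Claude):

```python
#!/usr/bin/env python3
# PreToolUse hook: block obviously destructive Bash commands.
import json
import sys

event = json.load(sys.stdin)  # {"tool_name": ..., "tool_input": ...}
command = event.get("tool_input", {}).get("command", "")

if event.get("tool_name") == "Bash" and "rm -rf" in command:
    print("Blocked: destructive command detected.", file=sys.stderr)
    sys.exit(2)  # blocking exit code: Claude sees the stderr message

sys.exit(0)  # allow the tool call
```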
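The last item describes encapsulating external-system access behind one class. A hypothetical sketch: a thin client that keeps transport and logging details behind a narrow interface:

```python
import logging
from urllib.request import urlopen

logger = logging.getLogger(__name__)


class WeatherClient:
    """Encapsulates a (hypothetical) weather API behind one method."""

    def __init__(self, base_url: str) -> None:
        self._base_url = base_url

    def current_conditions(self, city: str) -> str:
        url = f"{self._base_url}/weather?city={city}"
        logger.info("GET %s", url)  # logging lives here, not in callers
        with urlopen(url) as response:
            return response.read().decode("utf-8")
```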
faire is a Claude Plugin Marketplace of tested and evaluated software development plugins that help Claude create high quality software.
> [!IMPORTANT]
> Anything post-1.0 is tested and evaluated. Anything 0.x is in active development.
See this section for more details on the testing and evaluation strategy.
Add the faire marketplace to your Claude Code configuration:
```
/plugin marketplace add jack-michaud/faire
```
Then install the faire plugin:
```
/plugin install faire@faire
```
Or browse and install interactively:
```
/plugin
```
Configure your `.claude/settings.json` to automatically add the marketplace:
```json
{
  "extraKnownMarketplaces": {
    "faire": {
      "source": {
        "source": "github",
        "repo": "jack-michaud/faire"
      }
    }
  }
}
```
> [!NOTE]
> "this is a gamechanger, trust me bro"

Creating evaluations to systematically measure the performance of AI systems is how we stay objective about the real impact of AI tools. I'm inspired by people and teams like:
- Cognition (which created Devin) and their eval for Devin
- ARC Prize Foundation
- spences10 (who created svelte-claude-skills with an eval for hooks)
These people and teams are data-driven and transparent about the quality of AI systems.
I will fill this out as I become more opinionated about this.
I'm currently working on a Python service-writing skill with an eval here.
MIT License - see the LICENSE file for details
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
Meta-skills infrastructure for Claude Code plugin ecosystem - skill authoring, hook development, modular design patterns, and evaluation frameworks
Skill evaluation and benchmarking - test skill effectiveness with behavioral eval cases, grade results, and track quality improvements
Comprehensive skills library for Claude Code: planning, design, TDD, debugging, collaboration patterns, and proven techniques
Create, test, measure, and iteratively improve Claude Code skills with category-aware design, gotchas-driven development, progressive disclosure coaching, and automated description optimization.
Battle-tested skills, agents, and commands for Claude Code — TDD workflows, systematic debugging, code review, and parallel task execution.