Define automated quality gates that enforce standards without manual review. Use when setting up CI/CD checks and blocking criteria.
From code-review-leadership. Install with `npx claudepluginhub sethdford/claude-skills --plugin tech-lead-code-review`. This skill uses the workspace's default tool permissions.
Build automated gates that catch issues before review, enabling faster async review cycles.
You are helping a tech lead design CI/CD quality gates. If you have access to project structure, tech stack, or known defects, use them to ground gate recommendations.
Key principles:
- Define gate categories: decide which classes of check run on every PR (e.g. linting, test coverage, build time, secret scanning, dependency audits, bundle size).
- For each gate, set thresholds: specify the level at which a gate blocks the merge versus the level at which it only warns or advises.
- Create bypass criteria: emergency/hotfix PRs can bypass some gates if explicitly labeled. Document which gates can be bypassed and who approves the bypass.
- Test the gates: run them against your main branch. If they fail there, the gates are broken; fix the gates before enforcing them.
- Make gates visible: surface results in PR comments, not just build status. Developers should see exactly why a gate failed.
- Measure false positives: if more than 20% of gate failures are overridden, the gate is miscalibrated; lower the threshold or remove the gate.
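The principles above can be sketched as a small gate runner. This is a minimal illustration, not a real CI integration: the `Gate` type, the `hotfix` label name, and the shape of the `pr` dict are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Gate:
    name: str
    check: Callable[[dict], bool]  # returns True when the gate passes
    blocking: bool = True          # False -> advisory only, never blocks merge
    bypassable: bool = False       # may be skipped on explicitly labeled hotfix PRs

def run_gates(gates: list[Gate], pr: dict) -> tuple[bool, list[str]]:
    """Evaluate all gates; return (mergeable, report lines for a PR comment)."""
    hotfix = "hotfix" in pr.get("labels", [])
    mergeable, report = True, []
    for g in gates:
        if hotfix and g.bypassable:
            report.append(f"SKIP  {g.name} (hotfix bypass)")
            continue
        ok = g.check(pr)
        status = "PASS" if ok else ("FAIL" if g.blocking else "WARN")
        report.append(f"{status}  {g.name}")
        if not ok and g.blocking:
            mergeable = False  # one blocking failure is enough to stop the merge
    return mergeable, report

# Hypothetical usage with the coverage and linter thresholds from this skill:
gates = [
    Gate("coverage-delta", lambda pr: pr["coverage_delta"] >= -2.0),
    Gate("lint-warnings", lambda pr: pr["lint_warnings"] <= 5, blocking=False),
]
ok, report = run_gates(gates, {"labels": [], "coverage_delta": -3.5, "lint_warnings": 2})
```

Emitting the `report` lines as a PR comment (rather than only a red build badge) is what makes the "gates visible" principle concrete.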
Example gate configuration:
- Linter: block on errors; warn on more than 5 warnings
- Coverage: block if coverage drops more than 2%; advisory if it drops 0-2%
- Build time: advisory if more than 10% slower; block if more than 30% slower
- Secrets: block on any leaked secrets
- Dependencies: block on high/critical CVEs
- Performance: advisory if the bundle exceeds 1 MB; block if it exceeds 2 MB
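Several of the gates above share one pattern: a soft threshold that produces an advisory and a hard threshold that blocks. A minimal sketch of that two-level classifier, with the thresholds taken from the example configuration (the `classify` helper itself is hypothetical):

```python
def classify(value: float, advisory_over: float, block_over: float) -> str:
    """Map a metric to a gate outcome: 'block' past the hard limit,
    'advisory' past the soft limit, otherwise 'pass'. Strict comparison
    matches thresholds phrased as 'more than X'."""
    if value > block_over:
        return "block"
    if value > advisory_over:
        return "advisory"
    return "pass"

# Thresholds from the example configuration above:
coverage = classify(1.5, advisory_over=0.0, block_over=2.0)    # coverage drop in %
build = classify(35.0, advisory_over=10.0, block_over=30.0)    # build slowdown in %
bundle = classify(0.8, advisory_over=1.0, block_over=2.0)      # bundle size in MB
```

Binary gates like secret leaks or critical CVEs skip the advisory tier entirely: any hit blocks.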