Set up automated code review tools (linters, SAST, dependency scanning) to reduce manual review burden. Use when configuring CI/CD toolchains.
From code-review-leadership. Install: npx claudepluginhub sethdford/claude-skills --plugin tech-lead-code-review. This skill uses the workspace's default tool permissions.
Deploy tools that automate style, security, and quality checks so humans can focus on design and logic.
You are helping a tech lead configure automated review tooling. If you have language/framework specifics or known pain points, use them.
Key principles:
Choose core tools by language (for example, Ruff for Python, ESLint plus Prettier for JavaScript/TypeScript, golangci-lint for Go)
Configure autofix: apply formatting automatically (no human decision); fail the build on linter errors (require a human fix)
Add PR automation: Auto-comment on failures with actionable messages and links to docs
Set up for local development: Developers should run tools locally before pushing (pre-commit hooks or pre-push)
Don't over-instrument: more tools mean slower CI and more noise. Start with 3-4 core tools; add more only when they solve a real problem
Measure tool accuracy: track false positives (cases where the tool failed the check but the PR was approved and merged anyway). If the false-positive rate exceeds 30%, the tool is miscalibrated or unnecessary
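The false-positive measurement in the last principle can be sketched as follows. Only the 30% threshold comes from the text above; the CheckRun record shape, the function name, and the sample "sast" data are hypothetical and would come from your CI provider's API in practice.

```python
from dataclasses import dataclass

@dataclass
class CheckRun:
    tool: str             # which CI check produced this result (hypothetical field)
    failed: bool          # the tool reported a failure on this PR
    merged_anyway: bool   # reviewers approved and merged despite the failure

def false_positive_rate(runs, tool):
    """Fraction of a tool's failures that humans overrode by merging anyway."""
    failures = [r for r in runs if r.tool == tool and r.failed]
    if not failures:
        return 0.0
    return sum(r.merged_anyway for r in failures) / len(failures)

# Illustrative data, not real results: 2 of 3 SAST failures were overridden.
runs = [
    CheckRun("sast", failed=True,  merged_anyway=True),
    CheckRun("sast", failed=True,  merged_anyway=False),
    CheckRun("sast", failed=True,  merged_anyway=True),
    CheckRun("sast", failed=False, merged_anyway=False),
]

rate = false_positive_rate(runs, "sast")
if rate > 0.30:  # threshold from the principle above
    print(f"sast false-positive rate {rate:.0%}: recalibrate or drop the tool")
```

The key design point is the definition of a false positive: not what the tool claims, but whether humans routinely override it, which is the signal that the tool is not pulling its weight.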
Example minimal setup:
Stage 1 (lint, < 2min): Linter, formatter, secrets scan
Stage 2 (test, < 10min): Unit tests, coverage check
Stage 3 (security, < 5min): Dependency audit, SAST
→ All must pass to enable merging
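The staged gate above can be sketched as a small runner, assuming each check is a shell command with a per-stage time budget. The stage names and budgets come from the outline; the placeholder commands (`true`) stand in for real tools, and the runner itself is an illustrative simplification of what a CI system does.

```python
import subprocess
import time

# (stage name, budget in seconds, checks) per the outline above;
# "true" is a POSIX no-op placeholder for each real tool.
STAGES = [
    ("lint",     120, [["true"], ["true"], ["true"]]),  # linter, formatter, secrets scan
    ("test",     600, [["true"], ["true"]]),            # unit tests, coverage check
    ("security", 300, [["true"], ["true"]]),            # dependency audit, SAST
]

def run_pipeline(stages=STAGES):
    """Run stages in order; any failure or budget overrun blocks merging."""
    for name, budget_s, commands in stages:
        start = time.monotonic()
        for cmd in commands:
            remaining = budget_s - (time.monotonic() - start)
            result = subprocess.run(cmd, timeout=remaining)
            if result.returncode != 0:
                return False  # one failed check blocks the merge
    return True  # all stages passed: merge is allowed
```

Ordering the cheap lint stage first means most bad pushes fail within the two-minute budget, before any test or security minutes are spent.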