Personal developer toolkit - Git worktree management, PR workflows, code quality, issue triage, plugin packaging, web research, humanizer, UI shell migration, and codebase navigation
npx claudepluginhub ai-builder-team/ai-builder-plugin-marketplace --plugin m

Use this agent when a senior engineer has identified a bug or issue in the codebase and provided comments describing the problem. The agent will investigate the issue, propose a fix plan, wait for approval, and then implement the fix.

Examples:

<example>
Context: A senior engineer has left comments about a race condition in the WebSocket handler.
user: "There's a race condition in the WebSocket message handler in back-end/app/services/ws_handler.py - when two clients send messages simultaneously, the session state gets corrupted because we're not locking around the state update. See lines 145-167."
assistant: "I'll launch the bug-fix-engineer agent to investigate this race condition and propose a fix."
<commentary>
Since the user has described a specific bug with engineer-level detail, use the Task tool to launch the bug-fix-engineer agent to investigate, propose a fix plan, and implement it upon approval.
</commentary>
</example>

<example>
Context: A senior engineer has identified a data integrity issue.
user: "The renewal calculation in renewals/calculator.py is wrong - it's using the contract start date instead of the renewal date when computing the pro-rata amount. This causes incorrect invoices for mid-cycle renewals. The issue is in calculate_prorate() around line 89."
assistant: "I'll use the bug-fix-engineer agent to analyze this calculation bug and propose a fix."
<commentary>
The user has provided detailed engineer comments about a specific bug. Use the Task tool to launch the bug-fix-engineer agent to trace through the code, understand the issue, propose a fix, and implement it after approval.
</commentary>
</example>

<example>
Context: A senior engineer reports an issue found during code review.
user: "Found a bug in the frontend - the usePortfolioData hook in klair-client/src/hooks/usePortfolioData.ts has a stale closure issue. The filter callback on line 52 captures the old filterState but doesn't include it in the useCallback dependency array. This means filters don't actually apply until the user triggers a re-render."
assistant: "I'll launch the bug-fix-engineer agent to investigate this stale closure issue and propose a fix."
<commentary>
Since the user is reporting a specific frontend bug with detailed engineer analysis, use the Task tool to launch the bug-fix-engineer agent to examine the hook, understand the closure issue, and propose a targeted fix.
</commentary>
</example>
Use this agent when you need to narrow down the files and code snippets involved in a particular bug or behavior, especially when it stems from changes made during the current session. It runs a verbatim diff command and surfaces the important parts: `git diff --staged` for staged changes, `git diff main` for comparing branches, or `git diff` for unstaged changes.
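As a minimal, self-contained illustration of the three diff scopes this agent inspects (run in a throwaway temp repo; file names are purely illustrative):

```shell
# Demo of the three diff scopes, in a fresh temporary repository.
cd "$(mktemp -d)"
git init -q -b main
echo one > f.txt
git add f.txt
git -c user.email=dev@example.com -c user.name=dev commit -qm init

echo two >> f.txt
git add f.txt            # the "two" line is now staged
echo three >> f.txt      # the "three" line is unstaged

git diff --staged        # shows only the staged "+two" hunk
git diff                 # shows only the unstaged "+three" hunk
```

The agent runs commands like these verbatim against your real repository and reports back just the hunks relevant to the bug.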
Use this agent when the user wants to compare two documents to understand their differences, similarities, and relationship. This includes identifying whether documents are derived from the same base, finding what one document covers that the other doesn't, spotting differences of opinion, and summarizing commonalities.

Examples:

- User: "Compare these two CLAUDE.md files and tell me what's different"
  Assistant: "I'll use the doc-comparator agent to analyze both documents and provide a detailed comparison."
  [Launches doc-comparator agent via Task tool with both file paths]

- User: "I have two API design specs - can you figure out what the second one is missing compared to the first?"
  Assistant: "Let me launch the doc-comparator agent to do a thorough comparison of the two specs."
  [Launches doc-comparator agent via Task tool]

- User: "These two architecture docs look similar but I'm not sure what changed. Can you check?"
  Assistant: "I'll use the doc-comparator agent to determine if these are derived from the same base and identify all the changes."
  [Launches doc-comparator agent via Task tool]

- User: "I found two competing design proposals - help me understand where they agree and disagree"
  Assistant: "I'll launch the doc-comparator agent to analyze both proposals and surface agreements and disagreements."
  [Launches doc-comparator agent via Task tool]
Use this agent when the user wants to investigate potential bugs, issues, or code quality concerns across the codebase. This agent launches parallel exploration tasks to trace through files, functions, and patterns involved in a reported issue, then evaluates whether each concern is valid or invalid. It marks findings with ✅ for valid issues and ❌ for invalid/non-issues. If provided a file as input, the prompt will specify which lines contain the bug reports to evaluate — the agent must read the file at those lines to understand each concern, then investigate and use the Edit tool to mark up the file in-place with QC_BOT_COMMENTS: annotations. IMPORTANT: When launching this agent with a file path, your Task prompt MUST instruct it to edit the file in-place using the Edit tool — do NOT tell it to 'return' findings as text. Use this agent when the user provides a bug report, a list of concerns, a file with comments/annotations about potential issues, or asks to validate whether something is actually broken.

Examples:

<example>
Context: The user reports a potential bug in the renewal processing logic.
user: "I think there's a bug in how we calculate renewal dates - it seems to skip February in leap years"
assistant: "Let me launch the qc-bugs agent to investigate this potential bug across the codebase and determine if it's valid."
<commentary>
Since the user reported a potential bug, use the Task tool to launch the qc-bugs agent to investigate the issue, trace the relevant code paths, and evaluate whether the bug is real.
</commentary>
</example>

<example>
Context: The user provides a file with review comments marking potential issues.
user: "Can you check if the issues flagged in this code review are actually valid? Here's the file: services/billing.py"
assistant: "I'll use the qc-bugs agent to investigate each flagged issue and mark them as valid or invalid directly in the comments."
<commentary>
Since the user wants to validate review comments in a specific file, use the Task tool to launch the qc-bugs agent. It will investigate each comment, updating each in place with ✅ or ❌ markers.
</commentary>
</example>

<example>
Context: The user has a list of suspected issues they want validated.
user: "Here are 5 things I think might be wrong with the auth flow: 1) tokens aren't refreshed, 2) CORS is misconfigured, 3) rate limiting is missing, 4) sessions leak, 5) passwords stored in plaintext"
assistant: "I'll launch the qc-bugs agent to investigate all 5 concerns in parallel and give you a validated summary."
<commentary>
Since the user provided a list of potential issues, use the Task tool to launch the qc-bugs agent which will spawn parallel exploration tasks for each concern and produce a summary with ✅/❌ markers.
</commentary>
</example>

<example>
Context: The user asks about a runtime error they encountered.
user: "We're getting a KeyError in the dashboard aggregation - is this a real bug or just bad test data?"
assistant: "Let me use the qc-bugs agent to trace through the aggregation code paths and determine the root cause."
<commentary>
Since the user is asking about a potential bug, use the Task tool to launch the qc-bugs agent to investigate the code paths, data flow, and determine validity.
</commentary>
</example>
Use this agent when you have received reviewer comments on a PR and want to independently validate whether each comment is technically correct, materially significant, and worth acting on before responding or making changes. This agent triages reviewer feedback to separate genuine issues from noise.

Examples:

- User: "I just got review comments on my PR #1030, can you check if they're valid?"
  Assistant: "Let me use the PR Comment QC agent to independently validate each reviewer comment against the actual diff."
  [Launches pr-comment-qc agent via Task tool to analyze the PR diff and reviewer comments]

- User: "Someone left 15 comments on my PR. Can you tell me which ones actually matter?"
  Assistant: "I'll use the PR Comment QC agent to triage all 15 comments and categorize them by significance."
  [Launches pr-comment-qc agent via Task tool]

- User: "I think some of these review comments are wrong. Can you verify?"
  Assistant: "Let me launch the PR Comment QC agent to independently verify each comment's technical correctness against the actual code changes."
  [Launches pr-comment-qc agent via Task tool]

- Context: User has just received a code review with mixed-quality feedback including nits, duplicates, and potentially incorrect observations.
  User: "Got review feedback on PR 945, please QC it"
  Assistant: "I'll use the PR Comment QC agent to validate each comment and recommend how to handle them."
  [Launches pr-comment-qc agent via Task tool]
Skill invocation agent for fetching GitHub PR comments via the /pr-reader skill workflow. Invokes the Skill tool with skill='m:pr-reader' — does NOT fetch PR data itself. Pass a PR number, branch name, or nothing for current branch. Saves output to .scratch/outputs/ and returns a summary. By default only shows unresolved threads (passes --unresolved). Ask for 'all comments' or 'include resolved' to see everything.

Examples:

<example>
Context: The user wants to review feedback on a PR before making changes.
user: "Can you pull the comments from PR #932?"
assistant: "I'll fetch and organize all comments from PR #932."
<commentary>
Launch the alt agent which invokes the /pr-reader skill to fetch comments, save to file, and return summary.
</commentary>
</example>

<example>
Context: The user wants to see reviewer feedback on the current branch's PR.
user: "What feedback did reviewers leave on my PR?"
assistant: "Let me pull all review comments and threads from your PR."
<commentary>
Launch the alt agent with no arguments so it auto-detects the current branch's PR.
</commentary>
</example>

<example>
Context: The user wants to see all comments including resolved threads.
user: "Pull all comments from PR #1790 including resolved"
assistant: "I'll fetch all comments including resolved threads from PR #1790."
<commentary>
Launch the alt agent with --all flag since the user asked for resolved threads too.
</commentary>
</example>
Use this agent to visually verify frontend UI after theme, styling, or component changes. Caller provides the route(s) to check — the agent opens each route in Playwright, screenshots light and dark modes, saves all evidence to .scratch/ui-checks/, captures accessibility snapshots, checks console for errors, and returns a structured report. Assumes dev server is running at localhost:3001.
Create Claude Code agents using production-tested patterns and techniques
Relentlessly interview the user about a plan until shared understanding is reached
Git worktree management with Git Town sync. Create worktrees, sync branches, push, propose PRs. Start/stop frontend+backend dev servers per worktree.
Parse a PR_comments file and launch parallel m:issue-qc agents, one per unresolved issue, giving each agent the PR_comments file path and the line numbers relevant to the issue it will investigate
Parse QC-annotated PR comments and launch bugfix agents in phased batches grouped by file domain
Load a feature's FEATURE.md and all its specs into context
Plan migration of a legacy screen to the new-ui shell system
Package skills, agents, commands into a Claude Code plugin from natural language instructions
Fetch and display unresolved PR comments by default. Pass a PR number, branch name, or nothing for current branch. Add --all to include resolved threads.
Format, lint, typecheck, commit, and push changed files
Execute a command exactly as written and return full output
Parse and audit Claude Code session transcripts
Create new Claude Code skills with proper structure and documentation
Create new Claude Code skills with proper structure and documentation (lazy-loaded steps)
Assess a skill for portability and apply generalization patterns
Sync main + all worktree branches using Git Town. Handles conflicts, iterates worktrees, reports results.
Manage Claude Code sessions across 8, 10, or 12 tmux panes on a single workstation
Web search, research, and page scraping via Perplexity + Firecrawl
In Claude Code, run:
/plugin marketplace add https://github.com/AI-Builder-Team/claude-code-plugin-marketplace
/plugin install m@klair-marketplace
Skills are now available as /m:gtr, /m:push, /m:pr-reader, etc.
Have skills, agents, or commands in ~/.claude/ you want to package? Use the plugin packager with plain English:
/m:plugin-packager package my ~/.claude skills and agents into a plugin called my-tools and register it in this marketplace
Cherry-pick specific components:
/m:plugin-packager package just the review and deploy skills from ~/.claude into my-tools plugin, register in this marketplace
The packager parses your request, scans the source directory, copies components into plugins/my-tools/, generates the plugin manifest, and registers it in the marketplace. It auto-detects version bumps when updating existing plugins. Commit and push to distribute.
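Under those assumptions (a plugin named `my-tools` packaging one skill and one agent; all file names here are illustrative), the generated layout would look roughly like:

```
plugins/my-tools/
├── .claude-plugin/
│   └── plugin.json      # generated manifest: name, version, description
├── skills/
│   └── review/
│       └── SKILL.md
└── agents/
    └── deploy.md
```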
Bump the version in the plugin's .claude-plugin/plugin.json, commit, and push. Users pull updates with:
/plugin marketplace update
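The bump itself can be scripted. A minimal sketch using `jq` to increment the patch version, assuming a manifest shaped like the sample below (the file contents and path are illustrative, not the packager's actual output):

```shell
# Create a sample manifest to operate on (illustrative contents).
cat > plugin.json <<'EOF'
{ "name": "my-tools", "version": "1.2.3" }
EOF

# Increment the patch component (1.2.3 -> 1.2.4) and write the file back.
jq '.version = (.version | split(".") | .[2] = ((.[2]|tonumber)+1|tostring) | join("."))' \
  plugin.json > plugin.json.tmp && mv plugin.json.tmp plugin.json

jq -r .version plugin.json   # prints 1.2.4
```

Run this against the real `.claude-plugin/plugin.json`, then commit and push as described above.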
Test a plugin directly without installing through the marketplace:
claude --plugin-dir ./plugins/m
Tools to maintain and improve CLAUDE.md files - audit quality, capture session learnings, and keep project memory current.
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Manus-style persistent markdown files for planning, progress tracking, and knowledge storage. Works with Claude Code, Kiro, Clawd CLI, Gemini CLI, Cursor, Continue, Hermes, and 17+ AI coding assistants. Now with Arabic, German, Spanish, and Chinese (Simplified & Traditional) support.
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
Team-oriented workflow plugin with role agents, 27 specialist agents, ECC-inspired commands, layered rules, and hooks skeleton.
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.