By doctormozg
Essential development agents, rules, and skills for everyday coding workflows
npx claudepluginhub doctormozg/claude-pipelines --plugin mz-dev-base

Pipeline-only collector agent dispatched by branch-reviewer. Runs all git metadata commands, scans for prior review reports, and writes a structured branch_info.md artifact. All git content is wrapped in untrusted-content delimiters. Never user-triggered.
Use this agent when the user asks to review the current git branch, check what has changed since main/master, or audit all commits on a feature branch before opening a PR. Triggers include "review my branch", "what have I changed locally", "audit this branch", or "check the branch before I push". Examples: <example> Context: User has finished a feature on a local branch and wants a deep review before opening a PR. user: "Review my current branch before I push — I want to make sure I didn't miss anything." assistant: "I'll use the branch-reviewer agent to diff against the base, analyze each changed file, and produce a structured report in .mz/reviews/." <commentary> Explicit branch-scope review request — branch-reviewer's primary trigger. </commentary> </example> <example> Context: User is on a long-running feature branch with many commits and wants architecture-level feedback across the whole branch. user: "This feature branch has grown big — can you go over all of it and tell me what needs fixing?" assistant: "I'll use the branch-reviewer agent to walk every changed file, check architecture and test coverage, and save a report." <commentary> Whole-branch audit on a large change set — exactly what branch-reviewer is for, as opposed to single-file code-reviewer. </commentary> </example> <example> Context: Assistant has just finished a multi-commit implementation on a feature branch and the user is about to push. user: "Looks good, let's push it" assistant: "Before pushing, I'll use the branch-reviewer agent to do a full branch review against main and flag anything worth fixing first." <commentary> Proactive trigger: meaningful branch completion, reviewer should run before history leaves the local machine. </commentary> </example>
Pipeline-only lens agent dispatched by branch-reviewer. Scans a PR/branch diff exclusively for architecture and design-pattern defects: SOLID violations, excessive coupling, misplaced responsibilities, broken abstractions, god classes/functions, layering violations, pattern drift vs. existing similar code. Never user-triggered. When NOT to use: do not dispatch standalone, do not dispatch from pr-reviewer, do not dispatch for correctness, security, performance, or maintainability concerns — those belong to other code-lens-* agents.
Pipeline-only lens agent dispatched by branch-reviewer. Scans a PR/branch diff exclusively for bugs and correctness defects: logic errors, off-by-one, null/None access, race conditions, resource leaks, unhandled error paths, copy-paste errors. Never user-triggered. When NOT to use: do not dispatch standalone, do not dispatch from pr-reviewer, do not dispatch for style or architecture concerns — those belong to the other code-lens-* agents.
Pipeline-only lens agent dispatched by branch-reviewer. Scans a PR/branch diff exclusively for code-quality and maintainability defects: unclear naming, misleading comments, excessive complexity, hard-to-test code, magic numbers/strings, dead code, unused imports, duplication, insufficient typing. Never user-triggered. When NOT to use: do not dispatch standalone, do not dispatch from pr-reviewer, do not dispatch for correctness, security, architecture, or performance concerns — those belong to other code-lens-* agents.
Pipeline-only lens agent dispatched by branch-reviewer. Scans a PR/branch diff exclusively for performance and efficiency defects: N+1 queries, unnecessary allocations in hot paths, blocking I/O in async context, missing indexes on new DB queries, O(n^2) where O(n) is achievable, memory churn, inefficient serialization. Never user-triggered. When NOT to use: do not dispatch standalone, do not dispatch from pr-reviewer, do not dispatch for correctness, security, architecture, or maintainability concerns — those belong to other code-lens-* agents.
Pipeline-only lens agent dispatched by branch-reviewer. Scans a PR/branch diff exclusively for security and privacy defects: injection (SQL, command, XSS, prototype), auth bypass, secret exposure in logs/errors/responses, unsafe deserialization, weak crypto, SSRF, IDOR, path traversal, open redirects, rate-limit gaps, privacy leaks. Never user-triggered. When NOT to use: do not dispatch standalone, do not dispatch from pr-reviewer, do not dispatch for correctness, architecture, performance, or maintainability concerns — those belong to other code-lens-* agents.
Use this agent when the user asks for a code review, wants staged changes or specific files checked for bugs, security, or maintainability issues, or says things like "review this", "check this diff", or "look over my changes before I commit". Examples: <example> Context: User has just finished implementing a new authentication handler and wants it checked before committing. user: "I just wrote the login handler in auth.ts — can you review it before I push?" assistant: "I'll use the code-reviewer agent to check the handler for auth bugs, input validation, and security issues." <commentary> Explicit review request on freshly written security-sensitive code — code-reviewer's primary trigger. </commentary> </example> <example> Context: User has staged a multi-file change and wants a sanity pass before opening a PR. user: "Here's the diff for the caching refactor, check it for anything I missed" assistant: "I'll use the code-reviewer agent to scan the staged diff for logic errors, missing error handling, and maintainability concerns." <commentary> User explicitly asked for a diff review — hands off to code-reviewer rather than attempting ad-hoc inspection. </commentary> </example> <example> Context: The main assistant has just finished implementing a non-trivial feature that touches parsing and error handling. user: "Great, now commit it" assistant: "Before committing, I'll use the code-reviewer agent to check the new parsing code for edge cases and error-handling gaps." <commentary> Proactive trigger: non-trivial change just landed, reviewer should run before a commit lands in history. </commentary> </example>
Use this agent when the user asks to research a topic, investigate a library or API, look up current best practices, or gather evidence-based findings from multiple sources. Triggers include "research X", "look into Y", "what's the current state of Z", or "find out how <tool/library/protocol> works". Examples: <example> Context: User is evaluating a new library before adopting it and wants grounded information, not a guess. user: "Can you research the current state of Polars vs Pandas for large-dataset analytics?" assistant: "I'll use the domain-researcher agent to gather findings from official docs, benchmarks, and release notes, and synthesize a comparison." <commentary> Explicit multi-source research request — domain-researcher's primary trigger. </commentary> </example> <example> Context: User is about to implement a protocol and wants authoritative references first. user: "Look into how OAuth 2.1 differs from 2.0 before I start writing the client" assistant: "I'll use the domain-researcher agent to pull the differences from the IETF draft and official vendor guides." <commentary> Domain research against official sources — matches the domain-researcher's source-hierarchy discipline. </commentary> </example> <example> Context: Assistant has been asked to design a feature that touches an unfamiliar specialized domain (e.g., specific ML model architecture). user: "Add support for Qwen3-Omni to the model loader" assistant: "Before designing the loader change, I'll use the domain-researcher agent to gather Qwen3-Omni's architecture details and loading requirements from the official model card and repo." <commentary> Proactive trigger: specialized domain where guessing is risky, domain-researcher should verify before implementation begins. </commentary> </example>
Pipeline-only collector agent dispatched by pr-scanner. Queries GitHub for all PRs needing attention across multiple repos, deduplicates, checks review and comment status, and writes a structured pr_data.md artifact. PR titles and bodies are wrapped in untrusted-content delimiters. Never user-triggered.
Pipeline-only collector agent dispatched by pr-scanner. Given a single PR, gathers lightweight metadata (title, author, age, labels), complexity signals (files changed, ±LOC), and answered/unanswered state, then classifies into a triage tier and writes a scored artifact. Never user-triggered.
Use this agent when the user asks to review a specific GitHub pull request by URL or `owner/repo#number`, wants a second-pair-of-eyes read on a PR, or needs existing PR feedback cross-referenced with a fresh review. Triggers include "review this PR", "take a look at <github PR URL>", or "what do you think about <owner/repo>#<n>". Examples: <example> Context: User pastes a GitHub PR URL and asks for a thorough review. user: "Review https://github.com/acme/widgets/pull/482 for me" assistant: "I'll use the pr-reviewer agent to check out the PR in an isolated worktree, analyze the diff, cross-reference existing comments, and save a report in .mz/reviews/." <commentary> Direct GitHub PR URL with explicit review request — pr-reviewer's primary trigger (not branch-reviewer, which handles local branches). </commentary> </example> <example> Context: User references a PR via short form and wants an independent read because CoPilot/Codacy already weighed in. user: "acme/widgets#482 — CoPilot already reviewed it but I want an independent look" assistant: "I'll use the pr-reviewer agent to do an independent review and cross-reference CoPilot's existing comments." <commentary> PR review with cross-reference discipline — pr-reviewer's strength over a generic read. </commentary> </example> <example> Context: User asks about a re-review after new commits have been pushed to a PR they previously reviewed. user: "I already reviewed PR #482 last week but new commits landed — can you re-review?" assistant: "I'll use the pr-reviewer agent to re-read the PR, compare against the prior review report in .mz/reviews/, and flag what's new." <commentary> Re-review request on a specific PR URL — pr-reviewer handles the history and prior-report diffing. </commentary> </example>
Use this agent when the user wants to triage or scan multiple GitHub repositories for PRs needing attention — PRs where review is requested, the user is mentioned or assigned, or the user's own PRs have changes requested. Triggers include "check my PRs", "scan these repos for stuff I need to review", "what PRs need my attention today", or "do my daily PR review". Examples: <example> Context: Start of the workday — user wants to know which PRs across multiple repos need their attention. user: "Scan acme/widgets and acme/gears for anything needing my review today" assistant: "I'll use the pr-scanner agent to triage both repos, fan out per-PR haiku scorers in parallel, and save a prioritized report to .mz/reviews/." <commentary> Multi-repo PR triage across a provided list — pr-scanner's primary trigger. Deep reviews are a separate skill (/review-pr); pr-scanner produces a lightweight prioritized inbox only. </commentary> </example> <example> Context: User wants a morning digest of open PRs across their team's repos. user: "What open PRs need my attention this morning?" assistant: "I'll use the pr-scanner agent to list the repos to scan and produce a prioritized digest of PRs needing your review, reply, or action." <commentary> Daily triage framing — pr-scanner classifies every PR into one of three tiers (directly-asked-unanswered, review-requested, informational) and ranks them. </commentary> </example> <example> Context: User asks to check for unanswered feedback on their own PRs across multiple repos. user: "Check my own PRs in acme/widgets, acme/gears, acme/cogs for unanswered review comments" assistant: "I'll use the pr-scanner agent to scan all three repos for changes-requested and unanswered-discussion items on your PRs." <commentary> Cross-repo scan of user's own PRs for unanswered threads — exactly the "Tier 1 — directly asked, unanswered" signal the scanner surfaces first. </commentary> </example>
Use this agent when the user asks to write or improve technical documentation — README files, API references, SDK docs, integration guides, tutorials, getting-started material, or troubleshooting/FAQ sections. Triggers include "write a README for X", "document this module", "draft API docs", or "write a getting-started guide". Examples: <example> Context: User has finished a library and now needs a README for it. user: "Write a README for the http-retry package" assistant: "I'll use the technical-writer agent to read the package source, build an example-driven README, and verify snippets against the actual code." <commentary> Explicit README request grounded in existing code — technical-writer's primary trigger. </commentary> </example> <example> Context: User wants an API reference generated from actual source, not guessed. user: "Draft API docs for the public functions in auth/" assistant: "I'll use the technical-writer agent to enumerate public symbols, write signatures, parameters, and examples from real usage in tests." <commentary> Source-grounded API reference — technical-writer reads code first, then writes, avoiding fabrication. </commentary> </example> <example> Context: Existing README is stale and the user wants it rewritten to match current code. user: "The README is out of date compared to what the CLI actually does now — rewrite it" assistant: "I'll use the technical-writer agent to read the current CLI, compare against the stale README, and rewrite with verified examples." <commentary> Doc reconciliation against code as source of truth — matches technical-writer's verify-before-writing discipline. </commentary> </example>
ALWAYS invoke when the user wants exhaustive multi-source research on any topic. Triggers: "research X", "deep dive into", "comprehensive analysis of", "what is the state of". Provide a topic as the argument.
ALWAYS invoke when the user wants to install development rules for a project or globally. Triggers: "init rules", "set up rules", "install rules", "configure coding rules", "onboard project".
ALWAYS invoke when the user wants to review all changes on the current git branch. Triggers: "review branch", "review my changes", "check my branch", "what did I change", "branch review".
ALWAYS invoke when the user wants to review a GitHub pull request. Triggers: "review PR", "review pull request", "check this PR", "PR review". Provide a PR URL or owner/repo#number as the argument.
ALWAYS invoke when the user wants to check which PRs need attention across repositories. Triggers: "scan PRs", "check PRs", "what PRs need attention", "PR inbox", "daily PR report".
ALWAYS invoke when the user asks which mozg skill/plugin to use, or says "what plugins do I have", "which pipeline fits", or "route this". Maps task phrases to skills across mz-dev-base, mz-dev-pipe, mz-dev-hooks, mz-memory, mz-biz-outreach, mz-creative.
ALWAYS invoke when the user asks to "write a skill", "author a new skill", "create a SKILL.md", or "add a skill to the plugin". Enforces SKILL_GUIDELINES.md via a TDD-style authoring workflow.
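For orientation, a skill file is a SKILL.md with YAML frontmatter describing when to invoke it, followed by the workflow instructions. The sketch below is illustrative only — the skill name, triggers, and steps are invented for this example, not taken from this plugin:

```markdown
---
name: summarize-diff
description: Summarize a staged git diff into a short changelog entry. Use when
  the user asks to "summarize my changes" or "draft a changelog line".
---

# Summarize Diff

1. Run `git diff --staged` and read the output.
2. Group changes by file and intent (fix, feature, refactor).
3. Emit one changelog bullet per group, imperative mood, under 80 characters.
```

The authoring skill above enforces this structure (frontmatter first, triggers in the description, numbered workflow steps) via its TDD-style workflow.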
Complete developer toolkit for Claude Code
Meta-prompting and spec-driven development system for Claude Code. Productivity framework for structured AI-assisted development.
Reliable automation, in-depth debugging, and performance analysis in Chrome using Chrome DevTools and Puppeteer
Manus-style persistent markdown files for planning, progress tracking, and knowledge storage. Works with Claude Code, Kiro, Clawd CLI, Gemini CLI, Cursor, Continue, Hermes, and 17+ AI coding assistants. Now with Arabic, German, Spanish, and Chinese (Simplified & Traditional) support.
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.
Multi-agent plugins for Claude Code. Autonomous development pipelines, code review, deep research, and business intelligence — all as slash commands.
> [!TIP]
> Skills compose. Most real tasks span 3–4 commands across multiple plugins — chain them end-to-end instead of reaching for any single skill in isolation. Click any workflow below to see the flow.
**mz-dev-base + mz-dev-pipe**

```mermaid
flowchart LR
    A["/deep-research"]:::base --> B["/build"]:::pipe
    B --> C["/verify"]:::pipe
    C --> D["/review-branch"]:::base
    classDef base fill:#ddf4ff,stroke:#0969da,color:#0969da
    classDef pipe fill:#dafbe1,stroke:#1a7f37,color:#1a7f37
```

- /deep-research — survey trade-offs, cite references, pick an approach
- /build — research → plan (approval gate) → parallel code → review → test
- /verify — lint + types + tests + coverage, diagnose failures
- /review-branch — independent final pass before opening the PR

**mz-dev-pipe**

```mermaid
flowchart LR
    A["/debug"]:::pipe --> B["/blast-radius"]:::pipe
    B --> C["/investigate"]:::pipe
    C --> D["/polish"]:::pipe
    classDef pipe fill:#dafbe1,stroke:#1a7f37,color:#1a7f37
```

- /debug — reproduce → regression test → diagnose → fix → verify
- /blast-radius — map every caller, test, and type at risk of the patch
- /investigate — prove or disprove "does the same race exist on the refund path" without touching code
- /polish — fix-test-review loop until the criteria are met

**mz-dev-base + mz-dev-pipe**

```mermaid
flowchart LR
    A["/init-rules"]:::base --> B["/explain"]:::pipe
    B --> C["/audit"]:::pipe
    C --> D["/blast-radius"]:::pipe
    classDef base fill:#ddf4ff,stroke:#0969da,color:#0969da
    classDef pipe fill:#dafbe1,stroke:#1a7f37,color:#1a7f37
```

- /init-rules — install curated coding rules for the detected languages
- /explain — multi-angle walkthrough with Mermaid diagrams of the module
- /audit — ranked list of landmines and tech-debt hotspots
- /blast-radius — know what shatters before the first refactor commit

**mz-dev-pipe + mz-dev-base**

```mermaid
flowchart LR
    A["/audit"]:::pipe --> B["/investigate"]:::pipe
    B --> C["/debug"]:::pipe
    C --> D["/review-branch"]:::base
    classDef base fill:#ddf4ff,stroke:#0969da,color:#0969da
    classDef pipe fill:#dafbe1,stroke:#1a7f37,color:#1a7f37
```

- /audit — prioritized vulnerabilities with file:line evidence
- /investigate — verify top critical findings, drop false positives
- /debug — TDD-style fix anchored on a regression test
- /review-branch — catch any fallout the fix introduced elsewhere

**mz-creative + mz-design + mz-dev-pipe** (3 plugins)

```mermaid
flowchart LR
    A["/brainstorm"]:::creative --> B["/expert"]:::creative
    B --> C["/design-document"]:::design
    C --> D["/build"]:::pipe
    classDef creative fill:#fbefff,stroke:#8250df,color:#8250df
    classDef design fill:#ffebf0,stroke:#bf3989,color:#bf3989
    classDef pipe fill:#dafbe1,stroke:#1a7f37,color:#1a7f37
```

- /brainstorm — 5 lens personas → parallel ideation → vote-to-consensus
- /expert — Delphi critique (3 rounds) with dedicated report writer
- /design-document — draft → 4-critic loop → WCAG 2.2 AA hard gate
- /build — plan → code → review → test against the approved spec

**mz-biz-outreach + mz-dev-base + mz-creative** (3 plugins)

```mermaid
flowchart LR
    A["/lead-gen"]:::outreach --> B["/deep-research"]:::base
    B --> C["/brainstorm"]:::creative
    classDef outreach fill:#fff1e5,stroke:#bc4c00,color:#bc4c00
    classDef base fill:#ddf4ff,stroke:#0969da,color:#0969da
    classDef creative fill:#fbefff,stroke:#8250df,color:#8250df
```

- /lead-gen — strategy → source research → scout → enrich → score → report
- /deep-research — domain context to ground outreach in current regulation
- /brainstorm — multi-lens positioning ideas tied back to the lead-gen report