Workflow packs for the 7 most common Claude Code tasks — codebase exploration, bug fixing, safe refactoring, TDD, repo review before merge, CLAUDE.md generation, and migration planning. Each pack has a start prompt, verification steps, subagent opportunities, failure modes, and completion checklist.
From claude-code-expert. Install: npx claudepluginhub markus41/claude --plugin claude-code-expert
Seven battle-tested workflow packs that mirror the official Claude Code common-workflows guide. Each pack is a complete playbook, not generic advice.
When: First time in an unfamiliar repo, onboarding a new project, or before making large changes.
You are a senior engineer exploring this codebase for the first time.
Phase 1 — Structure mapping:
1. Read CLAUDE.md, README.md, and package.json/pyproject.toml
2. Map the top-level directory structure (2 levels deep)
3. Identify: entry points, main business logic, data layer, API layer, test layer
4. Identify: tech stack, framework version, package manager
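Phase 1 can be sketched in a few shell commands. This is a minimal sketch run against a throwaway sample repo (the /tmp path and file contents are illustrative; in practice run the find/detect steps at your real repo root):

```shell
# Throwaway sample repo standing in for the project being explored
rm -rf /tmp/explore-demo
mkdir -p /tmp/explore-demo/src/api /tmp/explore-demo/tests
cd /tmp/explore-demo
printf '{"name":"demo"}\n' > package.json

# Steps 1-2: map the top-level structure, two levels deep (directories only)
dirs=$(find . -maxdepth 2 -type d -not -path '*/.git*' -not -path '*/node_modules*')
echo "$dirs"

# Step 4: detect the manifest, and from it the ecosystem / package manager
manifest=""
for f in package.json pyproject.toml Cargo.toml go.mod; do
  if [ -f "$f" ]; then manifest="$f"; break; fi
done
echo "manifest: $manifest"
```

The manifest loop checks only the most common files; extend it for your stack (e.g. Gemfile, mix.exs).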
Phase 2 — Architecture extraction:
5. Find the 3 most important source files (highest import count or central routing)
6. Read each one and extract: purpose, key abstractions, dependencies
7. Map data flow: where does data enter, transform, and exit?
8. Identify patterns: are there service layers, repositories, DTOs, event buses?
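Step 5's "highest import count" heuristic can be approximated with grep. A sketch assuming JS-style relative imports (the regex is illustrative; adapt the pattern for Python `from x import`, Go import blocks, etc.):

```shell
# Sample repo: core.js is imported twice, util.js once
rm -rf /tmp/rank-demo
mkdir -p /tmp/rank-demo/src && cd /tmp/rank-demo
printf "import a from './core'\nimport b from './util'\n" > src/app.js
printf "import a from './core'\n" > src/api.js
printf "export const x = 1\n" > src/core.js

# Step 5: rank modules by how often they are imported
ranked=$(grep -rhoE "from '\./[a-z]+'" src | sort | uniq -c | sort -rn)
echo "$ranked"
```

The most-imported module lands on the first line, so the top few lines of `$ranked` are your candidate "most important files".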
Phase 3 — Output:
9. Write a 1-page architecture summary to .claude/context-snapshot.md with:
- Tech stack table
- 3-sentence architecture description
- Key file map (path → purpose, one line each)
- Entry points and their routes/triggers
- Known complexity hotspots (files > 300 lines or deeply nested logic)
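The "files > 300 lines" hotspot check is mechanical. A sketch on generated sample files (the `*.js` glob is an assumption; widen it to your source extensions):

```shell
rm -rf /tmp/hotspot-demo
mkdir -p /tmp/hotspot-demo/src && cd /tmp/hotspot-demo
seq 1 350 | sed 's/^/line /' > src/big.js     # a 350-line file
seq 1 40  | sed 's/^/line /' > src/small.js   # a 40-line file

# Complexity hotspots: source files longer than 300 lines
# (wc -l emits a trailing "total" row when given multiple files; filter it out)
hotspots=$(find src -name '*.js' -exec wc -l {} + | awk '$1 > 300 && $2 != "total" {print $2}')
echo "$hotspots"
```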
Artifacts: .claude/context-snapshot.md
Subagent opportunity: have a subagent map src/auth/ while you explore src/api/.
Completion checklist: .claude/context-snapshot.md written.
When: You have a stack trace, error message, or failing test and need to find and fix the root cause.
You are debugging the following error:
{PASTE ERROR TRACE HERE}
Evidence-based debugging protocol:
Phase 1 — Parse:
1. Extract: error type, message, file, line number, call stack
2. Identify the proximate cause (where it failed) vs. likely root cause (why)
Phase 2 — Locate:
3. Read the file at the error line + 20 lines of context
4. Trace the call stack: read each frame's function to understand data flow
5. Find where the problematic value was created or last mutated
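Step 3's "error line + 20 lines of context" reads naturally as a sed range. A sketch with a stand-in source file (file name and line number are hypothetical; substitute the values parsed from the trace):

```shell
rm -rf /tmp/debug-demo
mkdir -p /tmp/debug-demo && cd /tmp/debug-demo
seq 1 100 | sed 's/^/code line /' > app.py   # stand-in for the failing source file

file=app.py
line=57   # the line number from the stack trace

# Print the error line with 20 lines of context on each side
start=$((line > 20 ? line - 20 : 1))
context=$(sed -n "${start},$((line + 20))p" "$file")
echo "$context"
```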
Phase 3 — Hypothesize:
6. Form 2-3 specific hypotheses about root cause
7. For each hypothesis: identify a code path that would produce this error
8. Pick the most likely hypothesis based on evidence
Phase 4 — Fix:
9. Write the minimal fix for the chosen hypothesis
10. Explain why the fix works
11. Identify if there are other call sites with the same bug
Phase 5 — Verify:
12. Run the failing test: {test_cmd}
13. Run the full test suite to check for regressions: {test_cmd}
14. If tests pass: commit with message "fix({scope}): {one-line description}"
Failure mode: fixing only one call site. After the fix, run Grep "same_pattern" --all to find sibling occurrences of the same bug.
When: You need to restructure code without changing behavior. Extract a function, rename a module, split a large file, introduce an abstraction.
You are a refactoring engineer. Goal: {DESCRIBE REFACTOR GOAL}.
Safety protocol:
Phase 1 — Baseline:
1. Identify the exact scope of change (files, functions, types affected)
2. Run the test suite and confirm it passes: {test_cmd}
3. Measure test coverage for the affected code (check the coverage report)
4. If coverage < 80% for the target code: write missing tests first
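Step 4's coverage gate can be scripted against whatever report your tool emits. A sketch assuming a pytest-cov-style terminal table (the file path and numbers are fabricated for the demo; jest's coverage-summary.json or an lcov report would need a different parse):

```shell
rm -rf /tmp/cov-demo
mkdir -p /tmp/cov-demo && cd /tmp/cov-demo
# Stand-in for `pytest --cov` output: name, statements, missed, percent
cat > cov.txt <<'EOF'
src/target.py      120     30    75%
TOTAL              400     60    85%
EOF

# Gate: refuse to refactor code under 80% coverage
pct=$(awk '$1 == "src/target.py" {gsub("%","",$4); print $4}' cov.txt)
if [ "$pct" -lt 80 ]; then
  verdict="write missing tests first"
else
  verdict="safe to refactor"
fi
echo "$verdict"
```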
Phase 2 — Characterize:
5. List every call site for the code being changed: Grep for function/class name
6. List every import of the modules being changed
7. Identify any reflection, dynamic dispatch, or string-based lookups that might miss a grep
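Steps 5 and 7 together: grep finds static call sites, but a second pass for reflection keywords catches lookups where the symbol name never appears literally. A sketch with hypothetical Python files (`charge` / `billing` are made-up names):

```shell
rm -rf /tmp/callsite-demo
mkdir -p /tmp/callsite-demo/src && cd /tmp/callsite-demo
printf 'from billing import charge\ncharge(42)\n' > src/a.py
# String-built name: a grep for "charge" never sees this call site
printf 'fn = getattr(billing, "ch" + "arge")\n' > src/b.py

# Step 5: static call sites of the symbol being changed
sites=$(grep -rln 'charge' src)
# Step 7: dynamic-dispatch patterns a symbol grep would miss
dynamic=$(grep -rlE 'getattr|globals\(\)\[|importlib' src)
echo "sites: $sites"
echo "dynamic: $dynamic"
```

Any file in the second list needs a manual read before you trust the rename.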
Phase 3 — Refactor (small steps):
8. Make one structural change at a time
9. After each change: compile/typecheck and run affected tests
10. Never have the codebase in a broken state between steps
11. Commit each step separately with message "refactor({scope}): {step description}"
Phase 4 — Verify:
12. Run the full test suite
13. Check that all call sites compile correctly
14. Run linter/formatter on changed files
15. Read the final version: does it actually read better than before?
When: Building a new feature or fixing a bug with a test-first approach.
You are implementing {FEATURE DESCRIPTION} using TDD.
Red-Green-Refactor protocol:
Phase 1 — RED (write failing test):
1. Write the smallest possible test that captures the desired behavior
2. Run it: {test_cmd} — confirm it FAILS
3. If it passes without implementation, the test is wrong — fix the test
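The red step is about trusting the failure, not the test. A self-contained sketch of red-then-green with a hypothetical `slugify` shell function (names and behavior invented for illustration):

```shell
rm -rf /tmp/tdd-demo
mkdir -p /tmp/tdd-demo && cd /tmp/tdd-demo

# The test, written before any implementation exists
cat > test.sh <<'EOF'
. ./slugify.sh
[ "$(slugify 'Hello World')" = "hello-world" ]
EOF

# RED: empty implementation file -> the test must fail
: > slugify.sh
if sh test.sh 2>/dev/null; then red="unexpected pass"; else red="fails as expected"; fi

# GREEN: the simplest code that makes it pass
cat > slugify.sh <<'EOF'
slugify() { echo "$1" | tr 'A-Z' 'a-z' | tr ' ' '-'; }
EOF
if sh test.sh; then green="passes"; else green="still failing"; fi
echo "RED: $red / GREEN: $green"
```

If the RED run had passed, step 3 applies: the test itself is wrong and must be fixed before writing any implementation.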
Phase 2 — GREEN (minimal implementation):
4. Write the simplest code that makes the test pass
5. No gold-plating: no error handling, no edge cases, no abstractions yet
6. Run tests: confirm it PASSES
7. Commit: "test({scope}): add test for {behavior}" + "feat({scope}): minimal implementation"
Phase 3 — REFACTOR:
8. Now improve the implementation: add error handling, extract functions, add types
9. After each change: run tests
10. When satisfied: commit "refactor({scope}): clean up {thing}"
Phase 4 — Edge cases:
11. List 3-5 edge cases: empty input, null, max values, concurrent access, etc.
12. Write a test for each
13. Make each pass with minimal changes
14. Commit per case
Phase 5 — Integration:
15. Write an integration test that exercises the full flow
16. Run the complete suite: {test_cmd}
When: Reviewing a PR or branch before merging to main. Catch issues a human reviewer might miss.
You are a senior code reviewer. Review branch {BRANCH_NAME} before merge.
Phase 1 — Diff analysis:
1. git diff main...{branch} --stat — get the scope of change
2. git diff main...{branch} — read the full diff
3. Classify each changed file: new feature / bug fix / refactor / config / test
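Steps 1-2 sketched on a throwaway repo so the diff commands are concrete (in practice you run them against your real branch; the branch name `feature` is illustrative):

```shell
demo=/tmp/review-demo
rm -rf "$demo" && mkdir -p "$demo" && cd "$demo"
git init -q . && git checkout -qb main
git config user.email reviewer@example.com && git config user.name reviewer
echo base > app.txt && git add . && git commit -qm init
git checkout -qb feature
echo change >> app.txt && git commit -qam "feat: change"

# Step 1: scope of change (triple-dot diffs against the merge base, not main's tip)
stat=$(git diff main...feature --stat)
echo "$stat"
# Step 2 would follow with: git diff main...feature
```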
Phase 2 — Systematic review:
For each changed source file:
a. Does the change do what the PR description says?
b. Are there any obvious bugs (null deref, off-by-one, race condition)?
c. Is there missing error handling at I/O or network boundaries?
d. Are there hardcoded values that should be config?
e. Are there any security concerns (injection, auth bypass, secret exposure)?
f. Is the change covered by tests?
Phase 3 — Cross-cutting checks:
5. Are there any new dependencies? Check their license and security posture.
6. Does the diff include any changes to .env files or secrets? BLOCK if yes.
7. Does the migration (if any) have a rollback plan?
8. Does the API change break existing consumers?
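Check 6 (block on secrets) can be automated as a pattern scan over added lines of the diff. The patterns below are a deliberately small illustrative set; a real scanner such as gitleaks or trufflehog covers far more:

```shell
rm -rf /tmp/secret-demo
mkdir -p /tmp/secret-demo && cd /tmp/secret-demo
# Stand-in for `git diff main...branch` output
cat > diff.txt <<'EOF'
+DB_HOST=localhost
+AWS_SECRET_ACCESS_KEY=abc123supersecret
EOF

# Scan only added lines; flag common secret-bearing variable names
leaks=$(grep -E '^\+' diff.txt | grep -cE 'SECRET|TOKEN|PASSWORD|PRIVATE_KEY')
if [ "$leaks" -gt 0 ]; then verdict="BLOCK"; else verdict="ok"; fi
echo "$verdict"
```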
Phase 4 — Report:
9. Output a structured report with sections:
BLOCK: must fix before merge
REQUEST: should fix before merge
SUGGEST: optional improvements
PRAISE: good patterns worth keeping
Use /cc-council for adversarial multi-perspective review (security + performance + architecture).
When: A project has no CLAUDE.md and you need to bootstrap one from the existing codebase.
You are a Claude Code architect. Generate a CLAUDE.md for this repository.
Phase 1 — Detect:
1. Read: package.json / pyproject.toml / Cargo.toml / go.mod (whichever exists)
2. Check for: .eslintrc, .prettierrc, .editorconfig, tsconfig.json, pyproject.toml [tool.black]
3. Find test runner: jest.config*, vitest.config*, pytest.ini, conftest.py
4. Find CI: .github/workflows/, .gitlab-ci.yml, Jenkinsfile
5. Read README.md (first 50 lines)
Phase 2 — Extract:
6. Install command (from package.json scripts.install or README)
7. Build command (from scripts.build or Makefile)
8. Test command (from scripts.test or CI config)
9. Lint command (from scripts.lint or linter config)
10. Key directories (src/, lib/, app/, tests/, docs/)
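Step 8 (and its siblings) amount to pulling script values out of the manifest. A crude sed sketch against a fabricated package.json (jq gives a cleaner parse if it is available):

```shell
rm -rf /tmp/claudemd-demo
mkdir -p /tmp/claudemd-demo && cd /tmp/claudemd-demo
cat > package.json <<'EOF'
{
  "scripts": {
    "build": "tsc -p .",
    "test": "vitest run",
    "lint": "eslint src"
  }
}
EOF

# Step 8: extract the test command from scripts.test
test_cmd=$(sed -n 's/.*"test": *"\([^"]*\)".*/\1/p' package.json)
echo "test command: $test_cmd"
```

The same sed pattern with "build" or "lint" substituted covers steps 7 and 9.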
Phase 3 — Write CLAUDE.md:
Output a CLAUDE.md under 150 lines that is a routing file, not a knowledge dump:
- Build & Test section (all commands)
- Tech stack table (5-10 rows max)
- Key paths (3-6 entries)
- Architecture (1 paragraph, repo-specific — not generic)
- Decision trees (3-5 entries pointing to specific directories)
- Conventions (extracted from linter configs, not invented)
- Don't Touch (lock files, generated dirs, build output)
Write to: CLAUDE.md
Run /cc-setup --audit after generating CLAUDE.md to get a setup score.
When: About to make large structural changes (database schema, API refactor, framework upgrade, module reorganization). Plan before touching code.
You are a migration architect. Before any code changes, produce a complete migration plan for: {DESCRIBE MIGRATION}.
Phase 1 — Inventory:
1. Map everything affected: files, tables, API endpoints, consumers, configs
2. List all dependencies (what depends on what you're changing)
3. Identify the blast radius: small (1-3 files) / medium (4-20 files) / large (20+)
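Step 3's classification is a file count over the affected symbol. A sketch with a fabricated symbol name (`old_api` is hypothetical; substitute whatever you are migrating away from):

```shell
rm -rf /tmp/blast-demo
mkdir -p /tmp/blast-demo/src && cd /tmp/blast-demo
for i in 1 2 3 4 5; do echo 'uses old_api()' > "src/f$i.py"; done
echo 'unrelated' > src/g.py

# Step 3: blast radius = number of files touching the migrating symbol
n=$(grep -rl 'old_api' src | wc -l | tr -d ' ')
if [ "$n" -le 3 ]; then
  radius=small
elif [ "$n" -le 20 ]; then
  radius=medium
else
  radius=large
fi
echo "blast radius: $radius ($n files)"
```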
Phase 2 — Risk assessment:
4. Identify data-loss risks (especially for database migrations)
5. Identify breaking changes (API changes that affect consumers)
6. Identify rollback complexity: can this be undone in < 30 minutes?
Phase 3 — Sequencing:
7. Break the migration into atomic phases that each leave the system working
8. Phase ordering rule: never leave the system broken between phases
9. Identify which phases can be deployed independently vs. require coordinated deployment
Phase 4 — Write the plan:
Output to .claude/migration-plan.md:
- Summary (1 paragraph)
- Blast radius assessment
- Phase breakdown (each phase: what changes, how to verify, how to rollback)
- Data migration scripts (if applicable)
- Rollback procedure
- Validation checklist
STOP here. Do not write any code yet. Wait for plan approval.
Artifacts: .claude/migration-plan.md
Completion checklist: .claude/migration-plan.md written and reviewed.