Step-by-step tutorials and worked examples for common Claude Code workflows — setup, debugging, code review, agent teams, hooks, memory, optimization, and CI/CD
Step-by-step walkthroughs of real Claude Code workflows. Each tutorial shows the exact commands, expected output, and decision points.
## Tutorial 1: Project setup

Goal: Configure Claude Code from scratch for a TypeScript/React project.
Steps:
/init
Claude analyzes your codebase and generates CLAUDE.md with build commands, test instructions, and detected conventions.
Verify the generated commands work: `pnpm install`, `pnpm test`, `npx tsc --noEmit`.
Create a rules directory: `mkdir -p .claude/rules`
Create code-style.md with paths frontmatter for **/*.ts, **/*.tsx.
Create testing.md with paths frontmatter for **/*.test.*.
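As a sketch, a scoped rule file could be created like this; the `paths` frontmatter keys the rule to matching globs, and the style guidance itself is a placeholder:

```shell
# Hypothetical contents for .claude/rules/code-style.md; the paths
# frontmatter scopes the rule to TypeScript sources only.
mkdir -p .claude/rules
cat > .claude/rules/code-style.md <<'EOF'
---
paths:
  - "**/*.ts"
  - "**/*.tsx"
---
Prefer named exports. Avoid `any`; rely on strict null checks.
EOF
```

Create `testing.md` the same way, with `paths` scoped to `**/*.test.*`.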
/cc-mcp add context7 # Library documentation
/cc-mcp add perplexity # Web research
/cc-hooks create auto-format # Format on file write
/cc-hooks create security-guard # Block dangerous commands
/cc-setup --audit
Check the audit score. Fix any warnings.
Expected outcome: Audit score > 80, all checks green.
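After these steps, the project contains roughly this layout (only files created in this tutorial are shown; hook and MCP configuration is managed by the `/cc-*` commands):

```
CLAUDE.md            # generated by /init
.claude/
  rules/
    code-style.md    # scoped to **/*.ts, **/*.tsx
    testing.md       # scoped to **/*.test.*
```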
## Tutorial 2: Security hooks

Goal: Create a hook that blocks commits containing secrets.
Steps:
Create `.claude/hooks/scripts/secret-scanner.sh`:

```bash
#!/usr/bin/env bash
set -euo pipefail

input=$(cat)
tool_name=$(echo "$input" | jq -r '.tool_name // empty')

# Only Bash tool calls are inspected; everything else passes through.
if [[ "$tool_name" != "Bash" ]]; then
  echo '{"decision":"passthrough"}'
  exit 0
fi

command=$(echo "$input" | jq -r '.tool_input.command // empty')

# Block git commits that might contain secrets
if echo "$command" | grep -qE 'git (add|commit)'; then
  # Check staged files for secret patterns; guard against an empty file
  # list so xargs never invokes grep without arguments.
  staged=$(git diff --cached --name-only 2>/dev/null || true)
  if [ -n "$staged" ] && echo "$staged" \
      | xargs grep -lE '(API_KEY|SECRET|PASSWORD|TOKEN)=[^$]' >/dev/null 2>&1; then
    echo '{"decision":"block","reason":"Staged files contain potential secrets. Remove them before committing."}'
    exit 0
  fi
fi

echo '{"decision":"passthrough"}'
```
Make it executable: `chmod +x .claude/hooks/scripts/secret-scanner.sh`
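Note what the secret pattern does and does not flag: `[^$]` requires the character after `=` to be something other than `$`, so literal values are caught while environment-variable references pass. A quick check of that behavior:

```shell
# A literal value after '=' is flagged; an env-var reference is not,
# because [^$] rejects '$' as the first character of the value.
pattern='(API_KEY|SECRET|PASSWORD|TOKEN)=[^$]'
echo 'API_KEY=abc123'     | grep -qE "$pattern" && echo flagged
echo 'API_KEY=$OTHER_VAR' | grep -qE "$pattern" || echo clean
```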
Register it in `.claude/settings.json` under `hooks.PreToolUse`; each matcher entry carries a `hooks` array of commands:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/scripts/secret-scanner.sh"
          }
        ]
      }
    ]
  }
}
```
/cc-hooks test secret-scanner
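`/cc-hooks test` drives the script with synthetic events; you can also inspect a payload by hand. The two fields the script reads are `tool_name` and `tool_input.command` (the sample payload below is illustrative):

```shell
# Extract the fields the hook inspects from a sample PreToolUse payload
payload='{"tool_name":"Bash","tool_input":{"command":"git commit -m wip"}}'
echo "$payload" | jq -r '.tool_name'             # → Bash
echo "$payload" | jq -r '.tool_input.command'    # → git commit -m wip
```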
Expected outcome: Hook blocks commits with hardcoded secrets, passes clean commits.
## Tutorial 3: Code review council

Goal: Review a PR using the council with expert-panel protocol.
Steps:
/cc-council src/auth/ --preset security --depth deep
Sample output:

```
Security:    7.2/10 (2 HIGH, 1 MEDIUM findings)
Quality:     8.5/10 (1 MEDIUM finding)
Performance: 9.1/10 (no significant findings)
```

Fix the findings, then re-check only the files that changed:

/cc-council src/auth/ --preset security --changed-only
Expected outcome: Security findings identified and resolved, final score > 8.0.
## Tutorial 4: Custom agents

Goal: Build a specialized agent for researching library APIs.
Steps:
/cc-agent create library-researcher
Edit `.claude/agents/library-researcher.md`; the markdown body after the frontmatter becomes the agent's instructions:

```markdown
---
name: library-researcher
description: Researches library APIs and produces usage guides
model: claude-haiku-4-5-20251001
tools:
  - Read
  - Grep
  - Glob
  - WebFetch
---

Research the requested library's API. Return a concise usage guide
with code examples.
```
Invoke it by name from a normal session prompt:

Use the library-researcher agent to research the latest Prisma ORM query patterns
Expected outcome: Agent returns a focused summary of Prisma query patterns with code examples.
## Tutorial 5: Context optimization

Goal: Complete a large task without hitting context limits.
Steps:
/cc-budget audit
See how much context is consumed by CLAUDE.md, rules, skills, MCP schemas.
Check what is loaded with /mcp, then delegate exploration to a subagent:

Research the auth module architecture using a Haiku subagent

The subagent explores files in its own context and returns only a summary.
/compact Focus on the auth refactor: keep the API contract changes, test plan, and file paths
/model claude-haiku-4-5-20251001 # For file searches
/model claude-sonnet-4-6 # For implementation
/cost
/cc-perf tips
Expected outcome: Complex task completed within budget, no auto-compact interruptions.
## Tutorial 6: Self-healing

Goal: Stop Claude from repeating the same error across sessions.
Steps:
Review the current lessons file:

/cc-help lessons-learned

A typical entry is logged as NEEDS_FIX when the error first occurs and flipped to RESOLVED once a fix is recorded:

```markdown
### Error: Read failure
- Tool: Read
- Input: /path/to/directory
- Error: EISDIR: illegal operation on a directory
- Status: RESOLVED
- Fix: Use `ls` or `Glob` for directories, `Read` for files only
- Prevention: Always check if path is a file before using Read tool
```
Promote patterns to rules: if the same error appears 3+ times, create a new rule in `.claude/rules/` with the prevention strategy.
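A sketch of promoting the EISDIR lesson above into a standing rule; the file name is an assumption, and the guidance comes from the Fix/Prevention lines:

```shell
# Hypothetical rule promoted from the lessons-learned entry
mkdir -p .claude/rules
cat > .claude/rules/read-tool-usage.md <<'EOF'
Use `ls` or `Glob` to inspect directories; the Read tool is for files only.
Confirm a path is a file before calling Read.
EOF
```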
Verify in the next session: the fix is loaded as a rule, and Claude reads it and avoids the mistake.
Expected outcome: Error never recurs. Lessons-learned.md grows into a project-specific knowledge base.
## Tutorial 7: Persistent memory

Goal: Configure the three-tier memory system for cross-session learning.
Steps:
Tier 1: CLAUDE.md and rules. Already set up in Tutorial 1; these are your explicit, team-shared instructions.

Tier 2: Auto memory. Enable in settings (on by default since v2.1.59):

```json
{ "autoMemoryEnabled": true }
```
Claude automatically saves useful findings to ~/.claude/projects/<project>/memory/.
Remember that the API tests require a local Redis instance on port 6380
Claude saves this to auto memory. Next session, it knows.
/memory
Browse saved memories. Edit or delete as needed.
Tier 3 (optional): add a dedicated MCP memory server: /cc-mcp add memory
Expected outcome: Knowledge persists across sessions. Build commands, preferences, and project quirks are remembered.
## Tutorial 8: CI/CD integration

Goal: Set up Claude Code as an automated PR reviewer in CI.
Steps:
/cc-cicd generate github-actions --template pr-review
This writes `.github/workflows/claude-review.yml`:

```yaml
name: Claude Code Review
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          model: claude-haiku-4-5-20251001
          prompt: |
            Review this PR for security issues, code quality, and test coverage.
            Focus on: input validation, error handling, and edge cases.
            Output a structured review with severity ratings.
```
Add secrets: in GitHub repo settings → Secrets → add ANTHROPIC_API_KEY.

Test with a PR: create a test PR; the action runs and posts review comments.

Tune the prompt: adjust the review prompt based on results, and add project-specific guidance.
Expected outcome: Every PR gets an automated Claude review within minutes.
## Tutorial index

| Tutorial | Topic | Difficulty | Time |
|---|---|---|---|
| 1 | Project setup | Beginner | 10 min |
| 2 | Security hooks | Intermediate | 15 min |
| 3 | Code review council | Intermediate | 10 min |
| 4 | Custom agents | Intermediate | 15 min |
| 5 | Context optimization | Advanced | 20 min |
| 6 | Self-healing | Intermediate | 10 min |
| 7 | Persistent memory | Beginner | 10 min |
| 8 | CI/CD integration | Advanced | 20 min |