Complete guide for writing Claude Code and SDK hooks with security-first design.
Triggers: hook creation, hook writing, PreToolUse, PostToolUse, UserPromptSubmit, tool validation, logging hooks, context injection, workflow automation.
Use when: creating new hooks for tool validation, logging operations for audit, injecting context before prompts, enforcing project-specific workflows, preventing dangerous operations in production.
DO NOT use when: the logic belongs in a core skill (use Skills instead), a complex multi-step workflow is needed (use Agents instead), or the behavior is better suited to a custom tool.
Use this skill BEFORE writing any hook. Check even if unsure.
/plugin marketplace add athola/claude-night-market
/plugin install abstract@claude-night-market

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled resources:
- modules/hook-types.md
- modules/performance-guidelines.md
- modules/scope-selection.md
- modules/sdk-callbacks.md
- modules/security-patterns.md
- modules/testing-hooks.md
- scripts/README.md
- scripts/hook_validator.py

Hooks are event interceptors that allow you to extend Claude Code and Claude Agent SDK behavior by executing custom logic at specific points in the agent lifecycle. They enable validation before tool use, logging after actions, context injection, workflow automation, and security enforcement.
This skill teaches you how to write effective, secure, and performant hooks for both declarative JSON (Claude Code) and programmatic Python (Claude Agent SDK) use cases.
Create a simple logging hook in .claude/settings.json:
{
"hooks": {
"PostToolUse": [
{
"matcher": { "toolName": "Bash" },
"hooks": [{
"type": "command",
"command": "echo \"$(date): Executed $CLAUDE_TOOL_NAME\" >> ~/.claude/audit.log"
}]
}
]
}
}
This logs every Bash command execution with a timestamp.
Create a validation hook using the SDK:
from claude_agent_sdk import AgentHooks
class ValidationHooks(AgentHooks):
async def on_pre_tool_use(self, tool_name: str, tool_input: dict) -> dict | None:
"""Validate tool inputs before execution."""
if tool_name == "Bash":
command = tool_input.get("command", "")
if "rm -rf /" in command:
raise ValueError("Dangerous command blocked by hook")
# Return None to proceed unchanged, or modified dict to transform
return None
Quick reference for all supported hook events:
| Event | Trigger Point | Parameters | Common Use Cases |
|---|---|---|---|
| PreToolUse | Before tool execution | tool_name, tool_input | Validation, filtering, input transformation |
| PostToolUse | After tool execution | tool_name, tool_input, tool_output | Logging, metrics, output transformation |
| UserPromptSubmit | User sends message | message | Context injection, content filtering |
| Stop | Agent completes | reason, result | Final cleanup, summary reports |
| SubagentStop | Subagent completes | subagent_id, result | Result processing, aggregation |
| PreCompact | Before context compact | context_size | State preservation, checkpointing |
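As a sketch of how these events could map onto SDK callbacks: the `on_pre_tool_use`, `on_post_tool_use`, and `on_user_prompt_submit` names come from the examples in this guide; the remaining method names are assumptions that simply follow the same naming convention.

```python
from claude_agent_sdk import AgentHooks

class LifecycleHooks(AgentHooks):
    """One callback per event. Names beyond the three used elsewhere in this
    guide (on_stop, on_subagent_stop, on_pre_compact) are assumed, not confirmed."""

    async def on_pre_tool_use(self, tool_name: str, tool_input: dict) -> dict | None:
        return None  # None = proceed with the input unchanged

    async def on_post_tool_use(self, tool_name: str, tool_input: dict, tool_output: str) -> str | None:
        return None  # None = keep the output as-is

    async def on_user_prompt_submit(self, message: str) -> str | None:
        return None  # None = pass the message through unmodified

    async def on_stop(self, reason: str, result: str) -> None:
        pass  # final cleanup, summary reports

    async def on_subagent_stop(self, subagent_id: str, result: str) -> None:
        pass  # result processing, aggregation

    async def on_pre_compact(self, context_size: int) -> None:
        pass  # state preservation, checkpointing
```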
Declarative configuration in global ~/.claude/settings.json, project .claude/settings.json, or plugin hooks/hooks.json:
{
"hooks": {
"PreToolUse": [
{
"matcher": { "toolName": "Edit", "inputPattern": "production" },
"hooks": [{
"type": "command",
"command": "echo 'WARNING: Editing production file' >&2"
}]
}
]
}
}
Pros: Simple, no code required, easy to version control. Cons: Limited logic capabilities, shell commands only.
Programmatic callbacks using AgentHooks base class:
from claude_agent_sdk import AgentHooks
class MyHooks(AgentHooks):
async def on_pre_tool_use(self, tool_name: str, tool_input: dict) -> dict | None:
# Complex validation logic
if self._is_dangerous(tool_input):
raise ValueError("Operation blocked")
return None # or return modified input
Pros: Full Python capabilities, complex logic, state management. Cons: Requires Python, more complex setup.
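A minimal sketch of what the `_is_dangerous` check referenced above might look like; the patterns and the serialize-then-scan approach are illustrative assumptions, not part of the SDK.

```python
import re
from claude_agent_sdk import AgentHooks

class MyHooks(AgentHooks):
    # Illustrative patterns only; tune them for your environment
    DANGEROUS_PATTERNS = [
        r"rm\s+-rf\s+/",               # recursive delete from root
        r":\(\)\s*\{\s*:\|:&\s*\};:",  # classic fork bomb
        r"curl[^|]*\|\s*(ba)?sh",      # piping a remote script into a shell
    ]

    def _is_dangerous(self, tool_input: dict) -> bool:
        """Return True if any known-dangerous pattern appears in the serialized input."""
        text = str(tool_input)
        return any(re.search(p, text, re.IGNORECASE) for p in self.DANGEROUS_PATTERNS)
```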
import re
from claude_agent_sdk import AgentHooks
class SecureLoggingHooks(AgentHooks):
# Patterns that might contain secrets
SECRET_PATTERNS = [
r'api[_-]?key',
r'password',
r'token',
r'secret',
r'credential',
r'auth',
]
def _sanitize_output(self, text: str) -> str:
"""Remove potential secrets from log output."""
for pattern in self.SECRET_PATTERNS:
text = re.sub(
rf'({pattern}["\s:=]+)([^\s,}}]+)',
r'\1***REDACTED***',
text,
flags=re.IGNORECASE
)
return text
async def on_post_tool_use(
self, tool_name: str, tool_input: dict, tool_output: str
) -> str | None:
"""Log tool use with sanitization."""
safe_output = self._sanitize_output(tool_output)
# Log safe_output...
return None # Don't modify output
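A quick illustration of the sanitizer above, with hypothetical values and assuming the class can be instantiated directly:

```python
hooks = SecureLoggingHooks()  # assumes the base class needs no constructor arguments
raw = 'Deploy finished. api_key="sk-test-1234567890" user=alice'
print(hooks._sanitize_output(raw))
# Given the regex above, this prints: Deploy finished. api_key="***REDACTED*** user=alice
```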
See modules/security-patterns.md for comprehensive security guidance.
Use async/await properly and don't block the event loop:
import asyncio
import time
from claude_agent_sdk import AgentHooks
class EfficientHooks(AgentHooks):
def __init__(self):
self._log_queue = asyncio.Queue()
self._log_task = None
async def on_pre_tool_use(self, tool_name: str, tool_input: dict) -> dict | None:
# Quick validation only
if not self._is_valid_input(tool_input):
raise ValueError("Invalid input")
return None
async def on_post_tool_use(
self, tool_name: str, tool_input: dict, tool_output: str
) -> str | None:
# Queue log entry without blocking
await self._log_queue.put({
'tool': tool_name,
'timestamp': time.time()
})
return None
def _is_valid_input(self, tool_input: dict) -> bool:
"""Fast validation check."""
# Simple checks only, < 10ms
return len(str(tool_input)) < 1_000_000
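The queue above still needs a consumer. One possible background writer is sketched below; the JSON-lines log path and the start method are assumptions, and the `__init__` from the class above is repeated so the sketch stands on its own.

```python
import asyncio
import json
from claude_agent_sdk import AgentHooks

class EfficientHooks(AgentHooks):
    def __init__(self):
        self._log_queue = asyncio.Queue()
        self._log_task = None

    async def start(self) -> None:
        """Start the background writer once, e.g. at session start."""
        if self._log_task is None:
            self._log_task = asyncio.create_task(self._drain_logs())

    async def _drain_logs(self) -> None:
        """Move log entries from the queue to disk, off the tool-execution path."""
        while True:
            entry = await self._log_queue.get()
            # Blocking file I/O goes to a worker thread so the event loop stays free
            await asyncio.to_thread(self._append_line, entry)
            self._log_queue.task_done()

    def _append_line(self, entry: dict) -> None:
        with open("hook-metrics.jsonl", "a") as f:  # assumed log location
            f.write(json.dumps(entry) + "\n")
```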
See modules/performance-guidelines.md for detailed optimization techniques.
Choose the right location for your hooks based on audience and purpose:
Is this hook part of a plugin's core functionality?
├─ YES → Plugin hooks (hooks/hooks.json in plugin)
└─ NO ↓
Should all team members on this project have this hook?
├─ YES → Project hooks (.claude/settings.json)
└─ NO ↓
Should this hook apply to all my Claude sessions?
├─ YES → Global hooks (~/.claude/settings.json)
└─ NO → Reconsider if you need a hook at all
| Scope | Location | Audience | Committed? | Example Use Case |
|---|---|---|---|---|
| Plugin | hooks/hooks.json | Plugin users | Yes (with plugin) | YAML validation in YAML plugin |
| Project | .claude/settings.json | Team members | Yes (in repo) | Block production config edits |
| Global | ~/.claude/settings.json | Only you | Never | Personal audit logging |
See modules/scope-selection.md for comprehensive scope decision guidance.
Block dangerous operations before execution:
async def on_pre_tool_use(self, tool_name: str, tool_input: dict) -> dict | None:
if tool_name == "Bash":
command = tool_input.get("command", "")
# Block dangerous patterns
if any(pattern in command for pattern in ["rm -rf /", ":(){ :|:& };:"]):
raise ValueError(f"Dangerous command blocked: {command}")
# Block production access
if "production" in command and not self._has_approval():
raise ValueError("Production access requires approval")
return None
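One way `_has_approval` might be implemented on the hooks class, purely as an illustration; the environment variable and marker file are hypothetical conventions.

```python
import os
from pathlib import Path

def _has_approval(self) -> bool:
    """Hypothetical approval check: an env var or a marker file set by a reviewer."""
    if os.environ.get("CLAUDE_PROD_APPROVED") == "1":  # assumed variable name
        return True
    return Path(".claude/prod-approval").exists()      # assumed marker file
```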
Audit all tool operations:
async def on_post_tool_use(
self, tool_name: str, tool_input: dict, tool_output: str
) -> str | None:
await self._log_entry({
'timestamp': datetime.now().isoformat(),
'tool': tool_name,
'input_size': len(str(tool_input)),
'output_size': len(tool_output),
'success': True
})
return None
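A minimal `_log_entry` sketch that appends JSON lines without blocking the event loop; the audit log path is an assumption.

```python
import asyncio
import json
from datetime import datetime  # needed for datetime.now() in the handler above

async def _log_entry(self, entry: dict) -> None:
    """Append one JSON line per tool call; file I/O runs in a worker thread."""
    await asyncio.to_thread(self._append, json.dumps(entry))

def _append(self, line: str) -> None:
    with open("audit.jsonl", "a") as f:  # assumed audit log location
        f.write(line + "\n")
```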
Add relevant context before user prompts:
async def on_user_prompt_submit(self, message: str) -> str | None:
# Inject project-specific context
context = await self._load_project_context()
enhanced_message = f"{context}\n\n{message}"
return enhanced_message
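A possible `_load_project_context` implementation, assuming the project keeps its context in a CLAUDE.md-style file; the path and the size cap are assumptions.

```python
import asyncio
from pathlib import Path

async def _load_project_context(self) -> str:
    """Read project context from disk without blocking the event loop."""
    def read() -> str:
        path = Path("CLAUDE.md")  # assumed context file
        if not path.exists():
            return ""
        return path.read_text(encoding="utf-8")[:4000]  # cap how much context is injected
    return await asyncio.to_thread(read)
```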
import pytest
from my_hooks import ValidationHooks
@pytest.mark.asyncio
async def test_dangerous_command_blocked():
hooks = ValidationHooks()
with pytest.raises(ValueError, match="Dangerous command"):
await hooks.on_pre_tool_use("Bash", {"command": "rm -rf /"})
@pytest.mark.asyncio
async def test_safe_command_allowed():
hooks = ValidationHooks()
result = await hooks.on_pre_tool_use("Bash", {"command": "ls -la"})
assert result is None # Allows execution
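The same pattern extends to table-driven tests. A sketch, assuming the ValidationHooks quick-start class above (which blocks any command containing "rm -rf /"):

```python
import pytest
from my_hooks import ValidationHooks

@pytest.mark.asyncio
@pytest.mark.parametrize("command", [
    "rm -rf /",
    "sudo rm -rf / --no-preserve-root",
])
async def test_dangerous_variants_blocked(command):
    hooks = ValidationHooks()
    with pytest.raises(ValueError):
        await hooks.on_pre_tool_use("Bash", {"command": command})
```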
See modules/testing-hooks.md for comprehensive testing strategies.
For detailed guidance on specific topics:
- modules/hook-types.md - Detailed event signatures and parameters
- modules/sdk-callbacks.md - Python SDK implementation patterns
- modules/security-patterns.md - Comprehensive security guidance
- modules/performance-guidelines.md - Optimization techniques
- modules/scope-selection.md - Choosing plugin/project/global scope
- modules/testing-hooks.md - Testing strategies and fixtures
- Validation scripts (scripts/) - Run hook_validator.py before deployment