Provides recipes and examples for Claude Code hooks: plugin hooks in JSON, frontmatter hooks in skills/agents YAML, prompt-based LLM hooks, Python/Node.js code. For hook scripting and plugin integration.
Install: `npx claudepluginhub jamie-bitflight/claude_skills --plugin plugin-creator`. This skill uses the workspace's default tool permissions.
Working examples and recipes for building hooks. For hook system fundamentals, activate `Skill(skill: "plugin-creator:hooks-core-reference")`. For JSON I/O schemas, activate `Skill(skill: "plugin-creator:hooks-io-api")`.
Plugins can provide hooks that integrate with user and project hooks. For complete plugin documentation including plugin.json schema, directory structure, and component integration, see Skill(skill: "plugin-creator:claude-plugins-reference-2026").
Hooks live in `hooks/hooks.json` by default, or at a custom path set via the `hooks` field in `plugin.json`. They can be configured in `hooks/hooks.json` or inline in `plugin.json`:
```json
{
  "description": "Automatic code formatting",
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/scripts/format.sh",
            "timeout": 30
          }
        ]
      }
    ]
  }
}
```
Reference in plugin.json:
```json
{
  "name": "my-plugin",
  "hooks": "./hooks/hooks.json"
}
```
Or define inline:
```json
{
  "name": "my-plugin",
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/scripts/format.sh"
          }
        ]
      }
    ]
  }
}
```
- `${CLAUDE_PLUGIN_ROOT}`: Absolute path to the plugin directory
- `${CLAUDE_PROJECT_DIR}`: Project root directory

Hooks can also be defined in skill and agent frontmatter. These are scoped to the component's lifecycle. For complete skill documentation, see Skill(skill: "plugin-creator:claude-skills-overview-2026").
Supported events: All hook events are supported in skill and agent frontmatter. The most common for subagents are PreToolUse, PostToolUse, and Stop (which is automatically converted to SubagentStop in agent context).
```yaml
---
description: Perform operations with security checks
hooks:
  PreToolUse:
    - matcher: "Bash"
      hooks:
        - type: command
          command: "./scripts/security-check.sh"
---
```
```yaml
---
description: Review code changes
hooks:
  PostToolUse:
    - matcher: "Edit|Write"
      hooks:
        - type: command
          command: "./scripts/run-linter.sh"
---
```
The `once` option: set `once: true` to run a hook only once per session. After the first successful execution, the hook is removed.
```yaml
hooks:
  PreToolUse:
    - matcher: "Bash"
      hooks:
        - type: command
          command: "./scripts/one-time-setup.sh"
          once: true
```
Note: Only supported for skills and slash commands, not agents.
Prompt hooks make LLM-evaluated decisions using a fast model (Haiku). The related "agent" hook type covers complex verification tasks.
```json
{
  "type": "prompt",
  "prompt": "Evaluate if Claude should stop: <USER_ARGUMENTS>. Check if all tasks are complete.",
  "timeout": 30
}
```
Note: `<USER_ARGUMENTS>` above represents the `$ARGUMENTS` placeholder. Replace it with the literal token `$ARGUMENTS` in your actual hook configuration; the placeholder is used here because skill files undergo argument substitution at load time.
Alternatively, use "type": "agent" for complex verification tasks that require tool access.
| Field | Required | Description |
|---|---|---|
| `type` | Yes | `"prompt"` for LLM evaluation, `"agent"` for tools |
| `prompt` | Yes | Prompt text sent to the LLM |
| `timeout` | No | Seconds (default: 30 for prompt, 60 for agent) |
The LLM must respond with JSON:
```json
{
  "ok": true,
  "reason": "Explanation for the decision"
}
```
| Field | Type | Description |
|---|---|---|
| `ok` | boolean | `true` allows the action, `false` prevents it |
| `reason` | string | Required when `ok` is false. Shown to Claude |
Use `$ARGUMENTS` in the prompt to include the hook input JSON. If omitted, the input is appended to the prompt.
```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "prompt",
            "prompt": "You are evaluating whether Claude should stop working. Context: <USER_ARGUMENTS>\n\nAnalyze the conversation and determine if:\n1. All user-requested tasks are complete\n2. Any errors need to be addressed\n3. Follow-up work is needed\n\nRespond with JSON: {\"ok\": true} to allow stopping, or {\"ok\": false, \"reason\": \"your explanation\"} to continue working.",
            "timeout": 30
          }
        ]
      }
    ]
  }
}
```
Note: `<USER_ARGUMENTS>` above represents the `$ARGUMENTS` placeholder. Replace it with the literal token `$ARGUMENTS` in your actual hook configuration; the placeholder is used here because skill files undergo argument substitution at load time.
```json
{
  "hooks": {
    "SubagentStop": [
      {
        "hooks": [
          {
            "type": "prompt",
            "prompt": "Evaluate if this subagent should stop. Input: <USER_ARGUMENTS>\n\nCheck if:\n- The subagent completed its assigned task\n- Any errors occurred that need fixing\n- Additional context gathering is needed\n\nReturn: {\"ok\": true} to allow stopping, or {\"ok\": false, \"reason\": \"explanation\"} to continue."
          }
        ]
      }
    ]
  }
}
```
| Event | Use Case |
|---|---|
| `Stop` | Intelligent task completion detection |
| `SubagentStop` | Verify subagent completed task |
| `UserPromptSubmit` | Context-aware prompt validation |
| `PreToolUse` | Complex permission decisions |
| `PermissionRequest` | Intelligent allow/deny dialogs |
| Feature | Command Hooks | Prompt Hooks |
|---|---|---|
| Execution | Runs bash script | Queries LLM |
| Decision logic | You implement in code | LLM evaluates context |
| Setup complexity | Requires script file | Configure prompt only |
| Context awareness | Limited to script | Natural language understanding |
| Performance | Fast (local) | Slower (API call) |
| Use case | Deterministic rules | Context-aware decisions |
```python
#!/usr/bin/env python3
import json
import re
import sys

# Define validation rules as (regex pattern, message) tuples
VALIDATION_RULES = [
    (
        r"\bgrep\b(?!.*\|)",
        "Use 'rg' (ripgrep) instead of 'grep' for better performance",
    ),
    (
        r"\bfind\s+\S+\s+-name\b",
        "Use 'rg --files -g pattern' instead of 'find -name'",
    ),
]

def validate_command(command: str) -> list[str]:
    issues = []
    for pattern, message in VALIDATION_RULES:
        if re.search(pattern, command):
            issues.append(message)
    return issues

try:
    input_data = json.load(sys.stdin)
except json.JSONDecodeError as e:
    print(f"Error: Invalid JSON input: {e}", file=sys.stderr)
    sys.exit(1)

tool_name = input_data.get("tool_name", "")
tool_input = input_data.get("tool_input", {})
command = tool_input.get("command", "")

if tool_name != "Bash" or not command:
    sys.exit(0)

issues = validate_command(command)
if issues:
    for message in issues:
        print(f"\u2022 {message}", file=sys.stderr)
    # Exit code 2 blocks tool call and shows stderr to Claude
    sys.exit(2)
```
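A minimal sketch of how this validator might be registered in `hooks.json`, assuming it is saved as `scripts/validate-bash.py` inside the plugin (the path is illustrative):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/scripts/validate-bash.py"
          }
        ]
      }
    ]
  }
}
```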
```python
#!/usr/bin/env python3
import json
import sys
import re
import datetime

try:
    input_data = json.load(sys.stdin)
except json.JSONDecodeError as e:
    print(f"Error: Invalid JSON input: {e}", file=sys.stderr)
    sys.exit(1)

prompt = input_data.get("prompt", "")

# Check for sensitive patterns
sensitive_patterns = [
    (r"(?i)\b(password|secret|key|token)\s*[:=]", "Prompt contains potential secrets"),
]

for pattern, message in sensitive_patterns:
    if re.search(pattern, prompt):
        # Use JSON output to block with a specific reason
        output = {
            "decision": "block",
            "reason": f"Security policy violation: {message}. Please rephrase without sensitive information."
        }
        print(json.dumps(output))
        sys.exit(0)

# Add current time to context
context = f"Current time: {datetime.datetime.now()}"
print(context)
# Equivalent JSON approach:
# print(json.dumps({
#     "hookSpecificOutput": {
#         "hookEventName": "UserPromptSubmit",
#         "additionalContext": context,
#     },
# }))
sys.exit(0)
```
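One way to register this validator, assuming it is saved as `scripts/prompt-guard.py` (the path is illustrative; as in the other `UserPromptSubmit` examples here, no matcher is used):

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/scripts/prompt-guard.py"
          }
        ]
      }
    ]
  }
}
```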
The two-layer pattern separates evaluation from execution: the hook script wraps the prompt
with lightweight evaluation instructions and emits them as additionalContext. Claude then
evaluates the prompt inline — proceeding immediately for clear prompts, or invoking a skill
only when the prompt is vague. This keeps skill-load overhead off the common (clear) path.
Token overhead: Clear prompts — ~189 tokens (evaluation wrapper only). Vague prompts — 189 tokens + skill load. ~31% reduction vs. embedding evaluation logic in the hook directly (prompt-improver v0.4.0, 2026-02-14).
```python
#!/usr/bin/env python3
import json
import sys

input_data = json.load(sys.stdin)
original_prompt = input_data.get("prompt", "")

# bypass: strip * prefix and skip evaluation
if original_prompt.startswith("*"):
    print(original_prompt[1:].lstrip())
    sys.exit(0)

# bypass: slash commands and memorize pass through unchanged
if original_prompt.startswith("/") or original_prompt.startswith("#"):
    print(original_prompt)
    sys.exit(0)

# ~189 tokens; instructs Claude to evaluate clarity,
# invoke skill only when vague
evaluation_context = (
    f"Evaluate the following prompt for clarity and specificity.\n"
    f"...\n"
    f"PROCEED IMMEDIATELY if the prompt is clear and specific.\n"
    f"If vague: use Skill(skill='your-plugin:your-skill') to clarify before proceeding.\n"
    f"\nUser prompt: {original_prompt}"
)

output = {
    "hookSpecificOutput": {
        "hookEventName": "UserPromptSubmit",
        "additionalContext": evaluation_context,
    }
}
print(json.dumps(output))
sys.exit(0)
```
Hook configuration (hooks.json):
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "python3 /path/to/your-hook.py"
          }
        ]
      }
    ]
  }
}
```
Skill contract: The skill invoked via Skill(skill='your-plugin:your-skill') must assume
the hook has already evaluated the prompt for clarity. The skill must not re-evaluate whether
the prompt is vague — that decision has already been made by the hook. The skill should
proceed directly to its task (research, clarifying questions, enrichment, or any other
domain-specific work). Re-evaluating clarity in the skill defeats the two-layer separation
and doubles the token overhead for vague prompts.
```python
#!/usr/bin/env python3
import json
import sys

try:
    input_data = json.load(sys.stdin)
except json.JSONDecodeError as e:
    print(f"Error: Invalid JSON input: {e}", file=sys.stderr)
    sys.exit(1)

tool_name = input_data.get("tool_name", "")
tool_input = input_data.get("tool_input", {})

# Auto-approve file reads for documentation files
if tool_name == "Read":
    file_path = tool_input.get("file_path", "")
    if file_path.endswith((".md", ".mdx", ".txt", ".json")):
        output = {
            "hookSpecificOutput": {
                "hookEventName": "PreToolUse",
                "permissionDecision": "allow",
                "permissionDecisionReason": "Documentation file auto-approved"
            },
            "suppressOutput": True
        }
        print(json.dumps(output))
        sys.exit(0)

# Let normal permission flow proceed
sys.exit(0)
```
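The approval rule can be checked in isolation. This standalone sketch mirrors the hook's decision logic without the stdin/stdout plumbing (`is_auto_approved` is a hypothetical helper, not part of the hook API):

```python
# Mirrors the hook's rule: only Read calls on documentation files are auto-approved.
DOC_EXTENSIONS = (".md", ".mdx", ".txt", ".json")

def is_auto_approved(tool_name: str, file_path: str) -> bool:
    # Hypothetical helper for testing the rule in isolation.
    return tool_name == "Read" and file_path.endswith(DOC_EXTENSIONS)

print(is_auto_approved("Read", "docs/guide.md"))   # True: doc file via Read
print(is_auto_approved("Write", "docs/guide.md"))  # False: wrong tool
print(is_auto_approved("Read", "src/main.py"))     # False: not a doc extension
```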
```javascript
#!/usr/bin/env node
const output = {
  hookSpecificOutput: {
    hookEventName: "SessionStart",
    additionalContext: `<project-context>
Environment: ${process.env.NODE_ENV || "development"}
Node version: ${process.version}
Working directory: ${process.cwd()}
</project-context>`,
  },
};
console.log(JSON.stringify(output));
```
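One way to register this script, assuming it is saved as `scripts/session-context.js` inside the plugin (path and filename are illustrative):

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "node ${CLAUDE_PLUGIN_ROOT}/scripts/session-context.js"
          }
        ]
      }
    ]
  }
}
```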
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "prettier --write \"$CLAUDE_PROJECT_DIR\"/**/*.{js,ts,json}"
          }
        ]
      }
    ]
  }
}
```
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/check-protected-files.sh"
          }
        ]
      }
    ]
  }
}
```
```json
{
  "hooks": {
    "Notification": [
      {
        "matcher": "permission_prompt",
        "hooks": [
          {
            "type": "command",
            "command": "/path/to/permission-alert.sh"
          }
        ]
      },
      {
        "matcher": "idle_prompt",
        "hooks": [
          {
            "type": "command",
            "command": "/path/to/idle-notification.sh"
          }
        ]
      }
    ]
  }
}
```
```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "prompt",
            "prompt": "Verify task completion. Check edge cases. Return {\"ok\": true} or {\"ok\": false, \"reason\": \"...\"}."
          }
        ]
      }
    ]
  }
}
```