Implement security and governance controls for custom agents using hooks.
Design and implement hook-based governance that controls agent permissions, blocks dangerous operations, and provides audit trails.
Documentation Verification: Hook event types (PreToolUse, PostToolUse, etc.) are Claude Code internal types. For the authoritative current types, verify via the hook-management skill → docs-management.
| Hook | When | Use Case |
|---|---|---|
| PreToolUse | Before tool executes | Block, validate, log |
| PostToolUse | After tool executes | Log results, audit |
```python
async def hook_function(
    input_data: dict,      # Tool call information
    tool_use_id: str,      # Unique tool call ID
    context: HookContext,  # Session context
) -> dict:
    # Return an empty dict to allow
    # Return a permissionDecision to block
    pass
```
Register hooks by event type using `HookMatcher`:
```python
from claude_agent_sdk import HookMatcher

hooks = {
    "PreToolUse": [
        # Match a specific tool
        HookMatcher(matcher="Read", hooks=[block_sensitive_files]),
        # Match all tools
        HookMatcher(hooks=[log_all_tool_usage]),
    ],
    "PostToolUse": [
        HookMatcher(hooks=[audit_tool_results]),
    ],
}
```
Security Hook (Block Pattern):
```python
from claude_agent_sdk import HookContext

BLOCKED_PATTERNS = [".env", "credentials", "secrets", ".pem", ".key"]

async def block_sensitive_files(
    input_data: dict,
    tool_use_id: str,
    context: HookContext,
) -> dict:
    tool_name = input_data.get("tool_name", "")
    tool_input = input_data.get("tool_input", {})

    # Only check file operations
    if tool_name not in ["Read", "Write", "Edit"]:
        return {}

    file_path = tool_input.get("file_path", "")

    # Check for blocked patterns
    for pattern in BLOCKED_PATTERNS:
        if pattern in file_path.lower():
            return {
                "hookSpecificOutput": {
                    "hookEventName": "PreToolUse",
                    "permissionDecision": "deny",
                    "permissionDecisionReason": f"Security: Access to {pattern} files blocked",
                }
            }

    return {}  # Allow
```
Audit Hook (Log Pattern):
```python
import json
from datetime import datetime
from pathlib import Path

async def log_all_tool_usage(
    input_data: dict,
    tool_use_id: str,
    context: HookContext,
) -> dict:
    tool_name = input_data.get("tool_name", "")
    tool_input = input_data.get("tool_input", {})
    session_id = input_data.get("session_id", "unknown")

    log_entry = {
        "timestamp": datetime.now().isoformat(),
        "session_id": session_id,
        "tool": tool_name,
        "input": tool_input,
    }

    # Write to audit log (one JSON object per line)
    log_file = Path("audit_logs") / f"{session_id}.jsonl"
    log_file.parent.mkdir(exist_ok=True)
    with open(log_file, "a") as f:
        f.write(json.dumps(log_entry) + "\n")

    return {}  # Always allow (logging only)
```
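Each line of the resulting file is a standalone JSON object, so the audit trail can be replayed with a few lines of Python. A minimal sketch (the session ID in the path is hypothetical):

```python
import json
from pathlib import Path

# Replay an audit log for review (file name is a hypothetical session ID)
for line in Path("audit_logs/session-123.jsonl").read_text().splitlines():
    entry = json.loads(line)
    print(entry["timestamp"], entry["tool"], entry["input"])
```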
Validation Hook (Conditional Pattern):
```python
import re

async def validate_bash_commands(
    input_data: dict,
    tool_use_id: str,
    context: HookContext,
) -> dict:
    tool_name = input_data.get("tool_name", "")
    if tool_name != "Bash":
        return {}

    command = input_data.get("tool_input", {}).get("command", "")

    DANGEROUS_PATTERNS = [
        r"rm\s+-rf\s+/",
        r"sudo\s+rm",
        re.escape(":(){ :|:& };:"),  # Fork bomb (escaped so it matches literally)
    ]

    for pattern in DANGEROUS_PATTERNS:
        if re.search(pattern, command):
            return {
                "hookSpecificOutput": {
                    "hookEventName": "PreToolUse",
                    "permissionDecision": "deny",
                    "permissionDecisionReason": "Security: Dangerous command blocked",
                }
            }

    return {}
```
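The PostToolUse matchers in the registration examples reference an `audit_tool_results` hook that isn't defined anywhere on this page. A minimal sketch, reusing the same JSONL logging approach and making no assumptions about PostToolUse payload fields beyond those used elsewhere here:

```python
import json
from datetime import datetime
from pathlib import Path

async def audit_tool_results(
    input_data: dict,
    tool_use_id: str,
    context: HookContext,
) -> dict:
    # Record whatever the PostToolUse payload contains; specific result
    # field names are intentionally not assumed here.
    entry = {
        "timestamp": datetime.now().isoformat(),
        "session_id": input_data.get("session_id", "unknown"),
        "tool": input_data.get("tool_name", ""),
        "tool_use_id": tool_use_id,
        "payload": {k: str(v) for k, v in input_data.items()},
    }
    log_file = Path("audit_logs") / "results.jsonl"
    log_file.parent.mkdir(exist_ok=True)
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return {}  # Logging only; never blocks
```

With all four hooks defined, the full registration and agent options look like this: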
```python
from claude_agent_sdk import ClaudeAgentOptions, HookMatcher

hooks = {
    "PreToolUse": [
        HookMatcher(matcher="Read", hooks=[block_sensitive_files]),
        HookMatcher(matcher="Bash", hooks=[validate_bash_commands]),
        HookMatcher(hooks=[log_all_tool_usage]),
    ],
    "PostToolUse": [
        HookMatcher(hooks=[audit_tool_results]),
    ],
}

options = ClaudeAgentOptions(
    system_prompt=system_prompt,
    model="opus",
    hooks=hooks,
)
```
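The shorthand patterns below call a `deny_response` helper. It is not part of the SDK; it is just a local convenience wrapper around the deny structure shown above. A minimal sketch:

```python
def deny_response(reason: str) -> dict:
    """Convenience wrapper for a PreToolUse deny decision."""
    return {
        "hookSpecificOutput": {
            "hookEventName": "PreToolUse",
            "permissionDecision": "deny",
            "permissionDecisionReason": reason,
        }
    }
```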
Directory Restriction (Allow-List Pattern):
```python
ALLOWED_DIRECTORIES = ["src/", "docs/", "tests/"]

async def restrict_file_access(input_data, tool_use_id, context) -> dict:
    file_path = input_data.get("tool_input", {}).get("file_path", "")
    if not any(file_path.startswith(d) for d in ALLOWED_DIRECTORIES):
        return deny_response("Access restricted to allowed directories")
    return {}
```
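A plain `startswith` check can be sidestepped by absolute or `../`-style paths. A stricter variant resolves the path before checking; this is a sketch that assumes the agent's working directory is the project root:

```python
from pathlib import Path

PROJECT_ROOT = Path.cwd()  # assumption: agent runs from the project root

async def restrict_file_access_strict(input_data, tool_use_id, context) -> dict:
    raw_path = input_data.get("tool_input", {}).get("file_path", "")
    resolved = (PROJECT_ROOT / raw_path).resolve()
    allowed = any(
        resolved.is_relative_to(PROJECT_ROOT / d.rstrip("/"))
        for d in ALLOWED_DIRECTORIES
    )
    if not allowed:
        return deny_response("Access restricted to allowed directories")
    return {}
```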
Rate Limiting (Counter Pattern):
```python
from collections import defaultdict

tool_call_counts = defaultdict(int)
RATE_LIMITS = {"WebFetch": 10, "Bash": 50}

async def rate_limit_tools(input_data, tool_use_id, context) -> dict:
    tool_name = input_data.get("tool_name", "")
    if tool_name in RATE_LIMITS:
        tool_call_counts[tool_name] += 1
        if tool_call_counts[tool_name] > RATE_LIMITS[tool_name]:
            return deny_response(f"Rate limit exceeded for {tool_name}")
    return {}
```
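The counter above is global to the process. If limits should apply per session instead, the counts can be keyed by the same `session_id` field the logging hook already reads; a sketch under that assumption:

```python
session_tool_counts = defaultdict(int)

async def rate_limit_tools_per_session(input_data, tool_use_id, context) -> dict:
    tool_name = input_data.get("tool_name", "")
    session_id = input_data.get("session_id", "unknown")
    if tool_name in RATE_LIMITS:
        key = (session_id, tool_name)
        session_tool_counts[key] += 1
        if session_tool_counts[key] > RATE_LIMITS[tool_name]:
            return deny_response(f"Rate limit exceeded for {tool_name} in this session")
    return {}
```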
Output Filtering (Content Pattern):
```python
BLOCKED_CONTENT = ["api_key", "password", "secret"]

async def filter_output_content(input_data, tool_use_id, context) -> dict:
    # Coerce to text so non-string outputs can still be scanned
    tool_output = str(input_data.get("tool_output", ""))
    for blocked in BLOCKED_CONTENT:
        if blocked.lower() in tool_output.lower():
            return deny_response("Output contains sensitive content")
    return {}
```
When designing governance for an agent, capture the decisions in a design document like the following:
## Governance Design
**Agent:** [agent name]
**Security Level:** [low/medium/high]
### Requirements
- [ ] Requirement 1
- [ ] Requirement 2
### Hooks
**PreToolUse:**
| Matcher | Hook | Purpose |
| --- | --- | --- |
| Read | block_sensitive_files | Block .env, credentials |
| Bash | validate_commands | Block dangerous commands |
| * | log_usage | Audit all tool calls |
**PostToolUse:**
| Matcher | Hook | Purpose |
| --- | --- | --- |
| * | audit_results | Log tool outputs |
### Implementation
[Hook function implementations]
### Test Scenarios
| Scenario | Expected | Actual |
| --- | --- | --- |
| Read .env file | Blocked | |
| Read src/main.py | Allowed | |
| rm -rf / | Blocked | |
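The scenario table above can be automated. A minimal pytest sketch for the first two rows, assuming `block_sensitive_files` is importable from a hypothetical `governance_hooks` module, that `pytest-asyncio` is installed for the async tests, and that passing `None` for the context is acceptable in a unit test:

```python
import pytest

from governance_hooks import block_sensitive_files  # hypothetical module name

@pytest.mark.asyncio
async def test_env_file_is_blocked():
    result = await block_sensitive_files(
        {"tool_name": "Read", "tool_input": {"file_path": ".env"}},
        tool_use_id="test-1",
        context=None,
    )
    assert result["hookSpecificOutput"]["permissionDecision"] == "deny"

@pytest.mark.asyncio
async def test_source_file_is_allowed():
    result = await block_sensitive_files(
        {"tool_name": "Read", "tool_input": {"file_path": "src/main.py"}},
        tool_use_id="test-2",
        context=None,
    )
    assert result == {}
```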
"Hooks enable governance and permission checks in custom agents."
Hooks apply to both the main agent and any subagents spawned via the Task tool.
Date: 2025-12-26 Model: claude-opus-4-5-20251101