<overview>
Provides LangChain middleware for human-in-the-loop approval of tool calls, custom hooks for logging/error handling, command resume after decisions, and structured output with Pydantic/Zod.
</overview>
Requirements: Checkpointer + thread_id config for all HITL workflows.
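Why both are required can be seen in a minimal, library-free sketch of the interrupt/resume loop (hypothetical names, not the LangChain API): state saved under a thread_id is what lets the second invoke pick up where the first one paused.

```python
# Library-free sketch of why HITL needs a checkpointer + thread_id:
# state must survive between the interrupt and the resume call.
checkpoints = {}  # thread_id -> saved state (the "checkpointer")

def run(thread_id, user_input=None, resume=None):
    state = checkpoints.get(thread_id, {"messages": [], "pending": None})
    if resume is not None and state["pending"]:
        # Phase 2: a human decision arrived; finish the paused tool call.
        decision = resume["decisions"][0]
        outcome = "sent" if decision["type"] == "approve" else "rejected"
        state["pending"] = None
        checkpoints[thread_id] = state
        return {"result": outcome}
    # Phase 1: the model wants a guarded tool; pause and wait for a human.
    state["messages"].append(user_input)
    state["pending"] = {"tool": "send_email"}
    checkpoints[thread_id] = state
    return {"__interrupt__": state["pending"]}

first = run("session-1", user_input="Send email to john@example.com")
second = run("session-1", resume={"decisions": [{"type": "approve"}]})
```

Without the `checkpoints` store keyed by thread_id, the second call would have no record of the pending tool call.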
<python>
Set up an agent with HITL that pauses before sending emails for human approval.
```python
from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver


@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email."""
    return f"Email sent to {to}"


agent = create_agent(
    model="gpt-4.1",
    tools=[send_email],
    checkpointer=MemorySaver(),  # Required for HITL
    middleware=[
        HumanInTheLoopMiddleware(
            interrupt_on={
                "send_email": {"allowed_decisions": ["approve", "edit", "reject"]},
            }
        )
    ],
)
```
</python>
<typescript>
Set up an agent with HITL that pauses before sending emails for human approval.
```typescript
import { createAgent, humanInTheLoopMiddleware } from "langchain";
import { MemorySaver } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
const sendEmail = tool(
  async ({ to, subject, body }) => `Email sent to ${to}`,
  {
    name: "send_email",
    description: "Send an email",
    schema: z.object({ to: z.string(), subject: z.string(), body: z.string() }),
  }
);

const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [sendEmail],
  checkpointer: new MemorySaver(),
  middleware: [
    humanInTheLoopMiddleware({
      interruptOn: { send_email: { allowedDecisions: ["approve", "edit", "reject"] } },
    }),
  ],
});
```
</typescript>
<python>
Run the agent, detect an interrupt, then resume execution after human approval.
```python
from langgraph.types import Command

config = {"configurable": {"thread_id": "session-1"}}

# Step 1: Agent runs until it needs to call the tool
result1 = agent.invoke(
    {"messages": [{"role": "user", "content": "Send email to john@example.com"}]},
    config=config,
)

# Check for interrupt
if "__interrupt__" in result1:
    print(f"Waiting for approval: {result1['__interrupt__']}")

# Step 2: Human approves
result2 = agent.invoke(
    Command(resume={"decisions": [{"type": "approve"}]}),
    config=config,
)
```
</python>
<typescript>
Run the agent, detect an interrupt, then resume execution after human approval.
```typescript
import { Command } from "@langchain/langgraph";

const config = { configurable: { thread_id: "session-1" } };

// Step 1: Agent runs until it needs to call the tool
const result1 = await agent.invoke({
  messages: [{ role: "user", content: "Send email to john@example.com" }]
}, config);

// Check for interrupt
if (result1.__interrupt__) {
  console.log(`Waiting for approval: ${result1.__interrupt__}`);
}

// Step 2: Human approves
const result2 = await agent.invoke(
  new Command({ resume: { decisions: [{ type: "approve" }] } }),
  config
);
```
</typescript>
Edit the tool arguments before approving when the original values need correction.
```python
# Human edits the arguments — edited_action must include name + args
result2 = agent.invoke(
    Command(resume={
        "decisions": [{
            "type": "edit",
            "edited_action": {
                "name": "send_email",
                "args": {
                    "to": "alice@company.com",  # Fixed email
                    "subject": "Project Meeting - Updated",
                    "body": "...",
                },
            },
        }]
    }),
    config=config,
)
```
Edit the tool arguments before approving when the original values need correction.
```typescript
// Human edits the arguments — editedAction must include name + args
const result2 = await agent.invoke(
  new Command({
    resume: {
      decisions: [{
        type: "edit",
        editedAction: {
          name: "send_email",
          args: {
            to: "alice@company.com",  // Fixed email
            subject: "Project Meeting - Updated",
            body: "...",
          },
        },
      }]
    }
  }),
  config
);
```
Reject a tool call and provide feedback explaining why it was rejected.
```python
# Human rejects
result2 = agent.invoke(
    Command(resume={
        "decisions": [{
            "type": "reject",
            "feedback": "Cannot delete customer data without manager approval",
        }]
    }),
    config=config,
)
```
Configure different HITL policies for each tool based on risk level.
```python
agent = create_agent(
    model="gpt-4.1",
    tools=[send_email, read_email, delete_email],
    checkpointer=MemorySaver(),
    middleware=[
        HumanInTheLoopMiddleware(
            interrupt_on={
                "send_email": {"allowed_decisions": ["approve", "edit", "reject"]},
                "delete_email": {"allowed_decisions": ["approve", "reject"]},  # No edit
                "read_email": False,  # No HITL for reading
            }
        )
    ],
)
```
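The per-tool policy idea can be sketched without LangChain as a plain dispatch table (hypothetical helper names, not the middleware's internals): each tool maps to its allowed decisions, or False for no interruption.

```python
# Hypothetical, library-free sketch of per-tool HITL policies.
INTERRUPT_ON = {
    "send_email": {"allowed_decisions": ["approve", "edit", "reject"]},
    "delete_email": {"allowed_decisions": ["approve", "reject"]},  # no edit
    "read_email": False,  # no HITL for reading
}

def needs_approval(tool_name: str) -> bool:
    """Return True when a human must review this tool call."""
    return bool(INTERRUPT_ON.get(tool_name))

def decision_allowed(tool_name: str, decision: str) -> bool:
    """Check a human decision against the tool's policy."""
    policy = INTERRUPT_ON.get(tool_name)
    return bool(policy) and decision in policy["allowed_decisions"]
```

Mapping low-risk tools to False keeps the approval queue focused on calls that can actually cause harm.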
### What You CAN Configure
Six decorator hooks are available: `before_agent`, `before_model`, `wrap_model_call`, `wrap_tool_call`, `after_model`, `after_agent`. Two patterns:
- Wrap hooks (`wrap_tool_call`, `wrap_model_call`) take `(request, handler)`: call `handler(request)` to proceed, or return early to short-circuit.
- Before/after hooks (`before_model`, `after_model`, `before_agent`, `after_agent`) take `(state, runtime)`: inspect or modify state, and return None or a dict of state updates.

`wrap_tool_call` intercepts tool execution, so it can retry failures or short-circuit dangerous calls.
```python
from langchain.agents.middleware import wrap_tool_call


@wrap_tool_call
def retry_middleware(request, handler):
    # Retry the tool call up to 3 times before giving up
    for attempt in range(3):
        try:
            return handler(request)
        except Exception:
            if attempt == 2:
                raise


@wrap_tool_call
def guard_middleware(request, handler):
    if request.tool_call["name"] == "dangerous_tool":
        return "This tool is disabled"  # short-circuit
    return handler(request)
```
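The wrap-style retry hook can be sketched without LangChain; here `handler` is a plain callable standing in for actual tool execution (hypothetical names, not the library's API):

```python
# Library-free sketch of the wrap-style retry hook:
# the handler is called up to 3 times; the last failure propagates.
def retry_wrapper(request, handler, attempts=3):
    for attempt in range(attempts):
        try:
            return handler(request)
        except Exception:
            if attempt == attempts - 1:
                raise

calls = {"n": 0}

def flaky(request):
    # Fails twice, then succeeds, to exercise the retry loop.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry_wrapper({"tool": "send_email"}, flaky)
```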
`createMiddleware({ wrapToolCall })` intercepts tool execution.
```typescript
import { createMiddleware } from "langchain";

const retryMiddleware = createMiddleware({
  wrapToolCall: async (request, handler) => {
    // Retry the tool call up to 3 times before giving up
    for (let attempt = 0; attempt < 3; attempt++) {
      try { return await handler(request); }
      catch (e) { if (attempt === 2) throw e; }
    }
  },
});
```
`before_model` / `after_model` / `before_agent` / `after_agent` all share the `(state, runtime)` signature.
```python
from langchain.agents.middleware import before_model, after_model


@before_model
def log_calls(state, runtime):
    print(f"Calling model with {len(state['messages'])} messages")


@after_model
def check_output(state, runtime):
    print("Model responded")
```
All before/after hooks share the same `(state, runtime)` signature via `createMiddleware`.
```typescript
import { createMiddleware } from "langchain";

const loggingMiddleware = createMiddleware({
  beforeModel: (state, runtime) => {
    console.log(`Calling model with ${state.messages.length} messages`);
  },
  afterModel: (state, runtime) => {
    console.log("Model responded");
  },
});
```
### What You CANNOT Configure
<fix-no-checkpointer>
<python>
HITL requires a checkpointer to persist state.
```python
# CORRECT: checkpointer is required
agent = create_agent(
    model="gpt-4.1",
    tools=[send_email],
    checkpointer=MemorySaver(),  # Required
    middleware=[HumanInTheLoopMiddleware({...})],
)
```
</python>
<typescript>
HITL requires a checkpointer to persist state.
```typescript
// WRONG: No checkpointer
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [sendEmail],
  middleware: [humanInTheLoopMiddleware({ interruptOn: { send_email: true } })],
});

// CORRECT: Add checkpointer
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [sendEmail],
  checkpointer: new MemorySaver(),
  middleware: [humanInTheLoopMiddleware({ interruptOn: { send_email: true } })],
});
```
</typescript>
</fix-no-checkpointer>
<fix-no-thread-id>
<python>
Always provide a thread_id when using HITL to track conversation state.
```python
# WRONG
agent.invoke(input)  # No config!

# CORRECT
agent.invoke(input, config={"configurable": {"thread_id": "user-123"}})
```
</python>
</fix-no-thread-id>
<fix-wrong-resume-syntax>
<python>
Use the Command class to resume execution after an interrupt.
```python
# WRONG
agent.invoke({"resume": {"decisions": [...]}})
# CORRECT
from langgraph.types import Command
agent.invoke(Command(resume={"decisions": [{"type": "approve"}]}), config=config)
```
</python>
<typescript>
Use the Command class to resume execution after an interrupt.
```typescript
// WRONG
await agent.invoke({ resume: { decisions: [...] } });

// CORRECT
import { Command } from "@langchain/langgraph";
await agent.invoke(new Command({ resume: { decisions: [{ type: "approve" }] } }), config);
```
</typescript>
</fix-wrong-resume-syntax>