```shell
npx claudepluginhub nanmicoder/claude-code-skills --plugin langchain-use
```
LangChain is an open-source framework for building LLM-powered agents and applications.
Provides reference documentation for LangChain, LangGraph, and Deep Agents in Python 3.10+: building AI agents, RAG pipelines, tool-calling agents, retrieval chains, conversation memory, multi-agent workflows, and LLM integrations.

Reference files:

- references/advanced/guardrails.md
- references/advanced/mcp.md
- references/advanced/runtime.md
- references/advanced/streaming.md
- references/advanced/structured-output.md
- references/agents/agent-basics.md
- references/core-concepts/overview.md
- references/core-concepts/quickstart.md
- references/integration/messages.md
- references/integration/models.md
- references/integration/retrieval.md
- references/memory/long-term-memory.md
- references/memory/short-term-memory.md
- references/middleware/middleware-overview.md
- references/tools/tool-basics.md
Designs LLM applications using the LangChain framework with agents, chains, memory, tool integration, and document processing. Use it for building AI agents, complex workflows, and production apps.
Creates LangChain agents with create_agent, tools via @tool/tool(), middleware for human-in-the-loop and error handling, and MemorySaver persistence. Includes Python/TS examples.
Install LangChain with uv (recommended; requires Python 3.10+):

```shell
# Install the core package
uv add langchain

# Install model-provider integrations
uv add langchain-anthropic  # Anthropic / Claude
uv add langchain-openai     # OpenAI
```
User query -> create_agent() -> ReAct loop -> tool calls -> final result
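The flow above can be sketched in plain Python. This is a simplified, hypothetical ReAct loop for illustration only; `react_loop`, `fake_model`, and the tool registry are stand-ins, not LangChain APIs:

```python
def react_loop(query, tools, call_model, max_steps=5):
    """Simplified ReAct loop: the model either calls a tool or answers."""
    messages = [{"role": "user", "content": query}]
    for _ in range(max_steps):
        action = call_model(messages)      # model decides the next step
        if action["type"] == "final":      # model produced a final answer
            return action["content"]
        tool = tools[action["tool"]]       # model requested a tool call
        result = tool(**action["args"])
        messages.append({"role": "tool", "content": result})
    return "max steps reached"

# Stub model: request the weather tool once, then answer with its result.
def fake_model(messages):
    if messages[-1]["role"] == "user":
        return {"type": "tool", "tool": "get_weather", "args": {"city": "sf"}}
    return {"type": "final", "content": messages[-1]["content"]}

tools = {"get_weather": lambda city: f"Sunny in {city}"}
print(react_loop("weather in sf?", tools, fake_model))  # Sunny in sf
```

The real loop in create_agent() additionally handles message formatting, parallel tool calls, and middleware hooks.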
See Agent Basics for details.
```python
from langchain.agents import create_agent

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

# Run the agent
result = agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
```
See Tool Basics for details.
```python
from langchain.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"
```
Use ToolRuntime to access state, context, and the store:
```python
from dataclasses import dataclass

from langchain.tools import tool, ToolRuntime

@dataclass
class Context:
    user_id: str

@tool
def get_user_location(runtime: ToolRuntime[Context]) -> str:
    """Retrieve user location based on user ID."""
    user_id = runtime.context.user_id
    return "Florida" if user_id == "1" else "SF"
```
```python
from langgraph.checkpoint.memory import InMemorySaver

agent = create_agent(
    model,
    tools,
    checkpointer=InMemorySaver(),  # short-term memory
)

# Use a thread_id to maintain the conversation
config = {"configurable": {"thread_id": "1"}}
agent.invoke({"messages": [...]}, config)
```
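Conceptually, a checkpointer is a map from thread_id to saved conversation state: each invoke with the same thread_id loads the prior messages, runs the agent, and saves the updated history. A toy in-memory sketch of that behavior (not the real InMemorySaver implementation):

```python
class ToyCheckpointer:
    """Toy stand-in: persists message history per thread_id."""
    def __init__(self):
        self._threads = {}

    def load(self, thread_id):
        # Return a copy so callers can't mutate saved state in place
        return list(self._threads.get(thread_id, []))

    def save(self, thread_id, messages):
        self._threads[thread_id] = list(messages)

saver = ToyCheckpointer()
history = saver.load("1")                     # [] on the first turn
history.append({"role": "user", "content": "My name is Bob"})
saver.save("1", history)
# A later invocation with the same thread_id sees the earlier turn:
print(len(saver.load("1")))  # 1
```

A different thread_id starts from an empty history, which is why thread_id is what separates conversations.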
See the Middleware Overview for details.
```python
from langchain.agents.middleware import before_model, after_model

@before_model
def trim_messages(state, runtime):
    # Message-trimming logic goes here (e.g. keep only recent turns);
    # returning None leaves the state unchanged
    return None
```
| Topic | Document | Description |
|---|---|---|
| Streaming | Streaming | Stream updates to output in real time |
| Structured Output | Structured Output | Pydantic/dataclass output formats |
| Runtime | Runtime | ToolRuntime and context access |
| Guardrails | Guardrails | PII detection, content filtering |
| MCP | MCP | Model Context Protocol integration |
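To illustrate the Guardrails row, here is a minimal sketch of regex-based PII redaction. The patterns and function are illustrative assumptions, not LangChain's guardrail middleware (see the referenced guardrails.md for that):

```python
import re

# Hypothetical patterns for two common PII types
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{name.upper()}]", text)
    return text

print(redact_pii("Contact bob@example.com, SSN 123-45-6789"))
# Contact [EMAIL], SSN [SSN]
```

In an agent, a check like this would typically run as @before_model (on user input) or @after_model (on model output) middleware.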
| Topic | Document | Description |
|---|---|---|
| Models | Models | Multi-provider model initialization |
| Messages | Messages | Message types and content blocks |
| Retrieval | Retrieval | RAG and knowledge-base construction |
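For the Retrieval row, a toy keyword-overlap retriever sketches the core RAG idea of ranking documents by relevance to a query. Real pipelines use embeddings and a vector store; the function and scoring here are illustrative assumptions:

```python
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy scoring)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

docs = [
    "LangChain builds LLM agents and chains",
    "Bread recipes for the home baker",
]
print(retrieve("how to build LLM agents", docs))
# ['LangChain builds LLM agents and chains']
```

The retrieved documents are then inserted into the model's context (e.g. via the system prompt or a tool result) so the agent can answer from them.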
The core abstraction in LangChain 1.0, built on LangGraph. Create one with create_agent().
Define tools with the @tool decorator. Optionally accept a ToolRuntime to access state, context, and the store.
A decorator-style extension mechanism:

- @before_model - runs before the model call
- @after_model - runs after the model call
- @wrap_tool_call - wraps tool calls
- @dynamic_prompt - dynamic system prompts

```python
from langchain.agents import create_agent

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[my_tool],
    system_prompt="You are helpful.",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "hello"}]}
)
```
```python
from langgraph.checkpoint.memory import InMemorySaver

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[my_tool],
    checkpointer=InMemorySaver(),
)

# Use a thread_id to identify the conversation
config = {"configurable": {"thread_id": "1"}}
agent.invoke(
    {"messages": [{"role": "user", "content": "My name is Bob"}]},
    config
)
agent.invoke(
    {"messages": [{"role": "user", "content": "What's my name?"}]},
    config
)
```
```python
from langgraph.checkpoint.postgres import PostgresSaver

DB_URI = "postgresql://postgres:postgres@localhost:5432/postgres"

with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()  # creates the tables automatically
    agent = create_agent(
        model="claude-sonnet-4-5-20250929",
        tools=[my_tool],
        checkpointer=checkpointer,
    )
```
```python
from dataclasses import dataclass

from langchain.tools import tool, ToolRuntime

@dataclass
class Context:
    user_id: str

@tool
def get_user_info(runtime: ToolRuntime[Context]) -> str:
    """Get user information."""
    user_id = runtime.context.user_id
    return f"User ID: {user_id}"

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[get_user_info],
    context_schema=Context,
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "get my info"}]},
    context=Context(user_id="user_123")
)
```
```python
from dataclasses import dataclass

from langchain.agents.structured_output import ToolStrategy

@dataclass
class Response:
    answer: str
    confidence: float

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[my_tool],
    response_format=ToolStrategy(Response),
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "what is 2+2?"}]}
)
print(result['structured_response'])
# Response(answer="4", confidence=1.0)
```
```python
from dataclasses import dataclass

from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy
from langchain.chat_models import init_chat_model
from langchain.tools import tool, ToolRuntime
from langgraph.checkpoint.memory import InMemorySaver

@dataclass
class Context:
    user_id: str

@dataclass
class ResponseFormat:
    answer: str
    confidence: float | None = None

@tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Sunny in {city}"

@tool
def get_user_location(runtime: ToolRuntime[Context]) -> str:
    """Get user's location."""
    return "San Francisco" if runtime.context.user_id == "1" else "Unknown"

# Configure the model
model = init_chat_model(
    "claude-sonnet-4-5-20250929",
    temperature=0.5,
    max_tokens=1000
)

# Create the agent
agent = create_agent(
    model=model,
    system_prompt="You are a weather assistant.",
    tools=[get_weather, get_user_location],
    context_schema=Context,
    response_format=ToolStrategy(ResponseFormat),
    checkpointer=InMemorySaver(),
)

# Run the agent
config = {"configurable": {"thread_id": "1"}}
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather?"}]},
    config=config,
    context=Context(user_id="1")
)
```