This skill should be used when the user asks to "start a deepagent project", "create a new agent", "quickstart agent", "simple agent example", "get started with deepagents", or needs a quick introduction to building agents with LangChain's DeepAgents framework. Provides minimal setup and basic patterns for rapid prototyping.
From deepagents-builder. Install with `npx claudepluginhub spulido99/claude-toolkit --plugin deepagents-builder`. This skill uses the workspace's default tool permissions.
Build production-ready deep agents with planning, context management, and subagent delegation in minutes.
DeepAgents (langchain-ai/deepagents) is a high-level framework for building agentic applications with planning, filesystem backends, subagent orchestration, and auto-summarization built in. The core entry point is `create_deep_agent`, which provides:
- Subagent definitions compiled into `CompiledSubAgent` instances
- Pluggable backends: `FilesystemBackend`, `StateBackend`, `StoreBackend`, `CompositeBackend`
- Built-in tools: `write_todos`, `read_file`, `write_file`, `edit_file`, `ls`, `glob`, `grep`, `execute`, `task`
- `interrupt_on` configuration for tool approval

Install:

```bash
pip install deepagents
```
```python
from deepagents import create_deep_agent

agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    system_prompt="You are a helpful research assistant.",
    tools=[],
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "Research AI trends"}]
})
print(result["messages"][-1].content)
```
```python
from deepagents import create_deep_agent
from langchain.tools import tool

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[search_web],
    system_prompt="You are a research assistant.",
)
```
Define subagents as dicts — the framework compiles them into CompiledSubAgent instances:
```python
from deepagents import create_deep_agent

# search_web is the tool defined above; write_document stands in for
# your own document-writing tool.

# Subagents defined as dicts (native pattern)
agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    system_prompt="You coordinate research projects. Delegate research to the researcher and writing to the writer.",
    tools=[],
    subagents=[
        {
            "name": "researcher",
            "model": "openai:gpt-4o",
            "tools": [search_web],
            "system_prompt": "You are an expert researcher. Summarize findings concisely.",
        },
        {
            "name": "writer",
            "tools": [write_document],
            "system_prompt": "You write clear, structured documents.",
        },
    ],
)
```
Use FilesystemBackend for file-first agents and AGENTS.md for declarative memory:
```python
from deepagents import create_deep_agent
from deepagents.backends import FilesystemBackend
from langgraph.checkpoint.memory import MemorySaver

agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    system_prompt="You are a project assistant.",
    tools=[],
    backend=FilesystemBackend(root_dir="./workspace"),
    memory=["./AGENTS.md"],
    skills=["./skills/"],  # Load SKILL.md files from directory
    checkpointer=MemorySaver(),
)

# Invoke with thread persistence
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Set up the project structure"}]},
    config={"configurable": {"thread_id": "project-1"}},
)
```
Every create_deep_agent automatically includes these tools via default middleware:
| Tool | Middleware | Description |
|---|---|---|
| `write_todos` | Planning | Create structured task lists |
| `read_todos` | Planning | View current tasks |
| `ls` | Filesystem | List directory contents |
| `read_file` | Filesystem | Read file content with pagination |
| `write_file` | Filesystem | Create or overwrite files |
| `edit_file` | Filesystem | Exact string replacements |
| `glob` | Filesystem | Find files matching patterns |
| `grep` | Filesystem | Search text in files |
| `execute` | Filesystem | Run commands in a sandbox (if the backend supports it) |
| `task` | SubAgent | Delegate to subagents with isolated contexts |
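The `edit_file` tool's exact-replacement contract can be illustrated with a small pure-Python sketch. This is only an illustration of the semantics, not the framework's implementation:

```python
def edit_text(content: str, old: str, new: str) -> str:
    """Replace one exact occurrence of `old` with `new`, failing loudly
    when the target string is missing or ambiguous."""
    count = content.count(old)
    if count == 0:
        raise ValueError("string to replace not found")
    if count > 1:
        raise ValueError("string to replace is not unique; add surrounding context")
    return content.replace(old, new, 1)

print(edit_text("x = 1\ny = 2\n", "y = 2", "y = 3"))
```

Requiring the target string to be unique is what makes exact-string editing safe: the agent must quote enough surrounding context to pin down a single edit site.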
Security Tip: Use `interrupt_on` on dangerous tools to require human confirmation before execution. Use `ToolRuntime` from `langchain.tools` to inject user identity and permissions as secure context, rather than embedding user IDs in tool parameters.
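As a sketch, a per-tool `interrupt_on` mapping might pause the built-in `execute` and `write_file` tools for approval. The tool names come from the table above; treat the exact mapping schema (and the "edit" decision) as assumptions to verify against the deepagents docs:

```python
# Pause before these tools run and let a human decide.
# The "edit" decision for write_file is an assumption for illustration.
interrupt_on = {
    "execute": {"allowed_decisions": ["approve", "reject"]},
    "write_file": {"allowed_decisions": ["approve", "edit", "reject"]},
}
```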
```python
from deepagents import create_deep_agent
from deepagents.backends import FilesystemBackend

# search_tool is your own search tool (e.g. the search_web example above)
agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    system_prompt="""You conduct comprehensive research.
1. Plan research steps
2. Search for information
3. Synthesize into final report""",
    tools=[search_tool],
    backend=FilesystemBackend(root_dir="./research"),
    skills=["./skills/"],
)
```
```python
from deepagents import create_deep_agent
from langgraph.checkpoint.memory import MemorySaver

# knowledge_base_tool, ticketing_tool, and order_tool are your own
# domain tools, defined elsewhere.
agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    system_prompt="You coordinate customer support. Route inquiries to the appropriate specialist.",
    tools=[],
    subagents=[
        {
            "name": "inquiry-handler",
            "tools": [knowledge_base_tool],
            "system_prompt": "You answer customer questions accurately.",
        },
        {
            "name": "issue-resolver",
            "tools": [ticketing_tool],
            "system_prompt": "You resolve customer problems.",
        },
        {
            "name": "order-specialist",
            "tools": [order_tool],
            "system_prompt": "You manage customer orders.",
        },
    ],
    checkpointer=MemorySaver(),
    interrupt_on={"tool": {"allowed_decisions": ["approve", "reject"]}},
)
```
```python
from langchain.chat_models import init_chat_model

# Claude (recommended)
model = init_chat_model("anthropic:claude-sonnet-4-5-20250929")

# OpenAI
model = init_chat_model("openai:gpt-4o")

# Google
model = init_chat_model("google_genai:gemini-2.0-flash")

agent = create_deep_agent(model=model, system_prompt="...", tools=[...])
```
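Each provider reads its API key from an environment variable; the usual names are shown below (set only the one for the provider you use, and check your provider's docs if authentication fails):

```shell
export ANTHROPIC_API_KEY="sk-ant-..."   # Claude
export OPENAI_API_KEY="sk-..."          # OpenAI
export GOOGLE_API_KEY="..."             # Gemini via google_genai
```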
Test your agent interactively with tool call logging:
```python
# chat.py
import uuid
from deepagents import create_deep_agent
from langgraph.checkpoint.memory import MemorySaver


def create_my_agent():
    return create_deep_agent(
        model="anthropic:claude-sonnet-4-5-20250929",
        tools=[...],
        system_prompt="Your system prompt here.",
        checkpointer=MemorySaver(),
    )


def main():
    agent = create_my_agent()
    thread_id = str(uuid.uuid4())
    config = {"configurable": {"thread_id": thread_id}}
    print("Chat with your agent (type 'exit' to quit, 'new' for new thread)")
    while True:
        user_input = input("\nYou: ").strip()
        if not user_input:
            continue
        if user_input.lower() in ("exit", "quit", "salir"):
            break
        if user_input.lower() in ("new", "nuevo"):
            thread_id = str(uuid.uuid4())
            config = {"configurable": {"thread_id": thread_id}}
            print(f"  New thread: {thread_id[:8]}...")
            continue
        result = agent.invoke(
            {"messages": [{"role": "user", "content": user_input}]},
            config=config,
        )
        # Log tool calls
        for msg in result["messages"]:
            if hasattr(msg, "tool_calls") and msg.tool_calls:
                for tc in msg.tool_calls:
                    args = ", ".join(f"{k}={v!r}" for k, v in tc["args"].items())
                    print(f"  Tool: {tc['name']}({args})")
        print(f"\nAgent: {result['messages'][-1].content}")


if __name__ == "__main__":
    main()
```
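The tool-call logging loop in the script above can be factored into a reusable helper. This standalone sketch accepts both objects with a `tool_calls` attribute and plain dicts (the dicts in the usage example are stand-ins for LangChain message objects):

```python
def format_tool_calls(messages):
    """Return one 'name(arg=value, ...)' line per tool call in `messages`."""
    lines = []
    for msg in messages:
        # Support both message objects and dict-shaped messages.
        tool_calls = getattr(msg, "tool_calls", None)
        if tool_calls is None and isinstance(msg, dict):
            tool_calls = msg.get("tool_calls")
        for tc in tool_calls or []:
            args = ", ".join(f"{k}={v!r}" for k, v in tc["args"].items())
            lines.append(f"{tc['name']}({args})")
    return lines

# Usage with dict-shaped messages:
msgs = [
    {"role": "assistant", "tool_calls": [{"name": "search_web", "args": {"query": "AI trends"}}]},
    {"role": "tool", "content": "Results for: AI trends"},
]
print(format_tool_calls(msgs))  # → ["search_web(query='AI trends')"]
```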
Use /add-interactive-chat to generate a chat console tailored to your specific agent.
After basic setup, explore:

- /design-tools to create your tool catalog
- /design-evals to get started
- /assess to check your agent's maturity level
- `create_deep_agent` parameters

Build:

- /new-sdk-app — Scaffold a new DeepAgents project
- /design-agent — Design a simple single agent (role, tools, prompt)
- /design-topology — Design optimal agent topology
- /design-tools — Design AI-friendly tool catalog
- /add-interactive-chat — Generate interactive chat console

Test (EDD):

- /design-evals — Scaffold eval suite from JTBD
- /eval — Run evals (snapshot | --smoke | --full | --report | --diagnose)
- /add-scenario — Add eval scenario interactively or from trace
- /eval-status — Eval dataset health dashboard
- /eval-update — Review changed snapshots

Validate & Evolve:

- /validate-agent — Anti-pattern and security check
- /tool-status — Tool quality dashboard
- /add-tool — Add a single tool to existing catalog
- /assess — Architecture maturity assessment (80-point)
- /evolve — Guided refactoring to next maturity level