This skill should be used when the user asks to "design agent topology", "plan agent architecture", "create bounded contexts", "map business capabilities to agents", "organize subagents", or needs guidance on structuring multi-agent systems. Provides Team Topologies principles applied to AI agents.
From deepagents-builder. Install: `npx claudepluginhub spulido99/claude-toolkit --plugin deepagents-builder`. This skill uses the workspace's default tool permissions.
References: references/capability-mapping.md, references/topology-patterns.md
Design production-ready agent architectures using Team Topologies principles.
Design subagents based on business capabilities, not technical implementation.
# Good: Capability-focused
{"name": "market-analyst", "description": "Analyzes market trends"}
# Bad: Implementation-focused
{"name": "postgres-agent", "description": "Queries PostgreSQL"}
Each subagent operates within its own vocabulary and mental model.
subagents = [
    {
        "name": "support-agent",
        "system_prompt": """In support context:
        - 'Ticket' = customer inquiry
        - 'Resolution' = issue fix
        - 'Escalation' = route to specialist""",
    },
    {
        "name": "billing-agent",
        "system_prompt": """In billing context:
        - 'Invoice' = payment request
        - 'Credit' = account adjustment
        - 'Subscription' = recurring charge""",
    },
]
| Tools | Recommended Architecture |
|---|---|
| < 10 | Single agent, no subagents |
| 10-30 | Platform subagents (capability groups) |
| > 30 | Domain-specialized subagents |
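As a quick sketch, the thresholds above can be encoded in a small helper (the function name and return labels are illustrative, not part of any deepagents API):

```python
def recommend_architecture(tool_count: int) -> str:
    """Map a tool count to the recommended topology from the table above."""
    if tool_count < 10:
        return "single agent, no subagents"
    if tool_count <= 30:
        return "platform subagents (capability groups)"
    return "domain-specialized subagents"
```

Treat the boundaries as defaults to adjust, not hard rules.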
Design applications where agents are first-class citizens, not add-ons.
Whatever users can do through the UI, agents should achieve through tools.
# ❌ BAD: UI has features agent cannot access
ui_features = ["bulk_delete", "export_csv", "advanced_filters"]
agent_tools = [delete_single_item] # Missing capabilities
# ✅ GOOD: Full parity
agent_tools = [delete_items, export_data, search_with_filters]
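One way to guard parity is a set difference between UI features and the features your tools cover. A sketch, assuming you maintain such a feature-to-tool mapping yourself (nothing here is a deepagents API):

```python
def missing_agent_capabilities(ui_features: set[str], tool_covered: set[str]) -> set[str]:
    """Return UI features that no agent tool can reach."""
    return ui_features - tool_covered

# With the BAD example above, every listed UI feature is unreachable:
gaps = missing_agent_capabilities(
    {"bulk_delete", "export_csv", "advanced_filters"},
    set(),  # delete_single_item covers none of the listed UI features
)
```

Running a check like this in CI keeps tool parity from silently drifting as the UI grows.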
Tools should be atomic primitives. Features emerge from agents composing tools in loops—not bundled workflows.
# ❌ BAD: Workflow bundled into single tool
@tool
def handle_order(order_id: str) -> str:
    """Validates, processes payment, ships, and emails."""
    # Agent can't customize or retry individual steps
# ✅ GOOD: Atomic primitives
tools = [validate_order, process_payment, create_shipment, send_notification]
# Agent composes and handles failures at each step
With atomic tools and parity, create new features by writing prompts—no code changes needed.
# New "rush order" feature = prompt change, not code
system_prompt = """For rush orders:
1. validate_order with priority=high
2. process_payment immediately
3. create_shipment with express=True
4. send_notification with urgency=high"""
Agents accomplish unanticipated tasks by composing tools creatively. Design for discovery, not restriction.
# User asks: "Send weekly stakeholder updates every Friday"
# Agent CREATIVELY composes tools not originally designed for this:
tools = [query_db, aggregate_metrics, generate_summary, send_email]
# Agent figures out a workflow:
# 1. Query project metrics from database
# 2. Aggregate into weekly summary
# 3. Generate human-readable report
# 4. Email to stakeholder list
# No "create_weekly_report" tool needed—agent composes existing tools!
Applications enhance through accumulated context (AGENTS.md) and prompt refinement—not code rewrites.
# Example: Agent learns from user feedback
## Before feedback
Agent generates text-only reports.
## User feedback
"Include chart images in reports"
## Agent updates AGENTS.md:
### Learned Preferences
- Reports should include visual charts
- Use generate_chart tool before send_email
- Chart style: bar charts for comparisons, line charts for trends
| Use Files For | Use Databases For |
|---|---|
| Content users should read/edit | High-volume structured data |
| Configuration (version control) | Complex relational queries |
| Agent-generated reports | Ephemeral session state |
| Large text/markdown content | Data requiring indexes |
AGENTS.md files are injected into the system prompt at session start via the memory parameter. This is the file-first approach to providing persistent context.
from deepagents import create_deep_agent
from deepagents.backends import FilesystemBackend
agent = create_deep_agent(
    backend=FilesystemBackend(root_dir="./"),  # Scope to project directory
    memory=[
        "~/.deepagents/AGENTS.md",   # Global preferences
        "./.deepagents/AGENTS.md",   # Project-specific context
    ],
    system_prompt="You are a project assistant.",  # Minimal; AGENTS.md has the rest
)
Security Warning: never use `root_dir="/"`; it grants the agent read/write access to your entire filesystem. Always scope to the project directory or a dedicated workspace.
Two levels of AGENTS.md:
| File | Purpose |
|---|---|
| ~/.deepagents/AGENTS.md | Global: personality, style, universal preferences |
| ./.deepagents/AGENTS.md | Project: architecture, conventions, team guidelines |
Global AGENTS.md example (~/.deepagents/AGENTS.md):
# Global Preferences
## Communication Style
- Tone: Professional, concise
- Format output as Markdown tables when showing data
- Always cite sources for claims
## Universal Coding Preferences
- Use type hints in Python
- Prefer functional patterns where appropriate
- Write tests for new functionality
Project AGENTS.md example (.deepagents/AGENTS.md):
# Project Context
## Architecture
- FastAPI backend in /api
- React frontend in /web
- PostgreSQL database
## Conventions
- API endpoints follow REST naming
- Use Pydantic for validation
- Run `pytest` before committing
## Available Resources
- /data/reports/ - Historical reports
- /config/sources.json - Approved data sources
For internal/trusted agents only: The agent can update these files using edit_file when learning new preferences or receiving feedback. By default, treat AGENTS.md as read-only.
Security Note: writable AGENTS.md is appropriate for internal/trusted agents only. For customer-facing agents, see the Security for Customer-Facing Agents section in patterns/SKILL.md to prevent Persistent Prompt Injection attacks.
For persistent memory across conversations, use CompositeBackend to route specific paths to durable storage:
from deepagents import create_deep_agent
from deepagents.backends import CompositeBackend, StateBackend, StoreBackend
from langgraph.store.memory import InMemoryStore
store = InMemoryStore()
agent = create_deep_agent(
    store=store,
    backend=CompositeBackend(
        default=StateBackend(),  # Ephemeral by default
        routes={"/memories/": StoreBackend(store=store)},  # Persistent for /memories/
    ),
    memory=["./.deepagents/AGENTS.md"],
    system_prompt="You have persistent memory. Write to /memories/ to remember across sessions.",
)
| Path | Backend | Persistence |
|---|---|---|
| /memories/* | StoreBackend | Cross-conversation |
| Everything else | StateBackend | Conversation only |
Based on Team Topologies, map to these agent types:
Primary agent that coordinates the workflow and delegates to specialists.
agent = create_deep_agent(
    system_prompt="You coordinate customer operations...",
    subagents=[support, billing, orders],  # Delegates work
)
Self-service capabilities consumed by other agents.
{
    "name": "data-platform",
    "description": "Provides data access services",
    "tools": [db_query, api_fetch, file_parse],
}
Deep domain expertise requiring isolation.
{
    "name": "risk-analyst",
    "description": "Calculates financial risk metrics",
    "system_prompt": "Expert in VaR, volatility, Sharpe ratio...",
    "tools": [risk_calculator, market_data],
}
Knowledge transfer, then steps back.
{
    "name": "methodology-advisor",
    "description": "Teaches research methods",
    "system_prompt": "Guide others to self-sufficiency...",
}
Start
|
v
Clear domain boundaries?
|-- Yes --> Domain-Specialized subagents
|-- No --> Continue
|
v
How many tools?
|-- < 10 --> Single agent (no subagents)
|-- 10-30 --> Platform subagents (group by capability)
|-- > 30 --> Hierarchical decomposition
Beyond tool count and domain boundaries, consider:
| Factor | Single Agent | Subagents |
|---|---|---|
| Latency | Need sub-100ms response | Can tolerate delegation overhead |
| Failure isolation | All-or-nothing acceptable | Need independent failure domains |
| Cost | Token budget limited | Can afford summarization overhead |
| Observability | Simple tracing sufficient | Need per-domain visibility |
Note: The 10/30 tool thresholds align with cognitive load research (7±2 items). Adjust based on tool complexity—10 simple tools may be fine, while 5 complex tools might warrant decomposition.
Each subagent call creates a new LLM context (system prompt + task + tool schemas). Keep in mind:
| Pattern | Impact |
|---|---|
| Each subagent delegation | ~2,000-5,000 tokens overhead per call |
| Hierarchical nesting (N levels) | N x single-call latency minimum |
| Parallel subagents | Multiplies cost by number of parallel agents |
| Large system prompts on high-frequency agents | Repeated on every API call |
Tip: Use the AGENTS.md file-first approach to load context once at session start rather than repeating it in every system prompt.
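As a rough back-of-the-envelope check, the overhead figures above can be multiplied out. A sketch, using the midpoint of the ~2,000-5,000 token range from the table (an estimate, not a measured value):

```python
def delegation_overhead_tokens(delegations: int, per_call: int = 3500) -> int:
    """Estimate extra tokens spent on subagent delegation alone."""
    return delegations * per_call

# A session with 10 delegations pays roughly 35,000 extra tokens
overhead = delegation_overhead_tokens(10)
```

Even crude estimates like this help decide whether a capability deserves its own subagent or should stay as a tool on the parent.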
Enterprise Capabilities
├── Customer Management
│ ├── Support
│ └── Retention
├── Order Fulfillment
│ ├── Processing
│ └── Shipping
└── Financial Operations
├── Billing
└── Refunds
Map each capability to a subagent pattern:
| Business Pattern | Agent Pattern |
|---|---|
| Single capability | No subagent needed |
| 2-3 related capabilities | Platform subagent |
| Distinct bounded contexts | Specialized subagents |
| Hierarchical capabilities | Nested subagents |
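A minimal sketch of turning a capability tree like the one above into nested subagent definitions (plain dicts in the shape used elsewhere in this guide; names and descriptions are placeholders):

```python
capability_tree = {
    "Customer Management": ["Support", "Retention"],
    "Order Fulfillment": ["Processing", "Shipping"],
    "Financial Operations": ["Billing", "Refunds"],
}

subagents = [
    {
        "name": domain.lower().replace(" ", "-"),
        "description": f"Handles {domain}",
        "subagents": [
            {"name": cap.lower(), "description": f"{cap} specialist"}
            for cap in caps
        ],
    }
    for domain, caps in capability_tree.items()
]
```

Generating the topology from the capability map keeps agent boundaries aligned with business boundaries as both evolve.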
interactions = {
    "x-as-a-service": "Self-service consumption",
    "collaboration": "Temporary intensive work",
    "facilitation": "One-time knowledge transfer",
}
# < 10 tools, linear workflows
agent = create_deep_agent(
    tools=[search, summarize, report],
    system_prompt="You are a research assistant...",
)
# 10-30 tools, clear capability groups
agent = create_deep_agent(
    subagents=[
        {"name": "data-platform", "tools": [db, api, files]},
        {"name": "analysis-platform", "tools": [stats, ml, viz]},
    ]
)
# > 30 tools, distinct business domains
agent = create_deep_agent(
    subagents=[
        {"name": "billing-specialist", "tools": [billing_api]},
        {"name": "support-specialist", "tools": [ticket_system]},
        {"name": "fulfillment-specialist", "tools": [warehouse_api]},
    ]
)
Before finalizing the architecture, see the references for detailed patterns and implementation guidance. The following commands are available:
| Command | Purpose |
|---|---|
| /design-topology | Interactive full architecture design from scratch |
| /add-subagent | Add a single subagent to an existing architecture (incremental) |
| /add-tool | Add a single tool to an existing catalog |
| /validate-agent | Full architecture scan for anti-patterns |
| /evolve | Guided architecture evolution and refactoring |
| /assess | Assess agent maturity level |
Incremental subagent workflow: Use /add-subagent when you already have a running agent and want to expand it with a new capability. It analyzes the existing topology, designs a new subagent matching your conventions, checks for routing ambiguity, and inserts the code — without redesigning the full architecture.