Interacts with DeerFlow AI agent platform via HTTP API to send messages for research, start threads, check health, list models/agents/skills, manage memory, upload files, and delegate complex tasks.
Communicate with a running DeerFlow instance via its HTTP API. DeerFlow is an AI agent platform built on LangGraph that orchestrates sub-agents for research, code execution, web browsing, and more.
DeerFlow exposes two API surfaces behind an Nginx reverse proxy:
| Service | Direct Port | Via Proxy | Purpose |
|---|---|---|---|
| Gateway API | 8001 | $DEERFLOW_GATEWAY_URL | REST endpoints (models, skills, memory, uploads) |
| LangGraph API | 2024 | $DEERFLOW_LANGGRAPH_URL | Agent threads, runs, streaming |
All URLs are configurable via environment variables. Read these env vars before making any request.
| Variable | Default | Description |
|---|---|---|
| DEERFLOW_URL | http://localhost:2026 | Unified proxy base URL |
| DEERFLOW_GATEWAY_URL | ${DEERFLOW_URL} | Gateway API base (models, skills, memory, uploads) |
| DEERFLOW_LANGGRAPH_URL | ${DEERFLOW_URL}/api/langgraph | LangGraph API base (threads, runs) |
When making curl calls, always resolve the URL like this:
```shell
# Resolve base URLs from env (do this FIRST before any API call)
DEERFLOW_URL="${DEERFLOW_URL:-http://localhost:2026}"
DEERFLOW_GATEWAY_URL="${DEERFLOW_GATEWAY_URL:-$DEERFLOW_URL}"
DEERFLOW_LANGGRAPH_URL="${DEERFLOW_LANGGRAPH_URL:-$DEERFLOW_URL/api/langgraph}"
```
Verify DeerFlow is running:
```shell
curl -s "$DEERFLOW_GATEWAY_URL/health"
```
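A minimal sketch that gates later calls on the health endpoint (the `deerflow_up` helper name and the 5-second timeout are choices made here, not DeerFlow defaults):

```shell
#!/bin/sh
# Resolve the gateway URL the same way as above, then probe /health.
DEERFLOW_URL="${DEERFLOW_URL:-http://localhost:2026}"
DEERFLOW_GATEWAY_URL="${DEERFLOW_GATEWAY_URL:-$DEERFLOW_URL}"

# Returns 0 only when /health answers within 5 seconds; -f turns HTTP
# error statuses into nonzero exit codes, like connection failures.
deerflow_up() {
  curl -sf --max-time 5 "$1/health" > /dev/null 2>&1
}

if deerflow_up "$DEERFLOW_GATEWAY_URL"; then
  status=running
else
  status=unreachable
fi
echo "DeerFlow is $status at $DEERFLOW_GATEWAY_URL"
```

Later API calls can then branch on `$status` instead of failing mid-conversation.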
This is the primary operation. It creates a thread and streams the agent's response.
Step 1: Create a thread
```shell
curl -s -X POST "$DEERFLOW_LANGGRAPH_URL/threads" \
  -H "Content-Type: application/json" \
  -d '{}'
```
Response: `{"thread_id": "<uuid>", ...}`
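The `thread_id` can be captured for the following steps; a sketch using `python3` for the JSON parsing (the sample response body below is illustrative — the real one carries more fields):

```shell
#!/bin/sh
# Illustrative response body; a real one comes from the POST /threads call above.
response='{"thread_id": "123e4567-e89b-12d3-a456-426614174000", "metadata": {}}'

# Parse out thread_id with python3 rather than regex, so nested JSON stays safe.
thread_id=$(printf '%s' "$response" |
  python3 -c 'import json, sys; print(json.load(sys.stdin)["thread_id"])')
echo "$thread_id"
```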
Step 2: Stream a run
```shell
curl -s -N -X POST "$DEERFLOW_LANGGRAPH_URL/threads/<thread_id>/runs/stream" \
  -H "Content-Type: application/json" \
  -d '{
    "assistant_id": "lead_agent",
    "input": {
      "messages": [
        {
          "type": "human",
          "content": [{"type": "text", "text": "YOUR MESSAGE HERE"}]
        }
      ]
    },
    "stream_mode": ["values", "messages-tuple"],
    "stream_subgraphs": true,
    "config": {
      "recursion_limit": 1000
    },
    "context": {
      "thinking_enabled": true,
      "is_plan_mode": true,
      "subagent_enabled": true,
      "thread_id": "<thread_id>"
    }
  }'
```
The response is an SSE stream. Each event has the format:
```
event: <event_type>
data: <json_data>
```
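That framing can be split into event/data pairs with standard tools; a sketch over an illustrative two-event sample (real streams interleave many more events):

```shell
#!/bin/sh
# Illustrative SSE sample in the event/data framing shown above.
cat > /tmp/events.txt <<'EOF'
event: metadata
data: {"run_id": "abc"}

event: end
data: {}
EOF

# Remember the current event name; print "<event>\t<data>" for each data line.
awk '/^event: / {ev = substr($0, 8)}
     /^data: /  {print ev "\t" substr($0, 7)}' /tmp/events.txt > /tmp/parsed.txt
cat /tmp/parsed.txt
```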
Key event types:
- `metadata` — run metadata including the `run_id`
- `values` — full state snapshot with the messages array
- `messages-tuple` — incremental message updates (AI text chunks, tool calls, tool results)
- `end` — stream is complete

Context modes (set via `context`):
- `thinking_enabled: false, is_plan_mode: false, subagent_enabled: false`
- `thinking_enabled: true, is_plan_mode: false, subagent_enabled: false`
- `thinking_enabled: true, is_plan_mode: true, subagent_enabled: false`
- `thinking_enabled: true, is_plan_mode: true, subagent_enabled: true`

To send follow-up messages, reuse the `thread_id` from Step 1 and POST another run
with the new message.
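A follow-up run posts to the same `runs/stream` endpoint with only the new message; a sketch of the request body, assuming the remaining Step 2 fields (`stream_subgraphs`, `config`, the other `context` flags) keep their earlier values — reuse the full Step 2 payload if unsure:

```json
{
  "assistant_id": "lead_agent",
  "input": {
    "messages": [
      {"type": "human", "content": [{"type": "text", "text": "YOUR FOLLOW-UP HERE"}]}
    ]
  },
  "stream_mode": ["values", "messages-tuple"],
  "context": {"thread_id": "<thread_id>"}
}
```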
```shell
curl -s "$DEERFLOW_GATEWAY_URL/api/models"
```
Returns: `{"models": [{"name": "...", "provider": "...", ...}, ...]}`
```shell
curl -s "$DEERFLOW_GATEWAY_URL/api/skills"
```
Returns: `{"skills": [{"name": "...", "enabled": true, ...}, ...]}`
```shell
curl -s -X PUT "$DEERFLOW_GATEWAY_URL/api/skills/<skill_name>" \
  -H "Content-Type: application/json" \
  -d '{"enabled": true}'
```
```shell
curl -s "$DEERFLOW_GATEWAY_URL/api/agents"
```
Returns: `{"agents": [{"name": "...", ...}, ...]}`
```shell
curl -s "$DEERFLOW_GATEWAY_URL/api/memory"
```
Returns user context, facts, and conversation history summaries.
```shell
curl -s -X POST "$DEERFLOW_GATEWAY_URL/api/threads/<thread_id>/uploads" \
  -F "files=@/path/to/file.pdf"
```
Supports PDF, PPTX, XLSX, DOCX — automatically converts to Markdown.
```shell
curl -s "$DEERFLOW_GATEWAY_URL/api/threads/<thread_id>/uploads/list"
```
```shell
curl -s "$DEERFLOW_LANGGRAPH_URL/threads/<thread_id>/history"
```
```shell
curl -s -X POST "$DEERFLOW_LANGGRAPH_URL/threads/search" \
  -H "Content-Type: application/json" \
  -d '{"limit": 20, "sort_by": "updated_at", "sort_order": "desc"}'
```
For sending messages and collecting the full response, use the helper script:
```shell
bash /path/to/skills/claude-to-deerflow/scripts/chat.sh "Your question here"
```
See `scripts/chat.sh` for the implementation.
The stream returns SSE events. To extract the final AI response from a values event:
- Find the last `event: values` block
- Parse its `data` JSON
- The `messages` array contains all messages; the last one with `type: "ai"` is the response
- The `content` field of that message is the AI's text reply
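A sketch of that extraction over an illustrative saved stream (message `content` is shown as a plain string here for simplicity; live responses may carry content-block arrays as in Step 2):

```shell
#!/bin/sh
# Illustrative saved stream; a real one comes from the runs/stream call.
cat > /tmp/stream.txt <<'EOF'
event: values
data: {"messages": [{"type": "human", "content": "hi"}, {"type": "ai", "content": "Hello!"}]}

event: end
data: {}
EOF

# Keep the data line that follows the last "event: values" line, then let
# python3 pick the last message with type "ai" and print its content.
reply=$(
  awk '/^event: values$/ {getline; d = $0} END {sub(/^data: /, "", d); print d}' /tmp/stream.txt |
  python3 -c '
import json, sys
messages = json.load(sys.stdin)["messages"]
ai = [m for m in messages if m.get("type") == "ai"]
print(ai[-1]["content"])'
)
echo "$reply"
```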