From workflow-skill
Generates importable Dify workflow DSL YAML/JSON files from natural language descriptions, including node schemas, edges, and layout positions.
```shell
npx claudepluginhub twwch/workflow-skill
```

This skill uses the workspace's default tool permissions.
This skill generates Dify workflow DSL files that can be directly imported into a Dify instance. Given a natural language description of a desired workflow, it produces a complete YAML (default) or JSON file containing all nodes, edges, layout positions, and configuration.
Bundled files:

```text
examples/rag-with-rerank.yml
examples/simple-chatbot.yml
references/dsl-format.md
references/edge-and-layout.md
references/nodes/answer.md
references/nodes/code.md
references/nodes/end.md
references/nodes/http-request.md
references/nodes/if-else.md
references/nodes/iteration.md
references/nodes/knowledge-retrieval.md
references/nodes/llm.md
references/nodes/parameter-extractor.md
references/nodes/question-classifier.md
references/nodes/start.md
references/nodes/template-transform.md
references/nodes/tool.md
references/nodes/variable-aggregator.md
references/templates/agent.yml
references/templates/chatbot.yml
```
The skill triggers when the user asks to create, generate, or build a Dify workflow, chatflow, or workflow DSL file from a natural language description.
Output format is YAML by default (.dify.yml), with JSON (.dify.json) available on request.
Before generating, assess whether the user's description is sufficient:
Proceed directly if the description makes the app mode, the needed nodes, and the data flow identifiable.
Ask clarifying questions (up to 3 rounds) if any of these are unclear.
Once requirements are clear, proceed to generation.
| Node | Type Key | Purpose | Key Params | Schema Path |
|---|---|---|---|---|
| Start | start | Entry point; defines input variables | variables | references/nodes/start.md |
| End | end | Terminal node for Workflow mode; declares outputs | outputs | references/nodes/end.md |
| Answer | answer | Streams response in Chatflow mode | answer, variables | references/nodes/answer.md |
| LLM | llm | Invokes a large language model | model, prompt_template, context, vision | references/nodes/llm.md |
| Knowledge Retrieval | knowledge-retrieval | Searches knowledge bases for relevant chunks | query_variable_selector, dataset_ids, retrieval_mode | references/nodes/knowledge-retrieval.md |
| Code | code | Executes Python3/JavaScript/JSON code | code_language, code, variables, outputs | references/nodes/code.md |
| HTTP Request | http-request | Makes HTTP API calls | method, url, headers, body, authorization | references/nodes/http-request.md |
| If/Else | if-else | Conditional branching (IF/ELIF/ELSE) | cases | references/nodes/if-else.md |
| Variable Aggregator | variable-aggregator | Merges variables from multiple branches | output_type, variables | references/nodes/variable-aggregator.md |
| Iteration | iteration | Loops over array, runs sub-graph per element | iterator_selector, output_selector, start_node_id | references/nodes/iteration.md |
| Template Transform | template-transform | Renders Jinja2 templates with variables | template, variables | references/nodes/template-transform.md |
| Question Classifier | question-classifier | Routes by classifying input into categories via LLM | query_variable_selector, model, classes | references/nodes/question-classifier.md |
| Parameter Extractor | parameter-extractor | Extracts structured params from text via LLM | query, model, parameters, reasoning_mode | references/nodes/parameter-extractor.md |
| Tool | tool | Invokes external tools (built-in, API, MCP) | provider_id, provider_type, tool_name, tool_parameters | references/nodes/tool.md |
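Whatever their type, the nodes in the table above share a common envelope: an id, type: "custom", a position, and a data object carrying the type-specific fields. A minimal Python sketch of that shared shape (the helper name is illustrative, not part of the skill):

```python
def make_node(node_id: str, node_type: str, title: str,
              position: dict, **data) -> dict:
    """Wrap type-specific fields into the envelope every Dify graph
    node uses: id, type: "custom", position, and a data object."""
    return {
        "id": node_id,
        "type": "custom",
        "position": position,
        "data": {"type": node_type, "title": title, "desc": "", **data},
    }

# e.g. a Start node, matching the shape used in the examples below
start = make_node("1711536487001", "start", "Start",
                  {"x": 80, "y": 282}, variables=[])
```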
Follow these steps to produce a valid DSL file:
Parse requirement -- Identify the app mode (workflow or advanced-chat), needed nodes, and data flow.
- workflow mode with Start/End nodes for batch processing tasks.
- advanced-chat mode with Start/Answer nodes for conversational chatbots.

Select nodes -- Choose from the router table above. Load the corresponding schema file for each selected node to get the full field specification.
Check template match -- If the requirement closely matches a known pattern, start from a template (see Template Matching below). Adapt fields as needed.
Assemble from schemas -- If no template matches, build nodes individually. For each node:
- Assign each node a numeric string ID in millisecond-timestamp style (e.g. "1711536487001").
- Use {{#nodeId.variableName#}} syntax for variable references.
- Use {{#sys.query#}} for the system query variable in chatflow mode.

Generate edges -- Connect nodes following the rules in references/edge-and-layout.md:

- Edge IDs follow the pattern {sourceId}-{sourceHandle}-{targetId}-{targetHandle}.
- sourceHandle is "source" for most nodes.
- If/Else uses "true" (first case), the case_id (elif), or "false" (else) as sourceHandle.
- Question Classifier uses the class id as sourceHandle.
- targetHandle is always "target".
- Every edge sets type: "custom" and zIndex: 0 (or 1002 inside iterations).

Calculate layout positions -- Place nodes on a left-to-right grid:

- First node at {x: 80, y: 282}.
- Each subsequent column at +300 on the x-axis (NODE_WIDTH 240 + X_OFFSET 60).
- Parallel branches offset on the y-axis by +200.

Output file -- Render as YAML (default) or JSON. Validate structure completeness.
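The edge-ID and grid-layout rules are mechanical enough to sketch in Python (constants and function names here are illustrative assumptions, not the skill's actual code):

```python
NODE_WIDTH = 240
X_OFFSET = 60
COLUMN_STEP = NODE_WIDTH + X_OFFSET  # 300 px per column
BRANCH_STEP = 200                    # y offset per parallel branch
FIRST_X, FIRST_Y = 80, 282

def edge_id(source_id: str, target_id: str,
            source_handle: str = "source") -> str:
    """Edge IDs follow {sourceId}-{sourceHandle}-{targetId}-{targetHandle};
    targetHandle is always "target"."""
    return f"{source_id}-{source_handle}-{target_id}-target"

def node_position(column: int, branch: int = 0) -> dict:
    """Left-to-right grid: +300 on x per column, +200 on y per branch."""
    return {"x": FIRST_X + column * COLUMN_STEP,
            "y": FIRST_Y + branch * BRANCH_STEP}
```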
```yaml
version: "0.6.0"
kind: app
app:
  name: "Workflow Name"
  mode: "advanced-chat"  # or "workflow"
  description: "..."
  icon: "\U0001F916"
  icon_background: "#FFEAD5"
  icon_type: emoji
  use_icon_as_answer_icon: false
dependencies: []
workflow:
  environment_variables: []
  conversation_variables: []
  features:
    file_upload:
      enabled: false
    opening_statement: ""  # chatflow only
    retriever_resource:
      enabled: false
    sensitive_word_avoidance:
      enabled: false
    speech_to_text:
      enabled: false
    suggested_questions: []  # chatflow only
    suggested_questions_after_answer:
      enabled: false
    text_to_speech:
      enabled: false
  graph:
    nodes: []  # Node objects
    edges: []  # Edge objects
    viewport:
      x: 0
      y: 0
      zoom: 0.7
```
For the complete field-level specification, see references/dsl-format.md.
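Before emitting the file, the required keys can be checked with a few lines of Python (a minimal sketch of the "validate structure completeness" step, not Dify's actual import validator):

```python
def check_dsl_structure(dsl: dict) -> list:
    """Return a list of missing required keys; an empty list means
    the skeleton above is structurally complete."""
    problems = []
    for key in ("version", "kind", "app", "workflow"):
        if key not in dsl:
            problems.append(f"missing top-level key: {key}")
    graph = dsl.get("workflow", {}).get("graph", {})
    for key in ("nodes", "edges", "viewport"):
        if key not in graph:
            problems.append(f"missing graph key: {key}")
    return problems
```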
Output conventions:

- Output files (.dify.yml / .dify.json) go to the current working directory. Any intermediate/temp files go to /tmp/dify-workflow/.
- File naming: <kebab-case-name>.dify.yml (or .dify.json for JSON output).
- Top-level structure: version, kind, app, workflow (with graph, features).
- Node IDs are numeric strings in millisecond-timestamp style (e.g. "1711536487001"). Increment by a few thousand between nodes to simulate realistic IDs.
- Layout: first node at {x: 80, y: 282}. Each subsequent column at +300 on the x-axis. Parallel branches offset on the y-axis by +200.
- Variable references use {{#nodeId.variableName#}} syntax. System variables use the sys prefix: {{#sys.query#}}, {{#sys.user_id#}}.
- Model provider strings follow the pattern "langgenius/<provider>/<provider>" (e.g., "langgenius/openai/openai").

Use a template when the user's request closely matches one of these patterns. Load the template, then customize fields (model, prompts, variables) to fit the specific requirement.
| Template | Path | Matches When |
|---|---|---|
| Chatbot | references/templates/chatbot.yml | Simple conversational bot: Start -> LLM -> Answer |
| RAG | references/templates/rag.yml | Knowledge-base Q&A: Start -> Knowledge Retrieval -> LLM -> Answer |
| Agent | references/templates/agent.yml | Tool-using agent with question classification or parameter extraction |
| Translation | references/templates/translation.yml | Text transformation/translation: Start -> LLM (with specific system prompt) -> Answer/End |
If the requirement partially matches, use the closest template as a starting point and add/remove nodes as needed.
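As an illustration, the matching step can be approximated with a keyword heuristic (purely a sketch; the real skill reasons over the full requirement before choosing a template):

```python
TEMPLATES = {
    "chatbot": "references/templates/chatbot.yml",
    "rag": "references/templates/rag.yml",
    "agent": "references/templates/agent.yml",
    "translation": "references/templates/translation.yml",
}

def pick_template(description: str) -> str:
    """Naive keyword routing to the closest template; falls back
    to the plain chatbot pattern."""
    d = description.lower()
    if any(w in d for w in ("knowledge base", "rag", "retriev")):
        return TEMPLATES["rag"]
    if "translat" in d:
        return TEMPLATES["translation"]
    if any(w in d for w in ("tool", "agent", "classif")):
        return TEMPLATES["agent"]
    return TEMPLATES["chatbot"]
```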
A minimal chatflow (Start -> LLM -> Answer):
```yaml
version: "0.6.0"
kind: app
app:
  name: "Simple Chatbot"
  mode: advanced-chat
  icon: "\U0001F916"
  icon_background: "#FFEAD5"
  icon_type: emoji
  description: "A minimal chatbot using GPT-4o-mini."
  use_icon_as_answer_icon: false
dependencies: []
workflow:
  environment_variables: []
  conversation_variables: []
  features:
    file_upload:
      enabled: false
    opening_statement: "Hello! How can I help you today?"
    retriever_resource:
      enabled: false
    sensitive_word_avoidance:
      enabled: false
    speech_to_text:
      enabled: false
    suggested_questions:
      - "What can you help me with?"
    suggested_questions_after_answer:
      enabled: false
    text_to_speech:
      enabled: false
  graph:
    edges:
      - id: "1711536487001-source-1711536522001-target"
        source: "1711536487001"
        sourceHandle: source
        target: "1711536522001"
        targetHandle: target
        type: custom
        zIndex: 0
        data:
          sourceType: start
          targetType: llm
      - id: "1711536522001-source-1711536558001-target"
        source: "1711536522001"
        sourceHandle: source
        target: "1711536558001"
        targetHandle: target
        type: custom
        zIndex: 0
        data:
          sourceType: llm
          targetType: answer
    nodes:
      - id: "1711536487001"
        type: custom
        position: { x: 80, y: 282 }
        data:
          type: start
          title: Start
          desc: ""
          variables: []
      - id: "1711536522001"
        type: custom
        position: { x: 380, y: 282 }
        data:
          type: llm
          title: LLM
          desc: ""
          model:
            provider: "langgenius/openai/openai"
            name: "gpt-4o-mini"
            mode: "chat"
            completion_params:
              temperature: 0.7
          prompt_template:
            - role: "system"
              text: "You are a helpful assistant."
          variables: []
          context:
            enabled: false
            variable_selector: []
          vision:
            enabled: false
          memory:
            query_prompt_template: "{{#sys.query#}}"
            window:
              enabled: false
              size: 10
      - id: "1711536558001"
        type: custom
        position: { x: 680, y: 282 }
        data:
          type: answer
          title: Answer
          desc: ""
          answer: "{{#1711536522001.text#}}"
          variables: []
    viewport:
      x: 0
      y: 0
      zoom: 0.7
```
For a complete version with all optional fields, see examples/simple-chatbot.yml. For a RAG workflow example, see examples/rag-with-rerank.yml.
- YAML (default): .dify.yml. Use standard YAML formatting with 2-space indentation. Quote all node ID strings. This is the preferred format for readability and Dify import.
- JSON: .dify.json, produced when the user explicitly requests JSON. Use the same structure with standard JSON formatting. Useful for programmatic consumption or API-based import.

Both formats are fully supported by Dify's import functionality.
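Because both formats carry the same structure, the JSON variant is a straight serialization of the same mapping. A stdlib-only Python sketch (parsing the YAML side would additionally need a library such as PyYAML):

```python
import json

def to_dify_json(dsl: dict) -> str:
    """Serialize an in-memory DSL mapping to .dify.json text;
    the structure is identical to the YAML form."""
    return json.dumps(dsl, ensure_ascii=False, indent=2)

dsl = {"version": "0.6.0", "kind": "app"}
text = to_dify_json(dsl)
assert json.loads(text) == dsl  # lossless round-trip
```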