Builds AI systems with Weft, a Rust-based language that wires LLMs, humans, APIs, and infra together as typed primitives with durable Restate execution. Covers setup, development, and debugging.

npx claudepluginhub joshuarweaver/cascade-ai-ml-agents-misc-1 --plugin aradotso-trending-skills-37

This skill uses the workspace's default tool permissions.
> Skill by [ara.so](https://ara.so) — Daily 2026 Skills collection.
Weft is a programming language (implemented in Rust) for AI systems where LLMs, humans, APIs, databases, and agents are base language primitives. You wire nodes together, the compiler type-checks every connection, and the program runs with durable execution backed by Restate (survives crashes, supports multi-day human-in-the-loop pauses). A visual graph view is generated automatically from code.
brew install bash # Bash 4+ required by dev.sh

git clone https://github.com/WeaveMindAI/weft.git
cd weft
cp .env.example .env
# Edit .env — add your API keys
Keys read from .env:

OPENROUTER_API_KEY= # Required for LLM nodes
TAVILY_API_KEY= # Required for Web Search nodes
ELEVENLABS_API_KEY= # Required for Speech-to-Text nodes
APOLLO_API_KEY= # Required for Apollo enrichment nodes
DISCORD_BOT_TOKEN= # Required for Discord nodes
All keys are optional at startup — missing keys surface as runtime errors only when the relevant node executes.
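For example, a program that touches no external service runs fine with an empty .env — a minimal sketch using only the Text and Debug nodes, exactly as they appear in the examples below:

# No API keys needed: neither node calls an external service
greeting = Text {
label: "Greeting"
value: "hello"
}
out = Debug { label: "Out" }
out.data = greeting.value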
# Terminal 1 — backend (PostgreSQL, Restate, all services)
./dev.sh server
# Terminal 2 — dashboard (SvelteKit at http://localhost:5173)
./dev.sh dashboard
# Or both at once
./dev.sh all
Use the Dev Local All task to start server + dashboard in split terminals.
./dev.sh server # Start backend services
./dev.sh dashboard # Start frontend
./dev.sh all # Start everything
./dev.sh extension # Build browser extension
./cleanup.sh # Stop everything, wipe Restate + DB
./cleanup.sh --no-db # Stop services, keep database
./cleanup.sh --services # Stop services only
./cleanup.sh --db-destroy # Remove PostgreSQL container entirely
cargo build # Build without running PostgreSQL (uses .sqlx snapshots)
cargo test # Test without running PostgreSQL
kind is only needed if you use nodes like Postgres Database that provision K8s resources:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.31.0/kind-$(uname -s | tr '[:upper:]' '[:lower:]')-amd64
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
INFRASTRUCTURE_TARGET=local ./dev.sh server
node_name = NodeType -> (output_port: OutputType) {
label: "Human-readable name"
config_key: "value"
}
node_name.input_port = other_node.output_port
# Project: Poem Generator
# Description: Writes a short poem about any topic
topic = Text {
label: "Topic"
value: "the silence between stars"
}
llm_config = LlmConfig {
label: "Config"
model: "anthropic/claude-sonnet-4.6"
systemPrompt: "Write a short, beautiful poem (4-6 lines) about the given topic."
temperature: "0.8"
}
poet = LlmInference -> (response: String) {
label: "Poet"
}
poet.prompt = topic.value
poet.config = llm_config.config
output = Debug { label: "Poem" }
output.data = poet.response
**AI**

| Node | Purpose |
|---|---|
| LlmConfig | Configure model, system prompt, temperature |
| LlmInference | Call an LLM, returns response: String |
**Data**

| Node | Purpose |
|---|---|
| Text | Static or dynamic text value |
| Number | Numeric value |
| Dict | Key-value map |
| List | Ordered list |
| Pack / Unpack | Bundle/unbundle multiple values |
**Flow control**

| Node | Purpose |
|---|---|
| Gate | Conditional branching |
| HumanQuery | Pause execution, send form to human, resume on response |
| HumanTrigger | Start a program from a human action |
**Communication:** Discord, Slack, Telegram, WhatsApp, Email, X
**Storage:** Postgres, Memory
**Enrichment:** Apollo, WebSearch, SpeechToText
**Triggers:** Cron, webhooks, polling
**Utilities:** Debug, Template, HTTP, Code (Python execution)
# Project: Content Summarizer
# Description: Summarizes a webpage given a URL
url_input = Text {
label: "URL"
value: "https://example.com/article"
}
search = WebSearch -> (results: String) {
label: "Fetch Content"
}
search.query = url_input.value
summarizer_config = LlmConfig {
label: "Summarizer Config"
model: "anthropic/claude-sonnet-4.6"
systemPrompt: "Summarize the following content in 3 bullet points."
temperature: "0.3"
}
summarizer = LlmInference -> (response: String) {
label: "Summarizer"
}
summarizer.prompt = search.results
summarizer.config = summarizer_config.config
output = Debug { label: "Summary" }
output.data = summarizer.response
# Project: Content Approval Pipeline
# Description: AI drafts content, human approves before publishing
draft_config = LlmConfig {
label: "Drafter Config"
model: "openai/gpt-4o"
systemPrompt: "Write a Twitter thread about the given topic. Be engaging."
temperature: "0.7"
}
topic = Text {
label: "Topic"
value: "distributed systems"
}
drafter = LlmInference -> (response: String) {
label: "Content Drafter"
}
drafter.prompt = topic.value
drafter.config = draft_config.config
# Pauses execution indefinitely until a human responds
approval = HumanQuery -> (approved: Boolean, feedback: String) {
label: "Human Approval"
question: "Do you approve this draft for publishing?"
}
approval.content = drafter.response
gate = Gate -> (passed: String) {
label: "Approval Gate"
}
gate.condition = approval.approved
gate.value = drafter.response
publisher = Discord {
label: "Publish to Discord"
channel: "announcements"
}
publisher.message = gate.passed
# Project: Sentiment Router
# Description: Routes messages based on sentiment analysis
message = Text {
label: "Input Message"
value: "This product is absolutely terrible!"
}
sentiment_config = LlmConfig {
label: "Sentiment Config"
model: "anthropic/claude-haiku-3.5"
systemPrompt: "Classify sentiment as 'positive' or 'negative'. Respond with one word only."
temperature: "0.0"
}
classifier = LlmInference -> (response: String) {
label: "Sentiment Classifier"
}
classifier.prompt = message.value
classifier.config = sentiment_config.config
is_negative = Gate -> (passed: String) {
label: "Is Negative?"
}
is_negative.condition = classifier.response
is_negative.value = message.value
alert = Slack {
label: "Alert Team"
channel: "customer-issues"
}
alert.message = is_negative.passed
# Project: Daily Digest
# Description: Sends a daily news digest every morning
schedule = Cron {
label: "Daily Trigger"
expression: "0 8 * * *"
}
news = WebSearch -> (results: String) {
label: "Fetch News"
}
news.query = "AI and technology news today"
digest_config = LlmConfig {
label: "Digest Config"
model: "openai/gpt-4o-mini"
systemPrompt: "Summarize these news items into a concise morning digest."
temperature: "0.4"
}
digest = LlmInference -> (response: String) {
label: "Digest Writer"
}
digest.prompt = news.results
digest.config = digest_config.config
send = Email {
label: "Send Digest"
to: "team@example.com"
subject: "Your Daily AI Digest"
}
send.body = digest.response
# Project: Research Agent
# Description: Researches a topic and produces a structured report
query = Text {
label: "Research Query"
value: "latest advances in protein folding"
}
search = WebSearch -> (results: String) {
label: "Search"
}
search.query = query.value
# Enrich with professional data
enrichment = Apollo -> (data: String) {
label: "Enrichment"
}
analyst_config = LlmConfig {
label: "Analyst Config"
model: "anthropic/claude-sonnet-4.6"
systemPrompt: "You are a research analyst. Given search results, produce a structured report with: Executive Summary, Key Findings, Implications, and Further Reading."
temperature: "0.2"
}
pack_inputs = Pack -> (bundle: Dict) {
label: "Combine Sources"
}
pack_inputs.search_results = search.results
pack_inputs.enrichment = enrichment.data
analyst = LlmInference -> (response: String) {
label: "Research Analyst"
}
analyst.prompt = pack_inputs.bundle
analyst.config = analyst_config.config
store = Postgres {
label: "Store Report"
table: "research_reports"
}
store.data = analyst.response
notify = Slack {
label: "Notify Team"
channel: "research"
}
notify.message = analyst.response
New nodes are two files in the catalog/ directory. No registration needed — the inventory crate auto-discovers nodes at startup.
catalog/
└── my_category/
└── my_node/
├── backend.rs
└── frontend.ts
backend.rs:

use weft_nodes::prelude::*;
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize)]
pub struct MyNodeConfig {
pub label: String,
pub my_setting: String,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct MyNodeInputs {
pub text: String,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct MyNodeOutputs {
pub result: String,
}
pub struct MyNode;
#[async_trait]
impl Node for MyNode {
type Config = MyNodeConfig;
type Inputs = MyNodeInputs;
type Outputs = MyNodeOutputs;
async fn run(
config: Self::Config,
inputs: Self::Inputs,
) -> Result<Self::Outputs, NodeError> {
let result = format!("{}: {}", config.my_setting, inputs.text);
Ok(MyNodeOutputs { result })
}
}
frontend.ts:

import type { NodeDefinition } from "@/types/nodes";
export const MyNode: NodeDefinition = {
type: "MyNode",
label: "My Node",
icon: "sparkles", // Lucide icon name
category: "my_category",
inputs: [
{ name: "text", type: "String", required: true },
],
outputs: [
{ name: "result", type: "String" },
],
config: [
{ name: "label", type: "string", default: "My Node" },
{ name: "my_setting", type: "string", default: "prefix" },
],
};
After adding both files, run ./dev.sh server — the node is available immediately.
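The new node can then be wired like any built-in. A minimal sketch using the MyNode definition above, with Text and Debug as in the earlier examples:

greeting = Text {
label: "Greeting"
value: "hello"
}
custom = MyNode -> (result: String) {
label: "My Node"
my_setting: "prefix"
}
custom.text = greeting.value
out = Debug { label: "Result" }
out.data = custom.result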
Weft has generics, unions, type variables, and null propagation.
| Type | Description |
|---|---|
| String | Text value |
| Number | Numeric value |
| Boolean | True/false |
| Dict | Key-value map |
| List | Ordered collection |
| T? | Nullable type — null propagates through the graph |
| T \| U | Union type |
The compiler validates every connection before execution. Type mismatches, missing required connections, and malformed graph structure are caught at compile time.
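A minimal sketch of guarding a nullable value with a Gate before it reaches a consumer — note that source and consumer are placeholder node names, and the exact null-handling semantics of Gate's condition port are an assumption here:

# Sketch — placeholder names; assumes source.value has type String?
guard = Gate -> (passed: String) {
label: "Null Guard"
}
guard.condition = source.value # assumption: null fails the gate
guard.value = source.value
consumer.input = guard.passed # downstream sees String, not String?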
weft/
├── catalog/ # Node definitions — source of truth
│ ├── ai/ # LlmConfig, LlmInference
│ ├── code/ # Python execution
│ ├── communication/ # Discord, Slack, Telegram, WhatsApp, Email, X
│ ├── data/ # Text, Number, Dict, List, Pack, Unpack
│ ├── enrichment/ # Apollo, WebSearch, SpeechToText
│ ├── flow/ # Gate, HumanQuery, HumanTrigger
│ ├── storage/ # Postgres, Memory
│ └── triggers/ # Cron, webhooks, polling
├── crates/
│ ├── weft-core/ # Type system, compiler, executor, Restate objects
│ ├── weft-nodes/ # Node trait, registry, sandbox, node runner
│ ├── weft-api/ # REST API (triggers, files, infra, usage)
│ └── weft-orchestrator/ # Restate services + Axum project executor
├── dashboard/ # Web UI (SvelteKit + Svelte 5)
├── extension/ # Browser extension for human-in-the-loop (WXT)
└── scripts/
└── catalog-link.sh # Symlinks catalog into crates + dashboard
Troubleshooting:

- ./dev.sh server fails immediately — check Docker is running (docker ps), run ./cleanup.sh, then retry.
- Changes to .env not picked up — restart services (./cleanup.sh --services && ./dev.sh server).
- A node expects a Dict — use Pack to bundle multiple values into a Dict before passing them in (see the sketch below).
- Nullable types (T?) can propagate null — use a Gate node to guard against null before consuming.
- Full reset: ./cleanup.sh # Wipes Restate journal and DB
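For the Pack pattern above, a minimal sketch mirroring the Research Agent example — node_a, node_b, and consumer are placeholders, and Pack's input port names become arbitrary keys in the resulting Dict:

combined = Pack -> (bundle: Dict) {
label: "Combine"
}
combined.first = node_a.output
combined.second = node_b.output
consumer.input = combined.bundle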
cargo build fails without a running Postgres — the .sqlx directory is committed, so offline mode is supported:
cargo build # Works without a running database
cargo test # Works without a running database
- New node not appearing — confirm backend.rs and frontend.ts exist under catalog/<category>/<node>/, run ./scripts/catalog-link.sh manually, then restart the server.

Further reading: DESIGN.md, ROADMAP.md, and CONTRIBUTING.md in the repo root.