From assistant-ui
Guide for assistant-stream package and streaming protocols. Use when implementing streaming backends, custom protocols, or debugging stream issues.
```shell
npx claudepluginhub assistant-ui/skills --plugin assistant-ui
```

This skill uses the workspace's default tool permissions.
**Always consult [assistant-ui.com/llms.txt](https://assistant-ui.com/llms.txt) for latest API.**
The assistant-stream package handles streaming from AI backends.
```
Using Vercel AI SDK?
├─ Yes → toUIMessageStreamResponse() (no assistant-stream needed)
└─ No  → assistant-stream for custom backends
```
```shell
npm install assistant-stream
```
```typescript
import { createAssistantStreamResponse } from "assistant-stream";

export async function POST(req: Request) {
  return createAssistantStreamResponse(async (stream) => {
    stream.appendText("Hello ");
    stream.appendText("world!");

    // Tool call example
    const tool = stream.addToolCallPart({ toolCallId: "1", toolName: "get_weather" });
    tool.argsText.append('{"city":"NYC"}');
    tool.argsText.close();
    tool.setResponse({ result: { temperature: 22 } });

    stream.close();
  });
}
```
`useLocalRuntime` expects `ChatModelRunResult` chunks. Yield content parts for streaming:
```typescript
import { useLocalRuntime } from "@assistant-ui/react";

const runtime = useLocalRuntime({
  model: {
    async *run({ messages, abortSignal }) {
      const response = await fetch("/api/chat", {
        method: "POST",
        body: JSON.stringify({ messages }),
        signal: abortSignal,
      });
      const reader = response.body?.getReader();
      const decoder = new TextDecoder();
      let buffer = "";
      while (reader) {
        const { done, value } = await reader.read();
        if (done) break;
        buffer += decoder.decode(value, { stream: true });
        // Split on newlines; keep any trailing partial line in the buffer.
        const parts = buffer.split("\n");
        buffer = parts.pop() ?? "";
        for (const chunk of parts.filter(Boolean)) {
          yield { content: [{ type: "text", text: chunk }] };
        }
      }
    },
  },
});
```
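The newline-buffering step above is easy to get wrong when chunks split a line mid-way. A minimal standalone sketch of the same technique (the `createLineBuffer` helper is hypothetical, not part of assistant-stream):

```typescript
// Isolates the buffering logic from the run() generator above:
// feed raw decoded chunks in, get complete (newline-terminated) lines out.
function createLineBuffer() {
  let buffer = "";
  return {
    // Returns every complete line contained in the chunk.
    push(chunk: string): string[] {
      buffer += chunk;
      const parts = buffer.split("\n");
      buffer = parts.pop() ?? ""; // keep the trailing partial line
      return parts.filter(Boolean);
    },
    // Flush whatever partial line remains at end of stream.
    flush(): string[] {
      const rest = buffer ? [buffer] : [];
      buffer = "";
      return rest;
    },
  };
}

// Chunks may split a line anywhere; lines only surface once complete.
const lb = createLineBuffer();
console.log(lb.push("hel"));     // []
console.log(lb.push("lo\nwor")); // [ 'hello' ]
console.log(lb.push("ld\n"));    // [ 'world' ]
console.log(lb.flush());         // []
```

Because the helper never yields a partial line, downstream code can safely `JSON.parse` each result without catching mid-line truncation errors.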
To debug the raw protocol, decode the response stream directly and log each event:

```typescript
import { AssistantStream, DataStreamDecoder } from "assistant-stream";

const stream = AssistantStream.fromResponse(response, new DataStreamDecoder());
for await (const event of stream) {
  console.log("Event:", JSON.stringify(event, null, 2));
}
```
Event types you should see:

- `part-start` with `part.type = "text" | "reasoning" | "tool-call" | "source" | "file"`
- `text-delta` with streamed text
- `result` with tool results
- `step-start`, `step-finish`, `message-finish`
- `error` strings

Troubleshooting:

- **Stream not updating UI**: check that the response `Content-Type` is `text/event-stream`
- **Tool calls not rendering**: `addToolCallPart` needs both `toolCallId` and `toolName`; register the tool UI with `makeAssistantToolUI`
- **Partial text not showing**: ensure the backend emits `text-delta` events for streaming
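When diagnosing a stream, it helps to tally which event types actually arrive before digging into payloads. A small sketch over a mocked event array (the event shapes below are assumptions based on the list above, not the exact wire format):

```typescript
type StreamEvent = { type: string; [key: string]: unknown };

// Count occurrences of each event type seen on the wire.
function tallyEvents(events: StreamEvent[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const e of events) counts[e.type] = (counts[e.type] ?? 0) + 1;
  return counts;
}

// Mock events mirroring the types listed above.
const mock: StreamEvent[] = [
  { type: "part-start", part: { type: "text" } },
  { type: "text-delta", textDelta: "Hello " },
  { type: "text-delta", textDelta: "world!" },
  { type: "message-finish" },
];
console.log(tallyEvents(mock));
// { 'part-start': 1, 'text-delta': 2, 'message-finish': 1 }
```

If the tally shows `part-start` but no `text-delta`, the backend is opening parts without streaming deltas, which matches the "partial text not showing" symptom above.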