The Vercel AI SDK provides React hooks and server utilities for building streaming chat interfaces with support for tool calls, file attachments, and multi-step reasoning.
```tsx
import { useChat } from '@ai-sdk/react';

const { messages, status, sendMessage, stop, regenerate } = useChat({
  id: 'chat-id',
  messages: initialMessages,
  onFinish: ({ message, messages, isAbort, isError }) => {
    console.log('Chat finished');
  },
  onError: (error) => {
    console.error('Chat error:', error);
  }
});

// Send a message
sendMessage({ text: 'Hello', metadata: { createdAt: Date.now() } });

// Send with files
sendMessage({
  text: 'Analyze this',
  files: fileList // FileList or FileUIPart[]
});
```
The `status` field indicates the current state of the chat:

- `ready`: Chat is idle and ready to accept new messages
- `submitted`: Message sent to API, awaiting response stream start
- `streaming`: Response actively streaming from the API
- `error`: An error occurred during the request

Messages use the `UIMessage` type with a parts-based structure:
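As a minimal sketch, these status values can drive UI state, such as disabling the input while a request is in flight (the `isBusy` helper and its name are my own, not part of the SDK):

```typescript
// The four documented status values as a union type.
type ChatStatus = 'ready' | 'submitted' | 'streaming' | 'error';

// A request is in flight from submission until streaming ends.
// Hypothetical helper, used e.g. as <input disabled={isBusy(status)} />.
function isBusy(status: ChatStatus): boolean {
  return status === 'submitted' || status === 'streaming';
}
```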
```ts
interface UIMessage {
  id: string;
  role: 'system' | 'user' | 'assistant';
  metadata?: unknown;
  parts: Array<UIMessagePart>; // text, file, tool-*, reasoning, etc.
}
```
Part types include:

- `text`: Text content with optional streaming state
- `file`: File attachments (images, documents)
- `tool-{toolName}`: Tool invocations with state machine
- `reasoning`: AI reasoning traces
- `data-{typeName}`: Custom data parts

On the server, convert UI messages to model messages and stream the response:

```ts
import { streamText, tool, convertToModelMessages } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-4'),
  messages: convertToModelMessages(uiMessages),
  tools: {
    getWeather: tool({
      description: 'Get weather',
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => {
        return { temperature: 72, weather: 'sunny' };
      }
    })
  }
});

return result.toUIMessageStreamResponse({
  originalMessages: uiMessages,
  onFinish: ({ messages }) => {
    // Save to database
  }
});
```
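`onFinish` receives the final UI messages; one way to persist them is to upsert by message id, so a regenerated message replaces its earlier version. A sketch with hypothetical types and helper name, assuming no particular database:

```typescript
type StoredMessage = { id: string };

// Merge finished messages into what's already stored, keyed by id,
// so regenerated messages overwrite their previous versions.
function upsertMessages<T extends StoredMessage>(existing: T[], incoming: T[]): T[] {
  const byId = new Map(existing.map((m) => [m.id, m] as const));
  for (const m of incoming) byId.set(m.id, m);
  return [...byId.values()];
}
```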
Client-Side Tool Execution:
```ts
const { addToolOutput } = useChat({
  onToolCall: async ({ toolCall }) => {
    if (toolCall.toolName === 'getLocation') {
      addToolOutput({
        tool: 'getLocation',
        toolCallId: toolCall.toolCallId,
        output: 'San Francisco'
      });
    }
  }
});
```
Rendering Tool States:
```tsx
{message.parts.map((part, i) => {
  if (part.type === 'tool-getWeather') {
    switch (part.state) {
      case 'input-streaming':
        return <pre key={i}>{JSON.stringify(part.input, null, 2)}</pre>;
      case 'input-available':
        return <div key={i}>Getting weather for {part.input.city}...</div>;
      case 'output-available':
        return <div key={i}>Weather: {part.output.weather}</div>;
      case 'output-error':
        return <div key={i}>Error: {part.errorText}</div>;
    }
  }
  return null;
})}
```
Error Handling:
```ts
const { error, clearError } = useChat({
  onError: (error) => {
    toast.error(error.message);
  }
});

// Clear error and reset to ready state
if (error) {
  clearError();
}
```
Regenerating Messages:

```ts
const { regenerate } = useChat();

// Regenerate last assistant message
await regenerate();

// Regenerate specific message
await regenerate({ messageId: 'msg-123' });
```
Custom Transport:

```ts
import { DefaultChatTransport } from 'ai';

const { messages } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    prepareSendMessagesRequest: ({ id, messages, trigger, messageId }) => ({
      body: {
        chatId: id,
        lastMessage: messages[messages.length - 1],
        trigger,
        messageId
      }
    })
  })
});
```
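On the receiving end, the API route must parse the same custom body shape. A hedged sketch: the field names match the `prepareSendMessagesRequest` example above, but the type name, validation logic, and helper are my own:

```typescript
// Hypothetical shape mirroring the custom request body above.
type ChatRequestBody = {
  chatId: string;
  lastMessage: unknown;
  trigger: string;
  messageId?: string;
};

// Narrow an unknown JSON payload to the expected body shape.
function parseChatBody(json: unknown): ChatRequestBody {
  const b = json as Partial<ChatRequestBody>;
  if (typeof b?.chatId !== 'string' || typeof b?.trigger !== 'string') {
    throw new Error('Invalid chat request body');
  }
  return b as ChatRequestBody;
}
```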
```ts
// Throttle UI updates to reduce re-renders
const chat = useChat({
  experimental_throttle: 100 // Update max once per 100ms
});
```
```ts
import { lastAssistantMessageIsCompleteWithToolCalls } from 'ai';

const chat = useChat({
  // Automatically resend when all tool calls have outputs
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls
});
```
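The predicate is just a function of the current messages, so you can also write your own. A sketch mirroring the built-in behavior; the local `Msg`/`ToolishPart` types are simplified stand-ins for the SDK's real message types:

```typescript
type ToolishPart = { type: string; state?: string };
type Msg = { role: 'system' | 'user' | 'assistant'; parts: ToolishPart[] };

// Resend only when the last message is from the assistant and every
// tool part on it has produced output.
function allToolCallsResolved({ messages }: { messages: Msg[] }): boolean {
  const last = messages[messages.length - 1];
  if (!last || last.role !== 'assistant') return false;
  const toolParts = last.parts.filter((p) => p.type.startsWith('tool-'));
  return toolParts.length > 0 && toolParts.every((p) => p.state === 'output-available');
}
```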
The SDK provides full type inference for tools and messages:
```ts
import { tool, type InferUITools, type UIDataTypes, type UIMessage } from 'ai';
import { useChat } from '@ai-sdk/react';
import { z } from 'zod';

const tools = {
  getWeather: tool({
    inputSchema: z.object({ city: z.string() }),
    execute: async ({ city }) => ({ weather: 'sunny' })
  })
};

type MyMessage = UIMessage<
  { createdAt: number }, // Metadata type
  UIDataTypes,
  InferUITools<typeof tools> // Tool types
>;

const { messages } = useChat<MyMessage>();
```
Messages use a `parts` array instead of a single `content` field, which lets one message carry text, files, reasoning traces, and tool invocations in order.
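For example, recovering a message's plain text now means walking its parts. A minimal sketch; the local types are simplified stand-ins for the SDK's, and the helper name is my own:

```typescript
type TextPart = { type: 'text'; text: string };
type Part = TextPart | { type: string };

// Concatenate all text parts of a message, e.g. for copy-to-clipboard.
function messageText(parts: Part[]): string {
  return parts
    .filter((p): p is TextPart => p.type === 'text')
    .map((p) => p.text)
    .join('');
}
```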
Tool parts progress through states:

- `input-streaming`: Tool input streaming (optional)
- `input-available`: Tool input complete
- `approval-requested`: Waiting for user approval (optional)
- `approval-responded`: User approved/denied (optional)
- `output-available`: Tool execution complete
- `output-error`: Tool execution failed
- `output-denied`: User denied approval

The SDK uses Server-Sent Events (SSE) with `UIMessageChunk` types:
- `text-start`, `text-delta`, `text-end`
- `tool-input-available`, `tool-output-available`
- `reasoning-start`, `reasoning-delta`, `reasoning-end`
- `start`, `finish`, `abort`

Server-side tools have an `execute` function and run on the API route.
Client-side tools omit `execute` and are handled via `onToolCall` and `addToolOutput`.
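A hedged sketch of that distinction; the `ToolDef` type and `isClientSideTool` helper are simplified illustrations, not the SDK's own definitions:

```typescript
type ToolDef = {
  description?: string;
  execute?: (input: unknown) => Promise<unknown>;
};

// A tool with no execute function must be answered on the client
// via onToolCall / addToolOutput.
function isClientSideTool(def: ToolDef): boolean {
  return typeof def.execute !== 'function';
}
```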
Best practices:

- Handle the `error` state and provide user feedback
- Use `experimental_throttle` for high-frequency updates
- Reflect `status` in the UI (e.g. disable input while streaming)
- Use `sendAutomaticallyWhen` for multi-turn tool workflows
- Expose `stop()` to allow users to cancel long-running requests
- Validate incoming messages with `validateUIMessages` on the server