Provides @tanstack/ai-vue Vue hooks references, API changes, migration guides, and best practices. Use when importing the package to write, debug, or modify code.
Install: `npx claudepluginhub skilld-dev/vue-ecosystem-skills --plugin floating-ui-vue`
**Tags:** latest: 0.6.12
Bundled references:
- `references/discussions/` — `_INDEX.md` plus discussions 86, 98, 108, 111, 127, 139, 191, 192, 207, 212, 232, 280
- `references/docs/` — `_INDEX.md` plus adapter docs: `anthropic`, `fal`, `gemini`, `grok`, `groq`, `ollama`
@tanstack/ai-vue@0.6.12
References: Docs
This section documents version-specific API changes for @tanstack/ai-vue v0.6.12 (current v0.x series). The library is pre-1.0; all v0.x releases are in scope.
BREAKING: Monolithic adapter factories removed — openai(), anthropic(), etc. replaced by activity-specific functions: openaiText('gpt-5.2'), openaiSummarize('gpt-5-mini'), openaiImage('dall-e-3'), etc. Model name is now passed to the adapter factory, not to chat(). source
BREAKING: model parameter removed from chat() — model is now embedded in the adapter argument (e.g., adapter: openaiText('gpt-5.2') instead of adapter: openai(), model: 'gpt-4'). Passing model at the call site is silently ignored. source
BREAKING: Nested options object flattened — chat({ options: { temperature, maxTokens, topP } }) must be changed to chat({ temperature, maxTokens, topP }). Nested options are silently discarded. source
BREAKING: providerOptions renamed to modelOptions — chat({ providerOptions: { ... } }) must be updated to chat({ modelOptions: { ... } }). Silently ignored if not updated. source
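Taken together, the call-site changes above amount to one rewrite of the `chat()` call. A sketch of the migration, assuming an import path for `chat` and `openaiText` (the exact entry point is not stated in this reference) and placeholder option values:

```ts
// Assumed import path; the exact package entry point is not confirmed here.
import { chat, openaiText } from '@tanstack/ai'

const messages = [{ role: 'user', content: 'Hi' }]

// BEFORE (v0.5.x and earlier):
// chat({
//   adapter: openai(),                               // monolithic factory
//   model: 'gpt-4',                                  // model at the call site
//   options: { temperature: 0.7, maxTokens: 1024 },  // nested options
//   providerOptions: { user: 'u-123' },              // old name
//   messages,
// })

// AFTER (v0.6.x):
const stream = chat({
  adapter: openaiText('gpt-5.2'),  // activity-specific factory carries the model
  temperature: 0.7,                // options flattened to the top level
  maxTokens: 1024,
  modelOptions: { user: 'u-123' }, // renamed from providerOptions
  messages,
})
```

Note that the removed `model`, nested `options`, and old `providerOptions` keys are silently ignored rather than rejected, so this migration will not surface as a type error at every call site.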
BREAKING: toResponseStream renamed to toServerSentEventsStream and now returns ReadableStream instead of Response — must manually create new Response(stream, { headers }). AbortController is now a separate parameter: toServerSentEventsStream(stream, abortController). source
BREAKING: embedding() function removed — embeddings support eliminated entirely. Use provider SDKs directly or vector DB native embedding APIs. source
BREAKING: chat({ as: 'promise' }) replaced by separate chatCompletion() function — as option removed from chat(). chat({ as: 'stream' }) is now just chat(). chat({ as: 'response' }) is now chat() + toServerSentEventsStream(). source
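The three old modes map onto the new surface roughly as follows (a sketch; names as in the items above):

```ts
// BEFORE                          AFTER (v0.6.x)
// chat({ as: 'stream', ... })     chat({ ... })            // streaming is now the default
// chat({ as: 'promise', ... })    chatCompletion({ ... })  // resolves with the full completion
// chat({ as: 'response', ... })   new Response(
//                                   toServerSentEventsStream(chat({ ... }), abortController),
//                                   { headers: { 'Content-Type': 'text/event-stream' } },
//                                 )
```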
NEW: useChat returns status reactive ref — tracks lifecycle as 'ready' | 'submitted' | 'streaming' | 'error'. Previously there was no generation lifecycle state. source
NEW: sendMessage() accepts MultimodalContent object — sendMessage({ content: [{ type: 'text', content: '...' }, { type: 'image', source: { type: 'url', value: '...' } }] }) enables image/audio/video/document content alongside text. Added in v0.5.0. source
NEW: agentLoopStrategy parameter replaces bare maxIterations: number — use agentLoopStrategy: maxIterations(5), untilFinishReason(['stop']), or combineStrategies([...]). Old maxIterations number is converted automatically but deprecated. source
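A sketch of combining strategies, assuming `chat`, `openaiText`, `messages`, and `tools` as in the surrounding items (the import path for the strategy helpers is an assumption):

```ts
// Assumed import path for the strategy helpers.
import { maxIterations, untilFinishReason, combineStrategies } from '@tanstack/ai'

const result = chat({
  adapter: openaiText('gpt-5.2'),
  messages,
  tools,
  // Run at most 10 tool-call rounds, but exit early once the model
  // reports a 'stop' finish reason.
  agentLoopStrategy: combineStrategies([
    maxIterations(10),
    untilFinishReason(['stop']),
  ]),
})
```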
NEW: toolDefinition({ name, description, inputSchema, outputSchema?, needsApproval? }) — creates isomorphic tool definitions. Call .server(fn) for server-side execution or .client(fn) for client-side execution. Replaces ad-hoc tool objects. source
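A sketch of the shared-definition pattern, using a real Zod schema; the import path for `toolDefinition` is an assumption, and the tool itself is hypothetical:

```ts
import { z } from 'zod'
// Assumed import path; the package exporting toolDefinition is not stated here.
import { toolDefinition } from '@tanstack/ai'

// Shared definition (e.g. tools/weather.ts): schema only, no execution logic.
export const getWeather = toolDefinition({
  name: 'getWeather',
  description: 'Look up current weather for a city',
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ tempC: z.number() }),
})

// Server route: bind a server-side implementation with .server().
export const getWeatherServer = getWeather.server(async ({ city }) => {
  // Hypothetical lookup; replace with a real weather call.
  return { tempC: 21 }
})

// Vue component: bind a browser-side implementation with .client().
const getWeatherClient = getWeather.client(async ({ city }) => ({ tempC: 21 }))
```

Passing `getWeather` itself to `chat()` tells the server the client will execute it; passing `getWeatherServer` executes it server-side inside the agent loop.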
NEW: @tanstack/ai-client package — ChatClient class provides framework-agnostic headless chat state management with sendMessage(), reload(), stop(), clear(), addToolResult(), addToolApprovalResponse() methods. source
NEW: Connection adapter factories — fetchServerSentEvents(url, options?), fetchHttpStream(url, options?), stream(fn) from @tanstack/ai-client. Pass to useChat({ connection: fetchServerSentEvents('/api/chat') }) instead of url: '/api/chat'. source
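A minimal Vue usage sketch combining the connection adapter with the `status` ref; `'/api/chat'` is a placeholder route:

```ts
// <script setup lang="ts"> in a Vue SFC.
import { computed } from 'vue'
import { useChat } from '@tanstack/ai-vue'
// Connection factories ship in @tanstack/ai-client, per the entry above.
import { fetchServerSentEvents } from '@tanstack/ai-client'

const { messages, sendMessage, status } = useChat({
  // Replaces the old url: '/api/chat' option.
  connection: fetchServerSentEvents('/api/chat'),
})

// status is a reactive ref: 'ready' | 'submitted' | 'streaming' | 'error'
const isBusy = computed(() =>
  status.value === 'submitted' || status.value === 'streaming')
```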
NEW: extendAdapter(factory, customModels) + createModel(name, modalities) — adds custom/fine-tuned model names to existing adapter factories with full type inference. Avoids as const casts. source

Also changed:
- clientTools(...tools) NEW (typed tool array, discriminated union narrowing)
- createChatClientOptions(options) NEW
- InferChatMessages<T> NEW
- toServerSentEventsResponse(stream, init?) NEW (returns Response)
- toHttpStream(stream) NEW
- toHttpResponse(stream) NEW
- assertMessages({ adapter }, messages) NEW (type-level assertion)
- ThinkingStreamChunk NEW (chunk type for model reasoning)
useChat returns DeepReadonly<ShallowRef<T>> refs — never reassign messages directly; use setMessages() for manual updates. Changing connection or body options recreates the underlying ChatClient, so a component remount or a key prop change is required for the new options to take effect.
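A minimal illustration (imports elided; `useChat` and `fetchServerSentEvents` as elsewhere in this reference, `'/api/chat'` a placeholder):

```ts
const { messages, setMessages } = useChat({
  connection: fetchServerSentEvents('/api/chat'),
})

// messages.value = []   // ✗ DeepReadonly: reassignment will not type-check
setMessages([])          // ✓ the supported way to reset or patch history

// Later changes to connection/body options only take effect after a
// remount (e.g. bump a :key), since they recreate the underlying ChatClient.
```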
Use status (added v0.4.0) instead of isLoading for granular lifecycle control — status.value tracks 'ready' | 'submitted' | 'streaming' | 'error', enabling distinct UI states for submission vs. active streaming source
Pass client tool arrays through clientTools() instead of as const — eliminates the need for const assertion while enabling full discriminated union narrowing on part.name, part.input, and part.output in message iteration source
Wrap useChat options with createChatClientOptions() and derive message types using InferChatMessages<typeof chatOptions> — this propagates tool types through the entire message type, making part.name a literal union and part.input/part.output typed from Zod schemas source
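A sketch of the typed-options pattern; the exporting package for `clientTools`, `createChatClientOptions`, and `InferChatMessages` is an assumption, and `searchTool` is a hypothetical client tool:

```ts
import { useChat } from '@tanstack/ai-vue'
// Assumed import paths; the exporting package for these helpers is not
// stated in this reference.
import {
  clientTools,
  createChatClientOptions,
  fetchServerSentEvents,
  type InferChatMessages,
} from '@tanstack/ai-client'

// Hypothetical client tool built with toolDefinition(...).client(...).
declare const searchTool: any

const chatOptions = createChatClientOptions({
  connection: fetchServerSentEvents('/api/chat'),
  tools: clientTools(searchTool), // typed array, no `as const` needed
})

// Message types derived from the options, tools included.
type Messages = InferChatMessages<typeof chatOptions>

const { messages } = useChat(chatOptions)
// When iterating messages, part.name narrows to a literal union, and
// part.input / part.output are typed from the tools' Zod schemas.
```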
Define tools with toolDefinition() in a shared file, then call .server() in route handlers and .client() in Vue components — passing the bare definition to chat() signals the client will execute it, while passing .server() output executes it server-side automatically source
Use Zod schemas (v4.2+) over raw JSON Schema for inputSchema/outputSchema in toolDefinition() and chat({ outputSchema }) — JSON Schema infers any for tool inputs/outputs and unknown for structured output return types, losing all downstream type safety source
Set agentLoopStrategy: maxIterations(n) explicitly when tools are present — the default is 5 iterations, which is too low for multi-step agentic workflows; use untilFinishReason(['stop']) to exit as soon as the model finishes without hitting the limit source
Subscribe to aiEventClient with { withEventTarget: true } in production code — without this third argument the client only emits to the devtools event bus (absent in production builds); the flag also dispatches to the current EventTarget for application-level observability source
Prefer fetchServerSentEvents over fetchHttpStream for client connections — SSE provides automatic reconnection; pass URL and options as functions (not static values) when headers like Authorization must be re-evaluated on every request source
Use extendAdapter(baseFactory, [createModel('model-name', ['text', 'image'])]) to add TypeScript types for fine-tuned models or OpenAI-compatible proxies — this adds the model to the adapter's allowed type union with zero runtime overhead while preserving all original factory config parameters source
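A sketch, with `openaiText` as in the items above; the import path is an assumption and the fine-tune ID is made up for illustration:

```ts
// Assumed import path for extendAdapter/createModel.
import { extendAdapter, createModel } from '@tanstack/ai'

// 'ft:gpt-4.1:acme::abc123' is a fabricated fine-tune ID.
const myOpenaiText = extendAdapter(openaiText, [
  createModel('ft:gpt-4.1:acme::abc123', ['text']),
])

// The custom name is now part of the adapter's model type union,
// so this type-checks without an `as const` cast:
const adapter = myOpenaiText('ft:gpt-4.1:acme::abc123')
```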