**Status**: Production Ready ✅
Install via the Claude skills marketplace:

```shell
/plugin marketplace add secondsky/claude-skills
/plugin install tanstack-ai@claude-skills
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
**Bundled files**:

- assets/api-chat-route.ts
- assets/tool-definitions.ts
- references/adapter-matrix.md
- references/react-integration.md
- references/start-vs-next-routing.md
- references/streaming-troubleshooting.md
- references/tanstack-ai-cheatsheet.md
- references/tool-patterns.md
- references/type-safety.md
- scripts/check-ai-env.sh
Last Updated: 2025-12-09
Dependencies: Node.js 18+, TypeScript 5+; React 18+ for @tanstack/ai-react; Solid 1.8+ for @tanstack/ai-solid
Latest Versions: @tanstack/ai@latest (alpha), @tanstack/ai-react@latest, @tanstack/ai-client@latest, adapters: @tanstack/ai-openai@latest @tanstack/ai-anthropic@latest @tanstack/ai-gemini@latest @tanstack/ai-ollama@latest
```shell
pnpm add @tanstack/ai @tanstack/ai-react @tanstack/ai-openai
# swap adapters as needed: @tanstack/ai-anthropic @tanstack/ai-gemini @tanstack/ai-ollama
pnpm add zod # recommended for tool schemas
```
Why this matters: a single server route owns the provider adapter and the tool definitions, and streams tokens back to the client as they arrive.
```typescript
// app/api/chat/route.ts (Next.js) or src/routes/api/chat.ts (TanStack Start)
import { chat, toStreamResponse } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'
import { tools } from '@/tools/definitions' // definitions only

export async function POST(request: Request) {
  const { messages, conversationId } = await request.json()

  const stream = chat({
    adapter: openai(),
    messages,
    model: 'gpt-4o',
    tools,
  })

  return toStreamResponse(stream)
}
```
CRITICAL: Pass tool definitions only to the route (never client implementations) and return `toStreamResponse(stream)` so the client can consume the stream.
**useChat + SSE**:

```tsx
// components/Chat.tsx
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
import { clientTools } from '@tanstack/ai-client'
import { updateUIDef } from '@/tools/definitions'

const updateUI = updateUIDef.client(({ message }) => {
  alert(message)
  return { success: true }
})

export function Chat() {
  const tools = clientTools(updateUI)
  const { messages, sendMessage, isLoading, approval } = useChat({
    connection: fetchServerSentEvents('/api/chat'),
    tools,
  })

  return (
    <form onSubmit={e => { e.preventDefault(); sendMessage(e.currentTarget.prompt.value) }}>
      <textarea name="prompt" disabled={isLoading} />
      {approval?.pending && (
        <button type="button" onClick={() => approval.approve()}>
          Approve tool
        </button>
      )}
    </form>
  )
}
```
CRITICAL: Use `fetchServerSentEvents` (or the matching adapter) on the client to mirror the streaming response, and set the relevant provider credentials (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, or the Ollama host).

```typescript
// tools/definitions.ts
import { z, toolDefinition } from '@tanstack/ai'

export const getWeatherDef = toolDefinition({
  name: 'getWeather',
  description: 'Get current weather for a city',
  inputSchema: z.object({ city: z.string() }),
  needsApproval: true,
})

export const getWeather = getWeatherDef.server(async ({ city }) => {
  const data = await fetch(`https://api.weather.gov/points?q=${city}`).then(r => r.json())
  return { summary: data.properties?.relativeLocation?.properties?.city ?? city }
})

export const showToastDef = toolDefinition({
  name: 'showToast',
  description: 'Show a toast notification for a city',
  inputSchema: z.object({ city: z.string() }),
})

export const showToast = showToastDef.client(({ city }) => {
  console.log(`Showing toast for ${city}`)
  return { acknowledged: true }
})
```
Key Points:
- `needsApproval: true` forces explicit user approval for sensitive actions.
- Use `toStreamResponse(stream)` for HTTP streaming; the `toServerSentEventsStream` helper for Server-Sent Events.
- Connect with `fetchServerSentEvents('/api/chat')`, or a custom adapter for websockets if needed.
- Set `agentLoopStrategy` (e.g., `maxIterations(8)`) to cap tool recursion.

✅ Stream responses; avoid waiting for full completions.
✅ Pass definitions to the server and implementations to the correct runtime.
✅ Use Zod schemas for tool inputs/outputs to keep type safety across providers.
✅ Cap agent loops with maxIterations to prevent runaway tool calls.
✅ Require needsApproval for destructive or billing-sensitive tools.
❌ Mix provider adapters in a single request—instantiate one adapter per call.
❌ Throw raw errors from tools; return structured error payloads.
❌ Send client tool implementations to the server (definitions only).
❌ Hardcode model capabilities; rely on adapter typings for per-model options.
❌ Skip API key checks; fail fast with helpful messages on the server.
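The "return structured error payloads" rule above can be sketched as a small wrapper — the `safeTool` helper below is hypothetical, not part of the TanStack AI API — that converts thrown errors into serializable results the model can reason about:

```typescript
// Sketch: wrap a tool handler so failures become structured payloads
// instead of thrown errors. `safeTool` is an illustrative helper name.
type ToolResult<T> =
  | { ok: true; data: T }
  | { ok: false; error: { code: string; message: string } }

function safeTool<A, T>(handler: (args: A) => Promise<T> | T) {
  return async (args: A): Promise<ToolResult<T>> => {
    try {
      return { ok: true, data: await handler(args) }
    } catch (err) {
      // The model receives a serializable error it can recover from
      return {
        ok: false,
        error: {
          code: 'TOOL_ERROR',
          message: err instanceof Error ? err.message : String(err),
        },
      }
    }
  }
}

// Usage: a failing fetch yields { ok: false, error: ... }, never a throw
const safeGetWeather = safeTool(async ({ city }: { city: string }) => {
  if (!city) throw new Error('city is required')
  return { summary: `Weather for ${city}` }
})
```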
This skill prevents 3 documented issues:

**1. Tool calls fail or are silently skipped**
Why it happens: Definitions aren't passed to `chat()`; only implementations exist locally.
Prevention: Export definitions separately and include them in the server `tools` array; keep names stable.

**2. Responses don't stream on the client**
Why it happens: Mismatch between server response type and client adapter (HTTP chunked vs SSE).
Prevention: Use `toStreamResponse` on the server + `fetchServerSentEvents` (or matching adapter) on the client.

**3. Invalid options sent to a model**
Why it happens: Provider-specific options (e.g., vision params) sent to unsupported models.
Prevention: Use adapter-provided types; rely on per-model option typing to surface invalid fields at compile time.
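The compile-time prevention can be illustrated with plain TypeScript. The model names and option fields below are invented for illustration; the real adapters ship richer per-model types:

```typescript
// Sketch of the idea behind per-model option typing: a discriminated
// union makes provider-specific fields a compile-time error on models
// that don't support them. Illustrative names only.
type ChatOptions =
  | { model: 'gpt-4o'; imageDetail?: 'low' | 'high' } // vision-capable
  | { model: 'gpt-3.5-turbo' }                        // text-only

function buildRequest(opts: ChatOptions) {
  return { model: opts.model }
}

buildRequest({ model: 'gpt-4o', imageDetail: 'high' }) // OK
// buildRequest({ model: 'gpt-3.5-turbo', imageDetail: 'high' })
//   ^ compile error: 'imageDetail' is not valid for this model
```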
```shell
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=
GEMINI_API_KEY=
OLLAMA_HOST=http://localhost:11434
AI_STREAM_STRATEGY=immediate
```
Why these settings:
AI_STREAM_STRATEGY is read by the sample client to pick chunk strategies (immediate vs buffered).

```typescript
import { chat, maxIterations } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'

const stream = chat({
  adapter: openai(),
  messages,
  tools,
  agentLoopStrategy: maxIterations(8), // hard cap
})
```
When to use: Any flow where the LLM could recurse across tools (search → summarize → fetch detail).
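To see what the cap guards against, here is a minimal, library-free sketch of an agent loop with a hard iteration limit (illustrative only; TanStack AI's `maxIterations` implements this inside the framework):

```typescript
// Sketch: a generic agent loop that stops after `max` tool rounds,
// even if the model never signals completion.
async function runAgentLoop(
  step: () => Promise<{ done: boolean }>, // one model/tool round
  max: number,
): Promise<number> {
  let rounds = 0
  while (rounds < max) {
    rounds++
    const { done } = await step()
    if (done) break // model finished before hitting the cap
  }
  return rounds
}
```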
```typescript
// server: data fetch
const fetchUser = fetchUserDef.server(async ({ id }) => db.user.find(id))

// client: UI update
const highlightUser = highlightUserDef.client(({ id }) => {
  document.querySelector(`#user-${id}`)?.classList.add('ring')
  return { highlighted: true }
})

chat({ tools: [fetchUser, highlightUser] })
```
When to use: When the model must both fetch data and mutate UI state in one loop.
scripts/check-ai-env.sh — verifies required provider keys are present before running dev servers.

Example Usage:

```shell
./scripts/check-ai-env.sh
```

references/tanstack-ai-cheatsheet.md — condensed server/client/tool patterns plus troubleshooting cues.

When Claude should load these: When debugging tool routing, streaming issues, or recalling exact API calls.

assets/api-chat-route.ts — copy/paste API route template with streaming + tools.
assets/tool-definitions.ts — ready-to-use toolDefinition examples with approval + zod schemas.

Load reference files for specific implementation scenarios:
Adapter Comparison: Load references/adapter-matrix.md when choosing between OpenAI, Anthropic, Gemini, or Ollama adapters, or when debugging provider-specific quirks.
React Integration Details: Load references/react-integration.md when implementing useChat hooks, handling SSE streams in React components, or managing client-side tool state.
Routing Setup: Load references/start-vs-next-routing.md when setting up API routes in Next.js vs TanStack Start, or troubleshooting streaming response setup.
Streaming Issues: Load references/streaming-troubleshooting.md when debugging SSE connection problems, chunk delivery issues, or HTTP streaming configuration.
Quick Reference: Load references/tanstack-ai-cheatsheet.md for condensed API patterns, tool definition syntax, or rapid troubleshooting cues.
Tool Architecture: Load references/tool-patterns.md when implementing complex client/server tool workflows, approval flows, or hybrid tool patterns.
Type Safety Details: Load references/type-safety.md when working with per-model option typing, multimodal inputs, or debugging type errors across adapters.
- Prefer adapter typings over `any` options on `chat()`.
- Send multimodal inputs as `parts` with correct MIME types; unsupported modalities are caught at compile time.
- Handle the `approval` object from `useChat`; render approve/reject UI and persist the decision per tool call.
- Use `fetchServerSentEvents` (SSE) for minimal setup; switch to custom adapters for websockets or HTTP chunking.
- Use `ImmediateStrategy` in the client to emit every chunk for typing-indicator UIs.

Required:
Optional:
```json
{
  "dependencies": {
    "@tanstack/ai": "latest",
    "@tanstack/ai-react": "latest",
    "@tanstack/ai-client": "latest",
    "@tanstack/ai-openai": "latest",
    "zod": "latest"
  }
}
```

Note: zod belongs in `dependencies`, not `devDependencies` — tool schemas are validated at runtime.
**Tool results missing or malformed**
Solution: Ensure tool implementations return serializable objects; avoid returning undefined. Register client implementations via clientTools(...).
**Missing provider API key**
Solution: Run ./scripts/check-ai-env.sh and set the relevant provider key in .env.local. Fail fast in the route before invoking chat().
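A fail-fast check in the route might look like this sketch (the `requireEnv` helper is hypothetical; swap the key name for your provider):

```typescript
// Sketch: validate credentials before invoking chat(), so a missing
// key fails with a helpful message instead of a provider 401.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) {
    throw new Error(
      `Missing ${name}. Add it to .env.local or run ./scripts/check-ai-env.sh`,
    )
  }
  return value
}

// Usage at the top of the route handler:
// const apiKey = requireEnv('OPENAI_API_KEY')
```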
**Response doesn't stream**
Solution: Confirm the server returns toStreamResponse(stream) (or the SSE helper) and that any reverse proxy allows chunked transfer.
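When inspecting raw responses behind a proxy, it helps to know the wire format an SSE client expects: each event is a `data:` line terminated by a blank line. A sketch (the `sseFrame` helper is illustrative; the framework's SSE helper produces framing like this for you):

```typescript
// Sketch: one Server-Sent Events frame — "data: <json>\n\n".
// A proxy that buffers or rewrites these frames breaks streaming.
function sseFrame(payload: unknown): string {
  return `data: ${JSON.stringify(payload)}\n\n`
}
```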
Use this checklist to verify your setup:
- [ ] Server route returns toStreamResponse(stream) with tool definitions included
- [ ] Client connects with fetchServerSentEvents (or matching adapter) and registers client tool implementations
- [ ] needsApproval paths render approve/reject UI
- [ ] Agent loops are capped (maxIterations)
- [ ] Provider keys verified with check-ai-env.sh

Questions? Issues?
See references/tanstack-ai-cheatsheet.md for deeper examples.