Generate AI-powered features using AI SDK with oRPC. Use when building chat apps, AI endpoints, or integrating LLMs.
From `schema0-dev`: `npx claudepluginhub schema0/ai-agent-plugins --plugin schema0-dev`

This skill is limited to using the following tools:

- `scripts/generate.ts`
- `scripts/scaffold-templates/chat.hbs`
- `scripts/scaffold-templates/router.hbs`
- `scripts/scaffold-templates/simple.hbs`
- `scripts/scaffold-templates/tool.hbs`

Prerequisite: This skill requires a schema0 template project. Before using, ensure `CLAUDE.md` exists in the project root and read it for project rules and conventions.
Generate AI-powered features using the AI SDK with oRPC. Supports full-stack chat applications with streaming or simple prompt-response endpoints.
To generate a full-stack chat feature:

```bash
bun run .claude/skills/ai-integration/scripts/generate.ts chat <name>
```

Example:

```bash
bun run .claude/skills/ai-integration/scripts/generate.ts chat assistant
```

This generates:

- an ORPC router with AI SDK streaming (`packages/api/src/routers/assistant.ts`)
- a chat UI using the `useChat` hook (`apps/web/src/routes/_auth.assistant.tsx`)

To generate a simple prompt-response endpoint:

```bash
bun run .claude/skills/ai-integration/scripts/generate.ts simple <name>
```
Example:

```bash
bun run .claude/skills/ai-integration/scripts/generate.ts simple summarize
```
This generates a simple one-shot AI endpoint without streaming or message history.
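The one-shot endpoint takes a prompt and returns a single completed response. As a hedged, dependency-free sketch of that shape (the names `SimplePromptInput`, `SimplePromptOutput`, and `summarizeStub` are illustrative, not verbatim template output; the generated code wires this through oRPC and the AI SDK):

```typescript
// Illustrative input/output shape of a one-shot AI endpoint.
interface SimplePromptInput {
  prompt: string
  system?: string
}

interface SimplePromptOutput {
  text: string
  finishReason: string
}

// Hypothetical stand-in for the LLM call, showing only the returned shape.
async function summarizeStub(input: SimplePromptInput): Promise<SimplePromptOutput> {
  return { text: `summary: ${input.prompt}`, finishReason: 'stop' }
}
```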
To generate a standalone ORPC router with AI SDK streaming:

```bash
bun run .claude/skills/ai-integration/scripts/generate.ts router <name>
```

To generate tool definitions for function calling:

```bash
bun run .claude/skills/ai-integration/scripts/generate.ts tool <name>
```
Before using AI features, complete backend integration for your chosen AI provider:

1. Use the manage-secrets skill at `../manage-secrets/SKILL.md` to securely add the API key and update `packages/auth/env.ts`.
2. Install dependencies:

   ```bash
   bun add ai @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google @orpc/ai-sdk @orpc/client
   ```
Quick reference for adding API keys manually to `packages/auth/env.ts`:

For OpenAI:

```ts
OPENAI_API_KEY: z.string().optional(),
```

For Anthropic:

```ts
ANTHROPIC_API_KEY: z.string().optional(),
```

For Google Gemini:

```ts
GOOGLE_GENERATIVE_AI_API_KEY: z.string().optional(),
```
| Template | Output Location | Purpose |
|---|---|---|
| `ai-router.hbs` | `packages/api/src/routers/[name].ts` | ORPC router with AI SDK streaming |
| `ai-chat-route.hbs` | `apps/web/src/routes/_auth.[name].tsx` | Full chat UI with streaming |
| `ai-simple.hbs` | `apps/web/src/routes/_auth.[name].tsx` | Simple prompt-response UI |
| `ai-tool.hbs` | `packages/api/src/tools/[name].ts` | Tool definitions for function calling |
After generating, complete these steps:

1. Register the router in `packages/api/src/routers/index.ts`:

   ```ts
   import { assistantRouter } from "./assistant";

   export const appRouter = { assistant: assistantRouter, ... };
   ```

2. Add the route to the sidebar in `apps/web/src/components/app-sidebar.tsx`.
3. Set the API key environment variable during build/deploy (injected by MCP/deployment process).
4. Type check your files with `bunx oxlint --type-check --type-aware --quiet <your-files>` (only your files, not project-wide).
The generated chat route uses `useChat` from `@ai-sdk/react` with streaming support:

```tsx
import { useChat } from '@ai-sdk/react'
import { eventIteratorToUnproxiedDataStream } from '@orpc/client'
// `orpc` is the typed client instance from your app's oRPC setup

export function AssistantChat() {
  const { messages, sendMessage, status } = useChat({
    transport: {
      async sendMessages(options) {
        return eventIteratorToUnproxiedDataStream(
          await orpc.assistant.chat(
            { messages: options.messages },
            { signal: options.abortSignal },
          ),
        )
      },
      reconnectToStream() {
        throw new Error('Unsupported')
      },
    },
  })

  // ... UI implementation
}
```
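The transport above adapts an oRPC event iterator into a data stream the chat hook can consume. A minimal, dependency-free sketch of that bridging idea (this is not the actual `@orpc/client` implementation; `exampleEvents` and `iteratorToStream` are illustrative names):

```typescript
// A stand-in event source, like the chunks a streaming chat endpoint yields.
async function* exampleEvents(): AsyncGenerator<string> {
  yield 'Hello, '
  yield 'world'
}

// Bridge any async iterable into a ReadableStream, pulling one value per read.
function iteratorToStream<T>(iterable: AsyncIterable<T>): ReadableStream<T> {
  const iterator = iterable[Symbol.asyncIterator]()
  return new ReadableStream<T>({
    async pull(controller) {
      const { value, done } = await iterator.next()
      if (done) controller.close()
      else controller.enqueue(value)
    },
  })
}
```

The same shape explains why `options.abortSignal` is forwarded to the oRPC call: cancelling the stream must cancel the underlying iterator.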
The generated simple route provides a one-shot prompt-response without streaming:

```ts
const response = await orpc.summarize.prompt({
  prompt: "Summarize this text...",
})
```
Use the `implementTool` helper to create AI SDK tools from ORPC contracts:

```ts
import { AI_SDK_TOOL_META_SYMBOL, implementTool } from '@orpc/ai-sdk'
import { oc } from '@orpc/contract'
import { z } from 'zod'

const getWeatherContract = oc
  .meta({
    [AI_SDK_TOOL_META_SYMBOL]: {
      title: 'Get Weather',
    },
  })
  .route({
    summary: 'Get the weather in a location',
  })
  .input(z.object({
    location: z.string().describe('The location to get the weather for'),
  }))
  .output(z.object({
    location: z.string(),
    temperature: z.number(),
  }))

const getWeatherTool = implementTool(getWeatherContract, {
  execute: async ({ location }) => ({
    location,
    temperature: 72,
  }),
})
```
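The contract only describes the tool; the `execute` function supplies the behavior, so its logic can be exercised on its own. A hedged, dependency-free restatement of the weather example's execute logic (the interfaces here mirror the zod input/output schemas above; `executeGetWeather` is an illustrative name, and the real tool object is produced by `implementTool`):

```typescript
// Mirrors the .input() schema above.
interface WeatherInput {
  location: string
}

// Mirrors the .output() schema above.
interface WeatherOutput {
  location: string
  temperature: number
}

// Standalone version of the execute function: echoes the location and
// returns the same placeholder temperature as the contract example.
async function executeGetWeather({ location }: WeatherInput): Promise<WeatherOutput> {
  return { location, temperature: 72 }
}
```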
OpenAI:

```ts
import { createOpenAI } from '@ai-sdk/openai'
import { convertToModelMessages, streamText } from 'ai'
import { env } from '@template/auth'

const openai = createOpenAI({ apiKey: env.OPENAI_API_KEY })

const result = streamText({
  model: openai('gpt-4o-mini'),
  system: 'You are a helpful assistant.',
  messages: convertToModelMessages(input.messages),
})
```

Anthropic:

```ts
import { createAnthropic } from '@ai-sdk/anthropic'
import { convertToModelMessages, streamText } from 'ai'
import { env } from '@template/auth'

const anthropic = createAnthropic({ apiKey: env.ANTHROPIC_API_KEY })

const result = streamText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  system: 'You are a helpful assistant.',
  messages: convertToModelMessages(input.messages),
})
```

Google Gemini:

```ts
import { createGoogleGenerativeAI } from '@ai-sdk/google'
import { convertToModelMessages, streamText } from 'ai'
import { env } from '@template/auth'

const google = createGoogleGenerativeAI({ apiKey: env.GOOGLE_GENERATIVE_AI_API_KEY })

const result = streamText({
  model: google('gemini-1.5-flash'),
  system: 'You are a helpful assistant.',
  messages: convertToModelMessages(input.messages),
})
```
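The three provider snippets differ only in the factory function and the default model id. That pairing can be captured in a small lookup; a hypothetical helper (the `DEFAULT_MODELS` map and `defaultModelFor` name are illustrative, not part of the generated templates, and the model ids are the ones used in the snippets above):

```typescript
// Default model id per provider, matching the snippets above.
const DEFAULT_MODELS = {
  openai: 'gpt-4o-mini',
  anthropic: 'claude-3-5-sonnet-20241022',
  google: 'gemini-1.5-flash',
} as const

type Provider = keyof typeof DEFAULT_MODELS

// Look up the default model for a provider; the type restricts callers
// to the three supported provider names.
function defaultModelFor(provider: Provider): string {
  return DEFAULT_MODELS[provider]
}
```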
All generated code follows consistent naming patterns for improved readability:

| Pattern | Example | Usage |
|---|---|---|
| `llmClient` | `llmClient` | LLM provider client (openai, anthropic, google) |
| `streamResult` | `streamResult` | Result from `streamText()` for streaming responses |
| `textGenerationResult` | `textGenerationResult` | Result from `generateText()` for one-shot responses |
| `userMessage` | `userMessage` | User input message (avoids confusion with `messages` array) |
| `aiResponse` | `aiResponse` | AI response from simple prompt endpoint |
LLM providers are initialized as descriptive client variables:

```ts
import { createOpenAI } from '@ai-sdk/openai'
import { convertToModelMessages, generateText, streamText } from 'ai'
import { env } from '@template/auth'

// Provider configuration
const llmClient = createOpenAI({
  apiKey: env.OPENAI_API_KEY,
})

// Usage in streaming
const streamResult = streamText({
  model: llmClient('gpt-4o-mini'),
  system: 'You are a helpful assistant.',
  messages: convertToModelMessages(input.messages),
})

// Usage in one-shot
const textGenerationResult = await generateText({
  model: llmClient('gpt-4o-mini'),
  system: input.system || 'You are a helpful assistant.',
  prompt: input.prompt,
})

return {
  text: textGenerationResult.text,
  usage: textGenerationResult.usage,
  finishReason: textGenerationResult.finishReason,
}
```
- Never use the `any` type in generated code; use proper types, generics, or `unknown` with type narrowing.
- Never use `// @ts-ignore`, `// @ts-expect-error`, `// @ts-nocheck`, or `// eslint-disable`; fix the type error instead.