From vercel
Provides expert guidance on Vercel Serverless Functions, Edge Functions, Fluid Compute, streaming, Cron Jobs, and runtime configuration. Use for configuring, debugging, or optimizing server-side code on Vercel.
```
npx claudepluginhub vercel/vercel-plugin --plugin vercel-plugin
```

This skill uses the workspace's default tool permissions.
Guides deploying Next.js apps to Vercel, covering environment variable setup across environments, edge vs. serverless functions, build optimization, and preview deployments.
Diagnoses and fixes common Vercel errors in builds, serverless functions, and edge routing using CLI tools, logs, and API queries.
Updates LLM knowledge of the Vercel platform, correcting outdated info on Fluid Compute, vercel.ts config, Python/Bun/Rust support, timeouts, pricing, and Marketplace databases. Useful for Vercel projects.
You are an expert in Vercel Functions — the compute layer of the Vercel platform.
```ts
// app/api/hello/route.ts
export async function GET() {
  return Response.json({ message: 'Hello from Node.js' })
}
```

```ts
// app/api/hello/route.ts
export const runtime = 'edge'

export async function GET() {
  return new Response('Hello from the Edge')
}
```
Add "bunVersion": "1.x" to vercel.json to run Node.js functions on Bun instead. ~28% lower latency for CPU-bound workloads. Supports Next.js, Express, Hono, Nitro.
Rust functions run on Fluid Compute with HTTP streaming and Active CPU pricing. Built on the community Rust runtime. Supports environment variables up to 64 KB.
Node.js 24 LTS is now GA on Vercel for both builds and functions. Features V8 13.6, global URLPattern, Undici v7 for faster fetch(), and npm v11.
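As a quick illustration of the global `URLPattern` (no import needed on Node.js 24; the route and ID format here are made up):

```ts
// URLPattern is a global on Node.js 24, so no import is required
const pattern = new URLPattern({ pathname: '/api/items/:id' })

export async function GET(req: Request) {
  const match = pattern.exec(req.url)
  const id = match?.pathname.groups.id // e.g. '42' for /api/items/42
  return Response.json({ id })
}
```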
| Need | Runtime | Why |
|---|---|---|
| Full Node.js APIs, npm packages | nodejs | Full compatibility |
| Lower latency, CPU-bound work | nodejs + Bun | ~28% latency reduction |
| Ultra-low latency, simple logic | edge | <1ms cold start, global |
| Database connections, heavy deps | nodejs | Edge lacks full Node.js |
| Auth/redirect at the edge | edge | Fastest response |
| AI streaming | Either | Both support streaming |
| Systems-level performance | rust (beta) | Native speed, Fluid Compute |
Fluid Compute is the unified execution model for all Vercel Functions (both Node.js and Edge).
Key benefits:

- `waitUntil` / `after` for post-response tasks
- Warm instances reused across requests (fewer cold starts)
- Active CPU pricing

| Size | CPU | Memory |
|---|---|---|
| Standard (default) | 1 vCPU | 2 GB |
| Performance | 2 vCPU | 4 GB |
Hobby projects use Standard CPU. The Basic CPU instance has been removed.
`waitUntil`

```ts
// Continue work after sending the response
import { waitUntil } from '@vercel/functions'

export async function POST(req: Request) {
  const data = await req.json()

  // Send the response immediately
  const response = Response.json({ received: true })

  // Continue processing in the background.
  // waitUntil expects a promise, so invoke the async function.
  waitUntil(
    (async () => {
      await processAnalytics(data)
      await sendNotification(data)
    })()
  )

  return response
}
```
`after` (equivalent)

```ts
import { after } from 'next/server'

export async function POST(req: Request) {
  const data = await req.json()

  // Runs after the response has been sent
  after(async () => {
    await logToAnalytics(data)
  })

  return Response.json({ ok: true })
}
```
Zero-config streaming for both runtimes. Essential for AI applications.
```ts
export async function POST(req: Request) {
  const encoder = new TextEncoder()
  const data = ['chunk 1', 'chunk 2', 'chunk 3'] // example payload
  const stream = new ReadableStream({
    async start(controller) {
      for (const chunk of data) {
        controller.enqueue(encoder.encode(chunk))
        await new Promise((r) => setTimeout(r, 100))
      }
      controller.close()
    },
  })
  return new Response(stream, {
    headers: { 'Content-Type': 'text/event-stream' },
  })
}
```
For AI streaming, use the AI SDK's toUIMessageStreamResponse() (for chat UIs with useChat) which handles SSE formatting automatically.
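A minimal sketch of that pattern, assuming AI SDK v5 with the `@ai-sdk/openai` provider (model name and route path are placeholders):

```ts
// app/api/chat/route.ts
import { streamText, convertToModelMessages, type UIMessage } from 'ai'
import { openai } from '@ai-sdk/openai'

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json()

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  })

  // Handles SSE formatting for useChat on the client
  return result.toUIMessageStreamResponse()
}
```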
Schedule function invocations via vercel.json. Schedules use standard cron syntax and run in UTC:
```json
{
  "crons": [
    {
      "path": "/api/daily-report",
      "schedule": "0 8 * * *"
    },
    {
      "path": "/api/cleanup",
      "schedule": "0 */6 * * *"
    }
  ]
}
```
The cron endpoint receives a normal HTTP request. Verify it's from Vercel:
```ts
export async function GET(req: Request) {
  const authHeader = req.headers.get('authorization')
  if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response('Unauthorized', { status: 401 })
  }
  // Do scheduled work
  return Response.json({ ok: true })
}
```
Deprecation notice: Support for the legacy now.json config file will be removed on March 31, 2026. Rename now.json to vercel.json (no content changes required).
```json
{
  "functions": {
    "app/api/heavy/**": {
      "maxDuration": 300,
      "memory": 1024
    },
    "app/api/edge/**": {
      "runtime": "edge"
    }
  }
}
```
All plans now default to 300s execution time with Fluid Compute.
| Plan | Default | Max |
|---|---|---|
| Hobby | 300s | 300s |
| Pro | 300s | 800s |
| Enterprise | 300s | 800s |
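For routes that need more than the default, Next.js route segment config also works on Vercel; a minimal sketch (the 800 assumes a Pro or Enterprise plan, per the table above, and the route path is made up):

```ts
// app/api/long-task/route.ts
export const maxDuration = 800 // seconds, capped at your plan's max

export async function GET() {
  // ...long-running work...
  return Response.json({ ok: true })
}
```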
- Database access from the Edge runtime needs an HTTP-based driver (e.g. `@neondatabase/serverless`)
- The Edge runtime has no `fs`, no native modules, and limited `crypto`; use the Node.js runtime if you need them
- Run `vercel env pull` to sync environment variables for local dev

```
504 Gateway Timeout?
├─ All plans default to 300s with Fluid Compute
├─ Pro/Enterprise: configurable up to 800s
├─ Long-running task?
│  ├─ Under 5 min → Use Fluid Compute with streaming
│  ├─ Up to 15 min → Use Vercel Functions with maxDuration in vercel.json
│  └─ Hours/days → Use Workflow DevKit (DurableAgent or workflow steps)
└─ DB query slow? → Add connection pooling, check cold start, use Edge Config (sketch below)
```
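For the Edge Config branch, a minimal read sketch, assuming the `@vercel/edge-config` package with a store connected to the project (the `feature-flags` key is a placeholder):

```ts
import { get } from '@vercel/edge-config'

export async function GET() {
  // Served from Vercel's edge network; no database round-trip
  const flags = await get('feature-flags')
  return Response.json({ flags })
}
```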
```
500 Internal Server Error?
├─ Check Vercel Runtime Logs (Dashboard → Deployments → Functions tab)
├─ Missing env vars? → Compare .env.local against Vercel dashboard settings
├─ Import error? → Verify package is in dependencies, not devDependencies
└─ Uncaught exception? → Wrap handler in try/catch, use after() for error reporting
```
"FUNCTION_INVOCATION_FAILED"?
├─ Memory exceeded? → Increase `memory` in vercel.json (up to 3008 MB on Pro)
├─ Crashed during init? → Check top-level await or heavy imports at module scope
└─ Edge Function crash? → Check for Node.js APIs not available in Edge runtime
```
Cold start latency > 1s?
├─ Using Node.js runtime? → Consider Edge Functions for latency-sensitive routes
├─ Large function bundle? → Audit imports, use dynamic imports, tree-shake
├─ DB connection in cold start? → Use connection pooling (Neon serverless driver; sketch below)
└─ Enable Fluid Compute to reuse warm instances across requests
```
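For the pooling branch, a sketch assuming the `@neondatabase/serverless` HTTP driver and a `DATABASE_URL` environment variable (table and columns are made up):

```ts
import { neon } from '@neondatabase/serverless'

// HTTP-based driver: no TCP connection to hold open across cold starts
const sql = neon(process.env.DATABASE_URL!)

export async function GET() {
  const rows = await sql`SELECT id, name FROM users LIMIT 10`
  return Response.json({ rows })
}
```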
"EDGE_FUNCTION_INVOCATION_TIMEOUT"?
├─ Edge Functions have 25s hard limit (not configurable)
├─ Move heavy computation to Node.js Serverless Functions
└─ Use streaming to start response early, process in background with `waitUntil`