Deploy Perplexity integrations to Vercel, Fly.io, and Cloud Run platforms. Use when deploying Perplexity-powered applications to production, configuring platform-specific secrets, or setting up deployment pipelines. Trigger with phrases like "deploy perplexity", "perplexity Vercel", "perplexity production deploy", "perplexity Cloud Run", "perplexity Fly.io".
From perplexity-pack. Install with:

```shell
npx claudepluginhub nickloveinvesting/nick-love-plugins --plugin perplexity-pack
```
Deploy applications using Perplexity's AI search API (api.perplexity.ai). Perplexity uses an OpenAI-compatible chat completions format with real-time web search grounding.
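Because the API follows the OpenAI chat completions shape, responses can be handled with a small typed helper. A minimal sketch (the field names below assume the OpenAI-compatible format, and the `citations` array is Perplexity-specific; verify both against the current API reference):

```typescript
// Minimal response shape for a non-streaming chat completion.
// Field names assume the OpenAI-compatible format; the citations
// field is Perplexity-specific and should be verified against docs.
interface PerplexityResponse {
  choices: { message: { role: string; content: string } }[];
  citations?: string[]; // source URLs backing the answer
}

// Pull the answer text and its sources out of a parsed response body.
function extractAnswer(res: PerplexityResponse): { text: string; sources: string[] } {
  return {
    text: res.choices[0]?.message.content ?? "",
    sources: res.citations ?? [],
  };
}
```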
Set the PERPLEXITY_API_KEY environment variable on each platform:

```shell
# Vercel
vercel env add PERPLEXITY_API_KEY production

# Fly.io
fly secrets set PERPLEXITY_API_KEY=your-key

# Cloud Run
echo -n "your-key" | gcloud secrets create perplexity-api-key --data-file=-
```
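A missing key surfaces as opaque 401s deep in request handling; validating at startup fails faster. A minimal sketch (`requireEnv` is a local helper, not part of any platform SDK):

```typescript
// Fail fast at startup if a required variable is missing, instead of
// returning opaque 401s from the API later.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

// Usage at module load:
// const apiKey = requireEnv("PERPLEXITY_API_KEY");
```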
A streaming Vercel Edge Function that proxies search requests:

```typescript
// api/search.ts
export const config = { runtime: "edge" };

export default async function handler(req: Request) {
  const { query, model } = await req.json();
  const response = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.PERPLEXITY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: model || "sonar",
      messages: [{ role: "user", content: query }],
      stream: true,
      return_citations: true,
    }),
  });
  return new Response(response.body, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```
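On the client, the `text/event-stream` body arrives as `data:` lines containing JSON deltas. A sketch of a per-chunk parser (the delta field names assume the OpenAI-compatible streaming shape):

```typescript
// Extract content deltas from one SSE chunk. Each event is a line of
// the form `data: {...}`; the stream terminates with `data: [DONE]`.
// Delta field names assume the OpenAI-compatible streaming format.
function parseSSEChunk(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break;
    try {
      const json = JSON.parse(payload);
      const content = json.choices?.[0]?.delta?.content;
      if (typeof content === "string") deltas.push(content);
    } catch {
      // Incomplete JSON at a chunk boundary; a real client would buffer it.
    }
  }
  return deltas;
}
```

A production client would additionally buffer partial lines across chunk boundaries before parsing.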
Cache responses in Redis to reduce cost and latency:

```typescript
import { Redis } from "ioredis";

const redis = new Redis(process.env.REDIS_URL!);

// ttl defaults to 1800 seconds (30 minutes).
async function searchWithCache(query: string, ttl = 1800) {
  const cacheKey = `pplx:${Buffer.from(query).toString("base64")}`;
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);
  const response = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.PERPLEXITY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "sonar",
      messages: [{ role: "user", content: query }],
      return_citations: true,
    }),
  });
  const result = await response.json();
  await redis.set(cacheKey, JSON.stringify(result), "EX", ttl);
  return result;
}
```
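The base64 cache key above grows linearly with query length and can exceed practical key-size limits for long prompts; hashing keeps keys fixed-width. A sketch using Node's built-in crypto module:

```typescript
import { createHash } from "node:crypto";

// Fixed-length cache key: sha256 of the query, hex-encoded.
// Hash collisions are negligible in practice for a cache.
function cacheKeyFor(query: string): string {
  return `pplx:${createHash("sha256").update(query).digest("hex")}`;
}
```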
Raise the function timeout in `vercel.json`:

```json
{
  "functions": {
    "api/search.ts": { "maxDuration": 30 }
  }
}
```
A health check route that pings the API with a minimal one-token request:

```typescript
export async function GET() {
  try {
    const response = await fetch("https://api.perplexity.ai/chat/completions", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${process.env.PERPLEXITY_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "sonar",
        messages: [{ role: "user", content: "ping" }],
        max_tokens: 1,
      }),
    });
    return Response.json({ status: response.ok ? "healthy" : "degraded" });
  } catch {
    // HTTP 503 Service Unavailable
    return Response.json({ status: "unhealthy" }, { status: 503 });
  }
}
```
| Issue | Cause | Solution |
|---|---|---|
| Rate limited | Too many requests | Cache responses, use queue |
| Stale search results | Cached too long | Reduce cache TTL for time-sensitive queries |
| API key invalid | Key expired | Regenerate at perplexity.ai settings |
| Stream interrupted | Network timeout | Implement reconnection logic |
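For the rate-limit row above, a minimal exponential-backoff sketch (the retry count and base delay are illustrative defaults, not Perplexity recommendations):

```typescript
// Retry a fetch-like call on HTTP 429 with exponential backoff:
// 500 ms, 1 s, 2 s, ... between attempts. maxRetries and baseDelayMs
// are illustrative defaults.
async function withBackoff(
  doRequest: () => Promise<Response>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await doRequest();
    if (res.status !== 429 || attempt >= maxRetries) return res;
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
}
```

Combine this with the Redis cache above so retries are only paid for cache misses.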
Basic usage: deploy a single edge function with the default sonar model and PERPLEXITY_API_KEY supplied through platform secrets.
Advanced scenario: add Redis caching, upstream request timeouts, and a health check route, with per-environment secrets for production teams.
For multi-environment setup, see perplexity-multi-env-setup.