Deploy Mistral AI integrations to Vercel, Fly.io, and Cloud Run platforms. Use when deploying Mistral AI-powered applications to production, configuring platform-specific secrets, or setting up deployment pipelines. Trigger with phrases like "deploy mistral", "mistral Vercel", "mistral production deploy", "mistral Cloud Run", "mistral Fly.io".
From mistral-pack: install with `npx claudepluginhub nickloveinvesting/nick-love-plugins --plugin mistral-pack`.
Deploy Mistral AI-powered applications to production with proper API key management, model endpoint configuration, and platform-specific optimizations. Covers Vercel, Docker, and Cloud Run deployments with the Mistral SDK connecting to api.mistral.ai.
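Whichever platform you deploy to, it helps to fail fast at boot when required configuration is missing rather than at the first API call. A minimal sketch; the `requireEnv` helper is illustrative, not part of the SDK:

```typescript
// env.ts - fail fast on missing configuration at startup
export function requireEnv(name: string, fallback?: string): string {
  const value = process.env[name] ?? fallback;
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// At startup:
// const apiKey = requireEnv("MISTRAL_API_KEY");
// const model = requireEnv("MISTRAL_MODEL", "mistral-small-latest");
```

A missing `MISTRAL_API_KEY` then surfaces as one clear error at deploy time instead of a 500 on the first request.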
Requires the @mistralai/mistralai SDK.

# Vercel
vercel env add MISTRAL_API_KEY production
vercel env add MISTRAL_MODEL production # e.g., mistral-large-latest
# Docker
echo "MISTRAL_API_KEY=your-key" > .env.production
# Cloud Run
gcloud secrets create mistral-api-key --data-file=- <<< "your-key"
FROM node:20-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
ENV NODE_ENV=production
# the app listens on port 3000 (Dockerfile comments must be on their own line)
EXPOSE 3000
CMD ["node", "dist/index.js"]
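Because the build stage copies the whole context (`COPY . .`), it is worth excluding local secrets and artifacts with a `.dockerignore`; a minimal sketch, assuming the layout above:

```
node_modules
dist
.env*
.git
```

This keeps `.env.production` (created earlier) out of the image layers, where it would otherwise be readable by anyone with access to the image.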
{
"functions": {
"api/chat.ts": {
"maxDuration": 60
}
},
"env": {
"MISTRAL_API_KEY": "@mistral_api_key"
}
}
// api/chat.ts - Vercel Edge Function
import { Mistral } from "@mistralai/mistralai";
export const config = { runtime: "edge" };
export default async function handler(req: Request) {
const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY! });
const { messages } = await req.json();
const response = await client.chat.complete({
model: process.env.MISTRAL_MODEL || "mistral-small-latest",
messages,
});
return Response.json(response);
}
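In production it is worth validating the request body before spending an API call on it, so malformed input returns a 400 instead of being forwarded upstream. A sketch; `ChatMessage` and `validateMessages` are illustrative helpers, not SDK exports:

```typescript
// Shape expected by the chat endpoint; illustrative, not an SDK type.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Throws on anything that is not a non-empty array of well-formed messages.
export function validateMessages(input: unknown): ChatMessage[] {
  if (!Array.isArray(input) || input.length === 0) {
    throw new Error("messages must be a non-empty array");
  }
  return input.map((m, i) => {
    const msg = m as Partial<ChatMessage> | null;
    if (
      typeof msg !== "object" || msg === null ||
      !["system", "user", "assistant"].includes(msg.role as string) ||
      typeof msg.content !== "string"
    ) {
      throw new Error(`messages[${i}] must have a valid role and string content`);
    }
    return msg as ChatMessage;
  });
}
```

In the handler above, `const messages = validateMessages((await req.json()).messages);` inside a try/catch lets you return a 400 for bad input.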
set -euo pipefail
gcloud run deploy mistral-service \
--image gcr.io/$PROJECT_ID/mistral-app \
--region us-central1 \
--platform managed \
--set-secrets=MISTRAL_API_KEY=mistral-api-key:latest \
--set-env-vars=MISTRAL_MODEL=mistral-large-latest \
--min-instances=1 \
--max-instances=10
import { Mistral } from "@mistralai/mistralai";
export async function GET() {
try {
const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY! });
await client.models.list();
return Response.json({ status: "healthy", provider: "mistral" });
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
return Response.json({ status: "unhealthy", error: message }, { status: 503 }); // HTTP 503 Service Unavailable
}
}
| Issue | Cause | Solution |
|---|---|---|
| API key not found | Missing env var | Verify secret configuration on platform |
| Function timeout | Long completion | Increase maxDuration, use streaming |
| Cold start latency | Serverless spin-up | Set minimum instances or use edge runtime |
| Model not available | Wrong model ID | Check available models at console.mistral.ai |
const client = new Mistral({
apiKey: process.env.MISTRAL_API_KEY!,
timeout: 60000, // 60000 ms = 1 minute
maxRetries: 3,
});
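If SDK-level retry options are not available in your SDK version, a manual exponential backoff around the call is straightforward to add for transient failures such as HTTP 429 rate limits. A sketch; `backoffDelayMs` and `withRetries` are illustrative helpers, not SDK exports:

```typescript
// Exponential backoff with a cap: 500 ms, 1 s, 2 s, 4 s, then 8 s max.
export function backoffDelayMs(attempt: number, baseMs = 500, capMs = 8000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Retry an async call on failure, waiting between attempts.
export async function withRetries<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxRetries) {
        await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
      }
    }
  }
  throw lastError;
}
```

Usage: `const response = await withRetries(() => client.chat.complete({ model, messages }));`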
For multi-environment setup, see mistral-multi-env-setup.