Apply production-ready Mistral AI SDK patterns for TypeScript and Python. Use when implementing Mistral integrations, refactoring SDK usage, or establishing team coding standards for Mistral AI. Trigger with phrases like "mistral SDK patterns", "mistral best practices", "mistral code patterns", "idiomatic mistral".
Production-ready patterns for the Mistral AI SDK (mistralai Python package). Covers client initialization, chat completions, streaming, function calling, and embeddings with idiomatic error handling.
Prerequisites:

- `pip install mistralai` (v1.0+)
- `MISTRAL_API_KEY` environment variable set

```python
from mistralai import Mistral
import os

# Singleton client with configuration
_client = None

def get_mistral_client() -> Mistral:
    global _client
    if _client is None:
        _client = Mistral(
            api_key=os.environ["MISTRAL_API_KEY"],
            timeout_ms=30000,  # 30 seconds
            max_retries=3
        )
    return _client
```
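The singleton reads the key straight from the environment, so a missing variable surfaces as a bare `KeyError`. A small guard (the helper name `require_api_key` is hypothetical, not part of the SDK) can fail fast with a clearer message:

```python
import os

def require_api_key(var: str = "MISTRAL_API_KEY") -> str:
    """Fail fast with a clear message instead of a KeyError deep in the SDK."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set. Export it before creating the client, "
            f"e.g. `export {var}=...`"
        )
    return key
```

Call it in place of `os.environ["MISTRAL_API_KEY"]` inside `get_mistral_client()`.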
```python
import json

client = get_mistral_client()

# Basic chat
response = client.chat.complete(
    model="mistral-small-latest",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing briefly."}
    ],
    temperature=0.7,
    max_tokens=500  # cap the response length
)
print(response.choices[0].message.content)

# JSON mode for structured output
response = client.chat.complete(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "List 3 programming languages as JSON array"}],
    response_format={"type": "json_object"}
)
data = json.loads(response.choices[0].message.content)
```
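JSON mode keeps the output parseable, but when it isn't available (or output passes through other tooling) models sometimes wrap JSON in markdown fences. A defensive parse helper is a cheap safeguard; `parse_json_response` here is a sketch, not an SDK function:

```python
import json
import re

def parse_json_response(text: str):
    """Parse model output as JSON, tolerating an optional ```json fence."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)
```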
```python
def stream_response(prompt: str):
    stream = client.chat.stream(
        model="mistral-small-latest",
        messages=[{"role": "user", "content": prompt}]
    )
    full_response = ""
    for event in stream:
        chunk = event.data.choices[0].delta.content or ""
        full_response += chunk
        yield chunk
    return full_response  # generator return value, exposed via StopIteration.value
```
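A caller typically wants both the live chunks and the assembled text. One way to consume any chunk iterator (including `stream_response` above) is a small collector; `collect_stream` is an illustrative helper, not part of the SDK:

```python
from typing import Iterable

def collect_stream(chunks: Iterable[str], echo: bool = True) -> str:
    """Print chunks as they arrive and return the assembled response."""
    parts = []
    for chunk in chunks:
        if echo:
            print(chunk, end="", flush=True)
        parts.append(chunk)
    return "".join(parts)
```

Usage: `full = collect_stream(stream_response("Explain RAG briefly"))`.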
```python
import json

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "units": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["city"]
        }
    }
}]

response = client.chat.complete(
    model="mistral-small-latest",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)

# Handle tool calls
if response.choices[0].message.tool_calls:
    messages.append(response.choices[0].message)  # echo the assistant turn first
    for call in response.choices[0].message.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_weather(**args)  # your implementation
        # Send each result back as a tool message
        messages.append({"role": "tool", "name": call.function.name,
                         "content": json.dumps(result), "tool_call_id": call.id})
    # Call client.chat.complete(...) again with the extended messages
    # list to get the model's final answer.
```
```python
def embed_texts(texts: list[str]) -> list[list[float]]:
    response = client.embeddings.create(
        model="mistral-embed",
        inputs=texts
    )
    return [d.embedding for d in response.data]

# Batch large sets
def embed_batch(texts: list[str], batch_size: int = 64) -> list[list[float]]:
    embeddings = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        embeddings.extend(embed_texts(batch))
    return embeddings
```
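Embeddings are usually compared by cosine similarity. A dependency-free sketch (in practice you would likely use numpy or a vector store for batches):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors; 0.0 for a zero vector."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
```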
| Error | Cause | Solution |
|---|---|---|
| 401 Unauthorized | Invalid API key | Check `MISTRAL_API_KEY` |
| 429 Too Many Requests | Rate limit hit | Use built-in retry or add backoff |
| 400 Bad Request | Invalid model or params | Check model name and parameter ranges |
| Timeout | Large prompt or slow network | Increase `timeout_ms` |
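For 429s that outlast the client's built-in retries, an application-level exponential backoff with jitter is a common complement. `with_backoff` is an illustrative wrapper, not an SDK feature; the injectable `sleep` exists only to keep it testable:

```python
import random
import time

def with_backoff(func, max_attempts: int = 5, base_delay: float = 1.0,
                 retryable=(Exception,), sleep=time.sleep):
    """Retry func with exponential backoff plus jitter; re-raise on the last attempt."""
    for attempt in range(max_attempts):
        try:
            return func()
        except retryable:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

Usage: `with_backoff(lambda: client.chat.complete(model="mistral-small-latest", messages=messages))`.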
Basic usage: apply these patterns as-is in a standard project, keeping the default timeout and retry configuration.
Advanced scenario: in production, tune `timeout_ms`, retry behavior, and embedding batch size to match your rate limits and team conventions.