Execute Mistral AI major migrations and re-architecture strategies. Use when migrating to Mistral AI from another provider, performing major refactoring, or re-platforming existing AI integrations to Mistral AI. Trigger with phrases like "migrate to mistral", "mistral migration", "switch to mistral", "mistral replatform", "openai to mistral".
From mistral-pack. Install: `npx claudepluginhub nickloveinvesting/nick-love-plugins --plugin mistral-pack`
Comprehensive guide for migrating to Mistral AI from other providers (OpenAI, Anthropic) or performing major version upgrades. Covers assessment, adapter pattern, feature-flag rollout, model mapping, validation testing, and rollback.
| Type | Complexity | Duration | Risk |
|---|---|---|---|
| Fresh install | Low | Days | Low |
| OpenAI to Mistral | Medium | Weeks | Medium |
| Multi-provider | Medium | Weeks | Medium |
| Full replatform | High | Months | High |
Audit current AI code: find all files with AI imports, count integration points (chat completions, embeddings, function calling, streaming). Detect current provider and estimate effort (>10 points = high, >3 = medium, else low).
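The assessment step can be sketched as a few pure helpers. The regex patterns and function names below are illustrative assumptions, not part of the skill; adapt the patterns to your codebase before relying on the counts.

```typescript
// Hypothetical audit helpers; the regexes are assumptions to adapt per codebase.
const PROVIDER_IMPORTS: Record<string, RegExp> = {
  openai: /from\s+['"]openai['"]/,
  anthropic: /from\s+['"]@anthropic-ai\/sdk['"]/,
};

// One "integration point" per matching call-site category.
const INTEGRATION_PATTERNS: RegExp[] = [
  /chat\.completions\.create/, // chat completions
  /embeddings\.create/,        // embeddings
  /tools\s*:/,                 // function calling
  /stream\s*:\s*true/,         // streaming
];

export function detectProvider(source: string): string | null {
  for (const [name, re] of Object.entries(PROVIDER_IMPORTS)) {
    if (re.test(source)) return name;
  }
  return null;
}

export function countIntegrationPoints(source: string): number {
  return INTEGRATION_PATTERNS.reduce(
    (n, re) => n + (source.match(new RegExp(re.source, 'g'))?.length ?? 0),
    0,
  );
}

// Thresholds from the text: >10 points = high, >3 = medium, else low.
export function estimateEffort(points: number): 'high' | 'medium' | 'low' {
  return points > 10 ? 'high' : points > 3 ? 'medium' : 'low';
}
```

Running these over each file's source (e.g. via a glob walk) gives the provider detection and effort estimate described above.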
Define AIAdapter interface with chat(), chatStream(), and embed() methods. Use Message, ChatOptions, and ChatResponse types to abstract away provider differences.
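A minimal sketch of that abstraction layer, using the names the text gives (`AIAdapter`, `Message`, `ChatOptions`, `ChatResponse`); the exact fields are assumptions:

```typescript
// Provider-neutral types: both adapters normalize into these shapes.
export interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

export interface ChatOptions {
  model?: string;
  temperature?: number;
  maxTokens?: number;
}

export interface ChatResponse {
  content: string;
  usage: { promptTokens: number; completionTokens: number };
}

// The single surface the rest of the app codes against.
export interface AIAdapter {
  chat(messages: Message[], options?: ChatOptions): Promise<ChatResponse>;
  chatStream(messages: Message[], options?: ChatOptions): AsyncIterable<string>;
  embed(texts: string[]): Promise<number[][]>;
}
```

Application code depends only on `AIAdapter`, so swapping providers never touches call sites.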
Build OpenAIAdapter implementing AIAdapter that wraps openai SDK. Maps OpenAI-specific fields (prompt_tokens, completion_tokens) to normalized types.
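A sketch of the OpenAI side, with the client injected so the field mapping is testable without the SDK. The `OpenAILikeClient` interface is an assumption describing only the slice of the `openai` SDK the adapter uses; in production you would pass `new OpenAI()`.

```typescript
// Minimal client surface the adapter needs; the real `openai` SDK's
// chat.completions.create matches this shape.
interface OpenAILikeClient {
  chat: {
    completions: {
      create(req: {
        model: string;
        messages: { role: string; content: string }[];
        temperature?: number;
      }): Promise<{
        choices: { message: { content: string | null } }[];
        usage?: { prompt_tokens: number; completion_tokens: number };
      }>;
    };
  };
}

export class OpenAIAdapter {
  constructor(private client: OpenAILikeClient) {}

  async chat(
    messages: { role: string; content: string }[],
    options: { model?: string; temperature?: number } = {},
  ) {
    const res = await this.client.chat.completions.create({
      model: options.model ?? 'gpt-4-turbo',
      messages,
      temperature: options.temperature,
    });
    // Normalize OpenAI's snake_case usage fields to the shared camelCase types.
    return {
      content: res.choices[0]?.message.content ?? '',
      usage: {
        promptTokens: res.usage?.prompt_tokens ?? 0,
        completionTokens: res.usage?.completion_tokens ?? 0,
      },
    };
  }
}
```

Constructor injection keeps the normalization logic unit-testable with a fake client.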
Build MistralAdapter implementing AIAdapter that wraps @mistralai/mistralai SDK. Maps Mistral-specific fields (promptTokens, completionTokens) to normalized types.
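The Mistral side mirrors it. `MistralLikeClient` is an assumption approximating the v1 `@mistralai/mistralai` SDK's `chat.complete`; in production you would pass `new Mistral({ apiKey })`.

```typescript
// Minimal client surface; the real v1 @mistralai/mistralai SDK's
// chat.complete roughly matches this shape.
interface MistralLikeClient {
  chat: {
    complete(req: {
      model: string;
      messages: { role: string; content: string }[];
      temperature?: number;
    }): Promise<{
      choices: { message: { content: string } }[];
      usage: { promptTokens: number; completionTokens: number };
    }>;
  };
}

export class MistralAdapter {
  constructor(private client: MistralLikeClient) {}

  async chat(
    messages: { role: string; content: string }[],
    options: { model?: string; temperature?: number } = {},
  ) {
    const res = await this.client.chat.complete({
      model: options.model ?? 'mistral-large-latest',
      messages,
      temperature: options.temperature,
    });
    // Mistral's usage fields are already camelCase, so they map directly.
    return {
      content: res.choices[0]?.message.content ?? '',
      usage: {
        promptTokens: res.usage.promptTokens,
        completionTokens: res.usage.completionTokens,
      },
    };
  }
}
```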
Create createAIAdapter() factory using MISTRAL_ROLLOUT_PERCENT env var. Random percentage check routes traffic to Mistral or OpenAI adapter.
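A sketch of the feature-flag factory. Here the two adapters are passed in rather than constructed inside, which is an assumption for testability; only `MISTRAL_ROLLOUT_PERCENT` comes from the text.

```typescript
// Minimal stand-in for the shared adapter type.
interface AIAdapterish {
  name: string;
}

// Routes each request by comparing a random draw against the rollout percent:
// 0 sends everything to OpenAI, 100 sends everything to Mistral.
export function createAIAdapter(
  openaiAdapter: AIAdapterish,
  mistralAdapter: AIAdapterish,
): AIAdapterish {
  const pct = Number(process.env.MISTRAL_ROLLOUT_PERCENT ?? '0');
  return Math.random() * 100 < pct ? mistralAdapter : openaiAdapter;
}
```

Because the flag is read per call, changing the env var shifts traffic without a redeploy, which is what makes the rollback step below instant.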
Phase rollout: 0% (validation) -> 5% (canary) -> 25% -> 50% -> 100%. Monitor errors and latency at each phase for 24-48 hours before advancing.
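The schedule above can be encoded as data so advancement is explicit; the soak hours and `nextPhase` helper are illustrative assumptions mirroring the 24-48 hour guidance.

```typescript
interface RolloutPhase {
  percent: number;
  minSoakHours: number;
}

// Rollout schedule as data; soak times follow the 24-48h monitoring guidance.
export const PHASES: RolloutPhase[] = [
  { percent: 0, minSoakHours: 24 },  // validation
  { percent: 5, minSoakHours: 24 },  // canary
  { percent: 25, minSoakHours: 24 },
  { percent: 50, minSoakHours: 48 },
  { percent: 100, minSoakHours: 0 }, // fully migrated
];

// Returns the next rollout percentage, or null once at 100%.
export function nextPhase(current: number): number | null {
  const i = PHASES.findIndex((p) => p.percent === current);
  return i >= 0 && i < PHASES.length - 1 ? PHASES[i + 1].percent : null;
}
```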
Map models: gpt-3.5-turbo -> mistral-small-latest, gpt-4/gpt-4-turbo -> mistral-large-latest, text-embedding-ada-002 -> mistral-embed (1024 dimensions).
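The mapping above as a lookup table (the model IDs are from the text; the helper name and fail-fast behavior are assumptions):

```typescript
// OpenAI-to-Mistral model mapping from the migration guidance above.
const MODEL_MAP: Record<string, string> = {
  'gpt-3.5-turbo': 'mistral-small-latest',
  'gpt-4': 'mistral-large-latest',
  'gpt-4-turbo': 'mistral-large-latest',
  'text-embedding-ada-002': 'mistral-embed', // 1024-dimension vectors
};

export function mapModel(openaiModel: string): string {
  const mapped = MODEL_MAP[openaiModel];
  // Fail fast on unmapped models rather than silently picking a default.
  if (!mapped) throw new Error(`No Mistral mapping for model: ${openaiModel}`);
  return mapped;
}
```

Note the embedding dimension change: `text-embedding-ada-002` produces 1536-dimension vectors while `mistral-embed` produces 1024, so any stored embeddings must be regenerated rather than mapped.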
Run A/B comparison tests with identical prompts at temperature=0. Verify both providers return non-empty content. Log results for manual quality review.
Set MISTRAL_ROLLOUT_PERCENT=0 to immediately route all traffic back to OpenAI. Verify health and notify team.
| Issue | Cause | Solution |
|---|---|---|
| Different output format | API differences | Normalize in adapter |
| Missing feature | Not supported by Mistral | Implement fallback |
| Performance difference | Model characteristics | Adjust timeouts |
| Cost increase | Token differences | Monitor and optimize |
```typescript
// Side-by-side A/B check: identical prompt, temperature 0 on both providers.
const [openaiResponse, mistralResponse] = await Promise.all([
  openaiAdapter.chat(messages, { temperature: 0 }),
  mistralAdapter.chat(messages, { temperature: 0 }),
]);
console.log('OpenAI tokens:', openaiResponse.usage);
console.log('Mistral tokens:', mistralResponse.usage);
```
See references/implementation.md for advanced patterns.