Execute Mistral AI secondary workflows: Embeddings and Function Calling. Use when implementing semantic search, RAG applications, or tool-augmented LLM interactions. Trigger with phrases like "mistral embeddings", "mistral function calling", "mistral tools", "mistral RAG", "mistral semantic search".
From mistral-pack. Install: `npx claudepluginhub nickloveinvesting/nick-love-plugins --plugin mistral-pack`
References: references/implementation.md
Secondary workflows for Mistral AI: text embeddings for semantic search/RAG and function calling for tool-augmented interactions. Uses mistral-embed (1024 dimensions) for embeddings and mistral-large-latest for function calling.
Prerequisites: mistral-install-auth (setup) and mistral-core-workflow-a. Use client.embeddings.create() with model mistral-embed and an inputs array; it returns a 1024-dimensional vector per input text.
Pass multiple texts in the inputs array for efficient batch processing. Map response data array to extract embedding vectors.
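That batching pattern can be sketched as follows. The structural `EmbeddingsClient` type and the helper names (`validInputs`, `embedBatch`) are illustrative, not SDK API; in real code the client comes from `new Mistral({ apiKey })` in the `@mistralai/mistralai` SDK.

```typescript
// Minimal structural type for the slice of the Mistral SDK client used here;
// a real client is constructed via `new Mistral({ apiKey })`.
type EmbeddingsClient = {
  embeddings: {
    create(req: { model: string; inputs: string[] }): Promise<{
      data: { embedding?: number[] }[];
    }>;
  };
};

// Drop empty strings up front: they cause the "empty embeddings" failure mode.
function validInputs(texts: string[]): string[] {
  return texts.filter((t) => t.trim().length > 0);
}

// Hypothetical helper: embed a batch of texts in a single API call and map
// the response data array to plain vectors, preserving input order.
async function embedBatch(client: EmbeddingsClient, texts: string[]): Promise<number[][]> {
  const response = await client.embeddings.create({
    model: 'mistral-embed',
    inputs: validInputs(texts),
  });
  return response.data.map((d) => d.embedding ?? []);
}
```

Injecting the client as a parameter also makes the helper easy to stub in tests.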
Implement SemanticSearch class with indexDocuments() (embeds all docs) and search() (embeds query, ranks by cosine similarity, returns top-K results). Use cosine similarity: dot product divided by product of norms.
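A minimal sketch of that class, with the cosine similarity written out. The `embed` callback stands in for a `client.embeddings.create()` call; its shape and the class internals are assumptions of this example.

```typescript
// Cosine similarity: dot product divided by the product of the vector norms.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

class SemanticSearch {
  private docs: { text: string; vector: number[] }[] = [];

  // `embed` is an injected stand-in for the mistral-embed API call.
  constructor(private embed: (texts: string[]) => Promise<number[][]>) {}

  // Embed all documents once and keep the vectors alongside the texts.
  async indexDocuments(texts: string[]): Promise<void> {
    const vectors = await this.embed(texts);
    this.docs = texts.map((text, i) => ({ text, vector: vectors[i] }));
  }

  // Embed the query, rank by cosine similarity, return the top-K results.
  async search(query: string, topK = 3): Promise<{ text: string; score: number }[]> {
    const [queryVector] = await this.embed([query]);
    return this.docs
      .map((d) => ({ text: d.text, score: cosineSimilarity(queryVector, d.vector) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, topK);
  }
}
```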
Create tool definitions with JSON Schema parameters. Each tool has type function, name, description, and parameter schema with required fields.
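A hypothetical tool in that shape (the `get_weather` name, description, and fields are illustrative, not part of any SDK):

```typescript
// One tool definition: type 'function', a name, a description, and a
// JSON Schema parameter object with its required fields listed.
const tools = [
  {
    type: 'function' as const,
    function: {
      name: 'get_weather',
      description: 'Get the current weather for a city',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'City name, e.g. Paris' },
        },
        required: ['city'],
      },
    },
  },
];
```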
Send messages with tools and toolChoice: 'auto' to client.chat.complete(). Check for toolCalls in response. Execute matching tool function, add result as role: 'tool' message, and loop until model returns final text response.
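The loop described above can be sketched as follows. The message and response types are simplified stand-ins for the SDK's shapes, and `runWithTools` plus the tool `registry` are hypothetical helpers, not SDK API.

```typescript
type ToolCall = { id: string; function: { name: string; arguments: string } };
type Message = { role: string; content: string; toolCalls?: ToolCall[]; toolCallId?: string };

// Structural stand-in for the slice of client.chat.complete() used below.
type ChatClient = {
  chat: {
    complete(req: { model: string; messages: Message[]; tools: unknown[]; toolChoice: string }):
      Promise<{ choices: { message: Message }[] }>;
  };
};

async function runWithTools(
  client: ChatClient,
  tools: unknown[],
  registry: Record<string, (args: any) => Promise<string>>,
  messages: Message[],
): Promise<string> {
  for (;;) {
    const response = await client.chat.complete({
      model: 'mistral-large-latest',
      messages,
      tools,
      toolChoice: 'auto',
    });
    const msg = response.choices[0].message;
    messages.push(msg);
    // No tool calls: the model has produced its final text answer.
    if (!msg.toolCalls || msg.toolCalls.length === 0) return msg.content;
    for (const call of msg.toolCalls) {
      const fn = registry[call.function.name];
      // "Tool not found" from the troubleshooting table: registry out of sync.
      if (!fn) throw new Error(`Tool not found: ${call.function.name}`);
      const result = await fn(JSON.parse(call.function.arguments));
      // Feed the result back as a role: 'tool' message, then loop.
      messages.push({ role: 'tool', content: result, toolCallId: call.id });
    }
  }
}
```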
Combine semantic search with chat completion. Retrieve relevant documents for user query, inject as context in system prompt, generate response with mistral-small-latest. Instruct model to answer from context only.
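A sketch of that RAG flow with the retrieval and generation steps injected as callbacks; `answerWithContext` and both callback shapes are illustrative assumptions, with the `chat` callback standing in for a mistral-small-latest completion.

```typescript
async function answerWithContext(
  search: (query: string, topK: number) => Promise<{ text: string }[]>,
  chat: (messages: { role: string; content: string }[]) => Promise<string>,
  question: string,
): Promise<string> {
  // Retrieve the most relevant documents for the user query.
  const hits = await search(question, 3);
  const context = hits.map((h, i) => `[${i + 1}] ${h.text}`).join('\n');
  // Inject the documents into the system prompt and instruct the model
  // to answer from that context only.
  return chat([
    {
      role: 'system',
      content: `Answer using ONLY the context below. If the answer is not in the context, say so.\n\nContext:\n${context}`,
    },
    { role: 'user', content: question },
  ]);
}
```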
Model: mistral-embed (1024 dimensions)

| Issue | Cause | Resolution |
|---|---|---|
| Empty embeddings | Invalid input text | Validate non-empty strings before API call |
| Tool not found | Unknown function name | Check tool registry matches definitions |
| RAG hallucination | Insufficient context | Add more documents, tune retrieval top-K |
| High latency | Large batch size | Split into smaller batches, add concurrency |
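The batch-splitting advice in the last row reduces to a small chunking helper, for example:

```typescript
// Split a large input list into batches of at most `size` items,
// so each embeddings call stays small.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}
```

Each batch can then be embedded independently, optionally with bounded concurrency (e.g. `Promise.all` over a few batches at a time).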
Example: embed a single text (assumes the API key is configured per mistral-install-auth):

```typescript
import { Mistral } from '@mistralai/mistralai';

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });

const response = await client.embeddings.create({
  model: 'mistral-embed',
  inputs: ['Machine learning is fascinating.'],
});
console.log(`Dimensions: ${response.data[0].embedding.length}`); // 1024
```
Example: function-calling request (`client` and `tools` as defined earlier):

```typescript
const response = await client.chat.complete({
  model: 'mistral-large-latest',
  messages: [{ role: 'user', content: 'Weather in Paris?' }],
  tools,
  toolChoice: 'auto',
});
```
See references/implementation.md for advanced patterns.