From cohere-pack
Optimizes Cohere API costs using model tiering, token budgets, embedding strategies, and usage monitoring. For analyzing billing and reducing expenses in AI/NLP apps.
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin cohere-pack
Optimize Cohere costs through model selection, token budgets, embedding compression, and usage monitoring. Cohere pricing is token-based with separate input/output rates.
Key principle: Cohere charges per token. Input tokens and output tokens have different rates. Embed, Rerank, and Classify have separate pricing based on search units.
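Because input and output tokens are billed at separate per-million rates, per-call cost reduces to a simple weighted sum. The sketch below shows the arithmetic; the rates used are placeholders for illustration, not Cohere's actual prices (check cohere.com/pricing for current rates):

```typescript
// Illustrative cost math only. The rates below are hypothetical placeholders,
// not real Cohere prices.
interface TokenRates {
  inputPerMillion: number;  // USD per 1M input tokens
  outputPerMillion: number; // USD per 1M output tokens
}

function estimateCallCost(
  inputTokens: number,
  outputTokens: number,
  rates: TokenRates,
): number {
  return (
    (inputTokens / 1_000_000) * rates.inputPerMillion +
    (outputTokens / 1_000_000) * rates.outputPerMillion
  );
}

// Example: 2,000 input + 500 output tokens at hypothetical $0.50/$1.50 rates
const cost = estimateCallCost(2_000, 500, {
  inputPerMillion: 0.5,
  outputPerMillion: 1.5,
});
console.log(cost.toFixed(6)); // 0.001750
```

Note that output tokens usually cost more than input tokens, which is why capping generation length (see maxTokens below) has an outsized effect on the bill.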
| Tier | Access | Rate Limits | Cost |
|---|---|---|---|
| Trial | Free | 5-20 calls/min, 1000/month | $0 |
| Production | Metered | 1000 calls/min, unlimited | Per-token |
| Model | Input (per 1M tokens) | Output (per 1M tokens) | Best For |
|---|---|---|---|
| command-r7b-12-2024 | Lowest | Lowest | High-volume, simple tasks |
| command-r-08-2024 | Low | Low | RAG, cost-effective |
| command-r-plus-08-2024 | Medium | Medium | Complex reasoning |
| command-a-03-2025 | Higher | Higher | Best quality |
| Endpoint | Pricing Unit | Notes |
|---|---|---|
| Embed | Per input token | Batch 96 texts to minimize calls |
| Rerank | Per search unit | 1 query + up to 100 docs = 1 search unit |
| Classify | Per classification | Charges per input classified |
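Rerank billing can be estimated up front. This sketch assumes, per Cohere's pricing docs, that one search unit covers a single query with up to 100 documents (verify the current definition before relying on it):

```typescript
// Assumption: one search unit = 1 query with up to 100 documents.
// Check Cohere's pricing docs; long documents may count as more.
const DOCS_PER_SEARCH_UNIT = 100;

function rerankSearchUnits(numQueries: number, docsPerQuery: number): number {
  return numQueries * Math.ceil(docsPerQuery / DOCS_PER_SEARCH_UNIT);
}

// 10 queries over 250 docs each: 250 docs bill as 3 units per query
console.log(rerankSearchUnits(10, 250)); // 30
```

This is why keeping the candidate list per query under 100 documents (e.g. via a cheap keyword pre-filter) keeps each rerank call at a single search unit.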
```typescript
type CostTier = 'economy' | 'standard' | 'premium';

function selectModel(tier: CostTier): string {
  switch (tier) {
    case 'economy': return 'command-r7b-12-2024';  // ~5x cheaper
    case 'standard': return 'command-r-08-2024';   // Good balance
    case 'premium': return 'command-a-03-2025';    // Best quality
  }
}

// Route by use case
function routeModel(task: string): string {
  // High-volume, simple tasks → cheapest model
  if (['classify', 'extract', 'summarize-short'].includes(task)) {
    return selectModel('economy');
  }
  // RAG, moderate complexity
  if (['rag', 'search', 'qa'].includes(task)) {
    return selectModel('standard');
  }
  // Complex reasoning, user-facing
  return selectModel('premium');
}
```
```typescript
import { CohereClientV2 } from 'cohere-ai';

const cohere = new CohereClientV2();

// Set maxTokens to prevent runaway generation costs
async function budgetedChat(message: string, maxOutputTokens = 500) {
  const response = await cohere.chat({
    model: 'command-r-08-2024',
    messages: [{ role: 'user', content: message }],
    maxTokens: maxOutputTokens, // Hard limit on output tokens
  });
  // Track actual usage
  const usage = response.usage?.billedUnits;
  console.log(`Tokens: in=${usage?.inputTokens} out=${usage?.outputTokens}`);
  return response;
}
```
```typescript
// 1. Use int8 embeddings (same quality, cheaper storage)
const response = await cohere.embed({
  model: 'embed-v4.0',
  texts: documents,
  inputType: 'search_document',
  embeddingTypes: ['int8'], // 75% less storage than float
});

// 2. Batch up to 96 texts per call (minimize API calls)
// 3. Cache embeddings (they're deterministic: embed once, reuse forever)
// 4. Use embed-multilingual-v3.0 if you don't need v4 features
```
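Point 3 above deserves a sketch: caching by content hash guarantees each unique text is billed only once. Here `embedFn` is a stand-in for a real cohere.embed call, and the in-memory Map is illustrative (swap in Redis or disk storage in production):

```typescript
import { createHash } from 'node:crypto';

// Hypothetical cache wrapper: `embedFn` stands in for a real Cohere embed
// call; the Map is an illustrative in-memory store.
type EmbedFn = (texts: string[]) => Promise<number[][]>;

class EmbeddingCache {
  private cache = new Map<string, number[]>();

  constructor(private embedFn: EmbedFn) {}

  private key(text: string): string {
    return createHash('sha256').update(text).digest('hex');
  }

  async embed(texts: string[]): Promise<number[][]> {
    // Only uncached texts are sent to the API (and billed)
    const missing = texts.filter((t) => !this.cache.has(this.key(t)));
    if (missing.length > 0) {
      const vectors = await this.embedFn(missing);
      missing.forEach((t, i) => this.cache.set(this.key(t), vectors[i]));
    }
    return texts.map((t) => this.cache.get(this.key(t))!);
  }
}
```

Re-embedding the same corpus on every deploy is one of the most common sources of silent embed-cost growth; a cache like this makes repeat embeds free.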
```typescript
class CohereUsageTracker {
  private usage: Record<string, { inputTokens: number; outputTokens: number; calls: number }> = {};

  constructor(private dailyBudgetUSD: number) {}

  track(endpoint: string, billedUnits: { inputTokens?: number; outputTokens?: number }) {
    if (!this.usage[endpoint]) {
      this.usage[endpoint] = { inputTokens: 0, outputTokens: 0, calls: 0 };
    }
    this.usage[endpoint].inputTokens += billedUnits.inputTokens ?? 0;
    this.usage[endpoint].outputTokens += billedUnits.outputTokens ?? 0;
    this.usage[endpoint].calls++;
  }

  getReport(): string {
    return Object.entries(this.usage)
      .map(([ep, u]) => `${ep}: ${u.calls} calls, ${u.inputTokens} in, ${u.outputTokens} out`)
      .join('\n');
  }

  estimateDailyCost(): number {
    // Rough estimate; check cohere.com/pricing for exact rates
    const chatIn = (this.usage['chat']?.inputTokens ?? 0) / 1_000_000;
    const chatOut = (this.usage['chat']?.outputTokens ?? 0) / 1_000_000;
    const embedIn = (this.usage['embed']?.inputTokens ?? 0) / 1_000_000;
    // Multiply by per-million-token rates from the pricing page
    return chatIn * 0.5 + chatOut * 1.5 + embedIn * 0.1; // example rates
  }

  isNearBudget(threshold = 0.8): boolean {
    return this.estimateDailyCost() > this.dailyBudgetUSD * threshold;
  }
}

// Wrap all API calls
const tracker = new CohereUsageTracker(10); // $10/day budget

async function trackedChat(params: any) {
  const response = await cohere.chat(params);
  tracker.track('chat', response.usage?.billedUnits ?? {});
  if (tracker.isNearBudget()) {
    console.warn('WARNING: Approaching daily Cohere budget limit');
  }
  return response;
}
```
```typescript
// If you have < 1000 documents, skip embedding entirely:
// Rerank is cheaper than Embed + vector search for small collections.
async function cheapRAG(query: string, corpus: string[]) {
  // One rerank call (a few search units) instead of N embed calls
  const ranked = await cohere.rerank({
    model: 'rerank-v3.5',
    query,
    documents: corpus,
    topN: 3,
  });

  const docs = ranked.results.map((r, i) => ({
    id: `doc-${i}`,
    data: { text: corpus[r.index] },
  }));

  // Use a cheaper model for generation
  return cohere.chat({
    model: 'command-r-08-2024', // Not command-a (cheaper)
    messages: [{ role: 'user', content: query }],
    documents: docs,
    maxTokens: 300,
  });
}
```
Quick checklist:

- command-r7b for simple tasks, command-a only for complex ones
- maxTokens on all chat calls
- int8 embeddings for storage
- usage.billedUnits in every response
- rerank instead of embed for small corpora (< 1000 docs)

| Issue | Cause | Solution |
|---|---|---|
| Unexpected bill spike | No maxTokens | Set maxTokens on all chat calls |
| High embed costs | Individual texts | Batch to 96 per call |
| Budget exceeded | No monitoring | Track billedUnits per response |
| Over-provisioned model | Using premium everywhere | Tier models by task complexity |
For architecture patterns, see cohere-reference-architecture.