From grepai
Use when you need to view, change, or troubleshoot the embedding provider and model for grepai.
```
npx claudepluginhub jugrajsingh/skillgarden --plugin grepai
```
View, change, or troubleshoot the embedding provider and model used by grepai. Handles cascading changes (dimensions, re-indexing, workspace propagation).
| Model | Provider | Dims | Speed | Quality | Languages |
|---|---|---|---|---|---|
| nomic-embed-text | Ollama | 768 | Fast | Good | English |
| nomic-embed-text-v2-moe | Ollama | 768 | Fast | Better | 100+ langs |
| bge-m3 | Ollama | 1024 | Medium | Excellent | 100+ langs |
| mxbai-embed-large | Ollama | 1024 | Medium | Better | English |
| all-minilm | Ollama | 384 | Very Fast | Basic | English |
| text-embedding-3-small | OpenAI | 1536 | Fast (API) | Good | Multi |
| text-embedding-3-large | OpenAI | 3072 | Fast (API) | Excellent | Multi |
OpenAI pricing: text-embedding-3-small ~$0.02/1M tokens, text-embedding-3-large ~$0.13/1M tokens. Typical project (10k lines) costs ~$0.001.
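That estimate can be sanity-checked with simple arithmetic. The sketch below assumes roughly 5 tokens per line of code, which is a rough heuristic rather than a grepai constant:

```python
# Rough embedding-cost estimate for the OpenAI models listed above.
# Assumes ~5 tokens per line of code (a heuristic, not a grepai figure).
PRICE_PER_MTOK = {
    "text-embedding-3-small": 0.02,
    "text-embedding-3-large": 0.13,
}

def estimate_cost(lines: int, model: str, tokens_per_line: int = 5) -> float:
    tokens = lines * tokens_per_line
    return tokens / 1_000_000 * PRICE_PER_MTOK[model]

# A 10k-line project is ~50k tokens:
print(round(estimate_cost(10_000, "text-embedding-3-small"), 4))  # → 0.001
```

Even the large model stays well under a cent for a typical project, so cost is rarely the deciding factor between the two OpenAI options.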
Check MCP registration (claude mcp list, .mcp.json, ~/.claude.json), local config (.grepai/config.yaml), and workspace config (grepai workspace list, ~/.grepai/workspace.yaml).
Mode detection:
- `--workspace {NAME}` passed → workspace mode, config in ~/.grepai/workspace.yaml
- `.grepai/config.yaml` exists and no workspace MCP → local mode

Display: mode, config path, provider, model, dimensions, endpoint.
Ask via AskUserQuestion:
What would you like to do?
○ Change embedding model (keep same provider)
○ Change embedding provider (e.g. Ollama → OpenAI)
○ View current config (done — already displayed above)
○ Troubleshoot embedding issues
If "View current config" — stop here, already displayed.
1. Show available models for the current provider (use the model reference table above).
2. Ask the user to pick one.
3. Check availability (Ollama: verify the model is pulled; offer to pull it).
4. Apply the change to the correct config file (workspace: ~/.grepai/workspace.yaml; local: .grepai/config.yaml).
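Applying the change might look like the fragment below. The field names are illustrative assumptions; verify the actual schema against references/provider-changes.md before writing it:

```yaml
# .grepai/config.yaml — illustrative only; confirm field names against
# references/provider-changes.md before applying.
embedding:
  provider: ollama
  model: bge-m3                      # 1024 dims; changing model requires re-index
  endpoint: http://localhost:11434
```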
For provider switch: collect provider-specific settings (endpoint, API key, parallelism for OpenAI, dimension detection for LM Studio).
See references/provider-changes.md for full model selection flows, provider-specific setup commands, and config file templates.
Embeddings from different models are incompatible — index must be rebuilt. Warn user, ask to re-index now or later. Clear old index (GOB: remove .gob files, Qdrant: delete collection, PostgreSQL: truncate tables), then run grepai watch.
See references/reindex.md for backend-specific re-index commands.
If user chose troubleshoot: check provider connectivity, model availability, config consistency (model/endpoint not swapped, dimensions match reference), workspace vs local mismatch. Report with OK/FAIL/WARN indicators.
See references/troubleshooting.md for check commands and diagnostic steps.
Print before/after comparison (provider, model, dims), config path, mode, and re-index status. If re-index started, show monitor commands. If manual, show re-index commands.