Full guided setup of GrepAI semantic search — prerequisites, Docker, embedding provider/model, storage backend, MCP registration, and indexing.
Guided orchestrator for GrepAI semantic code search. Walks through prerequisites, infrastructure, embedding config, storage, MCP integration, and indexing.
which grepai
If missing, show install instructions:
Install grepai:
macOS/Linux:
brew install grepai/tap/grepai
Or via curl:
curl -fsSL https://get.grepai.dev | sh
Windows (PowerShell):
irm https://get.grepai.dev/install.ps1 | iex
Stop and ask user to install before continuing.
which docker
If missing, instruct to install Docker Desktop or Docker Engine and stop.
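The two prerequisite checks above can be combined into one sketch (assumes POSIX sh; `grepai` and `docker` are the binaries this guide expects on PATH):

```shell
# Check each required tool and report anything missing before continuing.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -n "$missing" ]; then
    echo "Missing:$missing"
    return 1
  fi
  echo "All prerequisites found"
}

check_tools grepai docker || echo "Stop and install the missing tools before continuing."
```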
Present via AskUserQuestion:
Which storage backend?
○ GOB (local file) — simple, zero config, single-project only (Recommended)
○ PostgreSQL + pgvector — scalable, team-ready, supports workspaces
○ Qdrant — lightweight vector DB, supports workspaces
If GOB: default storage, no extra config needed. Index stored in .grepai/index.gob.
Note: If the user plans to use workspace mode (cross-project search) later, they should pick PostgreSQL or Qdrant for the workspace backend. GOB is fine for the per-project local config — the workspace has its own separate store config.
If PostgreSQL: note the DSN for later:
DSN: postgres://grepai:grepai@localhost:5432/grepai
Known issue: PostgreSQL + pgvector has a UTF-8 encoding bug where files containing Unicode box-drawing characters (e.g. U+2550) fail to index. GOB does not have this limitation.
If Qdrant: note the endpoint for later:
REST API: http://localhost:6333
gRPC: http://localhost:6334
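Given the pgvector UTF-8 note above, one way to decide is to scan the repository for box-drawing characters before committing to PostgreSQL. This is a sketch, not part of grepai itself; U+2550 is the byte sequence \342\225\220 in UTF-8:

```shell
# List text files under a directory that contain U+2550 (═), the character
# known to trip the pgvector indexing bug. -I skips binary files.
has_box_drawing() {
  grep -rIl "$(printf '\342\225\220')" "$1" 2>/dev/null
}

# Usage from the project root:
# has_box_drawing . && echo "affected files found; consider GOB or Qdrant"
```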
Select the template based on storage choice:
GOB: ${CLAUDE_PLUGIN_ROOT}/templates/docker-compose-ollama.yml
PostgreSQL: ${CLAUDE_PLUGIN_ROOT}/templates/docker-compose-postgres.yml
Qdrant: ${CLAUDE_PLUGIN_ROOT}/templates/docker-compose-qdrant.yml
Read the selected template.
Present via AskUserQuestion:
Where should docker-compose.yml be placed?
○ Project root (Recommended) — writes to $CLAUDE_PROJECT_DIR/docker-compose.yml
○ Custom path — you specify the location
Write the template to the chosen path.
If file already exists, warn and ask whether to overwrite or skip.
Then ask:
Start Docker services now?
○ Yes, start services (Recommended) — runs docker compose up -d
○ No, I'll start later
If yes:
docker compose -f {COMPOSE_PATH} up -d
Verify with:
docker compose -f {COMPOSE_PATH} ps
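Containers can take a few seconds to come up, so a readiness poll is more reliable than a single `ps`. Below is a generic sketch; the real check for this stack would be the `docker compose ps` command above piped through grep, shown in the usage comment (assumes Docker Compose v2):

```shell
# Poll a check command up to N times, one second apart, until it succeeds.
wait_for() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out"
  return 1
}

# Usage against the compose stack (COMPOSE_PATH as chosen earlier):
# wait_for 15 sh -c 'docker compose -f "$COMPOSE_PATH" ps --status running | grep -q .'
```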
Present via AskUserQuestion:
Which embedding provider?
○ Ollama — local, private, free, works offline (Recommended)
○ OpenAI — cloud, high quality, costs ~$0.01-$6.50 per full index
If OpenAI: inform about API key setup:
Set your OpenAI API key:
export OPENAI_API_KEY="sk-..."
Cost estimates per full index:
text-embedding-3-small ~$0.01-$0.10 (small-medium repos)
text-embedding-3-large ~$0.05-$6.50 (depends on repo size)
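The estimates above follow from cost = tokens / 1000 × per-1K rate. A back-of-envelope sketch (the 5M-token repo size is an illustrative assumption, not a measurement):

```shell
# Estimate embedding cost in dollars from a token count and per-1K-token rate.
estimate() {
  tokens=$1; rate_per_1k=$2
  awk -v t="$tokens" -v r="$rate_per_1k" 'BEGIN { printf "%.2f\n", t / 1000 * r }'
}

# A ~5M-token repo with text-embedding-3-small ($0.00002/1K tokens):
estimate 5000000 0.00002   # prints 0.10
# The same repo with text-embedding-3-large ($0.00013/1K tokens):
estimate 5000000 0.00013   # prints 0.65
```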
If Ollama: proceed to model selection.
Present via AskUserQuestion based on chosen provider.
For Ollama:
Which embedding model?
○ nomic-embed-text — 768 dims, 274MB, fast general use (Recommended)
○ mxbai-embed-large — 1024 dims, 670MB, highest accuracy
○ bge-m3 — 1024 dims, 1.2GB, multilingual
○ nomic-embed-text-v2-moe — 768 dims, 500MB, multilingual MoE
For OpenAI:
Which embedding model?
○ text-embedding-3-small — 1536 dims, $0.00002/1K tokens (Recommended)
○ text-embedding-3-large — 3072 dims, $0.00013/1K tokens
Then confirm before downloading (Ollama only):
Pull embedding model now? This downloads {SIZE} to the Ollama container.
○ Yes, pull now (Recommended)
○ No, I'll pull later
If yes:
docker exec ollama ollama pull {MODEL}
Delegate to the initializing skill with collected choices:
Invoke the grepai:initializing skill and follow it exactly.
Pass context: chosen provider, model, storage backend, DSN if postgres, endpoint if qdrant.
For workspace mode: tell the initializing skill to use GOB for the local per-project config. The workspace handles its own shared store separately via ~/.grepai/workspace.yaml.
Delegate to the grepai:mcp-setup skill and follow it exactly.
Pass context: workspace name (if workspace mode was chosen), so it can offer the --workspace flag option.
Inform user about the official grepai-skills plugin:
The official grepai-skills plugin provides 27 reference skills for advanced
configuration tuning, troubleshooting, and workflow optimization.
Install via Claude Code plugin marketplace:
/plugin marketplace add yoanbernabeu/grepai-skills
/plugin install grepai-complete@grepai-skills
Note: These are reference skills (no /commands). The grepai MCP tools
(grepai_search, grepai_trace_*) handle search and trace natively.
Present via AskUserQuestion:
Start grepai watch daemon now? This monitors file changes and updates the index.
○ Yes, start in background (Recommended)
○ No, I'll start later
If yes, branch based on mode:
Single project:
grepai watch --background
Workspace mode:
grepai watch --workspace {NAME} --background
Both per-project and workspace watchers can coexist. For workspace mode, the workspace watcher is the primary one to start.
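The branch above can be sketched as a small helper that prints the command it would run (so it stays runnable without grepai installed); `WORKSPACE` stands in for {NAME} and `myteam` in the usage comment is a hypothetical name:

```shell
# Print the watcher command for the current mode: workspace if WORKSPACE is
# set and non-empty, otherwise the single-project watcher.
start_watcher() {
  if [ -n "${WORKSPACE:-}" ]; then
    echo "grepai watch --workspace $WORKSPACE --background"
  else
    echo "grepai watch --background"
  fi
}

# Usage:
# WORKSPACE=myteam start_watcher
```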
Print configuration summary:
============================================================================
GrepAI Setup Complete
============================================================================
Infrastructure:
Docker Compose {COMPOSE_PATH}
Ollama http://localhost:11434
PostgreSQL/pgvector localhost:5432 (only if postgres backend)
Qdrant localhost:6333/6334 (only if qdrant backend)
Embedding:
Provider {PROVIDER}
Model {MODEL}
Dimensions {DIMS}
Storage:
Backend {BACKEND}
Integration:
MCP server registered ({SCOPE})
Config .grepai/config.yaml
Workspace: {NAME} (only if workspace mode)
CLAUDE.md workspace guidance added
Watcher: grepai watch --workspace {NAME} --background
Commands:
grepai status # Check index health
grepai watch --background # Start file watcher (single project)
grepai watch --workspace {NAME} --background # Workspace watcher
grepai index # Full re-index
/grepai:status # Health check all components
============================================================================