Build, optimize, and evaluate RAG (Retrieval-Augmented Generation) systems. Covers embedding model selection, vector store design, chunking strategies, retrieval optimization (hybrid search, reranking), context assembly, and evaluation metrics (faithfulness, relevance, hallucination rate).
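The chunking stage that `--chunk` configures can be illustrated with a minimal fixed-size sketch. This is not the command's actual implementation; the function name and defaults are hypothetical, and real semantic chunking splits on sentence or topic boundaries rather than character windows:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows (fixed-size chunking).

    Overlap preserves context that would otherwise be cut at chunk boundaries.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Tuning `chunk_size` and `overlap` trades retrieval precision against context completeness; `--chunk semantic` replaces the fixed window with boundary-aware splitting.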
/godmode:rag # Full RAG pipeline design workflow
/godmode:rag --ingest ./docs # Run document ingestion pipeline
/godmode:rag --chunk semantic # Force semantic chunking strategy
/godmode:rag --store pgvector # Force vector store selection
/godmode:rag --embed text-embedding-3-small # Force embedding model
/godmode:rag --hybrid # Enable hybrid search (dense + BM25)
/godmode:rag --rerank cohere # Add reranking stage
/godmode:rag --eval # Evaluate RAG pipeline quality
/godmode:rag --diagnose # Debug retrieval quality issues
/godmode:rag --compare # Compare pipeline configurations
/godmode:rag --reindex # Force full corpus reindexing
/godmode:rag --stats # Show pipeline statistics
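Hybrid search (`--hybrid`) merges a dense (vector) result list with a sparse (BM25) result list. One common fusion method is Reciprocal Rank Fusion, sketched below; the `rrf_fuse` helper is illustrative, not part of the pipeline's API:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge ranked lists via Reciprocal Rank Fusion.

    Each document scores sum(1 / (k + rank)) over the lists it appears in;
    k=60 is the conventional damping constant from the original RRF paper.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort document ids by fused score, best first.
    return sorted(scores, key=scores.get, reverse=True)
```

RRF needs no score normalization across retrievers, which is why it is a common default for dense+BM25 fusion; a reranking stage (`--rerank cohere`) can then rescore the fused top-k with a cross-encoder.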
Outputs:
  config/rag/<pipeline>-config.yaml
  src/rag/<pipeline>/
  tests/rag/<pipeline>/eval.py
  docs/rag/<pipeline>-eval-results.md

Commit message: "rag: <pipeline> — <embedding model>, <vector store>, <N> chunks, faithfulness=<val>"

After the RAG pipeline: /godmode:prompt to optimize the generation prompt, /godmode:eval for comprehensive evaluation, or /godmode:agent to wrap RAG in an agent loop.
/godmode:rag Build a knowledge base for internal documentation
/godmode:rag --store pinecone --embed voyage-3 Build a production RAG pipeline
/godmode:rag --diagnose Users say the chatbot gives wrong answers
/godmode:rag --eval Run evaluation on our RAG pipeline
/godmode:rag --hybrid --rerank cohere Upgrade retrieval to hybrid + reranking
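The faithfulness metric that `--eval` reports measures whether answer claims are grounded in the retrieved context. Production evaluators typically use an LLM judge; the token-overlap proxy below is only a rough, hypothetical stand-in to show the shape of the measurement:

```python
def faithfulness_proxy(answer: str, context: str) -> float:
    """Crude faithfulness proxy: fraction of answer tokens found in the context.

    1.0 means every answer token is grounded; lower values suggest content
    not supported by retrieval (a coarse signal of hallucination).
    """
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 1.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)
```

Aggregating 1 minus this score over an eval set gives a rough hallucination rate; an LLM-judged faithfulness check replaces token overlap with claim-by-claim entailment against the context.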