From antigravity-awesome-skills
Workflow for implementing RAG systems covering embedding selection, vector database setup, chunking strategies, and retrieval optimization. For semantic search and document Q&A apps.
npx claudepluginhub sickn33/antigravity-awesome-skills
This skill uses the workspace's default tool permissions.
Specialized workflow for implementing RAG (Retrieval-Augmented Generation) systems including embedding model selection, vector database setup, chunking strategies, retrieval optimization, and evaluation.
Implements retrieval-augmented generation (RAG) systems for knowledge-intensive apps, document search, Q&A, and grounding LLMs in external data. Covers embeddings, vector stores, retrieval pipelines, evaluation, with cost/prerequisite checks.
Use this workflow when building semantic search or document Q&A applications. Steps:
1. Use @ai-product to define RAG application requirements. (Skills: ai-product - AI product design; rag-engineer - RAG engineering)
2. Use @embedding-strategies to select the optimal embedding model. (Skills: embedding-strategies - Embedding selection; rag-engineer - RAG patterns)
3. Use @vector-database-engineer to set up the vector database. (Skills: vector-database-engineer - Vector DB; similarity-search-patterns - Similarity search)
4. Use @rag-engineer to implement the chunking strategy. (Skills: rag-engineer - Chunking strategies; rag-implementation - RAG implementation)
5. Use @similarity-search-patterns to implement retrieval. (Skills: similarity-search-patterns - Similarity search; hybrid-search-implementation - Hybrid search)
6. Use @hybrid-search-implementation to add hybrid search.
7. Use @llm-application-dev-ai-assistant to integrate the LLM. (Skills: llm-application-dev-ai-assistant - LLM integration; llm-application-dev-prompt-optimize - Prompt optimization)
8. Use @prompt-caching to implement RAG caching. (Skills: prompt-caching - Prompt caching; rag-engineer - RAG optimization)
9. Use @llm-evaluation to evaluate the RAG system. (Skills: llm-evaluation - LLM evaluation; evaluation - AI evaluation)
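Step 4 (chunking) is where most RAG quality problems start. As a hedged sketch of one common approach, the toy function below implements fixed-size word-window chunking with overlap; the function name and the default sizes are illustrative assumptions, not part of this workflow's skills.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping word-window chunks.

    Hypothetical example: sizes are in words, not tokens; a production
    chunker would typically count tokens and respect sentence boundaries.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    chunks = []
    step = chunk_size - overlap  # how far the window advances each iteration
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # last window already covers the end of the text
    return chunks
```

The overlap keeps context that straddles a chunk boundary retrievable from both neighboring chunks, at the cost of some index-size and embedding-cost overhead.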
User Query -> Embedding -> Vector Search -> Retrieved Docs -> LLM -> Response
                  |              |                 |            |
                Model        Vector DB        Chunk Store   Prompt + Context
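The diagram above can be sketched end to end in a few lines. This is a self-contained toy, not the workflow's implementation: `embed` is a deterministic stand-in for a real embedding model, the in-memory list stands in for a vector database, and the final LLM call is omitted (the function stops at prompt construction). All function names here are hypothetical.

```python
import math

def embed(text: str, dim: int = 8) -> list[float]:
    # Toy "embedding": character-hash buckets, unit-normalized.
    # A real system would call an embedding model here.
    vec = [0.0] * dim
    for ch in text.lower():
        vec[ord(ch) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Dot product suffices because embed() returns unit vectors.
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # "Vector search": rank all docs by similarity to the query embedding.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    # Retrieved Docs -> Prompt + Context: the grounding step before the LLM.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Swapping `embed` for a real model and the list for a vector store turns this skeleton into the pipeline the steps above describe.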
Categories: ai-ml - AI/ML development; ai-agent-development - AI agents; database - Vector databases