Guides RAG implementation workflow: requirements analysis, embedding selection, vector DB setup, chunking strategies, retrieval optimization, and LLM integration.
From antigravity-bundle-llm-application-developer. Install with:

npx claudepluginhub sickn33/antigravity-awesome-skills --plugin antigravity-bundle-llm-application-developer

This skill uses the workspace's default tool permissions.
Specialized workflow for implementing RAG (Retrieval-Augmented Generation) systems including embedding model selection, vector database setup, chunking strategies, retrieval optimization, and evaluation.
Use this workflow when implementing a RAG system end to end, from requirements analysis through evaluation:
Step 1: Define requirements. Skills: ai-product (AI product design), rag-engineer (RAG engineering). Use @ai-product to define RAG application requirements.
Step 2: Select an embedding model. Skills: embedding-strategies (Embedding selection), rag-engineer (RAG patterns). Use @embedding-strategies to select the optimal embedding model.
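As an illustration of what embedding selection optimizes for, the sketch below scores two toy vectors against a query with cosine similarity, the measure most embedding models are compared with. The 3-dimensional vectors are made up for the example; real models produce hundreds to thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Compare two embedding vectors; higher means more semantically similar.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; purely illustrative values.
query = [0.9, 0.1, 0.0]
doc_close = [0.8, 0.2, 0.1]
doc_far = [0.0, 0.1, 0.9]
```

When comparing candidate models, the same query/document pairs are embedded by each model and the one whose similarity scores best separate relevant from irrelevant documents wins.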
Step 3: Set up the vector database. Skills: vector-database-engineer (Vector DB), similarity-search-patterns (Similarity search). Use @vector-database-engineer to set up the vector database.
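Before wiring up a production vector database, the behavior it provides can be sketched as a brute-force in-memory store. `InMemoryVectorStore` is a hypothetical stand-in; real systems replace the linear scan with an approximate-nearest-neighbor index such as HNSW.

```python
import math

class InMemoryVectorStore:
    # Minimal stand-in for a vector database: stores (id, vector, text)
    # tuples and does brute-force cosine-similarity search.
    def __init__(self):
        self.items = []

    def add(self, doc_id, vector, text):
        self.items.append((doc_id, vector, text))

    def search(self, query, k=3):
        def score(vec):
            dot = sum(x * y for x, y in zip(query, vec))
            na = math.sqrt(sum(x * x for x in query))
            nb = math.sqrt(sum(x * x for x in vec))
            return dot / (na * nb)
        ranked = sorted(self.items, key=lambda it: score(it[1]), reverse=True)
        return ranked[:k]

store = InMemoryVectorStore()
store.add("a", [1.0, 0.0], "doc about cats")
store.add("b", [0.0, 1.0], "doc about finance")
top = store.search([0.9, 0.1], k=1)
```

The linear scan is O(n) per query, which is exactly the cost a real vector DB's index amortizes away.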
Step 4: Implement chunking. Skills: rag-engineer (Chunking strategies), rag-implementation (RAG implementation). Use @rag-engineer to implement the chunking strategy.
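A minimal sketch of the baseline chunking strategy, assuming fixed-size character windows with overlap; the `chunk_size` and `overlap` values are illustrative, and production chunkers often split on sentence or section boundaries instead.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    # Fixed-size character chunking with overlap, a common baseline.
    # Overlap keeps context that straddles a chunk boundary retrievable.
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# 500 characters of varied toy text.
doc = "".join(str(i % 10) for i in range(500))
chunks = chunk_text(doc, chunk_size=200, overlap=50)
```

Each chunk is later embedded and stored; the chunk size trades retrieval precision (small chunks) against context completeness (large chunks).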
Step 5: Implement retrieval. Skills: similarity-search-patterns (Similarity search), hybrid-search-implementation (Hybrid search). Use @similarity-search-patterns to implement retrieval, then use @hybrid-search-implementation to add hybrid search.
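One common way to combine keyword and vector result lists in hybrid search is reciprocal rank fusion; the sketch below merges two hypothetical rankings. The document ids and the damping constant k=60 are illustrative.

```python
def reciprocal_rank_fusion(rankings, k=60):
    # Merge multiple ranked lists (e.g., BM25 and vector-search results)
    # into one: each document scores 1/(k + rank) per list it appears in.
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

keyword_ranking = ["d2", "d1", "d3"]   # e.g., from a BM25 keyword index
vector_ranking = ["d1", "d2", "d4"]    # e.g., from embedding search
fused = reciprocal_rank_fusion([keyword_ranking, vector_ranking])
```

Rank fusion needs no score normalization across the two retrievers, which is why it is a popular default for hybrid search.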
Step 6: Integrate the LLM. Skills: llm-application-dev-ai-assistant (LLM integration), llm-application-dev-prompt-optimize (Prompt optimization). Use @llm-application-dev-ai-assistant to integrate the LLM.
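LLM integration at its core is prompt assembly: retrieved chunks become grounding context placed ahead of the question. `build_rag_prompt` and its instruction wording are hypothetical, shown only to make the shape concrete.

```python
def build_rag_prompt(question, retrieved_chunks):
    # Assemble the augmented prompt: numbered chunks become grounding
    # context, and the question is appended after an instruction.
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(retrieved_chunks))
    return (
        "Answer the question using only the context below. "
        "Cite sources by their [number].\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_rag_prompt(
    "What is the refund window?",
    ["Refunds are accepted within 30 days.", "Shipping takes 5 days."],
)
```

The numbered-source convention lets the model cite which chunk grounded each claim, which also simplifies downstream evaluation.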
Step 7: Implement caching. Skills: prompt-caching (Prompt caching), rag-engineer (RAG optimization). Use @prompt-caching to implement RAG caching.
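A sketch of exact-match response caching, assuming a hash of the (question, context) pair as the key; provider-side prefix-level prompt caching works differently and is not modeled here. `RAGCache` and `fake_llm` are made-up names.

```python
import hashlib

class RAGCache:
    # Exact-match response cache: an identical (question, context) pair
    # skips the LLM call entirely.
    def __init__(self):
        self._store = {}
        self.hits = 0

    def _key(self, question, context):
        return hashlib.sha256((question + "\x00" + context).encode()).hexdigest()

    def get_or_compute(self, question, context, llm_call):
        key = self._key(question, context)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        answer = llm_call(question, context)
        self._store[key] = answer
        return answer

calls = []
def fake_llm(q, c):
    # Stand-in for a real model call; records how often it runs.
    calls.append(q)
    return f"answer to {q}"

cache = RAGCache()
a1 = cache.get_or_compute("q1", "ctx", fake_llm)
a2 = cache.get_or_compute("q1", "ctx", fake_llm)
```

Exact-match caching pays off for repeated questions; combining it with provider prefix caching also cuts the cost of the shared system-prompt portion.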
Step 8: Evaluate the system. Skills: llm-evaluation (LLM evaluation), evaluation (AI evaluation). Use @llm-evaluation to evaluate the RAG system.
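Retrieval quality is often summarized as hit rate at k: the fraction of evaluation questions whose known-relevant document lands in the top-k results. The evaluation pairs and `toy_retriever` below are fabricated for illustration.

```python
def hit_rate_at_k(eval_set, retriever, k=3):
    # Fraction of (question, relevant_id) pairs whose relevant document
    # appears in the top-k retrieved ids; a basic retrieval metric.
    hits = 0
    for question, relevant_id in eval_set:
        top_ids = retriever(question)[:k]
        if relevant_id in top_ids:
            hits += 1
    return hits / len(eval_set)

def toy_retriever(question):
    # Hypothetical retriever: ranks doc ids by naive word overlap.
    docs = {"d1": "refund policy 30 days", "d2": "shipping speed 5 days"}
    q_words = set(question.lower().split())
    return sorted(docs, key=lambda d: -len(q_words & set(docs[d].split())))

score = hit_rate_at_k(
    [("what is the refund policy", "d1"), ("how fast is shipping", "d2")],
    toy_retriever, k=1,
)
```

Generation quality (faithfulness, answer relevance) needs separate measures, but retrieval metrics like this one isolate the first half of the pipeline.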
User Query -> Embedding -> Vector Search -> Retrieved Docs -> LLM -> Response
                  |              |                 |            |
                Model        Vector DB        Chunk Store  Prompt + Context
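The flow above can be sketched end to end in a few lines, with a character-count "embedding" and a stubbed LLM call standing in for real components; every function here is a toy.

```python
def embed(text):
    # Hypothetical embedding: counts of the characters 'a', 'b', 'c' only.
    return [text.count(ch) for ch in "abc"]

# Tiny "vector store" of (id, vector, text) tuples.
store = [("d1", embed("abacus article"), "abacus article"),
         ("d2", embed("cccc notes"), "cccc notes")]

def retrieve(query_vec, k=1):
    # Brute-force dot-product search over the store.
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    return sorted(store, key=lambda it: dot(query_vec, it[1]), reverse=True)[:k]

def answer(question):
    # Query -> embed -> search -> build context -> (stubbed) LLM.
    docs = retrieve(embed(question))
    context = "\n".join(text for _, _, text in docs)
    return f"LLM({question} | {context})"  # stand-in for the real LLM call

result = answer("abacus")
```

Swapping each toy piece for its production counterpart (a real embedding model, a vector DB, an LLM client) yields the full pipeline in the diagram.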
Related categories: ai-ml (AI/ML development), ai-agent-development (AI agents), database (Vector databases)