Plugins listed here are tagged for this topic and auto-indexed from public GitHub repositories.
Plugins for AI model integration, prompt engineering, LLM workflows, and machine learning pipelines.
OpenAI, Anthropic, LangChain, LlamaIndex, Hugging Face, and PyTorch integrations. Some include MCP servers for direct model API access.
Several include prompt template management, evaluation workflows, and A/B testing tools. Agents can analyze prompt performance and suggest improvements.
Plugins with MCP servers can connect to model APIs — these are flagged with network access warnings. Review the risk indicators before installing.
Supercharge Claude Code with 300+ agents, skills, commands, and hooks to orchestrate autonomous multi-agent coding workflows, enforce TDD, conduct security audits, generate production code across JS/TS/Python/Rust/mobile stacks, optimize performance, and automate deployments/testing.
Search, retrieve, improve, and manage thousands of AI prompts and Claude skills from prompts.chat directly in your coding assistant. Install skills to extend capabilities, fill prompt variables, save custom prompts with metadata, and enhance them using AI.
Deploy AI agents with 41 specialized marketing skills to optimize SaaS conversions via user flows and paywalls, generate SEO/copy/ads/content, audit websites/ASO/SEO, plan launches/growth strategies, reduce churn, and automate campaigns across channels.
Orchestrate 1,388 specialized AI skills in Claude Code to automate expert workflows for Azure SDK integrations, Odoo/Shopify configs, SEO audits, security pentests, full-stack scaffolding, agent building, and DevOps pipelines across Python, React, AWS, Kubernetes.
Generate investor-ready startup business analyses: calculate TAM/SAM/SOM market sizing, build 3-5 year financial models with cohort revenue, cash flow, burn rate, and scenarios; analyze competitive landscapes and team structures; benchmark metrics like CAC/LTV and ARR; produce full business case documents.
Unlock pro-level BMad workflows: analyze project states for skill recommendations and next steps, orchestrate multi-agent roundtables and debates for diverse insights, distill documents losslessly, shard large Markdown files, refine LLM outputs via advanced elicitation, review prose and structure, index directories, and audit code adversarially to uncover edge cases and omissions.
Orchestrate multi-agent teams to parallelize code reviews across security, performance, architecture, and more with consolidated reports; debug complex bugs via competing hypotheses, evidence gathering, and root cause ranking; develop features through task decomposition, file ownership, dependency management, git branching, and integration monitoring.
Equip AI with persistent memory by mining projects and conversations into a searchable 'palace' vector store. Run a local MCP server to query context via RAG tools, with auto-save hooks maintaining fresh indexes and guided setup for quick integration.
Delegate architecture, implementation, optimization, and debugging of complex applications to specialized AI agents expert in Python/Django/FastAPI, TypeScript/React/Next.js/Angular/Vue, Go, Rust, Java/Spring Boot, PHP/Laravel/Symfony, C#/.NET, mobile (Flutter/React Native/Swift/Kotlin), Elixir/Rails, SQL, and DevOps tools.
Build production-ready data engineering stacks: Airflow DAGs for orchestration, dbt models for transformations, scalable pipelines with Spark on cloud warehouses like BigQuery and Snowflake, Kafka streaming, optimized embeddings for RAG, and vector databases like Pinecone, Weaviate, and pgvector.
Delegate complex data engineering, ML, and AI workflows to specialized sub-agents that design scalable pipelines, build and optimize models, architect LLM systems, tune databases for performance, and deploy production infrastructure across clouds.
Deploy specialized research subagents to analyze markets, benchmark competitors, forecast trends, validate project ideas, collect and clean data from web/files/APIs, review scientific literature, and generate actionable insights and strategies.
Orchestrate swarms of 74+ specialized AI agents locally via stdio MCP servers with WASM acceleration for 2.8-4x speedups, or connect to a cloud platform, managing tasks with 40-150+ tools for GitHub automation, TDD, code review, performance optimization, and enterprise workflows using the SPARC methodology.
Empower Claude Code to handle business analyst workflows: design KPI frameworks and dashboards for sales/marketing/product, build 3-5 year startup financial models with cohort revenue and scenario analysis, calculate TAM/SAM/SOM market sizes, and optimize seed-to-Series A metrics using Python, SQL, Snowflake, and BigQuery.
Orchestrate swarms of specialized AI agents to automate end-to-end software development: plan features, implement code with Rails/Python/TS patterns, conduct multi-perspective reviews for architecture/security/performance, resolve todos/PR feedback in parallel, run browser/iOS tests, sync Figma designs, generate docs/videos, and ship PRs.
Orchestrate creative AI image generation workflows: search a curated gallery of 1,300+ designs for inspiration, craft batch prompts for parallel variations and concepts, auto-enhance short prompts, and generate images via the MeiGen server with ComfyUI or OpenAI-compatible APIs.
Upgrade Claude AI integrations by migrating code, prompts, and API calls from Sonnet 4.0/4.5 or Opus 4.1 to Opus 4.5, automatically updating model strings across Anthropic, AWS Bedrock, GCP Vertex AI, and Azure AI Studio platforms.
Run AI-powered code reviews, adversarial audits, and task delegation to OpenAI Codex on local git repos using CLI commands. Launch background jobs for investigations or fixes, monitor status in tables, retrieve structured outputs with verdicts, findings, next steps, and follow-ups. Handles job lifecycle via hooks and agents for seamless offloading when Claude gets stuck.
Orchestrate multi-agent teams for complex AI-driven projects: decompose tasks, match capabilities, coordinate workflows, manage shared context and errors, distribute workloads, monitor performance with Prometheus and OpenTelemetry, and synthesize insights from interactions. Integrates PowerShell, .NET, and Azure ops via specialist subagents.
Build production LLM applications using expert strategies for context window management via summarization, trimming, routing, and caching; RAG pipelines with chunking, embeddings, vector stores, and agents; observability via Langfuse tracing and evaluations; and retrieval optimization workflows.
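As a minimal sketch of the context-trimming idea this entry describes (the 4-characters-per-token heuristic and message format are illustrative assumptions, not the plugin's actual implementation):

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Drop the oldest non-system messages until the history fits the budget."""
    kept = list(messages)
    def total(msgs):
        return sum(approx_tokens(m["content"]) for m in msgs)
    while total(kept) > budget:
        # Drop the earliest message that is not the system prompt.
        for i, m in enumerate(kept):
            if m["role"] != "system":
                del kept[i]
                break
        else:
            break  # only system messages left; nothing more to trim
    return kept

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "First question, " * 50},
    {"role": "assistant", "content": "First answer, " * 50},
    {"role": "user", "content": "Latest question?"},
]
trimmed = trim_history(history, budget=120)
```

Production plugins layer summarization on top of this, replacing dropped turns with a compressed summary rather than discarding them outright.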
Develop full-stack AI apps on Azure in C#, Python, Node.js: access OpenAI models and embeddings via .NET/Python SDKs, implement RAG with Azure AI Search vector/hybrid queries, build Durable Functions for serverless orchestration, authenticate via azure-identity, monitor Node apps with OpenTelemetry to Azure Monitor, deploy containers to Azure Container Apps using azd CLI and Bicep.
Architect production-grade autonomous AI agents using bundled skills: design tool-using multi-agents with ReAct planning and safety, build stateful LangGraph systems with persistence and human-in-the-loop, implement optimized RAG pipelines, engineer prompts for reliability, evaluate via benchmarks and monitoring, and set up MCP servers for LLM-tool interactions.
Scaffold new Claude Agent SDK apps in TypeScript or Python by interactively gathering requirements, installing dependencies, and configuring projects. Verify apps post-creation or changes for SDK best practices, code quality, security, type safety, documentation, and deployment readiness.
Generate Product Requirements Documents (PRDs) interactively by answering questions on feature goals, users, and scope, structuring them into user stories with acceptance criteria and non-goals, then convert to prd.json format for autonomous execution by the Ralph agent system.
Add persistent memory to Claude Code tasks and AI apps via Mem0: retrieve relevant past decisions, strategies, and session states on new tasks; store user data for personalization; enable semantic search across long-term memories using Python/TS SDKs, hooks, and MCP tools.
Author animated video compositions in HTML using GSAP, Tailwind, Three.js, Lottie, Web Animations API; add AI-generated TTS voiceovers, Whisper captions, audio-reactive visuals; preprocess media; preview, lint, and render via CLI; capture websites or migrate Remotion projects to video.
Invoke MiniMax AI skills to scaffold React/Next.js frontends, fullstack apps with Node/Python/Go backends, Flutter/React Native/Android/iOS mobile projects; generate/edit DOCX/PDF/PPTX/XLSX files; produce GIF stickers, shaders, music playlists/videos; analyze images via CLI workflows.
Pull version-specific documentation and code examples from source repositories directly into your LLM context using Upstash's Context7 MCP server. Enable AI context persistence via vector storage, embeddings management, and semantic retrieval for precise, up-to-date assistance during development.
Embed GitHub Copilot's agentic runtime into Python, TypeScript, Go, or .NET apps to build custom AI agents with tools, sessions, streaming responses, and MCP servers.
Solve IMO, Putnam, USAMO, and AIME competition math problems using pure reasoning enhanced by adversarial verification that detects self-check errors missed by standard methods. Obtain calibrated confidence scores and PDF outputs for verified solutions.
Manage GitHub repositories directly from Claude Code by creating issues, handling pull requests, reviewing code, searching repos, and accessing the full GitHub API via natural language.
Automate end-to-end Hugging Face ML workflows: train and fine-tune language/vision models on Jobs GPUs with TRL/Unsloth/PyTorch, build Gradio demos, run JS/TS inference, manage repos/datasets via CLI, query leaderboards, perform local evals, explore datasets, launch GGUF servers, and publish papers.
Fetch targeted Python code examples from pysheeet cheat sheets covering syntax, concurrency, networking, databases, ML/LLM, and HPC for instant reference during debugging, interviews, or optimization. Enforce 'The Art of Readable Code' rules—like short functions, clear naming, and Pythonic idioms—to write and refactor readable code in real-time.
Spawn parallel AI subagents in isolated git worktrees to compete on tasks like code optimization, refactoring, test writing, or bug fixing. Evaluate results using pytest metrics or LLM judging on git diffs, rank agents, and merge the top performer into your base branch.
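The worktree-isolation idea above can be sketched in plain git commands (these are not the plugin's own commands; branch and directory names are illustrative). Each competing subagent gets its own branch checked out in its own directory, so parallel edits never collide:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m base

# One isolated worktree per competing agent, each on a fresh branch.
for agent in agent-1 agent-2; do
    git worktree add -q "$repo/wt-$agent" -b "$agent"
done
git worktree list   # main checkout plus one worktree per agent
```

After evaluation, merging the winner back is an ordinary `git merge <branch>` on the base checkout, followed by `git worktree remove` for cleanup.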
Direct AI coding agents to create or update promptfoo evaluation suites with configs, prompts, tests, deterministic assertions, and provider setups following best practices. Streamline LLM eval coverage, regression debugging, and new eval matrix generation in JavaScript or Python projects using OpenAI or Anthropic models.
Launch GPU/TPU clusters, training jobs, and inference servers across 25+ clouds using SkyPilot. Deploy to Kubernetes pods and Slurm jobs; debug YAML configs and optimize costs in your AI workflow.
Build AI agents that generate and interact with React UI components using Tambo: auto-integrate into existing React/Next.js/Vite apps by detecting stack, installing packages, wiring providers, and adding chat UI; or CLI-scaffold new generative UI apps with starter components and schemas.
Audit and optimize paid ad campaigns across Google, Meta, YouTube, LinkedIn, TikTok, Microsoft, and Apple Ads with 250+ AI checks, weighted health scores, prioritized action plans, and parallel agents. Generate AI creatives, campaign briefs, budgets, A/B tests, PPC math, brand profiles, and PDF reports from ad data.
Orchestrate multi-LLM agents (Claude, Gemini, Codex, Ollama) and workflows for end-to-end software development: generate PRDs/specs, design UIs/architectures/databases, code with TDD/debugging, perform reviews/audits/tests, manage DevOps/infra, and automate deliveries via 98 commands/agents.
Perform product market research workflows: generate user personas, behavioral segments, and customer journey maps from surveys, CSVs, or feedback; conduct competitive landscape analysis with competitor profiles and differentiation maps; run sentiment analysis on reviews for insights and recommendations; estimate TAM/SAM/SOM with growth projections; output markdown reports.
Run natural-language searches across enterprise tools like Slack, Jira, Notion, Asana, Gmail, Microsoft 365, and docs. Decompose intents into sub-queries, search sources in parallel, and synthesize coherent answers with citations, confidence scores, and digests of activity.
Invoke /deploy to initiate the TensorZero deployment workflow, which prompts a plugin upgrade before unlocking full deployment capabilities for AI/ML applications.
Generate production-ready GPT-Image-2 prompts by selecting from a library of visual styles, industrial templates, categories, and scene tags. Input image intents to match templates and build structured prompts with subject, composition, visuals, and constraints for high-quality image generation.
Execute a production-grade academic research pipeline in Claude Code: deep research, outline generation, paper drafting, literature reviews, peer review simulation, revisions with response letters, citation checks, venue disclosures, format conversions (LaTeX, DOCX, PDF, Markdown), and finalization for scholarly publishing.
Leverage Common Room's product usage, engagement, and intent signals as a GTM copilot to research accounts and contacts, generate call prep briefs with talking points and objections, draft personalized email/LinkedIn/call outreach, build targeted prospect lists, produce weekly meeting briefings, and create strategic account plans.
Delegate full-stack development workflows to Claude via 213 specialized agents, commands, and skills: refactor code, generate tests/deployments/Dockerfiles/K8s manifests, audit security/performance, document APIs/onboarding, orchestrate Git ops, and apply patterns across JS/TS/Python/Rust/Go/Java stacks.
Connect Claude to biomedical databases like PubMed, bioRxiv, ChEMBL, and Open Targets for literature searches and target discovery. Run single-cell RNA-seq QC with scanpy, scvi-tools workflows for integration and multiomics, nf-core pipelines for sequencing analysis, and convert lab files to structured JSON/CSV for preclinical R&D acceleration.
Discover brand materials from Notion, Slack, Confluence, Gong, Figma, and more; distill them into structured, LLM-ready guidelines; and enforce a consistent voice across AI-generated sales and marketing content such as emails, proposals, and posts via generation, validation, and refinement.
Transform academic papers, articles, and concepts into structured Org-mode notes, visual PNG infographics/sketchnotes, critical essays, Q&A chains, paper lineages, word breakdowns, roundtable debates, and city research reports. Manage and sync Claude skills with maps and Git pushes.
Automate AI/ML academic research pipelines in Obsidian vaults: bootstrap project KBs, ingest papers from Zotero/arXiv/web into Sources/, synthesize literature reviews/gaps/method taxonomies, analyze experiments with stats/figures/reports, draft NeurIPS/ICML papers/rebuttals, manage note lifecycles/registry/index with git workflows.
Persist memory across AI coding agent sessions by capturing tool usage and insights, compressing via LLM, and injecting relevant past context into future interactions. Recall session history, search observations, and forget specific data via natural language commands.
Perform AI-powered code reviews on GitHub and GitLab pull requests by connecting to Greptile API. View and resolve review comments directly within Claude Code. Query indexed repositories for code search, codebase Q&A, and context retrieval to accelerate development workflows.
Fine-tune LLMs with the Tinker API by diagnosing training issues like slow steps, hanging sessions, vLLM mismatches, runtime errors, and deployment problems; conduct post-training research replicating papers via SFT, RL, DPO, and distillation experiments with run monitoring, hyperparameter tuning, and log analysis.
Build interactive conversational AI apps using Google Gemini APIs for multi-turn chats, multimodal processing, function calling, structured outputs, and real-time bidirectional audio/video/text streaming via WebSockets. Integrate with Gemini models, Gemma, and Vertex AI using Python, JavaScript/TypeScript, Go, Java, and C# SDKs.
Crystallize vague requirements into precise YAML Seed specs via Socratic interviews and ambiguity scoring, then execute workflows, run 3-stage evaluations (mechanical/semantic/consensus), detect goal drift, and iteratively self-improve through evolutionary loops with Git integration.
Orchestrate multi-agent AI systems with AI SDK v5 for task decomposition, handoffs, routing, and coordination across OpenAI, Anthropic, and Google providers. Use commands to initialize projects, generate specialized agents with custom prompts and tools, test workflows with metrics, and deploy orchestrator agents for complex task handling in TypeScript.
Convert a single domain description into a multi-agent team architecture for Claude Code, selecting from six patterns such as Pipeline, Fan-out/Fan-in, Supervisor, and Hierarchical Delegation. Generates specialist agents and domain-specific skills to automate project setup, architecture design, expansion, audits, and maintenance.
Run 10 AI agents to fully automate Obsidian vault management: triage Gmail/Hey emails and inbox notes, extract deadlines from Google Calendar, transcribe audio into structured notes, audit and defragment vault structure, generate weekly agendas, evolve the knowledge graph, and handle multilingual inputs.
Build and configure neural network architectures like CNNs and RNNs for ML tasks such as image classification and text generation. Generate PyTorch code with validation and error handling, get metrics and insights, save artifacts, and produce documentation.
Integrate Perplexity Sonar API for AI-powered web search with verifiable citations into Node.js/Python apps. Handle full lifecycle workflows: auth setup, error debugging, rate limiting, caching optimization, monitoring, security guardrails, CI/CD testing, and scalable deployments to Vercel/Docker.
Generate AI videos from text prompts or images using Kling AI API in Python. Build scalable production pipelines with async Redis queues, batch processing, rate limiting, webhooks, monitoring, cost controls, content filters, security audits, cloud storage uploads, and CI/CD integration.
Interpret Culture Index survey results from PDF or JSON to analyze individual and team behavioral profiles, detect burnout, assess team composition like gas/brake/glue dynamics, compare profiles, predict traits from interviews, and generate recommendations for hiring, coaching, onboarding, and conflict mediation.
Generate plots, charts, and graphs from data via natural language requests—AI analyzes datasets, selects optimal visualization types, produces validated Python code, delivers performance metrics and insights, saves artifacts, and creates documentation.
Generate and run Python code to analyze images via computer vision, performing object detection, classification, and segmentation. Handles validation, errors, performance metrics, saves outputs as artifacts, and adds documentation. Trigger with 'analyze image' prompts or process-vision command.
Automate training and optimization of ML models for classification and regression on datasets: analyze data, select/configure algorithms, cross-validate, evaluate metrics, generate Python code using scikit-learn/PyTorch/TensorFlow/XGBoost, and save artifacts.
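A hedged sketch of the train/validate loop this kind of plugin generates, using scikit-learn only (the entry also targets PyTorch/TensorFlow/XGBoost); the dataset and model choice here are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)
# Cross-validate on the training split before committing to a final fit.
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
model.fit(X_train, y_train)
test_acc = accuracy_score(y_test, model.predict(X_test))
```

Generated pipelines typically wrap this in algorithm selection and hyperparameter search, then persist the fitted model as an artifact.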
Automate long-form webnovel creation: initialize projects interactively with genre/characters/worldbuilding/outlines, generate beat sheets/chapters (2000+ words), extract entities/relationships to SQLite indexes, visualize status/entity graphs in a read-only dashboard, recover interrupted workflows, and validate chapters via agents for inconsistencies, pacing, out-of-character (OOC) moments, reader pull, and quality reports.
Generate, edit, and inpaint images via GPT Image 2 CLI skill, using a reference prompt gallery to match styles for UI mockups, diagrams, posters, research figures, anime, and Chinese typography workflows.
Accelerate Atomic Agents app development through a guided 7-phase workflow: delegate schema design, agent and tool creation, architecture planning, codebase analysis, and code review to specialized AI sub-agents for scalable multi-agent LLM systems.
Build and orchestrate advanced Claude Code agentic workflows by creating meta-prompts, subagents, hooks, MCP servers, slash commands, and skills; execute hierarchical plans, run autonomous coding loops, apply expert debugging and productivity frameworks like 5 Whys or Eisenhower Matrix, and audit components for compliance and quality.
Generate production-ready Google Cloud code examples, starter kits, and templates for AI agents and apps from official ADK, Genkit, and Vertex AI sources. Adapt to Python, TypeScript, or Go with security, monitoring, Firebase, and Terraform IaC integration.
Set up OpenRAG locally by assessing your environment, generating requirements and Docker/uvx configs, and verifying services at localhost:3000 and :5001/docs. Then integrate its SDK into Python or JavaScript/TypeScript apps via pip/npm/uv/yarn, configure env vars/API keys, and implement chat/search clients with code examples.
Run end-to-end YouTube content strategy workflows: research competitors via channel scraping and analysis, generate tiered video ideas with validation, produce structured briefs and detailed outlines including demo prep, craft CTR-optimized titles and thumbnail concepts.
Build, debug, optimize, secure, and deploy FireCrawl web scraping pipelines for LLM/RAG data ingestion: scrape/crawl sites to markdown/JSON, extract structured data, handle rate limits/errors, add monitoring/observability, scale with backoff/caching, and integrate into Node/Python apps from dev to production.
Automate machine learning feature engineering by generating and executing validated Python code to create interactions, scale data, encode categoricals, select features via importance analysis, compute metrics, save artifacts, and generate documentation.
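The scaling and categorical-encoding steps named above can be sketched with scikit-learn; the column names and data here are hypothetical, not from the plugin:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": [22, 35, 58, 41, 29, 63],
    "income": [28_000, 52_000, 91_000, 60_000, 33_000, 87_000],
    "plan": ["free", "pro", "pro", "team", "free", "team"],
})

pre = ColumnTransformer(
    [
        ("scale", StandardScaler(), ["age", "income"]),  # z-score numerics
        ("onehot", OneHotEncoder(), ["plan"]),           # expand categorical
    ],
    sparse_threshold=0,  # always return a dense array for simplicity
)
features = pre.fit_transform(df)  # 2 scaled columns + 3 one-hot columns
```

Importance-based selection would then fit a model on `features` and drop columns whose importances fall below a threshold.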
Automate full Databricks lakehouse lifecycle: build Delta Lake ETL pipelines with medallion architecture and Auto Loader, engineer ML workflows via MLflow and Feature Store, deploy jobs/pipelines with Asset Bundles and GitHub Actions CI/CD, secure via Unity Catalog RBAC, optimize costs/performance, troubleshoot errors, and monitor with system tables.
Validate AI/ML models and datasets for bias, fairness, and ethics using Fairlearn, AIF360 metrics, four-fifths rule, and severity classification. Generate production-ready AI/ML code with integrated validation, error handling, metrics, artifacts, and documentation tailored to modern frameworks.
Generate and execute automated Python pipelines for data cleaning, transformation, validation, and ETL in ML workflows. Analyze context to produce AI/ML code with built-in validation, error handling, performance metrics, saved artifacts, and documentation.
Set up Ollama for local AI model inference on macOS, Linux, or Docker with automated installation, hardware-optimized model selection, GPU configuration, verification, model pulls, API testing, and client integration via Python, Node.js, or REST for zero-cost, privacy-first LLM workflows.
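Once Ollama is running, client integration is a plain HTTP POST to its default local endpoint; a stdlib-only sketch (the model name is illustrative, and the actual call is only made when you invoke `generate`):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    # stream=False asks Ollama for a single JSON response instead of chunks.
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def generate(model: str, prompt: str) -> str:
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

req = build_request("llama3.2", "Why is the sky blue?")
```

Because everything stays on localhost, no API key is needed, which is what makes the zero-cost, privacy-first claim hold.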
Optimize LLM prompts for OpenAI and Anthropic by automatically detecting redundancy, simplifying instructions, and rewriting to reduce token usage, lower costs, and improve performance.
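A toy version of the redundancy-detection pass described here, in pure Python; real optimizers use model-specific tokenizers and semantic rewriting, so the filler-word list and character heuristics below are assumptions:

```python
import re

# Hypothetical filler words a redundancy pass might strip.
FILLER = re.compile(
    r"\b(please|kindly|very|really|just|basically|in order)\b\s*", re.IGNORECASE
)

def compress_prompt(prompt: str) -> str:
    """Naive pass: drop filler words, collapse whitespace,
    and deduplicate repeated instruction lines."""
    seen, lines = set(), []
    for line in prompt.splitlines():
        line = re.sub(r"\s+", " ", FILLER.sub("", line)).strip()
        if line and line.lower() not in seen:
            seen.add(line.lower())
            lines.append(line)
    return "\n".join(lines)

verbose = """Please respond in JSON.
Please respond in JSON.
You should really just keep answers very short."""
tight = compress_prompt(verbose)
```

Fewer characters generally means fewer tokens, which is where the cost and latency savings come from.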
Build and deploy Cloudflare Workers apps, stateful AI agents with Agents SDK, Durable Objects for coordination, secure sandboxes, and transactional emails. Scaffold projects with Wrangler configs, review code for best practices, audit web performance via Core Web Vitals, and connect to official MCP servers for DNS, docs, bindings, builds, and observability.
Build .NET applications that provision Azure infrastructure (databases, caches, bots), integrate AI services (agents, OpenAI, voice, document intelligence, search), manage events/messaging (Event Grid, Hubs, Service Bus), authenticate via Entra ID, and handle Key Vault cryptography using official SDKs and ARM clients.
Build production-grade LLM gateways with OpenRouter: route requests across 400+ models by task or criteria, chain fallbacks for reliability, cache responses to cut costs/latency, monitor usage/costs/latency, redact PII for compliance, and benchmark performance using Python OpenAI SDK wrappers.
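The fallback-chaining pattern named above, reduced to its pure logic (the real plugin wraps the OpenAI SDK pointed at OpenRouter's endpoint; `flaky_send` and the model names here are stand-ins):

```python
def call_with_fallback(models, send):
    """Try each model in order; return the first successful response.

    `send(model)` should raise on failure (rate limit, outage, etc.).
    """
    errors = {}
    for model in models:
        try:
            return model, send(model)
        except Exception as exc:  # in production, catch the SDK's error types
            errors[model] = exc
    raise RuntimeError(f"all models failed: {errors}")

def flaky_send(model):
    # Simulate the primary model being down.
    if model == "primary/model":
        raise TimeoutError("simulated outage")
    return f"answer from {model}"

used, answer = call_with_fallback(["primary/model", "backup/model"], flaky_send)
```

Caching and PII redaction slot in naturally around `send`: check the cache before calling, scrub the prompt before sending, and record usage after returning.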
Track real-time prices for cryptocurrencies, stocks, forex, and commodities from multiple exchange APIs and WebSockets. Set watchlists and alerts, export data to CSV/JSON, analyze trends with technical indicators, volume, patterns, and generate trading signals, forecasts, and recommendations.
Aggregate cryptocurrency news from 50+ RSS sources with coin, category, and time filters, relevance scoring, AI sentiment analysis, trend detection, and market impact scoring to monitor market updates, announcements, and gain real-time trading insights.
Forecast future values from historical time series data using ARIMA and Prophet models, including trend, seasonality, and autocorrelation analysis with confidence intervals. Generate validated AI/ML code for forecasting tasks complete with error handling, performance metrics, insights, artifacts, and documentation.
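To make the trend-plus-seasonality decomposition concrete, here is a NumPy-only toy: a linear trend fit plus an averaged seasonal pattern, extrapolated forward. This is a simplified stand-in for what ARIMA and Prophet do properly (no autocorrelation modeling, no confidence intervals):

```python
import numpy as np

def trend_seasonal_forecast(series, period, horizon):
    """Fit a linear trend, average the detrended seasonal pattern,
    and extrapolate both into the future."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)          # linear trend
    detrended = series - (slope * t + intercept)
    # Mean deviation at each phase of the cycle.
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    future_t = np.arange(len(series), len(series) + horizon)
    return slope * future_t + intercept + seasonal[future_t % period]

# Synthetic data: upward trend plus a cycle of period 4.
t = np.arange(24)
series = 10 + 0.5 * t + 3 * np.sin(2 * np.pi * t / 4)
forecast = trend_seasonal_forecast(series, period=4, horizon=4)
```

ARIMA additionally models how each value depends on recent values and errors, which is why it outperforms this sketch on autocorrelated data.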
Access Z.AI's multimodal AI capabilities directly from your CLI to analyze images and videos with vision models, perform OCR and UI-to-code conversion, search the web, extract pages as markdown, and explore GitHub repositories deeply. Requires Z_AI_API_KEY for seamless terminal-based workflows.
Evaluate machine learning models using metrics like accuracy, precision, recall, and F1-score to perform performance analysis, validation, model comparison, and optimization. Generate production-ready AI/ML code that includes validation, error handling, performance metrics, saved artifacts, and documentation.
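The four metrics named above have standard scikit-learn implementations; a minimal sketch on toy labels (the data is illustrative):

```python
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    precision_score,
    recall_score,
)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # one false negative, one false positive

report = {
    "accuracy": accuracy_score(y_true, y_pred),    # fraction correct overall
    "precision": precision_score(y_true, y_pred),  # of predicted 1s, how many real
    "recall": recall_score(y_true, y_pred),        # of real 1s, how many caught
    "f1": f1_score(y_true, y_pred),                # harmonic mean of the two
}
```

Model comparison then reduces to computing this report per candidate on a held-out split and ranking on the metric that matches the business cost of errors.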
Generate importable n8n workflow JSON files from natural language descriptions, designing complex automations with loops, branching, error handling, retries, notifications, AI content pipelines, lead qualification, document processing, and OpenAI/JavaScript integrations.
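For orientation, a hedged sketch of the importable-workflow shape such a generator emits; the node `type` strings, `typeVersion` values, and parameters below are illustrative and should be checked against your n8n version:

```python
import json

workflow = {
    "name": "Webhook to notification",
    "nodes": [
        {
            "name": "Webhook",
            "type": "n8n-nodes-base.webhook",
            "typeVersion": 1,
            "position": [250, 300],
            "parameters": {"path": "lead-intake", "httpMethod": "POST"},
        },
        {
            "name": "Notify",
            "type": "n8n-nodes-base.httpRequest",
            "typeVersion": 1,
            "position": [500, 300],
            "parameters": {"url": "https://example.com/notify", "method": "POST"},
        },
    ],
    # connections map a source node -> output slot -> list of target nodes
    "connections": {
        "Webhook": {"main": [[{"node": "Notify", "type": "main", "index": 0}]]}
    },
}
exported = json.dumps(workflow, indent=2)
```

Branching and loops are expressed by fanning the `connections` entries out to multiple targets or back to earlier nodes.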
Integrate OpenEvidence medical AI for clinical decision support in healthcare SaaS: run evidence-based queries, drug interactions, DeepConsult syntheses; automate auth setup, rate limiting, caching, RBAC, monitoring, CI/CD pipelines, Docker deploys, and production checklists in TypeScript/Node.js/Python projects.
Build, debug, test, deploy, secure, monitor, and optimize production LangChain applications using 24 Claude Code skills that generate LCEL chains and agents, implement RAG pipelines, set up CI/CD and FastAPI/Express APIs, handle migrations/upgrades, apply cost/performance tuning, and enforce security best practices.
Analyze cryptocurrency market sentiment by pulling data from social media, news, on-chain metrics, derivatives, whale activity, and the Fear & Greed Index to generate 0-100 mood scores, weighted insights, and predictions for the overall market or specific coins such as BTC.
Build, initialize, architect, implement, debug, and deploy production-ready Firebase Genkit AI workflows with RAG, multi-step flows, tool calling, Gemini/Vertex AI integration, and OpenTelemetry monitoring in Node.js/TypeScript, Python, or Go to Firebase Functions or Cloud Run.
Provision Vertex AI infrastructure on GCP using Terraform modules to deploy Model Garden models, Gemini endpoints, vector search indices, ML pipelines, encryption, auto-scaling, and IAM roles for Agent Engine.
Design, implement, and deploy secure Firebase apps with Vertex AI Gemini integration in Cloud Functions for authentication, Firestore, storage, and hosting.
Build and deploy production-ready containerized multi-agent systems on Google Cloud using ADK templates. Scaffold Python projects with code structures, CI/CD pipelines, and deployment guides targeting Cloud Run, GKE, or Vertex AI Agent Engine. Get recommendations for architectures, tool contracts, and scaffolds.
Create and validate production-grade Claude Code skills per AgentSkills.io 2026 spec and 100-point rubric, plus Anthropic agent .md files matching 16-field 2026 standard. Audit existing skills/agents or build custom subagents for orchestrators and marketplace submission.