28 plugins for Ollama development
Automate end-to-end AI/ML workflows on Hugging Face Hub using agent skills: discover top models from leaderboards, manage models/datasets/repos via CLI and APIs, validate datasets, fine-tune/train LLMs and vision models on cloud GPUs or locally with GGUF, run evaluations and inference, build Gradio web UIs, publish research papers, track metrics, and connect via MCP.
Rapidly implement production-ready AI/ML features in apps: integrate LLMs with prompt engineering and response handling, build ML pipelines for recommendation systems, add computer vision for visual search, and enable intelligent automation using OpenAI, Anthropic, LangChain, Hugging Face, or Ollama.
Delegate end-to-end ML engineering workflows to specialized agents that construct data preparation and training pipelines with feature engineering and hyperparameter tuning, optimize inference through quantization, pruning, batching, and edge deployment, and manage MLOps for model versioning, monitoring, A/B testing, and production orchestration.
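To make the inference-optimization side concrete, here is a minimal sketch of symmetric int8 quantization, one of the techniques listed above. All names are illustrative and not tied to any specific library or agent:

```python
# Symmetric int8 quantization: map floats onto [-127, 127] with one
# shared scale factor, then recover approximate floats on the way back.

def quantize_int8(weights):
    """Return (int8 values, scale) for a non-empty list of floats."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate floats from quantized values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
```

Real pipelines quantize per-channel tensors rather than flat lists, but the round-trip error bound (at most one quantization step per value) is the same idea.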
Guides developers through installing, configuring, and managing the OpenClaw AI gateway on Docker, Kubernetes, macOS, or Linux. Automates setup for 20+ messaging channels such as Slack, Telegram, Discord, and Signal; troubleshoots issues, hardens security, manages secrets, and configures multi-provider AI models.
Harden AI coding agent sessions for production use by enforcing structured behavioral modes like TDD and debugging, blocking dangerous commands with pre-tool guards, running continuous QA tests on file writes, scoring output quality with 4D metrics, generating handoff artifacts, and persisting session context memory.
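A pre-tool guard of the kind described above can be as simple as a deny-list check that runs before the agent executes a shell command. The patterns and function name below are assumptions for illustration, not the plugin's actual implementation:

```python
import re

# Illustrative deny-list; a real guard would be configurable.
DENY_PATTERNS = [
    r"\brm\s+-rf\s+/",           # recursive delete from the filesystem root
    r"\bgit\s+push\s+--force",   # force-push over shared history
    r"\bcurl\b.*\|\s*sh\b",      # piping a download straight into a shell
]

def guard_command(cmd: str) -> bool:
    """Return True if the command is allowed, False if it is blocked."""
    return not any(re.search(p, cmd) for p in DENY_PATTERNS)
```

The guard is deliberately conservative: it only needs to catch obviously destructive invocations before they reach the tool runner, with anything ambiguous escalated to the user.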
Index project documentation, codebases, and knowledge graphs for hybrid retrieval: BM25 keywords, semantic similarity, GraphRAG relationships, or fused multi-mode search. Retrieve cited chunks with scores to research dependencies, errors, and concepts in seconds using Ollama, OpenAI, or Anthropic.
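The "fused multi-mode search" mentioned above can be sketched with reciprocal rank fusion, a standard way to combine a keyword ranking and a semantic ranking. The document IDs and rankings below are made up; in practice they would come from a BM25 index and an embedding index:

```python
def rrf(rankings, k=60):
    """Fuse several ranked lists of doc IDs via reciprocal rank fusion."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranked = ["doc3", "doc1", "doc2"]      # keyword (BM25) ranking
semantic_ranked = ["doc3", "doc1", "doc4"]  # embedding-similarity ranking
fused = rrf([bm25_ranked, semantic_ranked])
```

Documents ranked highly by both modes float to the top; the constant `k` damps the influence of any single list's top hit.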
Orchestrate persistent Claude Code agents across WhatsApp, Telegram, Discord, Slack, and Gmail for message routing, triage, and SWE task management. Use slash commands to set up instances, add channels/integrations (Ollama tools, Whisper transcription, image vision, PDF reading), manage git updates/extensions, debug containers, and handle customizations.
Author test fixtures for @copilotkit/aimock to mock LLM responses, tool call sequences, errors, multi-turn loops, embeddings, and structured outputs, and to debug mismatches, enabling robust testing of AI applications with OpenAI, Anthropic, and Ollama providers.
Unify Python LLM API calls across 100+ providers like OpenAI, Anthropic, Ollama, and llamafile servers using OpenAI format, with built-in retries, fallbacks, exception handling, and cost tracking.
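The retries-and-fallbacks behavior described above follows a common pattern: try providers in order, retrying transient failures before moving on. This sketch uses stand-in provider functions rather than real API calls; the names and error type are assumptions for illustration:

```python
def call_with_fallback(providers, prompt, retries=2):
    """Try each provider in order, retrying transient failures."""
    last_err = None
    for provider in providers:
        for _attempt in range(retries):
            try:
                return provider(prompt)
            except RuntimeError as err:  # stand-in for a transient API error
                last_err = err
    raise last_err

def flaky_provider(prompt):
    raise RuntimeError("rate limited")  # always fails, like a throttled API

def stable_provider(prompt):
    return f"echo: {prompt}"  # always succeeds

result = call_with_fallback([flaky_provider, stable_provider], "hi")
```

A production unifier layers on top of this: exponential backoff between retries, distinguishing retryable errors (429, timeouts) from fatal ones (bad auth), and per-call cost accounting.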
Build provider-agnostic, type-safe streaming LLM chats with tools, agent loops, and multimodal support directly in React and Next.js apps using hooks like useChat, compatible with OpenAI, Anthropic, Gemini, Ollama.