38 plugins for Hugging Face development
Dynamically discover and activate 410+ production-ready skills across 33 domains for expert guidance in frontend, backend, infra, AI/ML, security, devops, and more. Invoke gateway discoverers, delegate to specialized agents for architecture or polyglot engineering, generate new skills, and get project-tailored recommendations via context analysis.
Implement, review, debug, and optimize features using 39 Apple Kit frameworks in Swift iOS apps, unlocking fast workflows for widgets, Live Activities, HealthKit queries, CloudKit sync, CarPlay UIs, RealityKit AR, AVKit video, CryptoKit security, PencilKit drawing, and ML inference without manual docs lookup.
Rapidly implement production-ready AI/ML features in apps: integrate LLMs with prompt engineering and response handling, build ML pipelines for recommendation systems, add computer vision for visual search, and enable intelligent automation using OpenAI, Anthropic, LangChain, Hugging Face, or Ollama.
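The prompt-engineering and response-handling pattern above can be sketched with the standard library alone; the template text and the expected JSON shape here are illustrative assumptions, not any specific plugin's format, and the actual model call (OpenAI, Anthropic, Ollama, etc.) is left out.

```python
import json

# Illustrative prompt template (an assumption, not a plugin-specific prompt).
PROMPT_TEMPLATE = (
    "You are a product-recommendation assistant.\n"
    'Given the user\'s recent activity, reply ONLY with JSON of the form\n'
    '{{"items": [...], "reason": "..."}}.\n\n'
    "Activity: {activity}"
)

def build_prompt(activity: str) -> str:
    """Fill the template with user context before sending it to the model."""
    return PROMPT_TEMPLATE.format(activity=activity)

def parse_response(raw: str) -> dict:
    """Defensive response handling: tolerate code fences, reject bad shapes."""
    text = raw.strip()
    text = text.removeprefix("```json").removeprefix("```").removesuffix("```").strip()
    data = json.loads(text)
    if not isinstance(data.get("items"), list):
        raise ValueError("model reply missing 'items' list")
    return data
```

The defensive parse matters in practice: models frequently wrap JSON in code fences or return a malformed structure, and failing loudly beats shipping a bad recommendation.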
Delegate end-to-end ML engineering workflows to specialized agents that construct data preparation and training pipelines with feature engineering and hyperparameter tuning, optimize inference through quantization, pruning, batching, and edge deployment, and manage MLOps for model versioning, monitoring, A/B testing, and production orchestration.
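One of the inference optimizations named above, quantization, reduces to simple arithmetic at its core. A minimal sketch of generic symmetric int8 weight quantization in pure Python, not the specific scheme any agent applies:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats into [-127, 127] with one scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return [x * scale for x in q]
```

Real deployments use per-channel scales, calibration data, and framework kernels, but the shrink-then-rescale idea is the same.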
Delegate complex image analysis to a vision expert agent that performs OCR with Tesseract or EasyOCR, detects barcodes and QR codes, processes documents, and optimizes workflows using GPT-4V, Claude Vision, or Mistral-OCR, applying advanced preprocessing and prompt engineering for accurate visual AI results.
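Barcode detection itself needs a vision library, but the downstream step, validating the decoded digit string, is pure arithmetic. A sketch of standard EAN-13 checksum validation (alternating 1/3 weights over the first 12 digits):

```python
def valid_ean13(code: str) -> bool:
    """Checksum-validate a decoded 13-digit EAN barcode string."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Odd positions weight 1, even positions weight 3 (1-indexed, left to right).
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    check = (10 - total % 10) % 10
    return check == digits[12]
```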
Run GGUF models locally with Mozilla Llamafile: launch OpenAI-compatible API servers configurable for GPU or CPU inference, integrate via SDKs, and handle installation, startup, and connection troubleshooting in offline setups.
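Because the Llamafile server speaks the OpenAI wire format, any HTTP client can query it; a minimal stdlib sketch, where the port (Llamafile's local default is 8080) and model name are assumptions about your setup:

```python
import json
import urllib.request

# Assumed local endpoint; adjust host/port to your llamafile invocation.
LLAMAFILE_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model") -> bytes:
    """Build an OpenAI-style chat-completion payload for the local server."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return json.dumps(body).encode("utf-8")

def ask(prompt: str) -> str:
    """POST the prompt to the running llamafile server (must be started first)."""
    req = urllib.request.Request(
        LLAMAFILE_URL,
        data=build_chat_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Since the format is OpenAI-compatible, the official `openai` SDK also works by pointing its `base_url` at the local server.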
Deploy OpenAI-compatible vLLM inference servers locally with hardware detection, via Docker images, or through Kubernetes YAML manifests with GPU support; then benchmark throughput, TTFT, TPOT, inter-token latency, and prefix caching using synthetic data, ShareGPT, or fixed prompts.
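The latency metrics above reduce to arithmetic over token arrival timestamps; a sketch of the definitions, independent of vLLM's own benchmarking tooling:

```python
def latency_metrics(request_start: float, token_times: list) -> dict:
    """Derive TTFT, TPOT, and mean inter-token latency from arrival timestamps.

    request_start: when the request was sent; token_times: when each output
    token arrived (seconds, same clock).
    """
    ttft = token_times[0] - request_start  # time to first token
    if len(token_times) > 1:
        itls = [b - a for a, b in zip(token_times, token_times[1:])]
        # Time per output token: decode span averaged over decode steps.
        tpot = (token_times[-1] - token_times[0]) / (len(token_times) - 1)
    else:
        itls, tpot = [], 0.0
    mean_itl = sum(itls) / len(itls) if itls else 0.0
    return {"ttft": ttft, "tpot": tpot, "mean_itl": mean_itl}
```

Throughput is then total generated tokens divided by wall-clock time across all concurrent requests.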
Automate SageMaker AI/ML workflows: prepare and validate datasets, fine-tune LLMs via SFT/DPO/RLVR on serverless jobs, evaluate with LLM-as-a-Judge, deploy models to endpoints/Bedrock, and diagnose/manage HyperPod clusters using generated notebooks, scripts, and AWS tools.
Invoke 24 elite skills in Claude Code to enforce disciplined engineering workflows: strict TDD for changes, step-by-step design and implementation plans, multi-agent task dispatching, domain expertise in ML/embedded/AI/frontend, git worktrees and branch management, root-cause debugging, rigorous code reviews, and context optimization for long sessions.
Install, run, diagnose, and upgrade local Kokoro TTS engine on Apple Silicon Macs: automate MLX-Audio setup with Hugging Face models, launch OpenAI-compatible HTTP server on localhost:8779, synthesize speech to WAV or stream via CLI, run health checks, fix real-time audio issues, and manage full lifecycle.
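With the server running on localhost:8779 (the port stated above), synthesis is one OpenAI-style request; a stdlib sketch where the model and voice names are assumptions about the local install:

```python
import json
import urllib.request

# Port comes from the plugin description; model/voice names are assumptions.
KOKORO_URL = "http://localhost:8779/v1/audio/speech"

def build_tts_request(text: str, voice: str = "af_heart") -> bytes:
    """OpenAI-style /v1/audio/speech payload for the local Kokoro server."""
    return json.dumps({
        "model": "kokoro",
        "input": text,
        "voice": voice,
        "response_format": "wav",
    }).encode("utf-8")

def synthesize_to_wav(text: str, path: str) -> None:
    """Send text to the running server and write the returned WAV bytes."""
    req = urllib.request.Request(
        KOKORO_URL,
        data=build_tts_request(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        with open(path, "wb") as f:
            f.write(resp.read())
```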
Supercharge AI coding agents for ML engineering: diagnose failures like OOM/NaN/crashes, verify code/configs against Hugging Face/PyTorch docs pre-training, generate grounded fine-tuning plans and next steps, maintain persistent experiment journals, deep-dive frameworks like vLLM/DeepSpeed, and optimize inference pipelines.
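The NaN-diagnosis step above is, at its simplest, a guard inside the training loop; a hypothetical sketch of the kind of check such an agent might insert (the heuristics and messages are illustrative, not the plugin's actual logic):

```python
import math

def check_loss(step: int, loss: float, history: list) -> str:
    """Classify a training-loss reading the way a failure-diagnosis pass might."""
    history.append(loss)
    if math.isnan(loss) or math.isinf(loss):
        return "fatal: NaN/Inf loss - check LR, fp16 overflow, or a bad batch"
    # Flag a short rising streak as an early divergence signal.
    if len(history) >= 3 and all(b > a for a, b in zip(history[-3:], history[-2:])):
        return "warning: loss rising for 2+ consecutive steps"
    return "ok"
```

A real diagnosis pass would also inspect gradient norms and GPU memory, but even this guard turns a silent NaN run into an immediate, explainable stop.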