Plugins listed here are tagged for this technology stack and auto-indexed from public GitHub repositories.
Automate end-to-end Hugging Face ML workflows: train and fine-tune language/vision models on Jobs GPUs with TRL/Unsloth/PyTorch, build Gradio demos, run JS/TS inference, manage repos/datasets via CLI, query leaderboards, perform local evals, explore datasets, launch GGUF servers, and publish papers.
Fetch targeted Python code examples from pysheeet cheat sheets covering syntax, concurrency, networking, databases, ML/LLM, and HPC for instant reference during debugging, interviews, or optimization. Enforce 'The Art of Readable Code' rules—like short functions, clear naming, and Pythonic idioms—to write and refactor readable code in real-time.
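As a small illustration of the kind of 'Readable Code' refactor such rules produce (a hypothetical before/after sketch, not taken from pysheeet):

```python
# Before: dense, index-driven, unclear intent.
def f(d):
    r = []
    for k in d:
        if d[k] > 0:
            r.append(k)
    return r

# After: clear name, Pythonic iteration, expresses intent directly.
def positive_keys(balances):
    """Return the keys whose associated value is strictly positive."""
    return [name for name, amount in balances.items() if amount > 0]
```

Both functions return the same result; the second tells the reader what it does from the name and docstring alone.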
Configure Claude Code agents using battle-tested principles and safety hooks to orchestrate pixel-art studios for sprite/animation generation with multi-agent quality reviews, engineer diffusion model pipelines for image editing, perform parallel code audits across competencies, design responsive UIs/mobile apps, and produce animated videos.
Train multimodal understanding (Qwen2.5VL, InternVL, GLM4V) and generative (Wan, HunyuanVideo, CogVideoX, FLUX) models on Huawei Ascend NPUs using MindSpeed-MM. Set up base environments with CANN/PyTorch/torch_npu, convert weights via the mm-convert CLI, and run end-to-end fine-tuning pipelines with Megatron, FSDP2, or Accelerate+DeepSpeed.
Autonomously draft complete USPTO, EPO, and PCT patent applications from invention disclosures: search 100M+ patents via BigQuery for prior art and patentability, analyze claims/specifications for MPEP/35 USC 112/EPC Art 84 compliance, generate Graphviz diagrams, validate formalities, and output markdown+SVG filing packages. Self-contained, no MCP server needed.
Analyze survey microdata with weighted pandas DataFrames to compute Gini coefficients, poverty rates, quantiles, and inequality metrics. Impute missing values using ML methods like random forest and XGBoost from donor data. Calibrate weights to population targets with L0 regularization. Enhance datasets like CPS ASEC and run PolicyEngine microsimulations for tax-benefit policy impacts across populations.
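The weighted Gini computation described above can be sketched in a few lines of plain Python (a minimal illustration under assumed names, not the plugin's actual implementation, which operates on weighted pandas DataFrames):

```python
def weighted_gini(values, weights):
    """Gini coefficient of `values`, where each observation carries a survey weight."""
    pairs = sorted(zip(values, weights))   # sort observations by value
    total_w = sum(w for _, w in pairs)     # total population weight W
    cum_income = 0.0                       # running weighted income S_i
    area = 0.0                             # accumulates w_i * (S_{i-1} + S_i)
    for value, w in pairs:
        prev = cum_income
        cum_income += value * w
        area += w * (prev + cum_income)
    # G = 1 - sum_i w_i (S_{i-1} + S_i) / (W * S_n)
    return 1.0 - area / (total_w * cum_income)
```

With uniform weights this reduces to the standard sample Gini; for example, `weighted_gini([0, 0, 0, 1], [1, 1, 1, 1])` gives 0.75.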
Automate end-to-end ML performance investigations: research SOTA papers and architectures, generate phased plans, judge experimental methodologies, profile bottlenecks, run metric-improvement campaigns with atomic git commits, auto-rollback on regressions, and leverage specialist agents for data lifecycle and deep paper analysis.
Guardrail your AI/ML research workflow with an AI collaborator that searches literature using query variations, analyzes codebases and logs, designs minimal falsification experiments, records predictions, and audits bugs.
Orchestrate end-to-end LLM inference pipelines on multi-chip GPUs and NPUs using FlagOS agent skills: automate stack installation, environment verification, model migration, kernel generation and optimization, performance benchmarking, and structured reporting.
Bootstrap Claude Code with 17 specialized agents, skills, and hooks to audit/evolve .claude/ configs, engineer/refactor Python code via TDD, profile/optimize ML workloads, generate docs/tests, design systems, diagnose issues, and manage workflows professionally.
Streamline end-to-end data science and ML workflows: frame business problems into ML tasks, preprocess and validate data with quality checks, perform EDA on diverse formats, design and execute experiments with hyperparameter tuning via Optuna and interpretability via SHAP, audit reproducibility and leakage, evaluate model performance and readiness for deployment, generate model cards, and extract structured learnings into docs.
Run endless autonomous optimization loops on code targets like LLM training loss, test speed, bundle size, or build time: the plugin edits files, commits via git, executes short experiments or benchmarks, measures metrics, keeps improvements, reverts failures, and iterates until manually stopped.
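The keep/revert measurement loop above can be sketched generically (a toy in-memory hill climb with hypothetical names; the real plugin edits files and uses git commits/reverts rather than in-memory state):

```python
import random

def hill_climb(state, measure, mutate, undo, steps=200, seed=0):
    """Apply candidate edits; keep each one if the metric improves, revert otherwise."""
    rng = random.Random(seed)
    best = measure(state)
    for _ in range(steps):
        change = mutate(state, rng)   # apply one candidate edit in place
        score = measure(state)
        if score < best:              # lower is better (e.g. loss, build time)
            best = score              # keep: the plugin would commit via git here
        else:
            undo(state, change)       # revert: the plugin would roll back the edit
    return best

# Toy target: drive a vector toward zero, minimizing the sum of squares.
def measure(v):
    return sum(x * x for x in v)

def mutate(v, rng):
    i = rng.randrange(len(v))
    delta = rng.uniform(-1.0, 1.0)
    v[i] += delta
    return (i, delta)

def undo(v, change):
    i, delta = change
    v[i] -= delta
```

Because failed edits are always undone, the measured metric never gets worse across iterations.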
Equip AI agents with 9 engineering skills to architect scalable backends and distributed systems, secure apps and pipelines, prototype MVPs, build mobile and ML apps, guide frontend development, automate DevOps infrastructure, and plan senior-level software delivery.
Prefix terminal commands with 'gpu' to run ML training, LLM inference, ComfyUI workflows, and media processing on remote NVIDIA GPUs (A100, H100, RTX 4090) from your Mac. Automatically provisions pods, syncs files bidirectionally, streams logs, debugs interactively, selects optimal GPUs, and optimizes costs.
Delegate expert-level AI/ML workflows to specialized agents: engineer optimized prompts with evaluation and A/B testing, architect scalable LLM systems with RAG/LoRA fine-tuning, build production NLP pipelines for NER/classification/QA, and deploy optimized models via vLLM/Triton/Docker/K8s for reliability, performance, and cost control.
Train and run inference on machine learning models using Hugging Face Transformers and PEFT with PyTorch on cloud GPUs from Modal, Lambda Labs, or RunPod—no local GPU required.
Write idiomatic MLX code for machine learning on Apple Silicon, implementing arrays, neural networks, training loops, lazy evaluation, unified memory, Metal GPU acceleration, and PyTorch migrations.
Claude Code plugins tagged for PyTorch development. Browse commands, agents, skills, and more.