By vanman2024
Machine learning training and inference pipeline using cloud GPUs (Modal, Lambda Labs, RunPod) with the Hugging Face ecosystem - no local GPU required
npx claudepluginhub vanman2024/ai-dev-marketplace --plugin ml-training
GPU selection, PEFT configuration, batch size tuning, and cost estimation for optimal training efficiency
Dataset preparation, Supabase integration, data loading, and data validation
Advanced preprocessing, tokenization, augmentation, and data quality checks
Multi-GPU training with FSDP, DeepSpeed, and Accelerate - handles sharding strategies, gradient accumulation, and performance optimization
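As a rough illustration of the batch-size bookkeeping involved (the numbers below are illustrative assumptions, not plugin defaults), the effective batch size produced by per-device batch size, gradient accumulation, and GPU count can be sketched as:

```python
# Sketch: effective batch size under multi-GPU training with gradient
# accumulation. Values are illustrative, not plugin defaults.

def effective_batch_size(per_device_batch: int,
                         grad_accum_steps: int,
                         num_gpus: int) -> int:
    """Batch size seen by the optimizer at each update step."""
    return per_device_batch * grad_accum_steps * num_gpus

# e.g. 4 samples per GPU, 8 accumulation steps, 4 GPUs -> 128
print(effective_batch_size(4, 8, 4))  # -> 128
```

Raising gradient accumulation is the usual way to reach a large effective batch when per-GPU memory caps the per-device batch size.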
BigQuery ML for SQL-based machine learning training - model creation, Vertex AI integration, remote model deployment, and cost estimation
Vertex AI custom training jobs for deep learning - GPU/TPU selection, PyTorch/TensorFlow/Hugging Face integration, distributed training setup
Model deployment for serverless inference, auto-scaling configuration, and endpoint creation
Integrate ML pipeline with FastAPI, Next.js, and Supabase for full-stack ML applications
Lambda Labs cloud instances, API integration, and cost-optimized GPU infrastructure for ML training workloads
High-level ML pipeline design, framework selection, platform recommendation, and project initialization
End-to-end testing of ML pipeline including data validation, training tests, and inference tests
Modal platform deployment, GPU configuration, and serverless ML endpoint setup with cost optimization
Parameter-efficient fine-tuning with LoRA/QLoRA/prefix-tuning
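The parameter savings behind LoRA-style adapters can be shown with back-of-the-envelope arithmetic (the hidden size and rank below are common illustrative values, not settings the plugin prescribes):

```python
# Sketch: trainable-parameter count of a LoRA adapter vs. full
# fine-tuning for a single d_in x d_out weight matrix. A rank-r LoRA
# adapter trains two low-rank factors, A (d_in x r) and B (r x d_out),
# instead of the full matrix.

def full_params(d_in: int, d_out: int) -> int:
    return d_in * d_out

def lora_params(d_in: int, d_out: int, r: int) -> int:
    return r * (d_in + d_out)

d = 4096   # hidden size, illustrative
r = 8      # LoRA rank, a common starting point
print(full_params(d, d))       # -> 16777216
print(lora_params(d, d, r))    # -> 65536 (~0.4% of the full matrix)
```

This ratio is why LoRA/QLoRA fits fine-tuning of large models onto single rented GPUs.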
RunPod serverless and on-demand GPU configuration, FlashBoot setup, and deployment
Training configuration, hyperparameter tuning, framework setup, and TrainingArguments creation
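As a hedged sketch of what a generated configuration might look like, a Hugging Face `TrainingArguments` object can be assembled like this (the hyperparameter values are illustrative starting points, not plugin recommendations):

```python
# Sketch: a minimal Hugging Face TrainingArguments configuration.
# All values are illustrative assumptions, not plugin defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./checkpoints",        # where checkpoints are written
    per_device_train_batch_size=8,     # tune alongside GPU memory
    gradient_accumulation_steps=4,     # raises effective batch size
    learning_rate=2e-5,                # common fine-tuning default
    num_train_epochs=3,
    logging_steps=50,
    save_steps=500,
    fp16=True,                         # mixed precision on NVIDIA GPUs
)
```

The resulting object is passed straight to a `Trainer`; batch size and accumulation are typically tuned together against the target GPU's memory.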
Monitor ML training runs, track metrics with TensorBoard and Weights & Biases, implement failure recovery strategies
Platform-specific configuration templates and GPU selection guidance for Modal, Lambda Labs, and RunPod cloud platforms.
Cost estimation scripts and tools for calculating GPU hours, training costs, and inference pricing across Modal, Lambda Labs, and RunPod platforms. Use when estimating ML training costs, comparing platform pricing, calculating GPU hours, budgeting for ML projects, or when user mentions cost estimation, pricing comparison, GPU budgeting, training cost analysis, or inference cost optimization.
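The core arithmetic behind such an estimate is simple. A minimal sketch (the hourly rates below are placeholders for illustration, not the scripts' actual pricing tables):

```python
# Sketch: estimating a training run's cost from GPU count, hours, and
# an hourly rate. Rates are placeholders; real prices vary by platform.

HOURLY_RATES_USD = {        # placeholder rates, not real pricing
    "A10G": 1.10,
    "A100-40GB": 2.50,
    "H100": 4.00,
}

def training_cost(gpu: str, num_gpus: int, hours: float) -> float:
    """Total run cost in USD for `num_gpus` GPUs over `hours`."""
    return round(HOURLY_RATES_USD[gpu] * num_gpus * hours, 2)

# e.g. 2 x A100-40GB for 6.5 hours -> 2.50 * 2 * 6.5 = 32.50
print(training_cost("A100-40GB", 2, 6.5))  # -> 32.5
```

Comparing platforms then reduces to swapping in each provider's rate table for the same GPU-hour budget.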
Provides three production-ready ML training examples (sentiment classification, text generation, RedAI trade classifier) with complete training scripts, deployment configs, and datasets. Use when user needs example projects, reference implementations, starter templates, or wants to see working code for sentiment analysis, text generation, or financial trade classification.
Google Cloud Platform configuration templates for BigQuery ML and Vertex AI training with authentication setup, GPU/TPU configs, and cost estimation tools. Use when setting up GCP ML training, configuring BigQuery ML models, deploying Vertex AI training jobs, estimating GCP costs, configuring cloud authentication, selecting GPUs/TPUs for training, or when user mentions BigQuery ML, Vertex AI, GCP training, cloud ML setup, TPU training, or Google Cloud costs.
Integration templates for FastAPI endpoints, Next.js UI components, and Supabase schemas for ML model deployment. Use when deploying ML models, creating inference APIs, building ML prediction UIs, designing ML database schemas, integrating trained models with applications, or when user mentions FastAPI ML endpoints, prediction forms, model serving, ML API deployment, inference integration, or production ML deployment.
Training monitoring dashboard setup with TensorBoard and Weights & Biases (WandB) including real-time metrics tracking, experiment comparison, hyperparameter visualization, and integration patterns. Use when setting up training monitoring, tracking experiments, visualizing metrics, comparing model runs, or when user mentions TensorBoard, WandB, training metrics, experiment tracking, or monitoring dashboard.
Templates and patterns for common ML training scenarios including text classification, text generation, fine-tuning, and PEFT/LoRA. Provides ready-to-use training configurations, dataset preparation scripts, and complete training pipelines. Use when building ML training pipelines, fine-tuning models, implementing classification or generation tasks, setting up PEFT/LoRA training, or when user mentions model training, fine-tuning, classification, generation, or parameter-efficient tuning.
Data validation and pipeline testing utilities for ML training projects. Validates datasets, model checkpoints, training pipelines, and dependencies. Use when validating training data, checking model outputs, testing ML pipelines, verifying dependencies, debugging training failures, or ensuring data quality before training.
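To give a flavor of this kind of pre-training check, here is a minimal sketch assuming a list-of-dicts dataset with `text` and `label` fields (the field names and labels are assumptions, not the utilities' actual schema):

```python
# Sketch: pre-training dataset validation. Flags the issues that most
# often break a run: missing fields, empty text, unknown labels.

def validate_examples(examples, allowed_labels):
    """Return a list of (index, problem) tuples; empty means clean."""
    problems = []
    for i, ex in enumerate(examples):
        if "text" not in ex or "label" not in ex:
            problems.append((i, "missing field"))
        elif not str(ex["text"]).strip():
            problems.append((i, "empty text"))
        elif ex["label"] not in allowed_labels:
            problems.append((i, "unknown label"))
    return problems

data = [
    {"text": "great product", "label": "positive"},
    {"text": "   ", "label": "negative"},    # whitespace-only text
    {"text": "meh", "label": "netural"},     # typo'd label
]
print(validate_examples(data, {"positive", "negative", "neutral"}))
# -> [(1, 'empty text'), (2, 'unknown label')]
```

Running checks like this before paying for GPU time catches data problems that would otherwise surface mid-run.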