By nishide-dev
Comprehensive Machine Learning research plugin for Claude Code. Provides project scaffolding, experiment management, training support, and debugging tools for PyTorch Lightning, Hydra, PyTorch Geometric, and Hugging Face Transformers.
npx claudepluginhub nishide-dev/claude-code-ml-research

Hydra configuration specialist for generating, validating, and managing ML experiment configs. Use when creating new configs, setting up experiments, or troubleshooting configuration issues.
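The core idea behind Hydra-style configuration — composing a base config with dotted `key=value` overrides from the command line — can be sketched in plain Python. This is an illustration of the concept only, not Hydra's actual API (Hydra layers defaults lists, interpolation, and typed schemas on top):

```python
from copy import deepcopy

def apply_override(config: dict, dotted_key: str, value):
    """Set a nested key like 'optimizer.lr' on a config dict."""
    node = config
    *parents, leaf = dotted_key.split(".")
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = value

def compose(base: dict, overrides: list) -> dict:
    """Merge 'key=value' overrides into a copy of the base config,
    mimicking Hydra-style command-line composition."""
    config = deepcopy(base)
    for override in overrides:
        key, _, raw = override.partition("=")
        try:
            value = float(raw) if "." in raw else int(raw)
        except ValueError:
            value = raw  # keep non-numeric overrides as strings
        apply_override(config, key, value)
    return config

base = {"model": {"name": "resnet18"}, "optimizer": {"lr": 0.001}}
cfg = compose(base, ["optimizer.lr=0.01", "model.name=resnet50"])
print(cfg["optimizer"]["lr"])   # 0.01
print(cfg["model"]["name"])     # resnet50
```

Because the base config is deep-copied, every composed run gets an independent config — the property that makes sweeps over overrides safe.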
PyTorch Geometric expert for implementing Graph Neural Networks, handling graph data, and optimizing GNN training. Use when working with graph-structured data, GNNs, or PyG-specific issues.
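The heart of every GNN layer is a message-passing step: each node aggregates features from its neighbors along directed edges. A minimal mean-aggregation version in plain Python shows the mechanic that PyG's `MessagePassing` generalizes with learned transforms (the `(src, dst)` pairs mirror PyG's `edge_index` convention):

```python
def mean_aggregate(features, edges):
    """One message-passing step: each node's new feature is the mean
    of its in-neighbors' features. Isolated nodes keep their feature."""
    incoming = {i: [] for i in range(len(features))}
    for src, dst in edges:
        incoming[dst].append(features[src])
    out = []
    for i, feat in enumerate(features):
        msgs = incoming[i]
        if msgs:
            # elementwise mean over all incoming messages
            out.append([sum(vals) / len(msgs) for vals in zip(*msgs)])
        else:
            out.append(list(feat))
    return out

# Triangle graph with edges in both directions
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
edges = [(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)]
print(mean_aggregate(feats, edges)[0])  # [0.5, 1.0]
```

Stacking k such steps lets information propagate k hops, which is why deep GNNs see larger neighborhoods.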
Machine Learning system architecture specialist for designing ML pipelines, model architectures, and scalable training systems. Use PROACTIVELY when planning new ML projects, designing model architectures, or optimizing training pipelines.
Optimize PyTorch models for inference through quantization, pruning, ONNX/TorchScript conversion, and deployment optimization. Use when converting research models to production, reducing model size, improving inference speed, or preparing models for edge deployment.
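Quantization shrinks models by mapping float weights to low-bit integers. The arithmetic behind the simplest scheme — symmetric per-tensor int8, where the largest absolute weight maps to 127 — fits in a few lines of plain Python (this illustrates the concept, not torch's quantization API):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: one scale for the
    whole tensor, chosen so the largest |weight| maps to 127."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 1.27]
q, scale = quantize_int8(w)
print(q)  # [50, -127, 0, 127]
```

The round trip `dequantize(quantize_int8(w))` loses at most half a quantization step per weight — the accuracy/size trade-off that post-training quantization tuning is about.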
PyTorch implementation expert for writing efficient, correct, and optimized PyTorch code. Use when implementing models, custom layers, loss functions, or optimizing PyTorch performance.
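A classic correctness trap when implementing loss functions by hand is overflow in the softmax: `exp(1000.0)` is infinite in float arithmetic. The standard fix is to subtract the max logit before exponentiating, shown here in plain Python (torch's built-in `cross_entropy` applies the same log-sum-exp trick internally):

```python
import math

def cross_entropy(logits, target):
    """Softmax cross-entropy for one sample, computed stably by
    subtracting the max logit before exponentiating."""
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[target]

# Huge logits would overflow a naive exp(); the stable form is fine.
print(round(cross_entropy([1000.0, 1000.0], 0), 4))  # 0.6931 (= ln 2)
```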
ML training troubleshooting specialist for diagnosing and fixing training issues like NaN loss, poor convergence, memory errors, and performance problems. Use when training fails or performs unexpectedly.
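When loss goes NaN, the first diagnostic step is usually locating which tensor first turned non-finite. A minimal checker over named value collections sketches the idea (for real models, `torch.autograd.set_detect_anomaly(True)` is the heavyweight equivalent; the dict-of-lists shape here is a stand-in for named parameters or gradients):

```python
import math

def first_nonfinite(name_to_values):
    """Return (name, index) of the first NaN/inf found across named
    value collections, or None if everything is finite."""
    for name, values in name_to_values.items():
        for i, v in enumerate(values):
            if not math.isfinite(v):
                return name, i
    return None

grads = {"layer1.weight": [0.1, -0.2],
         "layer2.weight": [float("nan"), 0.0]}
print(first_nonfinite(grads))  # ('layer2.weight', 0)
```

Running such a check after the backward pass narrows a NaN loss down to the offending layer before you start bisecting learning rates or data.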
Hugging Face Transformers and NLP expert specializing in LLM fine-tuning, PEFT (LoRA/QLoRA), tokenization, HF Datasets integration, and distributed training for transformers. Use when working with NLP tasks, LLMs, or HF-specific issues.
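The arithmetic behind LoRA is a low-rank additive update: the merged weight is W' = W + (α/r)·B·A, where A is r×in and B is out×r with r much smaller than either dimension. Tiny pure-Python matrices make the merge concrete (PEFT wires this into attention projections and handles the bookkeeping for you):

```python
def matmul(A, B):
    """Plain nested-list matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_merge(W, A, B, alpha, r):
    """Merged weight W + (alpha/r) * B @ A, the LoRA update with
    rank-r factors A (r x in_dim) and B (out_dim x r)."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(wrow, drow)]
            for wrow, drow in zip(W, BA)]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight (2x2)
A = [[1.0, 1.0]]              # rank-1 factors: only 4 trainable
B = [[2.0], [0.0]]            # numbers instead of 4 full weights
print(lora_merge(W, A, B, alpha=1, r=1))  # [[3.0, 2.0], [0.0, 1.0]]
```

Because only A and B train while W stays frozen, the trainable parameter count scales with r·(in+out) rather than in·out — the reason QLoRA can fine-tune large models on a single GPU.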
Building professional CLIs with Typer and Rich - type-safe argument parsing, progress bars, model visualization, Hydra integration, RichHandler logging, and multi-process handling for ML workflows
Generate and manage Hydra configuration files for machine learning experiments. Use when creating new configs (model, data, trainer, logger, experiment, sweep), organizing config hierarchies, or setting up hyperparameter sweeps with Optuna.
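An experiment config in this layout typically lives under `configs/experiment/` and overrides entries in the defaults list; a sketch of the usual shape, with hypothetical group and option names:

```yaml
# configs/experiment/resnet_cifar.yaml (illustrative path and names)
# @package _global_
defaults:
  - override /model: resnet
  - override /data: cifar10

model:
  num_layers: 50
trainer:
  max_epochs: 100
optimizer:
  lr: 0.01
```

Selecting it with `experiment=resnet_cifar` on the command line swaps in the model and data groups and applies the scalar overrides in one shot.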
Create and manage data loading, preprocessing, and augmentation pipelines (DataModule, transforms, data loaders). Use when implementing DataModules, setting up data loaders, or optimizing data pipelines for computer vision, NLP, or graph ML tasks.
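Stripped of workers, pinning, and collation, a data loader is an index shuffler plus a fixed-size batcher. A minimal generator shows the contract that `DataLoader` and a `DataModule`'s `*_dataloader` hooks fulfill:

```python
import random

def batches(dataset, batch_size, shuffle=True, seed=0):
    """Yield lists of samples of size batch_size (the last batch may
    be smaller), optionally shuffling indices for each pass."""
    indices = list(range(len(dataset)))
    if shuffle:
        random.Random(seed).shuffle(indices)  # seeded: reproducible epochs
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]

data = list(range(10))
out = list(batches(data, 4, shuffle=False))
print([len(b) for b in out])  # [4, 4, 2]
```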
Systematic debugging guide for machine learning training issues with PyTorch Lightning.
Systematic experiment tracking, comparison, and analysis for machine learning research.
Format Python code with ruff formatter and optionally fix auto-fixable linting issues. Use when formatting code, preparing code for commit, or ensuring consistent code style across the project.
Comprehensive guide for Hydra configuration management, hierarchical configs, experiment management, Optuna integration, and Lightning integration patterns
Comprehensive guide for PyTorch Lightning - LightningModule, Trainer, distributed training, PyTorch 2.0 torch.compile integration, Lightning Fabric, and production best practices
Run comprehensive code quality checks with ruff (format, lint) and ty (type checking). Use when checking code quality, fixing linting errors, or ensuring code follows best practices before commits or PRs.
Export trained PyTorch models to various formats (ONNX, TorchScript, TensorRT) and upload to model registries (Hugging Face Hub, MLflow). Use when deploying models, sharing trained weights, or preparing for production inference.
Profile ML training to identify bottlenecks in data loading, model computation, and memory usage.
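Before reaching for a full profiler, accumulating wall-clock time per named stage of the training step often locates the bottleneck (a data-loading-bound loop is the most common finding). A minimal sketch of that accumulator:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class StageTimer:
    """Accumulate wall-clock time per named stage of a training step."""
    def __init__(self):
        self.totals = defaultdict(float)

    @contextmanager
    def stage(self, name):
        start = time.perf_counter()
        try:
            yield
        finally:
            self.totals[name] += time.perf_counter() - start

    def bottleneck(self):
        """Name of the stage with the largest accumulated time."""
        return max(self.totals, key=self.totals.get)

timer = StageTimer()
for _ in range(3):
    with timer.stage("data"):
        time.sleep(0.02)   # stand-in for data loading
    with timer.stage("forward"):
        time.sleep(0.005)  # stand-in for model compute
print(timer.bottleneck())  # data
```

If "data" dominates, the fix is usually more loader workers or cached preprocessing rather than model changes.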
Initialize a new ML research project using the ML Research template with PyTorch Lightning, Hydra, and modern Python tooling. Use when starting a new ML project from scratch.
Complete guide for PyTorch Geometric (PyG) - graph neural networks, message passing, large-scale distributed graph learning, Lightning integration, and heterogeneous graphs
Setup development environment with modern Python tooling (uv/pixi), install dependencies, and configure development tools (ruff, ty, pytest). Use when setting up new ML projects, configuring environments, or installing dependencies.
Execute training runs with proper monitoring, checkpointing, and experiment tracking. Use when starting training, resuming training, debugging training issues, or setting up multi-GPU/distributed training with PyTorch Lightning and Hydra.
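Resuming a run correctly means restoring model weights, optimizer state, and the step counter together — losing any one of them silently changes the training trajectory. A minimal JSON sketch of that contract (Lightning restores all of this from a real `.ckpt` when you pass `ckpt_path` to `Trainer.fit`):

```python
import json
import os
import tempfile

def save_checkpoint(path, step, weights, optimizer_state):
    """Persist everything needed to resume exactly where training stopped."""
    with open(path, "w") as f:
        json.dump({"step": step, "weights": weights,
                   "optimizer_state": optimizer_state}, f)

def load_checkpoint(path):
    """Load a checkpoint, or return fresh-start state if none exists."""
    if not os.path.exists(path):
        return {"step": 0, "weights": None, "optimizer_state": None}
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "last.json")
save_checkpoint(path, step=120, weights=[0.5], optimizer_state={"lr": 0.01})
print(load_checkpoint(path)["step"])  # 120
```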
Hugging Face Transformers with PyTorch Lightning - LightningModule integration, distributed training (FSDP/DeepSpeed), PEFT (LoRA/QLoRA), data pipelines with HF Datasets, evaluation metrics, and common NLP tasks
Comprehensive validation of ML project structure, configurations, code quality, and training readiness. Use when setting up a new project, before training runs, or debugging configuration issues. Validates config loading, data pipeline, model architecture, and dependencies.
Complete guide for Weights & Biases (W&B) - experiment tracking, hyperparameter sweeps, artifact management, model registry, and PyTorch Lightning integration
Comprehensive guide for marimo - reactive Python notebooks as pure .py files, uv integration, AI-friendly architecture, reproducible data science workflows, and serverless deployment with WASM
Comprehensive guide for Pixi package manager - Python environment management, CUDA/GPU support, PyPI integration, Docker/Pixi-Pack deployment, and best practices for ML research
Comprehensive guide for building Python monorepos with uv workspaces - unified dependency resolution, shared lock files, editable installs, testing strategies, Docker optimization, and CI/CD patterns for managing multiple packages in a single repository
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques