transfer-learning-adapter
Fine-tunes pre-trained ML models such as ResNet, BERT, and GPT on new datasets via transfer learning, generating Python code with validation and metrics.
Install:
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin transfer-learning-adapter
Related skills:
- Guides use of the Hugging Face Transformers library: loading pre-trained models, running inference pipelines for NLP, vision, and audio tasks (text generation, classification, QA), and fine-tuning on custom datasets via pipelines and custom configs.
- Guides LLM fine-tuning with LoRA/QLoRA/PEFT, including dataset preparation, hyperparameter tuning, training, evaluation, and deployment.
Adapt pre-trained models (ResNet, BERT, GPT) to new tasks and datasets through fine-tuning, layer freezing, and domain-specific optimization.
This skill streamlines the process of adapting pre-trained machine learning models via transfer learning. It enables you to quickly fine-tune models for specific tasks, saving time and resources compared to training from scratch. It handles the complexities of model adaptation, data validation, and performance optimization.
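The core transfer-learning move the skill automates is simple: freeze the pre-trained weights so their learned features are preserved, then attach a fresh task-specific head. A minimal PyTorch sketch, assuming torchvision's ResNet50 and a hypothetical five-class task:

```python
import torch.nn as nn
from torchvision import models

# Load ImageNet weights, then freeze every pre-trained parameter.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head sized for the new task.
# Fresh modules require gradients by default, so only this head trains.
num_classes = 5  # hypothetical label count for the new dataset
model.fc = nn.Linear(model.fc.in_features, num_classes)
```

A common refinement is to later unfreeze the deepest backbone layers and continue training at a much lower learning rate.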
This skill activates when you need to:
- Fine-tune a pre-trained vision or language model (ResNet, BERT, GPT) on a new dataset
- Adapt an existing model to a new task instead of training from scratch
- Generate Python fine-tuning code with data validation and performance metrics
User request: "Fine-tune a ResNet50 model to classify images of different types of flowers."
The skill will:
- Load an ImageNet pre-trained ResNet50 and freeze its backbone layers
- Replace the classification head to match the flower classes
- Generate Python training code with data validation
- Report training and validation metrics
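A sketch of the kind of Python the skill might generate for this request; the dataset paths, folder-per-class layout, and hyperparameters are illustrative assumptions, not actual skill output:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ImageNet preprocessing so inputs match the pre-trained weights.
tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Assumed layout: flowers/train/<class_name>/*.jpg, same for flowers/val.
train_ds = datasets.ImageFolder("flowers/train", transform=tfm)
val_ds = datasets.ImageFolder("flowers/val", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
val_dl = DataLoader(val_ds, batch_size=32)

# Freeze the backbone; train only a new head sized to the flower classes.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))
model.to(device)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):  # short illustrative schedule
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    # Validation accuracy after each epoch.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_dl:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    print(f"epoch {epoch}: val accuracy {correct / total:.3f}")
```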
User request: "Adapt a BERT model to perform sentiment analysis on customer reviews."
The skill will:
- Load a pre-trained BERT model with a sequence-classification head
- Validate and tokenize the customer-review dataset
- Generate Python fine-tuning code
- Report sentiment-classification metrics on held-out reviews
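A comparable sketch for the BERT case using the Hugging Face Transformers Trainer; the IMDB dataset stands in for the user's customer reviews, and the model name, subset sizes, and hyperparameters are assumptions:

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # binary sentiment head

ds = load_dataset("imdb")  # stand-in for the customer-review data

def tokenize(batch):
    # Fixed-length padding keeps the default data collator simple.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

ds = ds.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-sentiment",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=ds["train"].shuffle(seed=42).select(range(2000)),  # small slice for illustration
    eval_dataset=ds["test"].shuffle(seed=42).select(range(500)),
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # includes eval_loss and accuracy
```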
This skill can be integrated with other plugins for data loading, model evaluation, and deployment: for example, a data-loading plugin can fetch datasets, and a deployment plugin can push the adapted model to serving infrastructure.
The skill produces structured output for each task: the generated Python fine-tuning code, data-validation results, and training and evaluation metrics.