Guides factory/registry patterns for creating registrable ML components (datasets, models) with @register_* decorators in Python projects.
This skill defines the standard code architecture for machine learning projects based on the template structure. When modifying or extending code, follow these patterns to maintain consistency.
The project follows a modular, extensible architecture with clear separation of concerns. Each module (data, model, trainer, analysis) is independently organized using factory and registry patterns for maximum flexibility.
Use this skill when:
- Adding or modifying components registered with `@register_dataset` or `@register_model`
- Working on `__init__.py` factory wiring

Do not use this skill when:
- The task does not touch the registry/factory architecture

Key indicator: if the task does not require a `@register_*` decorator or a Factory pattern, skip this skill.
Each module uses a factory to create instances dynamically:

```python
# Example from data_module/dataset/__init__.py
from typing import Dict

DATASET_FACTORY: Dict = {}

def DatasetFactory(data_name: str):
    dataset = DATASET_FACTORY.get(data_name, None)
    if dataset is None:
        print(f"{data_name} dataset is not implemented, using the simple dataset")
        dataset = DATASET_FACTORY.get('simple')
    return dataset
```
For detailed guidance, refer to references/factory_pattern.md.
Components register themselves via decorators:

```python
# Example from data_module/dataset/simple_dataset.py
@register_dataset("simple")
class SimpleDataset(Dataset):
    def __init__(self, data):
        self.data = data
```
For detailed guidance, refer to references/registry_pattern.md.
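To make the pattern concrete, here is a minimal, self-contained sketch of how a `register_dataset` decorator could populate `DATASET_FACTORY` and how the factory resolves names with a fallback. The exact decorator implementation is an assumption; the project's real version lives in `data_module/dataset/__init__.py`.

```python
from typing import Callable, Dict

DATASET_FACTORY: Dict[str, type] = {}

def register_dataset(name: str) -> Callable[[type], type]:
    """Class decorator that records a dataset class under `name`."""
    def _wrap(cls: type) -> type:
        DATASET_FACTORY[name] = cls
        return cls
    return _wrap

@register_dataset("simple")
class SimpleDataset:
    def __init__(self, data):
        self.data = data

def DatasetFactory(data_name: str):
    # Unknown names fall back to the 'simple' dataset.
    return DATASET_FACTORY.get(data_name, DATASET_FACTORY["simple"])
```

Registration happens as a side effect of importing the module that defines the class, which is exactly what the auto-import mechanism below relies on.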
Modules automatically discover and import submodules:

```python
# Example from data_module/dataset/__init__.py
import os

models_dir = os.path.dirname(__file__)
import_modules(models_dir, "src.data_module.dataset")
```
For detailed guidance, refer to references/auto_import.md.
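A plausible implementation of such an `import_modules` helper, sketched with the standard library (the project's actual helper in `src/utils` may differ):

```python
import importlib
import pkgutil

def import_modules(directory: str, package: str) -> None:
    """Import every module found in `directory` so that module-level
    @register_* decorators execute and populate the factories."""
    for _, module_name, _ in pkgutil.iter_modules([directory]):
        importlib.import_module(f"{package}.{module_name}")
```

Because each submodule is imported eagerly, every `@register_*` decorator has run by the time a factory is first called.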
```
project/
├── run/
│   ├── pipeline/              # Main workflow scripts
│   │   ├── training/          # Training pipelines
│   │   ├── prepare_data/      # Data preparation pipelines
│   │   └── analysis/          # Analysis pipelines
│   └── conf/                  # Hydra configuration files
│       ├── training/          # Training configs
│       ├── dataset/           # Dataset configs
│       ├── model/             # Model configs
│       ├── prepare_data/      # Data prep configs
│       └── analysis/          # Analysis configs
│
├── src/
│   ├── data_module/           # Data processing module
│   │   ├── dataset/           # Dataset implementations
│   │   ├── augmentation/      # Data augmentation
│   │   ├── collate_fn/        # Collate functions
│   │   ├── compute_metrics/   # Metrics computation
│   │   ├── prepare_data/      # Data preparation logic
│   │   ├── data_func/         # Data utility functions
│   │   └── utils.py           # Module-specific utilities
│   │
│   ├── model_module/          # Model implementations
│   │   ├── brain_decoder/     # Brain decoder models
│   │   └── model/             # Alternative model location
│   │
│   ├── trainer_module/        # Training logic
│   ├── analysis_module/       # Analysis and evaluation
│   ├── llm/                   # LLM-related code
│   └── utils/                 # Shared utilities
│
├── data/
│   ├── raw/                   # Original, immutable data
│   ├── processed/             # Cleaned, transformed data
│   └── external/              # Third-party data
│
├── outputs/
│   ├── logs/                  # Training and evaluation logs
│   ├── checkpoints/           # Model checkpoints
│   ├── tables/                # Result tables
│   └── figures/               # Plots and visualizations
│
├── pyproject.toml             # Project configuration
├── uv.lock                    # Dependency lock file
├── TODO.md                    # Task tracking
├── README.md                  # Project documentation
└── .gitignore                 # Git ignore rules
```
For detailed directory structure with file descriptions, refer to references/structure.md.
When adding a new dataset:
- Place it in `src/data_module/dataset/`
- Use the `@register_dataset("name")` decorator
- Inherit from `torch.utils.data.Dataset`
- Implement `__init__`, `__len__`, and `__getitem__`

```python
from typing import Dict

import torch
from torch.utils.data import Dataset

from src.data_module.dataset import register_dataset

@register_dataset("custom")
class CustomDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i: int) -> Dict[str, torch.Tensor]:
        return self.data[i]
```
CRITICAL: Models use a config-driven pattern.

When adding a new model:
- Place it in `src/model_module/model/` or the appropriate module subdirectory
- Use the `@register_model('ModelName')` decorator
- `__init__` accepts ONLY a `cfg` parameter - all hyperparameters come from config
- `forward()` returns a dict: `{"loss": loss, "labels": labels, "logits": logits}`
- Branch on `self.training` to separate training and inference logic

```python
import torch.nn as nn

from src.model_module.brain_decoder import register_model

@register_model('MyModel')
class MyModel(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.cfg = cfg
        self.task = cfg.dataset.task
        # ALL parameters come from cfg
        self.hidden_dim = cfg.model.hidden_dim
        self.output_dim = cfg.dataset.target_size[cfg.dataset.task]

    def forward(self, x, labels=None, **kwargs):
        if self.training:
            ...  # Training logic: compute loss and logits
        else:
            ...  # Inference logic
        return {"loss": loss, "labels": labels, "logits": logits}
```
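The cfg-only constructor contract can be illustrated without torch. In this sketch, `SimpleNamespace` stands in for the Hydra config object, and the class name and field values are purely illustrative:

```python
from types import SimpleNamespace

class ConfigDrivenModel:
    """Torch-free stand-in: every hyperparameter is read from cfg,
    never passed as an individual constructor argument."""
    def __init__(self, cfg):
        self.cfg = cfg
        self.task = cfg.dataset.task
        self.hidden_dim = cfg.model.hidden_dim
        self.output_dim = cfg.dataset.target_size[cfg.dataset.task]

# Illustrative config values; Hydra would build this object from YAML.
cfg = SimpleNamespace(
    dataset=SimpleNamespace(task="classify", target_size={"classify": 10}),
    model=SimpleNamespace(hidden_dim=128),
)
model = ConfigDrivenModel(cfg)
```

Keeping the constructor signature fixed at `cfg` means any model can be swapped in via configuration alone, with no changes to the training pipeline.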
When adding augmentation:
- Place it in `src/data_module/augmentation/`

For comprehensive style guidelines, refer to references/code_style.md.
Key principles:
- `__init__.py` files contain the factory/registry logic

The project uses Hydra for configuration management:
- Configuration files live under `run/conf/`, organized by module

For detailed information, consult:
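A hedged sketch of what a per-module Hydra config under `run/conf/` might look like; the group names and fields below are illustrative assumptions, not the project's actual schema (see `examples/config_example.yaml` for a real one):

```yaml
# run/conf/training/example.yaml (illustrative)
defaults:
  - dataset: simple      # selects run/conf/dataset/simple.yaml
  - model: my_model      # selects run/conf/model/my_model.yaml

model:
  hidden_dim: 128
dataset:
  task: classify
```

The names chosen in `defaults` are the same strings passed to the `@register_*` decorators, which is what ties the Hydra configs to the factory lookup.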
- references/structure.md - Detailed directory structure with file descriptions
- references/factory_pattern.md - Factory pattern in-depth explanation
- references/registry_pattern.md - Registry pattern in-depth explanation
- references/auto_import.md - Auto-import pattern in-depth explanation
- references/code_style.md - Comprehensive code style guidelines

Working examples in `examples/`:
- examples/custom_dataset.py - Custom dataset implementation
- examples/custom_model.py - Custom model implementation
- examples/augmentation_example.py - Data augmentation example
- examples/config_example.yaml - Configuration file example
- examples/pipeline_example.sh - Pipeline script example