Declarative hyperparameter management skills for ML/AI experiments
params-proto is a lightweight, decorator-based library for defining configurations in Python. Write your parameters once with type hints and inline documentation, and get automatic CLI parsing, validation, and help generation.
Note: This is v3 with a completely redesigned API. For the v2 API, see params-proto-v2.
Stop fighting with argparse and click. With params-proto, your configuration is your documentation:
```python
from params_proto import proto

@proto.cli
def train_mnist(
    batch_size: int = 128,  # Training batch size
    lr: float = 0.001,  # Learning rate
    epochs: int = 10,  # Number of training epochs
):
    """Train an MLP on MNIST dataset."""
    print(f"Training with lr={lr}, batch_size={batch_size}, epochs={epochs}")
    # Your training code here...

if __name__ == "__main__":
    train_mnist()
```
That's it! No argparse boilerplate, no manual help strings, no type conversion logic. Just pure Python functions with type hints and inline comments.
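For contrast, here is roughly the same CLI written by hand with the standard library's `argparse` (a sketch mirroring the flags and defaults from the example above):

```python
import argparse

# The same three parameters, spelled out manually: each one needs its
# flag name, type, default, and help string repeated by hand.
parser = argparse.ArgumentParser(description="Train an MLP on MNIST dataset.")
parser.add_argument("--batch-size", type=int, default=128, help="Training batch size")
parser.add_argument("--lr", type=float, default=0.001, help="Learning rate")
parser.add_argument("--epochs", type=int, default=10, help="Number of training epochs")

args = parser.parse_args(["--lr", "0.01", "--batch-size", "256"])
print(f"Training with lr={args.lr}, batch_size={args.batch_size}, epochs={args.epochs}")
```

With params-proto, all of that metadata comes for free from the type hints, defaults, and inline comments.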
Run it:
```shell
$ python train.py --help
usage: train.py [-h] [--batch-size INT] [--lr FLOAT] [--epochs INT]

Train an MLP on MNIST dataset.

options:
  -h, --help        show this help message and exit
  --batch-size INT  Training batch size (default: 128)
  --lr FLOAT        Learning rate (default: 0.001)
  --epochs INT      Number of training epochs (default: 10)

$ python train.py --lr 0.01 --batch-size 256
Training with lr=0.01, batch_size=256, epochs=10
```
Note: The actual terminal output includes beautiful ANSI colors! See the demo below or check the documentation for colorized examples.
Want to see the colorized help in action? Clone the repo and run the demo:
```shell
# Clone and setup
git clone https://github.com/geyang/params-proto.git
cd params-proto
uv sync

# See the colorized help (with bright blue types, bold cyan defaults, bold red required)
uv run python scratch/demo_v3.py --help

# Try running without required --seed (shows error)
uv run python scratch/demo_v3.py
# Error: the following arguments are required: --seed

# Run with required parameter (keyword syntax)
uv run python scratch/demo_v3.py --seed 42

# Or use positional syntax
uv run python scratch/demo_v3.py 42
```
Install the v3 release candidate from PyPI:

```shell
pip install params-proto==3.0.0-rc25
```
Define parameters using type-annotated functions:
```python
@proto.cli
def train(
    model: str = "resnet50",  # Model architecture
    dataset: str = "imagenet",  # Dataset to use
    gpu: bool = True,  # Enable GPU acceleration
):
    """Train a model on a dataset."""
    print(f"Training {model} on {dataset}")
```
Or use classes for more structure:
```python
@proto
class Params:
    """Training configuration."""

    # Model settings
    model: str = "resnet50"
    pretrained: bool = True  # Use pretrained weights

    # Training settings
    lr: float = 0.001  # Learning rate
    batch_size: int = 32  # Batch size
    epochs: int = 100  # Number of epochs
```
Create modular, reusable configuration groups:
```python
from params_proto import proto

@proto.prefix
class Environment:
    """Environment configuration."""

    domain: str = "cartpole"  # Domain name
    task: str = "swingup"  # Task name
    time_limit: float = 10.0  # Episode time limit

@proto.prefix
class Agent:
    """Agent hyperparameters."""

    algorithm: str = "SAC"  # RL algorithm
    lr: float = 3e-4  # Learning rate
    gamma: float = 0.99  # Discount factor

@proto.cli
def train_rl(
    seed: int = 0,  # Random seed
    total_steps: int = 1000000,  # Total training steps
):
    """Train RL agent on dm_control."""
    print(f"Training {Agent.algorithm} on {Environment.domain}-{Environment.task}")
    print(f"Agent LR: {Agent.lr}, Gamma: {Agent.gamma}")
```
Command line:
```shell
$ python train_rl.py --Agent.lr 0.001 --Environment.domain walker --seed 42
Training SAC on walker-swingup
Agent LR: 0.001, Gamma: 0.99
```
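Conceptually, each `--Prefix.name value` pair on the command line is routed to the attribute `name` on the class registered under `Prefix`. A minimal stdlib sketch of that routing (not params-proto's implementation; `parse_prefixed` is a made-up helper for illustration):

```python
def parse_prefixed(argv):
    """Group --Prefix.name value pairs into {prefix: {name: value}}."""
    groups = {}
    for flag, value in zip(argv[::2], argv[1::2]):
        # "--Agent.lr" -> prefix "Agent", attribute name "lr"
        prefix, _, name = flag.lstrip("-").partition(".")
        groups.setdefault(prefix, {})[name] = value
    return groups

overrides = parse_prefixed(["--Agent.lr", "0.001", "--Environment.domain", "walker"])
print(overrides)  # {'Agent': {'lr': '0.001'}, 'Environment': {'domain': 'walker'}}
```

params-proto additionally converts each value to the annotated type and applies it to the decorated class before your entry point runs.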
Override parameters in multiple ways:
```python
# 1. Command line:
#    $ python train.py --lr 0.01

# 2. Direct attribute assignment
Params.lr = 0.01

# 3. Function call with kwargs
train(lr=0.01, batch_size=256)

# 4. Using the proto.bind() context manager
with proto.bind(lr=0.01, **{"train.epochs": 50}):
    train()
```
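The `proto.bind()` pattern, temporarily overriding values and restoring them on exit, can be sketched with a plain context manager (an illustration of the idea, not params-proto's internals; `Config` and `bind` here are hypothetical):

```python
from contextlib import contextmanager

class Config:
    lr = 0.001
    epochs = 100

@contextmanager
def bind(cls, **overrides):
    """Apply overrides to cls, then restore the old values on exit."""
    saved = {k: getattr(cls, k) for k in overrides}
    for k, v in overrides.items():
        setattr(cls, k, v)
    try:
        yield cls
    finally:
        for k, v in saved.items():
            setattr(cls, k, v)

with bind(Config, lr=0.01, epochs=50):
    print(Config.lr, Config.epochs)  # 0.01 50
print(Config.lr, Config.epochs)  # 0.001 100
```

Because the overrides are scoped to the `with` block, this is handy for hyperparameter sweeps: each iteration binds a new set of values without mutating the global defaults.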