npx claudepluginhub andikarachman/data-science-plugin --plugin ds
This skill uses the workspace's default tool permissions.
This skill provides comprehensive guidance for machine learning tasks using scikit-learn, the industry-standard Python library for classical machine learning. Use this skill for classification, regression, clustering, dimensionality reduction, preprocessing, model evaluation, and building production-ready ML pipelines.
Role in the ds plugin: This skill is invoked by /ds:experiment at step 3 (Methodology Design) for pipeline construction and hyperparameter search setup, at step 6 (Execute) for code scaffold generation, and at step 7 (Generate Results) for evaluation utilities. It provides concrete scikit-learn API patterns complementing the split-strategy skill (which split to use), the feature-engineer agent (which transforms to apply), and the model-evaluator agent (evaluation methodology).

For time-series-specific classification, regression, clustering, and anomaly detection, use the aeon skill. Aeon estimators follow the scikit-learn API (fit, predict, transform) and work within sklearn pipelines. Scikit-learn remains the primary reference for cross-sectional (tabular) ML.

For model-agnostic feature attribution beyond permutation_importance and feature_importances_, use the shap skill. SHAP provides per-prediction explanations, interaction detection, and fairness analysis that scikit-learn's built-in methods do not cover.

For pre-model data preparation (deduplication, format conversion, schema validation, structural cleaning, statistical imputation, text processing, outlier handling, ETL orchestration), use the data-preprocessing skill. Scikit-learn handles in-model preprocessing inside sklearn Pipelines (scaling, encoding, imputation that participates in cross-validation); data-preprocessing handles everything before data enters a Pipeline.

Imputation boundary: data-preprocessing provides pre-model imputation (median, mode, KNN) applied once to the entire dataset before EDA; scikit-learn provides in-model imputation (SimpleImputer, KNNImputer, IterativeImputer) inside Pipelines where it participates in cross-validation folds.

For pandas API patterns (DataFrame operations, efficient indexing, memory optimization, merge strategies), use the pandas-pro skill.

Boundary with tuning-hyperparameters: scikit-learn provides the API patterns for GridSearchCV, RandomizedSearchCV, and HalvingGridSearchCV (what methods exist, their parameters, and code examples). For hyperparameter tuning workflow guidance (strategy selection decision tree, Bayesian optimization with Optuna, search space design, budget estimation, and result analysis), use the tuning-hyperparameters skill.
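A minimal sketch of the in-model imputation boundary described above, using a synthetic dataset purely for illustration (the data, missing-value rate, and estimator are assumptions, not part of the skill):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative data with roughly 5% of values knocked out at random
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
rng = np.random.default_rng(42)
X[rng.random(X.shape) < 0.05] = np.nan

# In-model imputation: the imputer is refit inside every CV fold,
# so imputation statistics never leak from that fold's validation rows
pipe = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler()),
    ('clf', LogisticRegression(max_iter=1000))
])
print(cross_val_score(pipe, X, y, cv=5).mean())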
# Install scikit-learn using uv
uv pip install scikit-learn
# Optional: Install visualization dependencies
uv pip install matplotlib seaborn
# Commonly used with
uv pip install pandas numpy
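A quick way to confirm the environment is ready (an optional sanity check):

import sklearn
print(sklearn.__version__)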
Use the scikit-learn skill when building classification or regression models, clustering or reducing dimensionality, preprocessing features, evaluating and tuning models, or assembling end-to-end ML pipelines for tabular data. A minimal classification workflow:
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
# Split data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
# Preprocess
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Train model
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train_scaled, y_train)
# Evaluate
y_pred = model.predict(X_test_scaled)
print(classification_report(y_test, y_pred))
A more complete pipeline that handles mixed numeric and categorical features:
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.ensemble import GradientBoostingClassifier
# Define feature types
numeric_features = ['age', 'income']
categorical_features = ['gender', 'occupation']
# Create preprocessing pipelines
numeric_transformer = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())
])
categorical_transformer = Pipeline([
    ('imputer', SimpleImputer(strategy='most_frequent')),
    ('onehot', OneHotEncoder(handle_unknown='ignore'))
])
# Combine transformers
preprocessor = ColumnTransformer([
    ('num', numeric_transformer, numeric_features),
    ('cat', categorical_transformer, categorical_features)
])
# Full pipeline
model = Pipeline([
    ('preprocessor', preprocessor),
    ('classifier', GradientBoostingClassifier(random_state=42))
])
# Fit and predict
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
Comprehensive algorithms for classification and regression tasks, with guidance on the key algorithms and when to use each.
See: references/supervised_learning.md for detailed algorithm documentation, parameters, and usage examples.
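As a brief illustration (synthetic data and illustrative settings, not a recommendation), regressors follow the same fit/predict interface as the classifier shown in the quick start:

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic regression problem, purely for demonstration
X, y = make_regression(n_samples=300, n_features=8, noise=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Two regressors, same interface: fit on training data, score on held-out data
for reg in (Ridge(alpha=1.0), RandomForestRegressor(n_estimators=100, random_state=42)):
    reg.fit(X_train, y_train)
    print(type(reg).__name__, mean_absolute_error(y_test, reg.predict(X_test)))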
Discover patterns in unlabeled data through clustering and dimensionality reduction, with guidance on the available clustering algorithms, dimensionality reduction techniques, and when to use each.
See: references/unsupervised_learning.md for detailed documentation.
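A short sketch combining the two families, using synthetic blobs and illustrative parameters:

from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic, well-separated clusters for demonstration only
X, _ = make_blobs(n_samples=400, centers=4, n_features=6, random_state=42)
X_scaled = StandardScaler().fit_transform(X)

# Reduce to 2 components and check how much variance they retain
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)
print(pca.explained_variance_ratio_.sum())

# Two clustering algorithms sharing the same fit_predict interface
print(KMeans(n_clusters=4, random_state=42).fit_predict(X_2d)[:10])
print(DBSCAN(eps=0.5, min_samples=5).fit_predict(X_2d)[:10])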
Tools for robust model evaluation, covering cross-validation strategies, hyperparameter tuning, metrics, and when to use each.
See: references/model_evaluation.md for comprehensive metrics and tuning strategies.
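A compact sketch of multi-metric cross-validation and a randomized search; the dataset and parameter ranges are illustrative, not tuned recommendations:

from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold, cross_validate

X, y = make_classification(n_samples=500, random_state=42)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

# Several metrics in a single cross-validation pass
results = cross_validate(RandomForestClassifier(random_state=42), X, y,
                         cv=cv, scoring=['accuracy', 'f1', 'roc_auc'])
print(results['test_roc_auc'].mean())

# Randomized search over a small, illustrative parameter space
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions={'n_estimators': randint(50, 300), 'max_depth': [None, 10, 20]},
    n_iter=10, cv=cv, random_state=42
)
search.fit(X, y)
print(search.best_params_, search.best_score_)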
Transform raw data into formats suitable for machine learning: scaling and normalization, encoding categorical variables, handling missing values, and feature engineering, with guidance on when to apply each.
See: references/preprocessing.md for detailed preprocessing techniques.
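A small sketch of a few common transformers; the column names and values are hypothetical:

import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler, OrdinalEncoder

df = pd.DataFrame({
    'size': ['S', 'M', 'L', 'M'],        # ordered categories
    'price': [10.0, None, 25.0, 17.5],   # numeric with a missing value
    'weight': [1.0, 1.6, 2.5, 1.8],      # fully observed numeric column
})

# Encode an ordered categorical column with an explicit category order
size_encoded = OrdinalEncoder(categories=[['S', 'M', 'L']]).fit_transform(df[['size']])

# Impute the missing price from nearest neighbours, then rescale to [0, 1]
num = KNNImputer(n_neighbors=2).fit_transform(df[['price', 'weight']])
num_scaled = MinMaxScaler().fit_transform(num)
print(size_encoded.ravel(), num_scaled[:, 0])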
Build reproducible, production-ready ML workflows, covering the key pipeline components, their benefits, and when to use them.
See: references/pipelines_and_composition.md for comprehensive pipeline patterns.
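Pipelines can also be introspected and persisted as a single object; a brief sketch using joblib (a scikit-learn dependency; the file name is arbitrary):

import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, random_state=42)
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# Inspect a fitted step by its auto-generated name
print(pipe.named_steps['logisticregression'].coef_.shape)

# Save and reload the preprocessing + model object together
joblib.dump(pipe, 'model.joblib')
reloaded = joblib.load('model.joblib')
print(reloaded.predict(X[:5]))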
Run a complete classification workflow with preprocessing, model comparison, hyperparameter tuning, and evaluation:
python scripts/classification_pipeline.py
This script demonstrates the full workflow end to end: preprocessing, model comparison, hyperparameter tuning, and evaluation.
Perform clustering analysis with algorithm comparison and visualization:
python scripts/clustering_analysis.py
This script demonstrates clustering algorithm comparison and visualization of the resulting clusters.
This skill includes comprehensive reference files for deep dives into specific topics:
File: references/quick_reference.md
File: references/supervised_learning.md
File: references/unsupervised_learning.md
File: references/model_evaluation.md
File: references/preprocessing.md
File: references/pipelines_and_composition.md
Load and explore data
import pandas as pd
df = pd.read_csv('data.csv')
X = df.drop('target', axis=1)
y = df['target']
Split data with stratification
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
Create preprocessing pipeline
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
# Handle numeric and categorical features separately
preprocessor = ColumnTransformer([
    ('num', StandardScaler(), numeric_features),
    ('cat', OneHotEncoder(handle_unknown='ignore'), categorical_features)
])
Build complete pipeline
from sklearn.ensemble import RandomForestClassifier
model = Pipeline([
    ('preprocessor', preprocessor),
    ('classifier', RandomForestClassifier(random_state=42))
])
Tune hyperparameters
from sklearn.model_selection import GridSearchCV
param_grid = {
    'classifier__n_estimators': [100, 200],
    'classifier__max_depth': [10, 20, None]
}
grid_search = GridSearchCV(model, param_grid, cv=5)
grid_search.fit(X_train, y_train)
Evaluate on test set
from sklearn.metrics import classification_report
best_model = grid_search.best_estimator_
y_pred = best_model.predict(X_test)
print(classification_report(y_test, y_pred))
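It can also help to report which hyperparameters the search selected along with its cross-validated score:

print(grid_search.best_params_)
print(grid_search.best_score_)  # mean cross-validated accuracy of the best candidate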
Preprocess data for clustering
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
Find optimal number of clusters
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

scores = []
for k in range(2, 11):
    kmeans = KMeans(n_clusters=k, random_state=42)
    labels = kmeans.fit_predict(X_scaled)
    scores.append(silhouette_score(X_scaled, labels))
optimal_k = range(2, 11)[np.argmax(scores)]
Apply clustering
model = KMeans(n_clusters=optimal_k, random_state=42)
labels = model.fit_predict(X_scaled)
Visualize with dimensionality reduction
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=labels, cmap='viridis')
plt.show()
Pipelines prevent data leakage and ensure consistency:
# Good: Preprocessing in pipeline
pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('model', LogisticRegression())
])
# Bad: Preprocessing outside (can leak information)
X_scaled = StandardScaler().fit_transform(X)
Never fit on test data:
# Good
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test) # Only transform
# Bad
scaler = StandardScaler()
X_all_scaled = scaler.fit_transform(np.vstack([X_train, X_test]))
Preserve class distribution:
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
Set random_state for reproducibility:
model = RandomForestClassifier(n_estimators=100, random_state=42)
Algorithms requiring feature scaling: distance- and gradient-based models such as SVM, k-nearest neighbors, logistic regression, K-Means, and PCA.
Algorithms not requiring scaling: tree-based models such as decision trees, random forests, and gradient boosting.
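When a scale-sensitive estimator is chosen, placing the scaler inside the pipeline keeps the scaling applied consistently; a sketch with SVC as one example:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Distance-based model gets a scaler; a tree ensemble could be used without one
svm_model = make_pipeline(StandardScaler(), SVC(kernel='rbf'))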
Issue: Model didn't converge
Solution: Increase max_iter or scale features
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(max_iter=1000)
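The scaling alternative mentioned above can be sketched as a short pipeline:

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Standardized features often let the solver converge without raising max_iter
model = make_pipeline(StandardScaler(), LogisticRegression())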
Issue: Overfitting
Solution: Use regularization, cross-validation, or a simpler model
# Add regularization
from sklearn.linear_model import Ridge
model = Ridge(alpha=1.0)
# Use cross-validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model, X, y, cv=5)
Issue: Dataset too large or training too slow
Solution: Use algorithms designed for large data
# Use SGD for large datasets
from sklearn.linear_model import SGDClassifier
model = SGDClassifier()
# Or MiniBatchKMeans for clustering
from sklearn.cluster import MiniBatchKMeans
model = MiniBatchKMeans(n_clusters=8, batch_size=100)
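For data that does not fit in memory at all, SGDClassifier also supports incremental learning through partial_fit; a sketch with synthetic chunks (the chunking scheme is illustrative):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=10_000, random_state=42)
classes = np.unique(y)

model = SGDClassifier(random_state=42)
# Feed the data in chunks; classes must be passed on the first call
for start in range(0, len(X), 1_000):
    X_chunk, y_chunk = X[start:start + 1_000], y[start:start + 1_000]
    model.partial_fit(X_chunk, y_chunk, classes=classes)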