Comprehensive toolkit for survival analysis and time-to-event modeling in Python using scikit-survival. Use this skill when working with censored survival data, performing time-to-event analysis, fitting Cox models, Random Survival Forests, Gradient Boosting models, or Survival SVMs, evaluating survival predictions with concordance index or Brier score, handling competing risks, or implementing any survival analysis workflow with the scikit-survival library.
This skill inherits all available tools. When active, it can use any tool Claude has access to.
scikit-survival is a Python library for survival analysis built on top of scikit-learn. It provides specialized tools for time-to-event analysis, handling the unique challenge of censored data, where some observations are only partially known.
Survival analysis aims to establish connections between covariates and the time of an event, accounting for censored records (particularly right-censored data from studies where participants don't experience events during observation periods).
Use this skill when:
- Working with censored survival data or performing time-to-event analysis
- Fitting Cox models, Random Survival Forests, Gradient Boosting models, or Survival SVMs
- Evaluating survival predictions with the concordance index, time-dependent AUC, or Brier score
- Handling competing risks or building any survival analysis workflow with scikit-survival
scikit-survival provides multiple model families, each suited for different scenarios:
Cox models (linear): use for standard survival analysis with interpretable coefficients
- CoxPHSurvivalAnalysis: Basic Cox model
- CoxnetSurvivalAnalysis: Penalized Cox with elastic net for high-dimensional data
- IPCRidge: Ridge regression for accelerated failure time models

See: references/cox-models.md for detailed guidance on Cox models, regularization, and interpretation
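As a quick illustration, here is a minimal sketch (assuming a numeric feature DataFrame X and a structured outcome y built with Surv; both are placeholders) that fits the basic Cox model and converts its coefficients to hazard ratios:

import numpy as np
import pandas as pd
from sksurv.linear_model import CoxPHSurvivalAnalysis

# Fit a basic Cox proportional hazards model (X: numeric feature DataFrame, y: structured array from Surv)
cox = CoxPHSurvivalAnalysis()
cox.fit(X, y)

# Coefficients are log hazard ratios; exponentiate to obtain hazard ratios per feature
hazard_ratios = pd.Series(np.exp(cox.coef_), index=X.columns)
print(hazard_ratios.sort_values(ascending=False))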
Ensemble models: use for high predictive performance with complex non-linear relationships
- RandomSurvivalForest: Robust, non-parametric ensemble method
- GradientBoostingSurvivalAnalysis: Tree-based boosting for maximum performance
- ComponentwiseGradientBoostingSurvivalAnalysis: Linear boosting with feature selection
- ExtraSurvivalTrees: Extremely randomized trees for additional regularization

See: references/ensemble-models.md for comprehensive guidance on ensemble methods, hyperparameter tuning, and when to use each model
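A minimal Random Survival Forest sketch under the same placeholder assumptions (X_train, y_train, X_test); predict_survival_function returns step functions that can be evaluated at any time within follow-up:

from sksurv.ensemble import RandomSurvivalForest

# Fit a Random Survival Forest (placeholder data: X_train, y_train, X_test)
rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10, random_state=42, n_jobs=-1)
rsf.fit(X_train, y_train)

# Risk scores for ranking, plus per-sample survival step functions
risk_scores = rsf.predict(X_test)
surv_funcs = rsf.predict_survival_function(X_test)
print(surv_funcs[0](365))  # predicted survival probability at day 365 for the first test sample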
Survival SVMs: use for medium-sized datasets with margin-based learning
- FastSurvivalSVM: Linear SVM optimized for speed
- FastKernelSurvivalSVM: Kernel SVM for non-linear relationships
- HingeLossSurvivalSVM: SVM with hinge loss
- ClinicalKernelTransform: Specialized kernel for clinical + molecular data

See: references/svm-models.md for detailed SVM guidance, kernel selection, and hyperparameter tuning
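And a small sketch of the linear survival SVM, again with placeholder data; features should be standardized first, and with the pure ranking objective the predictions act as risk scores for concordance-based evaluation:

from sksurv.svm import FastSurvivalSVM
from sksurv.metrics import concordance_index_censored

# Linear survival SVM with pure ranking objective (rank_ratio=1.0)
svm = FastSurvivalSVM(alpha=1.0, rank_ratio=1.0, max_iter=1000, random_state=42)
svm.fit(X_train_scaled, y_train)

risk_scores = svm.predict(X_test_scaled)
c_index = concordance_index_censored(y_test["event"], y_test["time"], risk_scores)[0]
print(f"FastSurvivalSVM C-index: {c_index:.3f}")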
To choose among these model families, the following decision guide can help:

Start
├─ High-dimensional data (p > n)?
│ ├─ Yes → CoxnetSurvivalAnalysis (elastic net)
│ └─ No → Continue
│
├─ Need interpretable coefficients?
│ ├─ Yes → CoxPHSurvivalAnalysis or ComponentwiseGradientBoostingSurvivalAnalysis
│ └─ No → Continue
│
├─ Complex non-linear relationships expected?
│ ├─ Yes
│ │ ├─ Large dataset (n > 1000) → GradientBoostingSurvivalAnalysis
│ │ ├─ Medium dataset → RandomSurvivalForest or FastKernelSurvivalSVM
│ │ └─ Small dataset → RandomSurvivalForest
│ └─ No → CoxPHSurvivalAnalysis or FastSurvivalSVM
│
└─ For maximum performance → Try multiple models and compare
Before modeling, prepare the outcome as a structured array that pairs each event indicator with its observed time:
from sksurv.util import Surv
# From separate arrays
y = Surv.from_arrays(event=event_array, time=time_array)
# From DataFrame
y = Surv.from_dataframe('event', 'time', df)
See: references/data-handling.md for complete preprocessing workflows, data validation, and best practices
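Categorical features also need numeric encoding before model fitting. A hedged sketch using scikit-survival's DataFrame-aware OneHotEncoder; the DataFrame df and its column names are hypothetical:

import pandas as pd
from sksurv.preprocessing import OneHotEncoder

# df is a hypothetical clinical DataFrame; mark categorical columns explicitly
df["grade"] = df["grade"].astype("category")

# Encode categorical columns into dummy variables, leaving numeric columns untouched
X = OneHotEncoder().fit_transform(df.drop(columns=["event", "time"]))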
Proper evaluation is critical for survival models. Use appropriate metrics that account for censoring:
Primary metric for ranking/discrimination:
from sksurv.metrics import concordance_index_censored, concordance_index_ipcw
# Harrell's C-index
c_harrell = concordance_index_censored(y_test['event'], y_test['time'], risk_scores)[0]
# Uno's C-index (IPCW-based; recommended, especially under heavier censoring)
c_uno = concordance_index_ipcw(y_train, y_test, risk_scores)[0]
Evaluate discrimination at specific time points:
from sksurv.metrics import cumulative_dynamic_auc
times = [365, 730, 1095] # 1, 2, 3 years
auc, mean_auc = cumulative_dynamic_auc(y_train, y_test, risk_scores, times)
Assess both discrimination and calibration:
from sksurv.metrics import integrated_brier_score
# surv_probs: array of shape (n_samples, len(times)) with predicted survival probabilities at each time in `times`
ibs = integrated_brier_score(y_train, y_test, surv_probs, times)
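To build that probability matrix, evaluate each predicted survival function on the chosen time grid. A sketch assuming a fitted model that implements predict_survival_function (for example a Random Survival Forest) and the variables used above:

import numpy as np

# Evaluate each predicted survival step function on the time grid
surv_funcs = model.predict_survival_function(X_test_scaled)
surv_probs = np.asarray([fn(times) for fn in surv_funcs])  # shape (n_samples, len(times))

ibs = integrated_brier_score(y_train, y_test, surv_probs, times)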
See: references/evaluation-metrics.md for comprehensive evaluation guidance, metric selection, and using scorers with cross-validation
Handle situations with multiple mutually exclusive event types:
from sksurv.nonparametric import cumulative_incidence_competing_risks
# Estimate the cumulative incidence function (CIF) for each event type.
# event_codes: integer array with 0 = censored and 1..k for the competing event types; time: observed times
time_points, cum_incidence = cumulative_incidence_competing_risks(event_codes, time)
cif_event1, cif_event2 = cum_incidence[1], cum_incidence[2]  # rows 1..k are the per-event-type CIFs
Use competing risks when:
- Subjects can experience one of several mutually exclusive event types (e.g., death from different causes)
- The occurrence of one event type prevents the event of interest from being observed
- Treating competing events as ordinary censoring would bias the results (the Kaplan-Meier complement overestimates cumulative incidence in this setting)
See: references/competing-risks.md for detailed competing risks methods, cause-specific hazard models, and interpretation
Estimate survival functions without parametric assumptions:
from sksurv.nonparametric import kaplan_meier_estimator
# Kaplan-Meier estimate of the survival function S(t)
time, survival_prob = kaplan_meier_estimator(y['event'], y['time'])

from sksurv.nonparametric import nelson_aalen_estimator
# Nelson-Aalen estimate of the cumulative hazard H(t)
time, cumulative_hazard = nelson_aalen_estimator(y['event'], y['time'])
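Kaplan-Meier curves are typically plotted per group. A short sketch assuming a DataFrame df with a hypothetical 'treatment' column aligned with y:

import matplotlib.pyplot as plt
from sksurv.nonparametric import kaplan_meier_estimator

# One step curve per treatment group (column name is hypothetical)
for group in df["treatment"].unique():
    mask = (df["treatment"] == group).to_numpy()
    t, s = kaplan_meier_estimator(y["event"][mask], y["time"][mask])
    plt.step(t, s, where="post", label=str(group))

plt.xlabel("Time")
plt.ylabel("Estimated survival probability")
plt.legend()
plt.show()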
An end-to-end example, from loading data through evaluation, with a basic Cox model:

from sksurv.datasets import load_breast_cancer
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_ipcw
from sksurv.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# 1. Load and prepare data (one-hot encode categorical columns so every feature is numeric)
X, y = load_breast_cancer()
X = OneHotEncoder().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# 2. Preprocess
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
# 3. Fit model
estimator = CoxPHSurvivalAnalysis()
estimator.fit(X_train_scaled, y_train)
# 4. Predict
risk_scores = estimator.predict(X_test_scaled)
# 5. Evaluate
c_index = concordance_index_ipcw(y_train, y_test, risk_scores)[0]
print(f"C-index: {c_index:.3f}")
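The fitted Cox model can also produce per-patient survival curves; this brief sketch continues from the workflow above:

import matplotlib.pyplot as plt

# Predicted survival functions for the first three test patients
surv_funcs = estimator.predict_survival_function(X_test_scaled)
for i, fn in enumerate(surv_funcs[:3]):
    plt.step(fn.x, fn(fn.x), where="post", label=f"patient {i}")
plt.xlabel("Time")
plt.ylabel("Predicted survival probability")
plt.legend()
plt.show()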
For high-dimensional data (many features relative to samples), use penalized Cox regression to select features while fitting:

from sksurv.linear_model import CoxnetSurvivalAnalysis
from sklearn.model_selection import GridSearchCV
from sksurv.metrics import as_concordance_index_ipcw_scorer
import numpy as np

# 1. Use penalized Cox for feature selection (l1_ratio close to 1 gives lasso-like sparsity)
estimator = CoxnetSurvivalAnalysis(l1_ratio=0.9)

# 2. Tune regularization with cross-validation.
# as_concordance_index_ipcw_scorer wraps the estimator so GridSearchCV scores with
# Uno's IPCW concordance index; parameters of the wrapped model take an "estimator__" prefix.
param_grid = {'estimator__alpha_min_ratio': [0.01, 0.001]}
cv = GridSearchCV(as_concordance_index_ipcw_scorer(estimator), param_grid, cv=5)
cv.fit(X, y)

# 3. Identify selected features. The scorer wrapper exposes the fitted model as estimator_,
# and coef_ has one column per penalty value along the regularization path.
best_model = cv.best_estimator_.estimator_
selected_features = np.where(np.any(best_model.coef_ != 0, axis=1))[0]
To maximize predictive performance, tune a gradient boosting model with cross-validated grid search:

from sksurv.ensemble import GradientBoostingSurvivalAnalysis
from sklearn.model_selection import GridSearchCV
# 1. Define parameter grid
param_grid = {
'learning_rate': [0.01, 0.05, 0.1],
'n_estimators': [100, 200, 300],
'max_depth': [3, 5, 7]
}
# 2. Grid search
gbs = GradientBoostingSurvivalAnalysis()
cv = GridSearchCV(gbs, param_grid, cv=5,
scoring=as_concordance_index_ipcw_scorer(), n_jobs=-1)
cv.fit(X_train, y_train)
# 3. Evaluate best model
best_model = cv.best_estimator_
risk_scores = best_model.predict(X_test)
c_index = concordance_index_ipcw(y_train, y_test, risk_scores)[0]
To compare several model families on the same train/test split:

from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.ensemble import RandomSurvivalForest, GradientBoostingSurvivalAnalysis
from sksurv.svm import FastSurvivalSVM
from sksurv.metrics import concordance_index_ipcw, integrated_brier_score
# Define models
models = {
    'Cox': CoxPHSurvivalAnalysis(),
    'RSF': RandomSurvivalForest(n_estimators=100, random_state=42),
    'GBS': GradientBoostingSurvivalAnalysis(random_state=42),
    'SVM': FastSurvivalSVM(random_state=42)
}
# Evaluate each model
results = {}
for name, model in models.items():
    model.fit(X_train_scaled, y_train)
    risk_scores = model.predict(X_test_scaled)
    c_index = concordance_index_ipcw(y_train, y_test, risk_scores)[0]
    results[name] = c_index
    print(f"{name}: C-index = {c_index:.3f}")
# Select best model
best_model_name = max(results, key=results.get)
print(f"\nBest model: {best_model_name}")
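Ranking performance can be complemented with a calibration-aware metric. A hedged sketch computing the integrated Brier score for the models that implement predict_survival_function (the SVM does not), reusing the fitted models above and a placeholder evaluation time grid:

import numpy as np
from sksurv.metrics import integrated_brier_score

# Time grid must lie within the follow-up range of the test set (values here are placeholders)
eval_times = np.array([365, 730, 1095])

for name, model in models.items():
    if not hasattr(model, "predict_survival_function"):
        continue  # FastSurvivalSVM does not predict survival functions
    surv_funcs = model.predict_survival_function(X_test_scaled)
    surv_probs = np.asarray([fn(eval_times) for fn in surv_funcs])
    ibs = integrated_brier_score(y_train, y_test, surv_probs, eval_times)
    print(f"{name}: integrated Brier score = {ibs:.3f}")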
scikit-survival fully integrates with scikit-learn's ecosystem:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, GridSearchCV
from sksurv.metrics import as_concordance_index_ipcw_scorer
# Use pipelines
pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('model', CoxPHSurvivalAnalysis())
])

# Use cross-validation (wrap the pipeline so its score method uses Uno's IPCW concordance index)
scores = cross_val_score(as_concordance_index_ipcw_scorer(pipeline), X, y, cv=5)
# Use grid search
param_grid = {'model__alpha': [0.1, 1.0, 10.0]}
cv = GridSearchCV(pipeline, param_grid, cv=5)
cv.fit(X, y)
This skill includes detailed reference files for specific topics:
- references/cox-models.md: Complete guide to Cox proportional hazards models, penalized Cox (CoxNet), IPCRidge, regularization strategies, and interpretation
- references/ensemble-models.md: Random Survival Forests, Gradient Boosting, hyperparameter tuning, feature importance, and model selection
- references/evaluation-metrics.md: Concordance index (Harrell's vs Uno's), time-dependent AUC, Brier score, comprehensive evaluation pipelines
- references/data-handling.md: Data loading, preprocessing workflows, handling missing data, feature encoding, validation checks
- references/svm-models.md: Survival Support Vector Machines, kernel selection, clinical kernel transform, hyperparameter tuning
- references/competing-risks.md: Competing risks analysis, cumulative incidence functions, cause-specific hazard models

Load these reference files when detailed information is needed for specific tasks.
Use sksurv.datasets for practice datasets (GBSG2, WHAS500, veterans lung cancer, etc.).

# Models
from sksurv.linear_model import CoxPHSurvivalAnalysis, CoxnetSurvivalAnalysis, IPCRidge
from sksurv.ensemble import RandomSurvivalForest, GradientBoostingSurvivalAnalysis
from sksurv.svm import FastSurvivalSVM, FastKernelSurvivalSVM
from sksurv.tree import SurvivalTree
# Evaluation metrics
from sksurv.metrics import (
    concordance_index_censored,
    concordance_index_ipcw,
    cumulative_dynamic_auc,
    brier_score,
    integrated_brier_score,
    as_concordance_index_ipcw_scorer,
    as_integrated_brier_score_scorer
)
# Non-parametric estimation
from sksurv.nonparametric import (
    kaplan_meier_estimator,
    nelson_aalen_estimator,
    cumulative_incidence_competing_risks
)
# Data handling
from sksurv.util import Surv
from sksurv.preprocessing import OneHotEncoder, encode_categorical
from sksurv.datasets import load_gbsg2, load_breast_cancer, load_veterans_lung_cancer
# Kernels
from sksurv.kernels import ClinicalKernelTransform