# pandas-pro (from the `ds` plugin)
Provides pandas API patterns for DataFrame operations, data cleaning, aggregation, merging, and performance optimization. Useful for generating pandas code in data loading, manipulation, or profiling workflows.
Installation:

```shell
npx claudepluginhub andikarachman/data-science-plugin --plugin ds
```

This skill uses the workspace's default tool permissions.
Expert pandas API reference providing efficient data manipulation, analysis, and transformation patterns with production-grade performance. Covers DataFrame operations, data cleaning fundamentals, aggregation/groupby, merging/joining, and memory optimization for pandas 2.0+.
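As a quick taste of the aggregation and merging patterns this reference covers, a minimal sketch with hypothetical column names (`customer`, `amount`, `region`):

```python
import pandas as pd

# Illustrative data -- column names are assumptions, not from the skill itself.
orders = pd.DataFrame({
    "customer": ["a", "a", "b"],
    "amount": [10.0, 20.0, 5.0],
})
customers = pd.DataFrame({"customer": ["a", "b"], "region": ["east", "west"]})

# GroupBy aggregation, then a validated left merge.
totals = orders.groupby("customer", as_index=False)["amount"].sum()
report = totals.merge(customers, on="customer", how="left", validate="one_to_one")
print(report)
```

`validate="one_to_one"` raises if either merge key contains duplicates, catching silent row explosions early.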
**Role in the `ds` plugin:** This skill is the canonical pandas API reference for the plugin. It is invoked by:

- `/ds:eda` for efficient data loading (step 3), structural profiling patterns (step 4), and groupby-based distribution analysis (step 5)
- `/ds:preprocess` for I/O optimization (step 2) and vectorized operation patterns (step 5)
- `/ds:experiment` for feature assembly merge patterns (step 3) and data preparation code scaffolds (step 6)
- `/ds:plan` for large-dataset handling strategy (step 3)

**Boundary with data-preprocessing:** pandas-pro teaches *how* to call pandas methods (API syntax, parameters, best practices); data-preprocessing teaches *when and how* to sequence cleaning operations in a tracked pipeline with error handling and logging. For pipeline-oriented data cleaning (deduplication, imputation, outlier removal, schema validation), use the data-preprocessing skill.

**Boundary with scikit-learn:** For in-model preprocessing inside sklearn Pipelines (scaling, encoding, imputation that participates in cross-validation), use the scikit-learn skill.

**Boundary with polars:** For Polars expression API patterns (lazy evaluation, `pl.col()` expressions, Arrow-native I/O), use the polars skill. pandas-pro and polars are parallel alternatives; for large datasets (10M+ rows or >100 MB), prefer the polars skill for its lazy evaluation and streaming capabilities.

**pandas 2.0+ note:** Patterns in this skill target pandas 2.0+. On pandas 1.x, nullable types (`Int64`, `string`), `format='mixed'` in `pd.to_datetime()`, and Arrow-backed types (`string[pyarrow]`) may not be available.
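A minimal sketch of the pandas 2.0+ features noted above (nullable dtypes and `format='mixed'` parsing); the data is illustrative:

```python
import pandas as pd

# Nullable dtypes (pandas 2.0+): integers keep NA without upcasting to float,
# and "string" is a dedicated string dtype rather than object.
df = pd.DataFrame({
    "count": pd.array([1, None, 3], dtype="Int64"),
    "name": pd.array(["a", "b", None], dtype="string"),
})

# format='mixed' (pandas 2.0+) infers the format per element, so
# heterogeneous date strings in one column can be parsed together.
dates = pd.to_datetime(pd.Series(["2024-01-15", "2024/01/16"]), format="mixed")

print(df.dtypes)
print(dates)
```

On pandas 1.x the `Int64`/`string` arrays exist but behave less consistently, and `format="mixed"` raises a `ValueError`.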
Load detailed guidance based on context:
| Topic | Reference | Load When |
|---|---|---|
| DataFrame Operations | references/dataframe-operations.md | Indexing, selection, filtering, sorting, column operations |
| Data Cleaning | references/data-cleaning.md | Missing values, type conversion, string cleaning, validation |
| Aggregation & GroupBy | references/aggregation-groupby.md | GroupBy, pivot tables, crosstab, window functions, transform/apply |
| Merging & Joining | references/merging-joining.md | Merge, join, concat, combine strategies, anti-joins |
| Performance Optimization | references/performance-optimization.md | Memory profiling, vectorization, chunking, I/O optimization |
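To make the performance-optimization row concrete, a minimal sketch of the profile-then-downcast pattern (synthetic data; column names are illustrative):

```python
import numpy as np
import pandas as pd

# Synthetic frame: a low-cardinality string column and an int64 column.
df = pd.DataFrame({
    "city": ["NY", "LA", "NY", "SF"] * 25_000,
    "sales": np.random.default_rng(0).integers(0, 500, 100_000),
})

# Profile with deep=True so object-column string payloads are counted.
before = df.memory_usage(deep=True).sum()

# Low-cardinality strings -> category; integers -> smallest safe width.
df["city"] = df["city"].astype("category")
df["sales"] = pd.to_numeric(df["sales"], downcast="integer")

after = df.memory_usage(deep=True).sum()
print(f"{before:,} -> {after:,} bytes")
```

The category conversion typically dominates the savings here, since each repeated string otherwise stores its own Python object.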
Key practices:

- Profile memory with `.memory_usage(deep=True)`
- Use `.copy()` when modifying subsets to avoid `SettingWithCopyWarning`
- Avoid `.iterrows()` unless absolutely necessary
- Avoid chained indexing (`df['A']['B']`) -- use `.loc[]` or `.iloc[]`
- Avoid deprecated `.ix` and `.append()` -- use `pd.concat()`

When implementing pandas solutions, provide:
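A minimal sketch of the `.loc[]`, `.copy()`, and `pd.concat()` guidance, with illustrative data:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [10, 20, 30]})

# Chained indexing (df['A']['B']-style) may assign into a temporary;
# a single .loc call writes unambiguously to the original frame.
df.loc[df["A"] > 1, "B"] = 0

# Take an explicit .copy() before mutating a filtered subset,
# avoiding SettingWithCopyWarning.
subset = df[df["A"] > 1].copy()
subset["B"] = 99

# DataFrame.append() was removed in pandas 2.0 -- use pd.concat() instead.
combined = pd.concat([df, subset], ignore_index=True)
print(combined)
```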