Provides the Polars expression API for high-performance DataFrame operations with lazy evaluation, joins, aggregations, and I/O. Use for large datasets (10M+ rows) as a pandas alternative in EDA, preprocessing, and experiments.
npx claudepluginhub andikarachman/data-science-plugin --plugin ds

This skill uses the workspace's default tool permissions.
Polars is a lightning-fast DataFrame library for Python and Rust built on Apache Arrow. Work with Polars' expression-based API, lazy evaluation framework, and high-performance data manipulation capabilities for efficient data processing, pandas migration, and data pipeline optimization.
Role in the ds plugin: This skill is the Polars API reference for the plugin, serving as a parallel alternative to pandas-pro for large datasets. It is invoked alongside pandas-pro references in /ds:eda for lazy data scanning (step 3), schema inspection (step 4), and expression-based distribution analysis (step 5); in /ds:preprocess for I/O optimization (step 2); in /ds:experiment for window/rolling features and join patterns (step 3) and expression-based code scaffolds (step 6); in /ds:plan for large-dataset handling with lazy evaluation and streaming (step 3); and in /ds:validate for data loading (step 2).

Boundary with pandas-pro: polars teaches the Polars expression API (lazy evaluation, pl.col() expressions, Arrow-native operations). pandas-pro teaches the pandas API (indexing, groupby, merge, method chaining). For large datasets (10M+ rows or >100MB), prefer polars for its lazy evaluation and streaming capabilities. For smaller datasets, or when downstream tools require pandas (scikit-learn, statsmodels), use pandas-pro.

Boundary with data-preprocessing: data-preprocessing pipeline scripts use pandas internally. Polars users should convert with df.to_pandas() for preprocessing pipelines, or write custom Polars-based cleaning using patterns from this skill's reference files.

Boundary with scikit-learn: sklearn expects pandas DataFrames by default. Use df.to_pandas() before sklearn pipeline input, or use set_output(transform="polars") (sklearn 1.4+) for native Polars output.

Boundary with statsmodels: the statsmodels formula API (smf.ols('y ~ x', data=df)) expects pandas DataFrames. Convert with df.to_pandas() before statsmodels calls.
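A minimal interop sketch, assuming scikit-learn 1.4+ (which can accept Polars input and emit Polars output); the column names are illustrative, not from the plugin:

import polars as pl
from sklearn.preprocessing import StandardScaler

df = pl.DataFrame({"x": [1.0, 2.0, 3.0], "y": [10.0, 20.0, 30.0]})

# Hand off to pandas-based tooling (statsmodels, data-preprocessing scripts)
pdf = df.to_pandas()

# scikit-learn 1.4+ can return Polars DataFrames from transformers
scaler = StandardScaler().set_output(transform="polars")
scaled = scaler.fit_transform(df)  # returns a pl.DataFrame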
Install Polars:
uv pip install polars
Basic DataFrame creation and operations:
import polars as pl
# Create DataFrame
df = pl.DataFrame({
"name": ["Alice", "Bob", "Charlie"],
"age": [25, 30, 35],
"city": ["NY", "LA", "SF"]
})
# Select columns
df.select("name", "age")
# Filter rows
df.filter(pl.col("age") > 25)
# Add computed columns
df.with_columns(
    age_plus_10=pl.col("age") + 10
)
Expressions are the fundamental building blocks of Polars operations. They describe transformations on data and can be composed, reused, and optimized.
Key principles:
pl.col("column_name") to reference columnsExample:
# Expression-based computation
df.select(
pl.col("name"),
(pl.col("age") * 12).alias("age_in_months")
)
Eager (DataFrame): Operations execute immediately
df = pl.read_csv("file.csv") # Reads immediately
result = df.filter(pl.col("age") > 25) # Executes immediately
Lazy (LazyFrame): Operations build a query plan, optimized before execution
lf = pl.scan_csv("file.csv") # Doesn't read yet
result = lf.filter(pl.col("age") > 25).select("name", "age")
df = result.collect() # Now executes optimized query
When to use lazy:
- Large files where only a subset of rows or columns is needed
- Multi-step pipelines where the full query is known before execution
- Datasets that may not fit in memory (combine with streaming)

Benefits of lazy evaluation:
- Automatic query optimization (predicate and projection pushdown)
- Lower memory use, since only the required data is read
- The ability to stream larger-than-memory datasets
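One way to see the optimizer at work is LazyFrame.explain(), which prints the optimized plan. A minimal sketch (the file path is illustrative):

lf = pl.scan_csv("file.csv")
query = lf.filter(pl.col("age") > 25).select("name", "age")
print(query.explain())  # shows the filter and column selection pushed into the CSV scan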
For detailed concepts, load references/core_concepts.md.
Select and manipulate columns:
# Select specific columns
df.select("name", "age")
# Select with expressions
df.select(
pl.col("name"),
(pl.col("age") * 2).alias("double_age")
)
# Select all columns matching a pattern
df.select(pl.col("^.*_id$"))
Filter rows by conditions:
# Single condition
df.filter(pl.col("age") > 25)
# Multiple conditions (cleaner than using &)
df.filter(
pl.col("age") > 25,
pl.col("city") == "NY"
)
# Complex conditions
df.filter(
(pl.col("age") > 25) | (pl.col("city") == "LA")
)
Add or modify columns while preserving existing ones:
# Add new columns
df.with_columns(
    age_plus_10=pl.col("age") + 10,
    name_upper=pl.col("name").str.to_uppercase()
)
# Parallel computation (all columns computed in parallel)
df.with_columns(
    (pl.col("value") * 10).alias("value_x10"),
    (pl.col("value") * 100).alias("value_x100"),
)
Group data and compute aggregations:
# Basic grouping
df.group_by("city").agg(
pl.col("age").mean().alias("avg_age"),
pl.len().alias("count")
)
# Multiple group keys
df.group_by("city", "department").agg(
pl.col("salary").sum()
)
# Conditional aggregations
df.group_by("city").agg(
(pl.col("age") > 30).sum().alias("over_30")
)
For detailed operation patterns, load references/operations.md.
Common aggregations within group_by context:
- pl.len() - count rows
- pl.col("x").sum() - sum values
- pl.col("x").mean() - average
- pl.col("x").min() / pl.col("x").max() - extremes
- pl.first() / pl.last() - first/last values

Window functions with over()

Apply aggregations while preserving row count:
# Add group statistics to each row
df.with_columns(
    avg_age_by_city=pl.col("age").mean().over("city"),
    rank_in_city=pl.col("salary").rank().over("city")
)
# Multiple grouping columns
df.with_columns(
    group_avg=pl.col("value").mean().over("category", "region")
)
Mapping strategies:
- group_to_rows (default): Preserves original row order
- explode: Faster but groups rows together
- join: Creates list columns

Polars supports reading and writing:
CSV:
# Eager
df = pl.read_csv("file.csv")
df.write_csv("output.csv")
# Lazy (preferred for large files)
lf = pl.scan_csv("file.csv")
result = lf.filter(...).select(...).collect()
Parquet (recommended for performance):
df = pl.read_parquet("file.parquet")
df.write_parquet("output.parquet")
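For Parquet files that do not fit comfortably in memory, a lazy scan combined with a streaming sink avoids materializing the full table. A sketch (the paths and filter are illustrative):

lf = pl.scan_parquet("file.parquet")
lf.filter(pl.col("age") > 25).sink_parquet("filtered.parquet")  # streams results straight to disk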
JSON:
df = pl.read_json("file.json")
df.write_json("output.json")
For comprehensive I/O documentation, load references/io_guide.md.
Combine DataFrames:
# Inner join
df1.join(df2, on="id", how="inner")
# Left join
df1.join(df2, on="id", how="left")
# Join on different column names
df1.join(df2, left_on="user_id", right_on="id")
Stack DataFrames:
# Vertical (stack rows)
pl.concat([df1, df2], how="vertical")
# Horizontal (add columns)
pl.concat([df1, df2], how="horizontal")
# Diagonal (union with different schemas)
pl.concat([df1, df2], how="diagonal")
Reshape data:
# Pivot (wide format)
df.pivot(on="product", index="date", values="sales")
# Unpivot (long format)
df.unpivot(index="id", on=["col1", "col2"])
For detailed transformation examples, load references/transformations.md.
Polars offers significant performance improvements over pandas with a cleaner API. Key differences:
| Operation | Pandas | Polars |
|---|---|---|
| Select column | df["col"] | df.select("col") |
| Filter | df[df["col"] > 10] | df.filter(pl.col("col") > 10) |
| Add column | df.assign(x=...) | df.with_columns(x=...) |
| Group by | df.groupby("col").agg(...) | df.group_by("col").agg(...) |
| Window | df.groupby("col").transform(...) | df.with_columns(expr.over("col")) |
Pandas sequential (slow):
df.assign(
    col_a=lambda df_: df_.value * 10,
    col_b=lambda df_: df_.value * 100
)
Polars parallel (fast):
df.with_columns(
    col_a=pl.col("value") * 10,
    col_b=pl.col("value") * 100,
)
For comprehensive migration guide, load references/pandas_migration.md.
Use lazy evaluation for large datasets:
lf = pl.scan_csv("large.csv") # Don't use read_csv
result = lf.filter(...).select(...).collect()
Avoid Python functions in hot paths:
- Use .map_elements() only when necessary; prefer native expressions (a short sketch follows the streaming example below)

Use streaming for very large data:
lf.collect(streaming=True)
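As referenced above, .map_elements() calls back into Python for every element, while an equivalent native expression stays inside the engine and can run in parallel. A small sketch (the column name is illustrative):

df = pl.DataFrame({"value": [1, 2, 3]})

# Slower: runs a Python lambda per element
df.with_columns(doubled=pl.col("value").map_elements(lambda v: v * 2, return_dtype=pl.Int64))

# Faster: a native expression the engine can optimize and parallelize
df.with_columns(doubled=pl.col("value") * 2)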
Select only needed columns early:
# Good: Select columns early
lf.select("col1", "col2").filter(...)
# Bad: Filter on all columns first
lf.filter(...).select("col1", "col2")
Use appropriate data types:
- Cast low-cardinality string columns to pl.Categorical
- Use smaller numeric widths (e.g., pl.Int32) when the value range allows
- Store dates and timestamps as temporal dtypes rather than strings
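A minimal casting sketch (column names and target dtypes are illustrative):

df = pl.DataFrame({"city": ["NY", "NY", "LA"], "age": [25, 30, 35]})
df = df.with_columns(
    pl.col("city").cast(pl.Categorical),  # low-cardinality strings
    pl.col("age").cast(pl.Int32),         # default integer dtype is Int64
)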
Conditional operations:
pl.when(condition).then(value).otherwise(other_value)
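For example, a sketch that labels rows by a threshold (the column and labels are illustrative; pl.lit() keeps the strings from being parsed as column names):

df.with_columns(
    level=pl.when(pl.col("age") > 30)
    .then(pl.lit("senior"))
    .otherwise(pl.lit("junior"))
)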
Column operations across multiple columns:
df.select(pl.col("^.*_value$") * 2) # Regex pattern
Null handling:
pl.col("x").fill_null(0)
pl.col("x").is_null()
pl.col("x").drop_nulls()
For additional best practices and patterns, load references/best_practices.md.
This skill includes comprehensive reference documentation:
- core_concepts.md - Detailed explanations of expressions, lazy evaluation, and type system
- operations.md - Comprehensive guide to all common operations with examples
- pandas_migration.md - Complete migration guide from pandas to Polars
- io_guide.md - Data I/O operations for all supported formats
- transformations.md - Joins, concatenation, pivots, and reshaping operations
- best_practices.md - Performance optimization tips and common patterns

Load these references as needed when users require detailed information about specific topics.