Master declarative, no-code data dashboards with Lumen YAML specifications. Use this skill when building standard data exploration dashboards, connecting multiple data sources (files, databases, APIs), creating interactive filters and cross-filtering, designing responsive layouts with indicators and charts, or enabling rapid dashboard prototyping without writing code.
/plugin marketplace add uw-ssec/rse-agents
/plugin install holoviz-visualization@uw-ssec/rse-agents

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Lumen is a declarative framework for creating data dashboards through YAML specifications. Build interactive data exploration dashboards without writing code - just configuration.
Lumen provides a declarative approach to building data dashboards:
| Feature | Lumen Dashboards | Panel | Lumen AI |
|---|---|---|---|
| Approach | Declarative YAML | Imperative Python | Conversational |
| Code Required | No | Yes | No |
| Use Case | Fixed dashboards | Custom apps | Ad-hoc exploration |
| Flexibility | Medium | High | High |
| Development Speed | Very fast | Medium | Very fast |
Use Lumen when:
- You want a fixed dashboard defined declaratively in YAML, with no code and very fast turnaround

Use Panel when:
- You need a fully custom application and are willing to write imperative Python

Use Lumen AI when:
- You want conversational, ad-hoc data exploration rather than a fixed dashboard

Installation:
pip install lumen
File: dashboard.yaml
sources:
data:
type: file
tables:
penguins: https://datasets.holoviz.org/penguins/v1/penguins.csv
pipelines:
main:
source: data
table: penguins
filters:
- type: widget
field: species
layouts:
- title: Penguin Explorer
views:
- type: hvplot
pipeline: main
kind: scatter
x: bill_length_mm
y: bill_depth_mm
by: species
title: Bill Dimensions
Launch:
lumen serve dashboard.yaml --show
Data sources provide tables for your dashboard.
Supported sources include:
- Local and remote files (CSV and other tabular formats)
- SQL databases (for example the `postgres` source used later in this guide)
- Web APIs and other remote services
Quick example:
sources:
mydata:
type: file
tables:
sales: ./data/sales.csv
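A single file source can also mix local paths and remote URLs, since table entries accept both (combining the quickstart URL with a local file):

```yaml
sources:
  mixed:
    type: file
    tables:
      sales: ./data/sales.csv   # local file
      penguins: https://datasets.holoviz.org/penguins/v1/penguins.csv  # remote URL
```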
See: Data Sources Reference for comprehensive source configuration.
Pipelines define data flows: Source → Filters → Transforms → Views
Basic pipeline:
pipelines:
sales_pipeline:
source: mydata
table: sales
filters:
- type: widget
field: region
transforms:
- type: aggregate
by: ['category']
aggregate:
total_sales: {revenue: sum}
Components:
- `source`: which data source to read from
- `table`: which table within that source
- `filters`: interactive controls that subset rows before the transforms run
- `transforms`: processing steps applied to the filtered data
Add interactive controls:
filters:
# Dropdown select
- type: widget
field: category
# Multi-select
- type: widget
field: region
multiple: true
# Date range
- type: widget
field: date
widget: date_range_slider
# Numeric slider
- type: param
parameter: min_revenue
widget_type: FloatSlider
start: 0
end: 100000
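Not every filter needs a visible control. To pin a field to a fixed value, here is a minimal sketch assuming Lumen's `constant` filter type with `field` and `value` parameters:

```yaml
filters:
  # Restrict the pipeline to a single year without rendering a widget
  - type: constant
    field: year
    value: 2024
```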
Process data in pipelines:
Common transforms:
- `columns`: Select specific columns
- `query`: Filter rows with a pandas query
- `aggregate`: Group and aggregate
- `sort`: Sort data
- `sql`: Custom SQL queries

Example:
transforms:
- type: columns
columns: ['date', 'region', 'revenue']
- type: query
query: "revenue > 1000"
- type: aggregate
by: ['region']
aggregate:
total: {revenue: sum}
avg: {revenue: mean}
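The `sort` transform listed above follows the same shape; a minimal sketch, with `by` and `ascending` assumed by analogy with the other transforms:

```yaml
transforms:
  # Order the aggregated rows by total, largest first
  - type: sort
    by: ['total']
    ascending: false
```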
See: Data Transforms Reference for all transform types.
Visualize data:
View types:
- `hvplot`: Interactive plots (line, scatter, bar, etc.)
- `table`: Data tables
- `indicator`: KPI metrics
- `vega`: Vega-Lite specifications
- `altair`: Altair charts
- `plotly`: Plotly charts

Example:
views:
- type: hvplot
pipeline: main
kind: line
x: date
y: revenue
by: category
- type: indicator
pipeline: main
field: total_revenue
title: Total Sales
format: '${value:,.0f}'
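A raw-data table view works the same way; `page_size` controls pagination, as in the explorer example further down:

```yaml
views:
  # Paginated table of the filtered data, alongside the charts
  - type: table
    pipeline: main
    page_size: 20
```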
See: Views Reference for all view types and options.
Arrange views on the page:
layouts:
- title: Overview
    layout: [[0, 1, 2], [3], [4, 5]]  # Grid positions by view index
    views:
      - type: indicator
        # View 0 config...
      - type: indicator
        # View 1 config...
      - type: hvplot
        # View 2 config...
      # Views 3-5 follow the same pattern
Layout types:
- Automatic: omit `layout` and views flow in order
- Explicit grid: nested lists of view indices, e.g. `[[0, 1], [2, 3]]`
- Dashboard-level `column`, `grid`, or `tabs` via `config.layout` (see Configuration below)

See: Layouts Reference for advanced layout patterns.
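A minimal sketch combining these options; the tab behavior is assumed from the `config.layout` setting shown under Configuration:

```yaml
config:
  layout: tabs              # each layout below becomes its own tab (assumed)
layouts:
  - title: Overview
    layout: [[0, 1], [2]]   # explicit grid: two indicators on top, chart below
    views:
      - type: indicator
        pipeline: main
        field: total_revenue
      - type: indicator
        pipeline: main
        field: total_orders
      - type: hvplot
        pipeline: main
        kind: line
        x: date
        y: revenue
  - title: Details
    views:                  # no layout given, so views flow automatically
      - type: table
        pipeline: main
        page_size: 20
```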
Example: KPI dashboard:

sources:
metrics:
type: file
tables:
data: ./metrics.csv
pipelines:
kpis:
source: metrics
table: data
transforms:
- type: aggregate
aggregate:
total_revenue: {revenue: sum}
total_orders: {orders: sum}
avg_order_value: {revenue: mean}
layouts:
- title: KPIs
layout: [[0, 1, 2]]
views:
- type: indicator
pipeline: kpis
field: total_revenue
format: '${value:,.0f}'
- type: indicator
pipeline: kpis
field: total_orders
format: '{value:,.0f}'
- type: indicator
pipeline: kpis
field: avg_order_value
format: '${value:.2f}'
Example: interactive filter explorer:

pipelines:
explorer:
source: mydata
table: sales
filters:
- type: widget
field: region
label: Region
- type: widget
field: category
label: Category
multiple: true
- type: widget
field: date
widget: date_range_slider
layouts:
  - title: Explorer
    views:
      - type: hvplot
        pipeline: explorer
        kind: scatter
        x: price
        y: quantity
        by: category
      - type: table
        pipeline: explorer
        page_size: 20
Example: multiple data sources:

sources:
sales_db:
type: postgres
connection_string: postgresql://localhost/sales
tables: [orders, customers]
inventory_file:
type: file
tables:
stock: ./inventory.csv
pipelines:
sales_pipeline:
source: sales_db
table: orders
inventory_pipeline:
source: inventory_file
table: stock
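Each pipeline can then drive its own views on a shared page, following the layout pattern from earlier:

```yaml
layouts:
  - title: Sales & Inventory
    views:
      - type: table
        pipeline: sales_pipeline
        page_size: 20
      - type: table
        pipeline: inventory_pipeline
        page_size: 20
```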
Example: cross-filtering between views:

pipelines:
main:
source: data
table: sales
filters:
- type: widget
field: region
layouts:
- title: Analysis
views:
# Clicking bar filters other views
- type: hvplot
pipeline: main
kind: bar
x: category
y: revenue
selection_group: category_filter
# Responds to selection above
- type: hvplot
pipeline: main
kind: scatter
x: price
y: quantity
selection_group: category_filter
Example: custom SQL transform:

transforms:
- type: sql
query: |
SELECT
region,
category,
SUM(revenue) as total_revenue,
COUNT(*) as order_count,
AVG(revenue) as avg_order_value
FROM table
WHERE date >= '2024-01-01'
GROUP BY region, category
HAVING total_revenue > 10000
ORDER BY total_revenue DESC
While Lumen is designed for YAML, you can also use Python:
from lumen.sources import FileSource
from lumen.pipeline import Pipeline
from lumen.views import hvPlotView
from lumen.dashboard import Dashboard
# Create source
source = FileSource(tables={'sales': './data/sales.csv'})
# Create pipeline
pipeline = Pipeline(source=source, table='sales')
# Create view
view = hvPlotView(
pipeline=pipeline,
kind='scatter',
x='price',
y='quantity'
)
# Create dashboard
dashboard = Dashboard(
pipelines={'main': pipeline},
layouts=[view]
)
# Serve
dashboard.servable()
See: Python API Reference for detailed API usage.
Dashboard configuration:

config:
title: My Dashboard
theme: dark # or 'default', 'material'
sizing_mode: stretch_width
logo: ./logo.png
favicon: ./favicon.ico
layout: column # or 'grid', 'tabs'
Custom theming:

config:
theme: material
theme_json:
palette:
primary: '#00aa41'
secondary: '#616161'
# Serve with auth
lumen serve dashboard.yaml \
--oauth-provider=generic \
--oauth-key=${OAUTH_KEY} \
--oauth-secret=${OAUTH_SECRET}
# Local with auto-reload
lumen serve dashboard.yaml --autoreload --show
# Specific port
lumen serve dashboard.yaml --port 5007
# Production server
panel serve dashboard.yaml \
--port 80 \
--num-procs 4 \
--allow-websocket-origin=analytics.company.com
Dockerfile:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY dashboard.yaml ./
COPY data/ ./data/
CMD ["lumen", "serve", "dashboard.yaml", "--port", "5006", "--address", "0.0.0.0"]
See: Deployment Guide for production deployment best practices.
# ✅ Good: Descriptive names
sources:
sales_database:
type: postgres
tables: [orders, customers]
inventory_files:
type: file
tables:
stock: ./inventory.csv
# ❌ Bad: Generic names
sources:
db1:
type: postgres
file1:
type: file
# Define reusable pipelines
pipelines:
base_sales:
source: data
table: sales
filters:
- type: widget
field: region
summary_sales:
pipeline: base_sales # Extends base_sales
transforms:
- type: aggregate
by: ['category']
aggregate:
total: {revenue: sum}
# Limit data size for large tables
sources:
bigdata:
type: postgres
tables:
events: "SELECT * FROM events WHERE date >= '2024-01-01' LIMIT 100000"
# Provide clear labels and formatting
filters:
- type: widget
field: region
label: "Sales Region" # Clear label
views:
- type: indicator
field: revenue
title: "Total Revenue"
format: '${value:,.0f}' # Formatted display
# Check YAML syntax
python -c "import yaml; yaml.safe_load(open('dashboard.yaml'))"
# Run with debug logging
lumen serve dashboard.yaml --log-level=debug
See: Troubleshooting Guide for common issues.
Lumen enables rapid dashboard development through declarative YAML specifications.
Strengths:
- No code required: dashboards are plain YAML specifications, easy to version and review
- Very fast development with built-in sources, filters, transforms, views, and layouts

Ideal for:
- Standard data exploration and KPI dashboards
- Connecting files, databases, and APIs with interactive filtering and cross-filtering
- Rapid prototyping

Consider alternatives when:
- You need a highly customized application (use Panel)
- You want conversational, ad-hoc exploration (use Lumen AI)
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.