Enables AI-powered natural language queries to databases and CSVs, generating SQL, interactive visualizations, and insights via extensible LLM agents in the Lumen framework.
```bash
npx claudepluginhub uw-ssec/rse-plugins --plugin holoviz-visualization
```
Lumen AI is an open-source, agent-based framework for conversational data exploration. Users ask questions in plain English and receive visualizations, SQL queries, and insights automatically generated by large language models.
| Feature | Lumen AI | Lumen Dashboards |
|---|---|---|
| Interface | Conversational | Declarative YAML |
| Use Case | Ad-hoc exploration | Fixed dashboards |
| Users | Non-technical | Developers |
**Use Lumen AI when:** users need ad-hoc exploration, questions vary unpredictably, or you want to enable self-service analytics.

**Use Lumen Dashboards when:** the dashboard structure is fixed or you want to avoid LLM costs.
```bash
pip install "lumen[ai]"
pip install openai  # or: pip install anthropic (for Claude)

export OPENAI_API_KEY="sk-..."

lumen-ai serve data/sales.csv

# Or with a database
lumen-ai serve "postgresql://user:pass@localhost/mydb"
```
```python
import lumen.ai as lmai
import panel as pn
from lumen.sources.duckdb import DuckDBSource

pn.extension()

# Configure the LLM
lmai.llm.llm_type = "anthropic"
lmai.llm.model = "claude-3-5-sonnet-20241022"

# Load data
source = DuckDBSource(tables=["./data/sales.csv"])

# Create the UI
ui = lmai.ExplorerUI(source=source, title="Sales Analytics AI")
ui.servable()
```
Agents are specialized components that each handle a specific task, such as listing tables (`TableListAgent`), generating SQL (`SQLAgent`), plotting (`hvPlotAgent`), and conversation (`ChatAgent`).

See: Built-in Agents Reference
| Use Case | Provider | Model |
|---|---|---|
| Production | OpenAI | gpt-4o |
| Complex SQL | Anthropic | claude-3-5-sonnet |
| High volume | OpenAI | gpt-4o-mini |
| Sensitive data | Ollama | llama3.1 |
See: LLM Provider Configuration
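As a quick illustration (this helper is not part of the Lumen API), the recommendations in the table above can be encoded as a small lookup:

```python
# Illustrative lookup of the provider/model recommendations table above.
# The use-case keys are made up for this sketch; the (provider, model)
# pairs come from the table.
RECOMMENDED = {
    "production": ("openai", "gpt-4o"),
    "complex_sql": ("anthropic", "claude-3-5-sonnet"),
    "high_volume": ("openai", "gpt-4o-mini"),
    "sensitive_data": ("ollama", "llama3.1"),
}

def pick_model(use_case: str) -> tuple[str, str]:
    """Return (provider, model) for a use case, defaulting to production."""
    return RECOMMENDED.get(use_case, RECOMMENDED["production"])
```

The chosen pair can then be assigned to `lmai.llm.llm_type` and `lmai.llm.model` as in the examples above.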
Agents share a memory system for context persistence. Extend capabilities with tools:
```python
import lumen.ai as lmai
from lumen.sources.duckdb import DuckDBSource

lmai.llm.llm_type = "openai"
lmai.llm.model = "gpt-4o"

source = DuckDBSource(tables=["sales.csv"])
ui = lmai.ExplorerUI(source=source, title="Business Analytics")
ui.servable()
```
```python
source = DuckDBSource(
    tables=["sales.csv", "products.parquet"],
    documents=["./docs/data_dictionary.pdf", "./docs/business_rules.md"],
)
ui = lmai.ExplorerUI(source=source, tools=[lmai.tools.DocumentLookup])
```
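The shared memory that agents use can be pictured as a key-value store: each agent reads the keys it `requires` and writes the keys it `provides`, so later agents build on earlier results. A purely conceptual sketch (not Lumen's actual implementation):

```python
# Conceptual sketch of agent shared memory (illustrative only).
class SharedMemory(dict):
    def require(self, key):
        """Fetch a key an agent requires, failing loudly if unmet."""
        if key not in self:
            raise KeyError(f"agent dependency not yet satisfied: {key}")
        return self[key]

memory = SharedMemory()
memory["current_source"] = "sales.csv"            # written by a table agent
memory["sql"] = "SELECT region, SUM(amount) ..."  # written by a SQL agent

# A downstream plotting agent can now consume both:
table = memory.require("current_source")
query = memory.require("sql")
```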
```python
import param

import lumen.ai as lmai
from lumen.ai.agents import Agent

class SentimentAgent(Agent):
    """Analyze sentiment in text data."""

    requires = param.List(default=["current_source"])
    provides = param.List(default=["sentiment_analysis"])
    purpose = """
    Analyzes sentiment in text columns.
    Keywords: sentiment, emotion, positive, negative, tone
    """

    async def respond(self, query: str):
        source = self.memory["current_source"]
        yield "Sentiment analysis results..."

ui = lmai.ExplorerUI(source=source, agents=[SentimentAgent, lmai.agents.ChatAgent])
```
See: Custom Agents Guide
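Conceptually, the `purpose` string helps the coordinator route a query to the right agent. A toy keyword-matching sketch (hypothetical; Lumen's real selection logic may differ and is not reproduced here):

```python
# Toy agent routing by keyword overlap with each agent's purpose string.
# Agent names and keywords are taken from the examples in this document.
def select_agent(query: str, agents: dict[str, str]) -> str:
    """Return the agent whose purpose keywords best match the query."""
    q = query.lower()
    scores = {
        name: sum(kw in q for kw in purpose.lower().split())
        for name, purpose in agents.items()
    }
    return max(scores, key=scores.get)

agents = {
    "SentimentAgent": "sentiment emotion positive negative tone",
    "SQLAgent": "sql query table join aggregate",
}
select_agent("What is the overall sentiment of reviews?", agents)
# -> "SentimentAgent"
```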
```python
import param

import lumen.ai as lmai
from lumen.ai.analyses import Analysis
from lumen.pipeline import Pipeline

class CohortAnalysis(Analysis):
    """Customer cohort retention analysis."""

    columns = param.List(default=["customer_id", "signup_date", "purchase_date"])

    def __call__(self, pipeline: Pipeline):
        df = pipeline.data
        # ... calculate cohorts ...
        return results

ui = lmai.ExplorerUI(
    source=source,
    agents=[lmai.agents.AnalysisAgent(analyses=[CohortAnalysis])],
)
```
```python
source = DuckDBSource(
    tables={
        "sales": "./data/sales.parquet",
        "customers": "./data/customers.csv",
        "products": "https://data.company.com/products.csv",
    }
)
ui = lmai.ExplorerUI(source=source)
```
```python
agents = [
    lmai.agents.TableListAgent,
    lmai.agents.SQLAgent,
    lmai.agents.hvPlotAgent,
]
ui = lmai.ExplorerUI(source=source, agents=agents)
```
```python
# DependencyResolver (default): recursively resolves agent dependencies
ui = lmai.ExplorerUI(source=source, coordinator="dependency")

# Planner: creates an execution plan upfront
ui = lmai.ExplorerUI(source=source, coordinator="planner")
```
```python
ui = lmai.ExplorerUI(
    source=source,
    title="Custom Analytics AI",
    accent_color="#00aa41",
    suggestions=["Show me revenue trends", "What are the top products?"],
)
```
```python
import os

# Read API keys from environment variables; never hardcode secrets
lmai.llm.api_key = os.getenv("OPENAI_API_KEY")
```
```python
# Limit table sizes for exploration
source = DuckDBSource(
    tables=["large_table.parquet"],
    table_kwargs={"large_table": {"nrows": 100000}},
)
```
```python
# Provide example queries to guide users
ui = lmai.ExplorerUI(
    source=source,
    suggestions=["Show me revenue trends", "Top 10 products by sales"],
)
```
```bash
lumen-ai serve app.py --autoreload --show
```
```bash
panel serve app.py \
  --port 80 \
  --num-procs 4 \
  --allow-websocket-origin=analytics.company.com
```
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py ./
# Copy the directory explicitly so files land in /app/data, not /app
COPY data/ ./data/
CMD ["panel", "serve", "app.py", "--port", "5006", "--address", "0.0.0.0"]
```
See: Deployment Guide
```bash
# Test the API connection
curl https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"
```

```python
# Debug which agent was selected
print(ui.agent_manager.last_selected_agent)

# View agent purposes
for agent in ui.agents:
    print(f"{agent.__class__.__name__}: {agent.purpose}")
```
Lumen AI transforms data exploration through natural-language interfaces powered by LLMs.

**Strengths:** no SQL required for users, flexible LLM support, extensible architecture, privacy-focused options.

**Ideal for:** ad-hoc exploration, non-technical users, rapid insights, self-service analytics.

**Consider alternatives when:** the dashboard structure is fixed, queries do not vary, or LLM costs are a concern; in those cases Lumen Dashboards may be a better fit.