Master AI-powered natural language data exploration with Lumen AI. Use this skill when building conversational data analysis interfaces, enabling natural language queries to databases, creating custom AI agents for domain-specific analytics, implementing RAG with document context, or deploying self-service analytics with LLM-generated SQL and visualizations.
```
/plugin marketplace add uw-ssec/rse-agents
/plugin install holoviz-visualization@rse-agents
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Lumen AI is an open-source, agent-based framework for conversational data exploration. Users ask questions in plain English and receive visualizations, SQL queries, and insights automatically generated by large language models.
Lumen AI translates natural language queries into:
- SQL queries against your data
- Interactive visualizations
- Written insights and summaries
| Feature | Lumen AI | Lumen Dashboards |
|---|---|---|
| Interface | Conversational, natural language | Declarative YAML |
| Use Case | Ad-hoc exploration, varying questions | Fixed dashboards, repeated views |
| Users | Non-technical users, self-service | Developers, dashboard builders |
| Cost | LLM API costs | No LLM costs |
| Flexibility | High - generates any query | Fixed - predefined views |
Use Lumen AI when:
- Users ask ad-hoc, varying questions in natural language
- You want self-service analytics for non-technical users
- Flexibility matters more than per-query LLM cost

Use Lumen Dashboards when:
- You need fixed, repeated views defined declaratively in YAML
- You want to avoid LLM API costs
- Developers are building dashboards with predefined views
```bash
# Install Lumen with AI support
pip install lumen[ai]

# Install LLM provider (choose one or more)
pip install openai     # OpenAI
pip install anthropic  # Anthropic Claude

# Set API key
export OPENAI_API_KEY="sk-..."

# Launch with dataset
lumen-ai serve data/sales.csv

# Or with database
lumen-ai serve "postgresql://user:pass@localhost/mydb"
```
```python
import lumen.ai as lmai
import panel as pn
from lumen.sources.duckdb import DuckDBSource

pn.extension()

# Configure LLM
lmai.llm.llm_type = "anthropic"
lmai.llm.model = "claude-3-5-sonnet-20241022"

# Load data
source = DuckDBSource(
    tables=["./data/sales.csv", "./data/customers.csv"]
)

# Create UI
ui = lmai.ExplorerUI(
    source=source,
    title="Sales Analytics AI"
)
ui.servable()
```
Once running, try queries like:
- "Show me revenue trends over time"
- "What are the top products by sales?"
- "Create a customer segmentation analysis"
Lumen AI is built from specialized agents that each handle a specific task:
- `ChatAgent`: general conversation about the data
- `TableListAgent`: lists the available tables
- `SQLAgent`: writes and executes SQL queries
- `hvPlotAgent` / `VegaLiteAgent`: generate visualizations
- `AnalysisAgent`: runs registered custom analyses
See: Built-in Agents Reference for complete agent documentation.
Lumen AI works with multiple LLM providers:

Cloud Providers:
- OpenAI (e.g., gpt-4o, gpt-4o-mini)
- Anthropic (e.g., claude-3-5-sonnet)

Local Models:
- Ollama (e.g., llama3.1) for keeping sensitive data on your own hardware
See: LLM Provider Configuration for setup details and provider comparison.
Agents share a memory system: each agent declares the context keys it `requires` and `provides`, and resolved values (for example `current_source`) are written to a shared memory that downstream agents can read.
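Conceptually, the shared memory behaves like a dictionary keyed by context names: one agent writes a key it `provides`, and a later agent that `requires` that key reads the same entry. A minimal stdlib sketch of the idea (a conceptual illustration, not Lumen's internal API):

```python
# Toy illustration of requires/provides memory sharing between agents.
# This is a conceptual sketch, not Lumen's actual implementation.
memory = {}

class TableAgentSketch:
    provides = ["current_source"]

    def respond(self):
        memory["current_source"] = "sales.csv"  # publish context

class SQLAgentSketch:
    requires = ["current_source"]

    def respond(self):
        # Read context published by an earlier agent
        source = memory["current_source"]
        return f"SELECT * FROM '{source}' LIMIT 10"

TableAgentSketch().respond()
print(SQLAgentSketch().respond())
# SELECT * FROM 'sales.csv' LIMIT 10
```

The key point is that the second agent never calls the first directly; the coordinator orders them so the required key exists before it is read.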
Extend agent capabilities with tools, such as `DocumentLookup` for searching attached documents.
See: Custom Tools Guide for building tools.
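In spirit, a tool is a callable whose docstring tells the LLM coordinator when to invoke it. A hypothetical sketch of that shape (the function name and mapping here are illustrative, not Lumen's API):

```python
# Hypothetical tool sketch: a plain callable plus a docstring the LLM
# coordinator can match against the user's question.
def lookup_fiscal_calendar(quarter: str) -> str:
    """Map a fiscal quarter label to its date range.

    Use when the user asks about fiscal quarters or reporting periods.
    """
    calendar = {"Q1": "Feb-Apr", "Q2": "May-Jul",
                "Q3": "Aug-Oct", "Q4": "Nov-Jan"}
    return calendar.get(quarter.upper(), "unknown quarter")

print(lookup_fiscal_calendar("q2"))
# May-Jul
```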
```python
import lumen.ai as lmai
from lumen.sources.duckdb import DuckDBSource

# Configure LLM
lmai.llm.llm_type = "openai"
lmai.llm.model = "gpt-4o"

# Load data
source = DuckDBSource(tables=["sales.csv"])

# Create UI
ui = lmai.ExplorerUI(
    source=source,
    title="Business Analytics"
)
ui.servable()
```
```python
source = DuckDBSource(
    tables=["sales.csv", "products.parquet"],
    documents=[
        "./docs/data_dictionary.pdf",
        "./docs/business_rules.md"
    ]
)

ui = lmai.ExplorerUI(
    source=source,
    tools=[lmai.tools.DocumentLookup]
)
```
Agents will automatically search documents for context when needed.
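Under the hood this is a retrieval step: the user's question is matched against document content and the best hits are added to the LLM's context. A toy keyword-overlap sketch of the idea (Lumen's real `DocumentLookup` is more sophisticated; document names and contents here are made up):

```python
# Toy retrieval sketch: score documents by word overlap with the query
# and return the best match as extra context for the LLM.
docs = {
    "data_dictionary.md": "revenue means net of refunds and discounts",
    "business_rules.md": "churn means no purchase within 90 days",
}

def lookup(query: str) -> str:
    q_words = set(query.lower().split())
    best = max(docs, key=lambda name: len(q_words & set(docs[name].split())))
    return docs[best]

print(lookup("define churn"))
# churn means no purchase within 90 days
```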
```python
from lumen.ai.agents import Agent
import param

class SentimentAgent(Agent):
    """Analyze sentiment in text data."""

    requires = param.List(default=["current_source"])
    provides = param.List(default=["sentiment_analysis"])

    purpose = """
    Analyzes sentiment in text columns.
    Use when user asks about sentiment, emotions, or tone.
    Keywords: sentiment, emotion, positive, negative, tone
    """

    async def respond(self, query: str):
        # Agent implementation
        source = self.memory["current_source"]
        # ... analyze sentiment ...
        yield "Sentiment analysis results..."

# Use custom agent
ui = lmai.ExplorerUI(
    source=source,
    agents=[SentimentAgent, lmai.agents.ChatAgent]
)
```
See: Custom Agents Guide for detailed development guide.
```python
from lumen.ai.analyses import Analysis
from lumen.pipeline import Pipeline
import param

class CohortAnalysis(Analysis):
    """Customer cohort retention analysis."""

    columns = param.List(default=[
        'customer_id', 'signup_date', 'purchase_date'
    ])

    def __call__(self, pipeline: Pipeline):
        # Cohort analysis logic
        df = pipeline.data
        # ... calculate cohorts ...
        return results

# Register analysis
ui = lmai.ExplorerUI(
    source=source,
    agents=[
        lmai.agents.AnalysisAgent(analyses=[CohortAnalysis])
    ]
)
```
See: Custom Analyses Guide for examples.
```python
from lumen.sources.duckdb import DuckDBSource

source = DuckDBSource(
    tables={
        "sales": "./data/sales.parquet",
        "customers": "./data/customers.csv",
        "products": "https://data.company.com/products.csv"
    }
)

ui = lmai.ExplorerUI(source=source)
```
Quick reference for choosing LLM:
| Use Case | Provider | Model | Why |
|---|---|---|---|
| Production analytics | OpenAI | gpt-4o | Best balance |
| Complex SQL | Anthropic | claude-3-5-sonnet | Superior reasoning |
| High volume | OpenAI | gpt-4o-mini | Cost-effective |
| Sensitive data | Ollama | llama3.1 | Local only |
| Development | OpenAI | gpt-4o-mini | Fast, cheap |
See: LLM Provider Configuration for complete setup.
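For the sensitive-data row in the table above, the same configuration pattern shown earlier in this document would point at a local Ollama model. This is a sketch following that pattern; the `"ollama"` value is an assumption, so verify the exact attribute values against the LLM Provider Configuration guide:

```python
import lumen.ai as lmai

# Assumes the llm_type string "ollama" is accepted, mirroring the
# "openai"/"anthropic" examples above -- check the provider guide.
lmai.llm.llm_type = "ollama"
lmai.llm.model = "llama3.1"
```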
```python
# Use only specific agents
agents = [
    lmai.agents.TableListAgent,
    lmai.agents.SQLAgent,
    lmai.agents.hvPlotAgent,
    # Exclude VegaLiteAgent if not needed
]

ui = lmai.ExplorerUI(source=source, agents=agents)
```
DependencyResolver (default): Recursively resolves agent dependencies

```python
ui = lmai.ExplorerUI(source=source, coordinator="dependency")
```

Planner: Creates an execution plan upfront

```python
ui = lmai.ExplorerUI(source=source, coordinator="planner")
```
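Dependency-style coordination can be pictured as ordering agents so that every `requires` key is produced by some earlier agent's `provides`. A simplified stdlib sketch of that resolution (agent names follow the document; the scheduling logic is illustrative, not Lumen's implementation):

```python
# Illustrative sketch of dependency resolution between agents, based on
# their declared requires/provides keys. Not Lumen's actual code.
agents = {
    "TableListAgent": {"requires": [], "provides": ["current_source"]},
    "SQLAgent": {"requires": ["current_source"], "provides": ["current_table"]},
    "hvPlotAgent": {"requires": ["current_table"], "provides": ["current_plot"]},
}

def resolve(goal_agent, agents):
    """Return an execution order that satisfies each agent's requires."""
    order, satisfied = [], set()

    def visit(name):
        if name in order:
            return
        for req in agents[name]["requires"]:
            if req not in satisfied:
                # Recursively run whichever agent provides the missing key
                provider = next(a for a, spec in agents.items()
                                if req in spec["provides"])
                visit(provider)
        order.append(name)
        satisfied.update(agents[name]["provides"])

    visit(goal_agent)
    return order

print(resolve("hvPlotAgent", agents))
# ['TableListAgent', 'SQLAgent', 'hvPlotAgent']
```

A planner-style coordinator would instead compute this whole order up front before any agent runs.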
```python
ui = lmai.ExplorerUI(
    source=source,
    title="Custom Analytics AI",
    accent_color="#00aa41",
    suggestions=[
        "Show me revenue trends",
        "What are the top products?",
        "Create customer segmentation"
    ]
)
```
```python
import os

# ✅ Good: Environment variables
lmai.llm.api_key = os.getenv("OPENAI_API_KEY")

# ❌ Bad: Hardcoded secrets
lmai.llm.api_key = "sk-..."
```
```python
# Limit table sizes for exploration
source = DuckDBSource(
    tables=["large_table.parquet"],
    table_kwargs={"large_table": {"nrows": 100000}}
)
```
```python
# Provide example queries
ui = lmai.ExplorerUI(
    source=source,
    suggestions=[
        "Show me revenue trends",
        "Top 10 products by sales",
        "Customer segmentation analysis"
    ]
)
```
```bash
lumen-ai serve app.py --autoreload --show
```
```bash
panel serve app.py \
  --port 80 \
  --num-procs 4 \
  --allow-websocket-origin=analytics.company.com
```
```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py data/ ./

CMD ["panel", "serve", "app.py", "--port", "5006", "--address", "0.0.0.0"]
```
See: Deployment Guide for production deployment, Docker, Kubernetes, and security.
```python
# Check API key
import os
print(os.getenv("OPENAI_API_KEY"))
```

```bash
# Test connection
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```
```python
# Debug which agent was selected
print(ui.agent_manager.last_selected_agent)

# View agent purposes
for agent in ui.agents:
    print(f"{agent.__class__.__name__}: {agent.purpose}")
```
See: Troubleshooting Guide for complete troubleshooting reference.
Lumen AI transforms data exploration through natural language interfaces powered by LLMs.
Strengths:
- Natural language interface accessible to non-technical users
- Automatically generates SQL, visualizations, and insights
- Extensible with custom agents, analyses, and tools
- Works with cloud and local LLM providers

Ideal for:
- Ad-hoc data exploration with varying questions
- Self-service analytics for business users

Consider alternatives when:
- You need fixed, repeated views (use Lumen Dashboards)
- LLM API costs or latency are prohibitive