Debugs AI traces, finds exceptions, analyzes sessions, and manages prompts/datasets via the Langfuse MCP server. Use it for investigating errors, latency, and prompt versions in AI pipelines.
`npx claudepluginhub avivsinai/skills-marketplace --plugin langfuse`

This skill uses the workspace's default tool permissions.
Interact with Langfuse via the CLI to query and modify traces, prompts, datasets, scores, and sessions; access documentation on SDK usage, integrations, and features.
Provides expertise in Langfuse, the open-source LLM observability platform: tracing, prompt management, evaluations, and datasets. Integrates with LangChain, LlamaIndex, and OpenAI for debugging and monitoring production LLM apps.
Debug your AI systems through Langfuse observability.
Triggers: langfuse, traces, debug AI, find exceptions, set up langfuse, what went wrong, why is it slow, datasets, evaluation sets
Step 1: Get credentials from https://cloud.langfuse.com → Settings → API Keys
If self-hosted, use your instance URL for LANGFUSE_HOST and create keys there.
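Optionally, sanity-check the keys before wiring up the MCP server. A minimal sketch using the Langfuse Python SDK (assumes `pip install langfuse`; not part of the MCP setup itself):

```python
from langfuse import Langfuse

# Same credentials you will pass to the MCP server in Step 2
langfuse = Langfuse(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://cloud.langfuse.com",  # or your self-hosted instance URL
)

print(langfuse.auth_check())  # True if the keys and host are valid
```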
Step 2: Install MCP (pick one):
# Claude Code (project-scoped, shared via .mcp.json)
claude mcp add \
--scope project \
--env LANGFUSE_PUBLIC_KEY=pk-... \
--env LANGFUSE_SECRET_KEY=sk-... \
--env LANGFUSE_HOST=https://cloud.langfuse.com \
langfuse -- uvx --python 3.11 langfuse-mcp
# Codex CLI (user-scoped, stored in ~/.codex/config.toml)
codex mcp add langfuse \
--env LANGFUSE_PUBLIC_KEY=pk-... \
--env LANGFUSE_SECRET_KEY=sk-... \
--env LANGFUSE_HOST=https://cloud.langfuse.com \
-- uvx --python 3.11 langfuse-mcp
Step 3: Restart CLI, verify with /mcp (Claude) or codex mcp list (Codex)
Step 4: Test: fetch_traces(age=60)
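If fetch_traces comes back empty, the project may simply have no traces yet. As a hedged sketch (assuming the Langfuse OpenAI drop-in integration, the same LANGFUSE_* environment variables, and an OPENAI_API_KEY), one quick way to produce a trace is:

```python
# pip install langfuse openai
from langfuse.openai import openai  # drop-in wrapper that records each completion as a trace

# Requires OPENAI_API_KEY in the environment as well
openai.chat.completions.create(
    model="gpt-4o-mini",  # any model your OpenAI key can access
    messages=[{"role": "user", "content": "Say hello"}],
)
# Re-run fetch_traces(age=60) afterwards; the call above should appear as a generation.
```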
For safer observability without risk of modifying prompts or datasets, enable read-only mode:
# CLI flag
langfuse-mcp --read-only
# Or environment variable
LANGFUSE_MCP_READ_ONLY=true
This disables write tools: create_text_prompt, create_chat_prompt, update_prompt_labels, create_dataset, create_dataset_item, delete_dataset_item.
If you want MCP clients to default to writing full payloads to files when they omit output_mode, configure:
langfuse-mcp --default-output-mode full_json_file
# Or via environment variable
LANGFUSE_MCP_DEFAULT_OUTPUT_MODE=full_json_file
For manual .mcp.json setup or troubleshooting, see references/setup.md.
find_exceptions(age=1440, group_by="file")
→ Shows error counts by file. Pick the worst offender.
find_exceptions_in_file(filepath="src/ai/chat.py", age=1440)
→ Lists specific exceptions. Grab a trace_id.
get_exception_details(trace_id="...")
→ Full stacktrace and context.
fetch_traces(age=60, user_id="...")
→ Find the trace. Note the trace_id.
If you don't know the user_id, start with:
fetch_traces(age=60)
fetch_trace(trace_id="...", include_observations=true)
→ See all LLM calls in the trace.
fetch_observation(observation_id="...")
→ Inspect a specific generation's input/output.
fetch_observations(age=60, type="GENERATION")
→ Find recent LLM calls. Look for high latency.
fetch_observation(observation_id="...")
→ Check token counts, model, timing.
get_user_sessions(user_id="...", age=1440)
→ List their sessions.
get_session_details(session_id="...")
→ See all traces in the session.
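get_user_sessions and get_session_details only return results if your application attaches user_id and session_id to its traces. A minimal sketch, assuming the v2-style Python SDK decorator API (langfuse.decorators); the handle_message function is illustrative:

```python
from langfuse.decorators import langfuse_context, observe

@observe()
def handle_message(user_id: str, session_id: str, message: str) -> str:
    # Attach ids so the trace shows up under get_user_sessions / get_session_details
    langfuse_context.update_current_trace(user_id=user_id, session_id=session_id)
    return "..."  # your actual LLM logic goes here
```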
list_datasets()
→ See all datasets.
get_dataset(name="evaluation-set-v1")
→ Get dataset details.
list_dataset_items(dataset_name="evaluation-set-v1", page=1, limit=10)
→ Browse items in the dataset.
create_dataset(name="qa-test-cases", description="QA evaluation set")
→ Create a new dataset.
create_dataset_item(
dataset_name="qa-test-cases",
input={"question": "What is 2+2?"},
expected_output={"answer": "4"}
)
→ Add test cases.
create_dataset_item(
dataset_name="qa-test-cases",
item_id="item_123",
input={"question": "What is 3+3?"},
expected_output={"answer": "6"}
)
→ Upsert: updates existing item by id or creates if missing.
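Once items exist, the dataset can also be consumed outside the MCP tools, e.g. for an offline check. A minimal sketch with the Langfuse Python SDK; my_app is a hypothetical stand-in for the application under test, and linking runs back to Langfuse is omitted here:

```python
from langfuse import Langfuse

def my_app(question: str) -> dict:
    """Hypothetical application under test."""
    return {"answer": "4"}

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST
dataset = langfuse.get_dataset("qa-test-cases")

for item in dataset.items:
    result = my_app(item.input["question"])
    status = "PASS" if result == item.expected_output else "FAIL"
    print(item.id, status)
```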
list_prompts()
→ See all prompts with labels.
get_prompt(name="...", label="production")
→ Fetch current production version.
create_text_prompt(name="...", prompt="...", labels=["staging"])
→ Create new version in staging.
update_prompt_labels(name="...", version=N, labels=["production"])
→ Promote to production. (Rollback = re-apply label to older version)
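On the application side, the production label is what your code resolves at runtime, so promoting or rolling back a label takes effect without a deploy. A minimal sketch with the Langfuse Python SDK; the prompt name and the {{product}} variable are hypothetical:

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_* environment variables
prompt = langfuse.get_prompt("qa-system-prompt", label="production")

# compile() substitutes {{variables}} defined in the prompt template
system_message = prompt.compile(product="Langfuse")
```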
| Task | Tool |
|---|---|
| List traces | fetch_traces(age=N) |
| Get trace details | fetch_trace(trace_id="...", include_observations=true) |
| List LLM calls | fetch_observations(age=N, type="GENERATION") |
| Get observation | fetch_observation(observation_id="...") |
| Error count | get_error_count(age=N) |
| Find exceptions | find_exceptions(age=N, group_by="file") |
| List sessions | fetch_sessions(age=N) |
| User sessions | get_user_sessions(user_id="...", age=N) |
| List prompts | list_prompts() |
| Get prompt | get_prompt(name="...", label="production") |
| List datasets | list_datasets() |
| Get dataset | get_dataset(name="...") |
| List dataset items | list_dataset_items(dataset_name="...", limit=N) |
| Create/update dataset item | create_dataset_item(dataset_name="...", item_id="...") |
age = minutes to look back (max 10080 = 7 days)
- Required environment variables: LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST
- Test with fetch_traces(age=60) — if this fails, the issue is MCP, not the skill; see references/setup.md for detailed troubleshooting
- If results are missing, widen the age parameter (default lookback may be too short) and confirm LANGFUSE_HOST points to the right instance (cloud vs self-hosted)
- references/tool-reference.md — Full parameter docs, filter semantics, response schemas
- references/setup.md — Manual setup, troubleshooting, advanced configuration