Instruments apps with Logfire observability for traces, logs, and metrics using the Python, JavaScript/TypeScript, and Rust SDKs plus OpenTelemetry. Handles FastAPI, Django, Express, and LLM calls correctly.
Logfire is an observability platform built on OpenTelemetry. It captures traces, logs, and metrics from applications. Logfire has native SDKs for Python, JavaScript/TypeScript, and Rust, plus support for any language via OpenTelemetry.
This skill exists because Claude tends to get a few things subtly wrong with Logfire: the ordering of configure() vs instrument_*() calls, the structured logging syntax, and which extras to install. These details matter because a misconfigured setup silently drops traces.
Identify the project language and instrumentable libraries:
- Python: check pyproject.toml or requirements.txt. Common instrumentable libraries: FastAPI, httpx, asyncpg, SQLAlchemy, psycopg, Redis, Celery, Django, Flask, requests, PydanticAI.
- JavaScript/TypeScript: check package.json. Common frameworks: Express, Next.js, Fastify. Also check for Cloudflare Workers or Deno.
- Rust: check Cargo.toml.

Then follow the language-specific steps below.
Install logfire with extras matching the detected frameworks. Each instrumented library needs its corresponding extra - without it, the instrument_*() call will fail at runtime with a missing dependency error.
uv add 'logfire[fastapi,httpx,asyncpg]'
The full list of available extras: fastapi, starlette, django, flask, httpx, requests, asyncpg, psycopg, psycopg2, sqlalchemy, redis, pymongo, mysql, sqlite3, celery, aiohttp, aws-lambda, system-metrics, litellm, dspy, google-genai.
This is where ordering matters. logfire.configure() initializes the SDK and must come before everything else. The instrument_*() calls register hooks into each library. If you call instrument_*() before configure(), the hooks register but traces go nowhere.
import logfire
# 1. Configure first - always
logfire.configure()
# 2. Instrument libraries - after configure, before app starts
logfire.instrument_fastapi(app)
logfire.instrument_httpx()
logfire.instrument_asyncpg()
Placement rules:
- logfire.configure() goes in the application entry point (main.py, or the module that creates the app).
- instrument_*() calls go right after configure().
- Web framework instrumentors (instrument_fastapi, instrument_flask, instrument_django) need the app instance as an argument. HTTP client and database instrumentors (instrument_httpx, instrument_asyncpg) are global and take no arguments.
- Under a pre-forking server such as Gunicorn, call logfire.configure() inside the post_fork hook, not at module level - each worker is a separate process.

Replace print() and logging.*() calls with Logfire's structured logging. The key pattern: use {key} placeholders with keyword arguments, never f-strings.
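The pre-fork rule can be sketched as a minimal gunicorn.conf.py, assuming Gunicorn's standard post_fork server hook:

```python
# gunicorn.conf.py - sketch of per-worker Logfire setup
import logfire

def post_fork(server, worker):
    # Gunicorn forks workers after importing this module, so configure
    # inside the hook: each worker process needs its own SDK instance.
    logfire.configure()
```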
# Correct - each {key} becomes a searchable attribute in the Logfire UI
logfire.info("Created user {user_id}", user_id=uid)
logfire.error("Payment failed {amount} {currency}", amount=100, currency="USD")
# Wrong - creates a flat string, nothing is searchable
logfire.info(f"Created user {uid}")
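To see why the template form matters, here is a pure-Python illustration (no Logfire involved; structured_log is a made-up stand-in, not the Logfire API). Keeping the template and attributes separate is what lets the backend index each {key} as a searchable field and group logs by template; an f-string collapses everything into one opaque string.

```python
# Made-up stand-in for a structured logger - not the Logfire API
def structured_log(template: str, **attributes) -> dict:
    return {
        "message": template.format(**attributes),  # human-readable rendering
        "template": template,                      # groups all logs of this shape
        "attributes": attributes,                  # each key is individually searchable
    }

record = structured_log("Created user {user_id}", user_id=42)
# With an f-string, only "Created user 42" would survive - no template, no attributes.
```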
For grouping related operations and measuring duration, use spans:
with logfire.span("Processing order {order_id}", order_id=order_id):
    items = await fetch_items(order_id)
    total = calculate_total(items)
    logfire.info("Calculated total {total}", total=total)
For exceptions, use logfire.exception() which automatically captures the traceback:
try:
    await process_order(order_id)
except Exception:
    logfire.exception("Failed to process order {order_id}", order_id=order_id)
    raise
Logfire auto-instruments AI libraries to capture LLM calls, token usage, tool invocations, and agent runs.
uv add 'logfire[pydantic-ai]'
# or: uv add 'logfire[openai]' / uv add 'logfire[anthropic]'
Available AI extras: pydantic-ai, openai, anthropic, litellm, dspy, google-genai.
logfire.configure()
logfire.instrument_pydantic_ai() # captures agent runs, tool calls, LLM request/response
# or:
logfire.instrument_openai() # captures chat completions, embeddings, token counts
logfire.instrument_anthropic() # captures messages, token usage
For PydanticAI, each agent run becomes a parent span containing child spans for every tool call and LLM request.
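A minimal end-to-end sketch (assumes pydantic-ai is installed and an OpenAI API key is in the environment; the model name and prompts are illustrative):

```python
import logfire
from pydantic_ai import Agent

logfire.configure()
logfire.instrument_pydantic_ai()

# Illustrative model name; any PydanticAI-supported model works
agent = Agent("openai:gpt-4o", system_prompt="Be concise.")
result = agent.run_sync("What is the capital of France?")
# In the Logfire UI: one parent span for the run, with child spans
# for each LLM request and tool call.
```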
# Node.js
npm install @pydantic/logfire-node
# Cloudflare Workers
npm install @pydantic/logfire-cf-workers logfire
# Next.js / generic
npm install logfire
Node.js (Express, Fastify, etc.) - create an instrumentation.ts loaded before your app:
import * as logfire from '@pydantic/logfire-node'
logfire.configure()
Launch with: node --require ./instrumentation.js app.js (compile the .ts file first, or write it as plain JavaScript).
The SDK auto-instruments common libraries when loaded before the app. Set LOGFIRE_TOKEN in your environment or pass token to configure().
Cloudflare Workers - wrap your handler with instrument():
import { instrument } from '@pydantic/logfire-cf-workers'
export default instrument(handler, {
service: { name: 'my-worker', version: '1.0.0' }
})
Next.js - set environment variables for OpenTelemetry export:
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://logfire-api.pydantic.dev/v1/traces
OTEL_EXPORTER_OTLP_HEADERS=Authorization=<your-write-token>
// Structured attributes as second argument
logfire.info('Created user', { user_id: uid })
logfire.error('Payment failed', { amount: 100, currency: 'USD' })
// Spans
logfire.span('Processing order', { order_id }, {}, async () => {
logfire.info('Processing step completed')
})
// Error reporting
logfire.reportError('order processing', error)
Log levels: trace, debug, info, notice, warn, error, fatal.
[dependencies]
logfire = "0.6"
let shutdown_handler = logfire::configure()
.install_panic_handler()
.finish()?;
Set LOGFIRE_TOKEN in your environment or use the Logfire CLI to select a project.
The Rust SDK is built on tracing and opentelemetry - existing tracing macros work automatically.
// Spans
logfire::span!("processing order", order_id = order_id).in_scope(|| {
// traced code
});
// Events
logfire::info!("Created user {user_id}", user_id = uid);
Always call shutdown_handler.shutdown() before program exit to flush data.
After instrumentation, verify the setup works:
Run logfire auth to check authentication (or set LOGFIRE_TOKEN).

If traces aren't appearing: check that configure() is called before instrument_*() (Python), that LOGFIRE_TOKEN is set, and that the correct packages/extras are installed.
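For Python, a quick smoke test can look like this sketch (assumes LOGFIRE_TOKEN is set in the environment; the span and message names are arbitrary):

```python
import logfire

logfire.configure()  # picks up LOGFIRE_TOKEN from the environment

with logfire.span("setup smoke test"):
    logfire.info("Hello from {service}", service="setup-check")
# If the token is valid, this trace appears in the Logfire UI shortly after.
```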
Detailed patterns and integration tables, organized by language:
- Python: ${CLAUDE_PLUGIN_ROOT}/skills/instrumentation/references/python/logging-patterns.md (log levels, spans, stdlib integration, metrics, capfire testing) and ${CLAUDE_PLUGIN_ROOT}/skills/instrumentation/references/python/integrations.md (full instrumentor table with extras)
- JavaScript: ${CLAUDE_PLUGIN_ROOT}/skills/instrumentation/references/javascript/patterns.md (log levels, spans, error handling, config) and ${CLAUDE_PLUGIN_ROOT}/skills/instrumentation/references/javascript/frameworks.md (Node.js, Cloudflare Workers, Next.js, Deno setup)
- Rust: ${CLAUDE_PLUGIN_ROOT}/skills/instrumentation/references/rust/patterns.md (macros, spans, tracing/log crate integration, async, shutdown)