npx claudepluginhub anthropics/claude-plugins-official --plugin posthog

This skill uses the workspace's default tool permissions.
Use this skill to add PostHog LLM analytics that trace AI model usage in new or changed code. Use it after implementing LLM features or reviewing PRs to ensure all generations are captured with token counts, latency, and costs. If PostHog is not yet installed, this skill also covers initial SDK setup. Supports any provider or framework.
Supported providers: OpenAI, Azure OpenAI, Anthropic, Google, Cohere, Mistral, Perplexity, DeepSeek, Groq, Together AI, Fireworks AI, xAI, Cerebras, Hugging Face, Ollama, OpenRouter.
Supported frameworks: LangChain, LlamaIndex, CrewAI, AutoGen, DSPy, LangGraph, Pydantic AI, Vercel AI, LiteLLM, Instructor, Semantic Kernel, Mirascope, Mastra, SmolAgents, OpenAI Agents.
Proxy/gateway: Portkey, Helicone.
Follow these steps IN ORDER:
STEP 1: Analyze the codebase and detect the LLM stack.
STEP 2: Research instrumentation. (Skip if PostHog LLM tracing is already set up.)
  2.1. Find the reference file below that matches the detected provider or framework. It is the source of truth for callback setup, middleware configuration, and event capture. Read it now.
  2.2. If no reference matches, use references/manual-capture.md as a fallback. It covers the generic event capture approach that works with any provider.
STEP 3: Install the PostHog SDK. (Skip if PostHog is already set up.)
STEP 4: Add LLM tracing.
STEP 5: Link to users.
STEP 6: Set up environment variables.
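Steps 3 through 5 can be sketched in Python as follows. This is a minimal sketch, not a definitive implementation: it assumes the posthog and openai packages are installed (pip install posthog openai), and the posthog.ai.openai wrapper and the posthog_distinct_id parameter should be verified against the matching reference file (e.g. references/openai.md) for the detected stack.

```python
# Sketch of STEP 3 (install the SDK), STEP 4 (add tracing), and STEP 5
# (link to users). The posthog.ai.openai wrapper and posthog_distinct_id
# parameter names are assumptions to confirm against the reference docs.
import os


def posthog_config() -> dict:
    # STEP 6 feeds these values in via environment variables.
    return {
        "project_api_key": os.environ.get("POSTHOG_API_KEY", ""),
        "host": os.environ.get("POSTHOG_HOST", "https://us.i.posthog.com"),
    }


def main() -> None:
    from posthog import Posthog
    from posthog.ai.openai import OpenAI  # PostHog's traced OpenAI client

    cfg = posthog_config()
    posthog = Posthog(cfg["project_api_key"], host=cfg["host"])

    # STEP 4: route OpenAI calls through the PostHog wrapper so each
    # generation is captured with tokens, latency, and cost.
    client = OpenAI(posthog_client=posthog)  # reads OPENAI_API_KEY from env

    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello"}],
        posthog_distinct_id="user_123",  # STEP 5: attribute the call to a user
    )


if __name__ == "__main__":
    main()
```

For other providers or frameworks the wrapper import changes, but the pattern (pass a configured PostHog client into the traced wrapper, then tag calls with a distinct ID) stays the same.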
For STEP 6, add the PostHog credentials to the project's environment file (.env, .env.local, or framework-specific env files). If valid values already exist, skip this step. If the PostHog MCP server is available, use its projects-get tool to retrieve the project's api_token. If multiple projects are returned, ask the user which project to use. If the MCP server is not connected or not authenticated, ask the user for their PostHog project API key instead. Set the PostHog host to https://us.i.posthog.com for US Cloud or https://eu.i.posthog.com for EU Cloud.

Reference files:
references/openai.md - OpenAI LLM analytics installation
references/azure-openai.md - Azure OpenAI LLM analytics installation
references/anthropic.md - Anthropic LLM analytics installation
references/google.md - Google LLM analytics installation
references/cohere.md - Cohere LLM analytics installation
references/mistral.md - Mistral LLM analytics installation
references/perplexity.md - Perplexity LLM analytics installation
references/deepseek.md - DeepSeek LLM analytics installation
references/groq.md - Groq LLM analytics installation
references/together-ai.md - Together AI LLM analytics installation
references/fireworks-ai.md - Fireworks AI LLM analytics installation
references/xai.md - xAI LLM analytics installation
references/cerebras.md - Cerebras LLM analytics installation
references/hugging-face.md - Hugging Face LLM analytics installation
references/ollama.md - Ollama LLM analytics installation
references/openrouter.md - OpenRouter LLM analytics installation
references/langchain.md - LangChain LLM analytics installation
references/llamaindex.md - LlamaIndex LLM analytics installation
references/crewai.md - CrewAI LLM analytics installation
references/autogen.md - AutoGen LLM analytics installation
references/dspy.md - DSPy LLM analytics installation
references/langgraph.md - LangGraph LLM analytics installation
references/pydantic-ai.md - Pydantic AI LLM analytics installation
references/vercel-ai.md - Vercel AI SDK LLM analytics installation
references/litellm.md - LiteLLM LLM analytics installation
references/instructor.md - Instructor LLM analytics installation
references/semantic-kernel.md - Semantic Kernel LLM analytics installation
references/mirascope.md - Mirascope LLM analytics installation
references/mastra.md - Mastra LLM analytics installation
references/smolagents.md - SmolAgents LLM analytics installation
references/openai-agents.md - OpenAI Agents SDK LLM analytics installation
references/portkey.md - Portkey LLM analytics installation
references/helicone.md - Helicone LLM analytics installation
references/manual-capture.md - Manual capture LLM analytics installation
references/basics.md - LLM analytics basics
references/traces.md - Traces
references/calculating-costs.md - Calculating LLM costs

Each provider reference contains installation instructions, SDK setup, and code examples specific to that provider or framework. Find the reference that matches the user's stack.
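The STEP 6 environment setup typically amounts to an env file like the following sketch. The variable names here are assumptions; match whatever names the codebase already reads, and replace the placeholder with the real project API key.

```shell
# .env (or .env.local) - PostHog credentials for LLM analytics
POSTHOG_API_KEY=phc_your_project_api_key
# Use https://eu.i.posthog.com instead for EU Cloud
POSTHOG_HOST=https://us.i.posthog.com
```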
If the user's provider isn't listed, use manual-capture.md as a fallback — it covers the generic event capture approach that works with any provider.