# LangSmith Tracing Plugin for Claude Code

A Claude Code plugin that traces conversations, tool calls, subagent executions, and context compaction to LangSmith.

## Prerequisites

## Installation
### As a Claude Code plugin

From within Claude Code, run:

```
/plugin marketplace add langchain-ai/langsmith-claude-code-plugins
/plugin install langsmith-tracing@langsmith-claude-code-plugins
/reload-plugins
```

To update, run:

```
/plugin marketplace update langsmith-claude-code-plugins
/reload-plugins
```
### As a Claude Cowork plugin

Claude Cowork runs Claude Code in a sandboxed VM, so the plugin must be added separately:

- Allow network egress for `LANGSMITH_CC_ENDPOINT` (e.g. https://api.smith.langchain.com) or for all domains.
- Add the `langchain-ai/langsmith-claude-code-plugins` marketplace in Customize > Personal Plugins (+) > Create Plugin > Add Marketplace.
- Edit each of the Claude Code hooks by prepending the LangSmith Claude Code environment variables to the `command` of every hook. Watch the video below for a step-by-step walkthrough.

https://github.com/user-attachments/assets/1d44b30f-e0a8-4173-b60b-97a2d1fb95c5

> [!IMPORTANT]
> Make sure that "Allow network egress" is either enabled for all domains or enabled for `LANGSMITH_CC_ENDPOINT`; otherwise Cowork might freeze.
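As a rough sketch, prepending the variables to a hook's `command` could look like the line below. The trailing placeholder stands in for whatever command the hook already runs; your actual hooks will differ, so follow the video for the exact steps.

```
TRACE_TO_LANGSMITH="true" CC_LANGSMITH_API_KEY="lsv2_pt_..." CC_LANGSMITH_PROJECT="my-project" <original hook command>
```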
### From source (development)

```shell
pnpm install
pnpm build
claude --plugin-dir /path/to/langsmith-claude-code-plugins
```
## Setting environment variables

### Option 1: Claude Code settings file (recommended)

Add the following to a `.claude/settings.local.json` file in your project folder, or to `~/.claude/settings.json` globally:

```json
{
  "env": {
    "TRACE_TO_LANGSMITH": "true",
    "CC_LANGSMITH_API_KEY": "lsv2_pt_...",
    "CC_LANGSMITH_PROJECT": "my-project"
  }
}
```
### Option 2: Export to shell

Add to your `~/.zshrc`, `~/.bashrc`, or `~/.bash_profile`:

```shell
export TRACE_TO_LANGSMITH="true"
export CC_LANGSMITH_API_KEY="lsv2_pt_..."
export CC_LANGSMITH_PROJECT="my-project"
```
### Getting your LangSmith API key

1. Go to smith.langchain.com
2. Sign in or create an account
3. Navigate to Settings → API Keys
4. Click Create API Key
5. Copy the key (it starts with `lsv2_pt_...`)
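Once the key is exported (see the options above), a quick local check can catch copy-paste mistakes. This is just a convenience snippet based on the `lsv2_pt_` prefix mentioned in the step above; it does not contact LangSmith.

```shell
# Local sanity check: the key should be non-empty and start with lsv2_pt_
case "$CC_LANGSMITH_API_KEY" in
  lsv2_pt_*) echo "API key format looks correct" ;;
  "")        echo "CC_LANGSMITH_API_KEY is not set" ;;
  *)         echo "unexpected key format" ;;
esac
```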
## What gets traced

Each LLM run includes:

- Inputs: accumulated conversation messages
- Outputs: assistant response content
- Metadata: `ls_provider: "anthropic"`, `ls_model_name`, `ls_invocation_params` (model, stop reason), token usage
All runs (LLM, tool, turn, subagent) automatically include identity metadata so you can attribute traces in LangSmith:

- `anthropic_user_id` — read from the `userID` field in `~/.claude.json` (the Claude Code installation's stable hashed user ID). Omitted if the file is missing or unreadable.
- `local_username` — the local OS username from `os.userInfo()`.

To override either field, supply your own value via `CC_LANGSMITH_METADATA` — user-supplied keys always win.
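For example, assuming `CC_LANGSMITH_METADATA` accepts a JSON object of key-value pairs (the exact format is an assumption; check the environment variables section below), an override could look like:

```shell
# Assumed format: a JSON object whose keys override the default identity metadata.
# "ci-runner" is a hypothetical value for illustration.
export CC_LANGSMITH_METADATA='{"local_username": "ci-runner"}'
```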
Tool runs include the tool name, inputs, and output content.

Interrupted turns (where the user cancels mid-response) are marked with status `"interrupted"` in LangSmith.
## Environment variables
The plugin respects the following environment variables: