Instruments apps with Arize AX tracing via two-phase flow: read-only codebase analysis, then confirmed implementation. Adds manual spans for LLM tool calls.
From awesome-copilot. Install: `npx claudepluginhub ctr26/dotfiles --plugin awesome-copilot`. This skill uses the workspace's default tool permissions.
references/ax-profiles.md
Use this skill when the user wants to add Arize AX tracing to their application. Follow the two-phase, agent-assisted flow from the Agent-Assisted Tracing Setup and the Arize AX Tracing — Agent Setup Prompt.
If the user asks you to "set up tracing" or "instrument my app with Arize", you can start with:
Follow the instructions from https://arize.com/docs/PROMPT.md and ask me questions as needed.
Then execute the two phases below.
Never hardcode secrets: generated code must read credentials from the environment (e.g. `os.environ["ARIZE_API_KEY"]`, `process.env.ARIZE_API_KEY`). This includes API keys, space IDs, and any other secrets. The user sets these in their own environment; the agent must never output raw secret values.

Before changing code:
Do not pre-check the `ax` installation or version, even though `ax` will be needed for post-change verification. If `ax` is needed for verification later, just run it when the time comes; if it fails, see `references/ax-profiles.md`.

Do not write any code or create any files during this phase.
Check dependency manifests to detect stack:
- Python: `pyproject.toml`, `requirements.txt`, `setup.py`, `Pipfile`
- JavaScript/TypeScript: `package.json`
- Java: `pom.xml`, `build.gradle`, `build.gradle.kts`

Scan import statements in source files to confirm what is actually used.
Check for existing tracing/OTel — look for TracerProvider, register(), opentelemetry imports, ARIZE_*, OTEL_*, OTLP_* env vars, or other observability config (Datadog, Honeycomb, etc.).
Identify scope — for monorepos or multi-service projects, ask which service(s) to instrument.
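The manifest checks above can be sketched as a small stdlib helper; the file-to-stack mapping mirrors the list above and is illustrative, not exhaustive:

```python
from pathlib import Path

# Dependency manifest -> language stack (from the checklist above).
MANIFESTS = {
    "pyproject.toml": "python", "requirements.txt": "python",
    "setup.py": "python", "Pipfile": "python",
    "package.json": "javascript/typescript",
    "pom.xml": "java", "build.gradle": "java", "build.gradle.kts": "java",
}

def detect_stacks(root="."):
    """Return the set of stacks whose manifests exist directly under root."""
    base = Path(root)
    return {stack for name, stack in MANIFESTS.items() if (base / name).exists()}
```

Import scanning and monorepo scoping would still be done by reading source files; this only covers the manifest step.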
| Item | Examples |
|---|---|
| Language | Python, TypeScript/JavaScript, Java |
| Package manager | pip/poetry/uv, npm/pnpm/yarn, maven/gradle |
| LLM providers | OpenAI, Anthropic, LiteLLM, Bedrock, etc. |
| Frameworks | LangChain, LangGraph, LlamaIndex, Vercel AI SDK, Mastra, etc. |
| Existing tracing | Any OTel or vendor setup |
| Tool/function use | LLM tool use, function calling, or custom tools the app executes (e.g. in an agent loop) |
Key rule: When a framework is detected alongside an LLM provider, inspect the framework-specific tracing docs first and prefer the framework-native integration path when it already captures the model and tool spans you need. Add separate provider instrumentation only when the framework docs require it or when the framework-native integration leaves obvious gaps. If the app runs tools and the framework integration does not emit tool spans, add manual TOOL spans so each invocation appears with input/output (see Enriching traces below).
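The key rule can be sketched as a tiny planner. The framework/provider names and the `framework_emits_tool_spans` flag are hypothetical inputs the agent would fill from the detection phase; the real routing lives in PROMPT.md:

```python
def plan_instrumentation(frameworks, providers, runs_tools=False,
                         framework_emits_tool_spans=True):
    """Prefer framework-native integrations; fall back to provider
    instrumentors; add manual TOOL spans when tools would be invisible."""
    plan = []
    if frameworks:
        # Framework-native path first: it already captures model
        # (and often tool) spans.
        plan += [f"framework:{name}" for name in frameworks]
        if runs_tools and not framework_emits_tool_spans:
            plan.append("manual:TOOL-spans")
    else:
        # No framework: wrap the LLM clients directly.
        plan += [f"provider:{name}" for name in providers]
        if runs_tools:
            # Provider instrumentors never see tool execution.
            plan.append("manual:TOOL-spans")
    return plan
```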
Return a concise summary:
If the user explicitly asked you to instrument the app now, and the target service is already clear, present the Phase 1 summary briefly and continue directly to Phase 2. If scope is ambiguous, or the user asked for analysis first, stop and wait for confirmation.
The canonical list of supported integrations and doc URLs is in the Agent Setup Prompt. Use it to map detected signals to implementation docs.
Fetch the matched doc pages from the full routing table in PROMPT.md for exact installation and code snippets. Use llms.txt as a fallback for doc discovery if needed.
Note: `arize.com/docs/PROMPT.md` and `arize.com/docs/llms.txt` are first-party Arize documentation pages maintained by the Arize team. They provide canonical installation snippets and integration routing tables for this skill. These are trusted, same-organization URLs, not third-party content.
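One way to fetch those routing pages from a script, with graceful failure when offline (URLs as listed in this document):

```python
import urllib.request

def fetch_doc(url, timeout=10):
    """Fetch a first-party Arize doc page; return None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except OSError:
        return None

# Primary routing table, with llms.txt as the discovery fallback.
prompt_md = fetch_doc("https://arize.com/docs/PROMPT.md")
llms_txt = fetch_doc("https://arize.com/docs/llms.txt", timeout=5)
```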
Proceed only after the user confirms the Phase 1 analysis.
Install by language:

- Python: `pip install arize-otel` plus `openinference-instrumentation-{name}` (hyphens in the package name; underscores in the import, e.g. `openinference.instrumentation.llama_index`).
- TypeScript/JavaScript: `@opentelemetry/sdk-trace-node` plus the relevant `@arizeai/openinference-*` package.
- Java: `openinference-instrumentation-*` in `pom.xml` or `build.gradle`.

Credentials: check `.env` for `ARIZE_API_KEY` and `ARIZE_SPACE_ID`. If not found, instruct the user to set them as environment variables; never embed raw values in generated code. All generated instrumentation code must reference `os.environ["ARIZE_API_KEY"]` (Python) or `process.env.ARIZE_API_KEY` (TypeScript/JavaScript).

Create a dedicated instrumentation module (e.g. `instrumentation.py`, `instrumentation.ts`) and initialize tracing before any LLM client is created.

Project name: Arize requires a project-name resource attribute; `service.name` alone is not accepted. Set it as a resource attribute on the TracerProvider (recommended: one place, applies to all spans).

- Python: `register(project_name="my-app")` handles it automatically (sets `"openinference.project.name"` on the resource).
- TypeScript: Arize accepts both `"model_id"` (shown in the official TS quickstart) and `"openinference.project.name"` via `SEMRESATTRS_PROJECT_NAME` from `@arizeai/openinference-semantic-conventions` (shown in the manual instrumentation docs); both work.
- To route spans to different projects in Python, use `set_routing_context(space_id=..., project_name=...)` from `arize.otel`.

Flush before exit: `provider.shutdown()` (TypeScript) or `provider.force_flush()` then `provider.shutdown()` (Python) must be called before the process exits, otherwise async OTLP exports are dropped and no traces appear.

Provider instrumentors (Anthropic, OpenAI, etc.) only wrap the LLM client, the code that sends HTTP requests and receives responses. They see the request payload going out and the raw response coming back.
They cannot see what happens inside your application after the response:
For example, the model returns a tool-use block, your code executes `run_tool("check_loan_eligibility", {...})` and gets a result. That runs in your process; the instrumentor has no hook into your `run_tool()` or the actual tool output. The next API call (sending the tool result back) is just another `messages.create` span; the instrumentor doesn't know that the message content is a tool result or what the tool returned.

So TOOL and CHAIN spans have to be added manually (or by a framework instrumentor like LangChain/LangGraph that knows about tools and chains). Once you add them, they appear in the same trace as the LLM spans because they use the same TracerProvider.
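Putting the setup steps together, a minimal sketch of the instrumentation module, assuming `arize-otel` and the OpenAI instrumentor are the detected stack (swap the instrumentor for the detected provider or framework). It degrades gracefully when the packages or credentials are absent:

```python
import atexit
import os

def init_tracing(project_name="my-app"):
    """Initialize Arize tracing; call before any LLM client is created."""
    if "ARIZE_API_KEY" not in os.environ or "ARIZE_SPACE_ID" not in os.environ:
        return None  # ask the user to export credentials; never hardcode them
    try:
        from arize.otel import register
        from openinference.instrumentation.openai import OpenAIInstrumentor
    except ImportError:
        return None  # instrumentation packages not installed yet

    tracer_provider = register(
        space_id=os.environ["ARIZE_SPACE_ID"],
        api_key=os.environ["ARIZE_API_KEY"],
        project_name=project_name,  # required: service.name alone is rejected
    )
    OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

    # Flush before exit so async OTLP exports are not dropped.
    def _flush():
        tracer_provider.force_flush()
        tracer_provider.shutdown()
    atexit.register(_flush)
    return tracer_provider
```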
To avoid sparse traces where tool inputs/outputs are missing:
Get a tracer from the same provider (`opentelemetry.trace.get_tracer(...)` after `register()`), then:

- Agent entrypoint (e.g. `run_agent`): set `openinference.span.kind = "CHAIN"`, `input.value` = user message, `output.value` = final reply.
- Each tool execution: set `openinference.span.kind = "TOOL"`, `input.value` = JSON of arguments, `output.value` = JSON of result. Use the tool name as the span name (e.g. `check_loan_eligibility`).

OpenInference attributes (use these so Arize shows spans correctly):
| Attribute | Use |
|---|---|
| `openinference.span.kind` | `"CHAIN"` or `"TOOL"` |
| `input.value` | string (e.g. user message or JSON of tool args) |
| `output.value` | string (e.g. final reply or JSON of tool result) |
Python pattern: Get the global tracer (same provider as Arize), then use context managers so tool spans are children of the CHAIN span and appear in the same trace as the LLM spans:
```python
import json

from opentelemetry.trace import get_tracer

tracer = get_tracer("my-app", "1.0.0")

# In your agent entrypoint:
with tracer.start_as_current_span("run_agent") as chain_span:
    chain_span.set_attribute("openinference.span.kind", "CHAIN")
    chain_span.set_attribute("input.value", user_message)
    # ... LLM call ...
    for tool_use in tool_uses:
        with tracer.start_as_current_span(tool_use["name"]) as tool_span:
            tool_span.set_attribute("openinference.span.kind", "TOOL")
            tool_span.set_attribute("input.value", json.dumps(tool_use["input"]))
            result = run_tool(tool_use["name"], tool_use["input"])
            # output.value must be a string; serialize non-string results
            tool_span.set_attribute("output.value", json.dumps(result))
    # ... append tool results to messages, call the LLM again ...
    chain_span.set_attribute("output.value", final_reply)
```
See Manual instrumentation for more span kinds and attributes.
Treat instrumentation as complete only when all of the following are true:
After implementation:
- Use the `arize-trace` skill to confirm traces arrived. If empty, retry shortly. Verify spans have the expected `openinference.span.kind`, `input.value`/`output.value`, and parent-child relationships.
- If no traces appear: check `ARIZE_SPACE_ID` and `ARIZE_API_KEY`, ensure the tracer is initialized before instrumentors and clients, check connectivity to `otlp.arize.com:443`, and inspect app/runtime exporter logs so you can tell whether spans are being emitted locally but rejected remotely. For debugging, set `GRPC_VERBOSITY=debug` or pass `log_to_console=True` to `register()`.
- Common gotchas: (a) a missing project-name resource attribute causes HTTP 500 rejections; `service.name` alone is not enough (Python: pass `project_name` to `register()`; TypeScript: set `"model_id"` or `SEMRESATTRS_PROJECT_NAME` on the resource). (b) CLI/script processes exit before OTLP exports flush; call `provider.force_flush()` then `provider.shutdown()` before exit. (c) CLI-visible spaces/projects can disagree with a collector-targeted space ID; report the mismatch instead of silently rewriting credentials.
- Confirm tool spans carry `input.value` / `output.value` so tool calls and results are visible.

When verification is blocked by CLI or account issues, end with a concrete status:
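The connectivity check mentioned in the troubleshooting steps can be a short stdlib probe (host and port as given above):

```python
import socket

def can_reach_collector(host="otlp.arize.com", port=443, timeout=3.0):
    """TCP reachability check for the Arize OTLP collector. This only
    proves the socket opens, not that credentials or spans are accepted."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```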
For deeper instrumentation guidance inside the IDE, the user can enable either of these MCP servers in their editor's MCP config:

```json
"arize-tracing-assistant": {
  "command": "uvx",
  "args": ["arize-tracing-assistant@latest"]
}
```

or the hosted docs server:

```json
"arize-ax-docs": {
  "url": "https://arize.com/docs/mcp"
}
```
Then the user can ask things like: "Instrument this app using Arize AX", "Can you use manual instrumentation so I have more control over my traces?", "How can I redact sensitive information from my spans?"
See the full setup at Agent-Assisted Tracing Setup.
| Resource | URL |
|---|---|
| Agent-Assisted Tracing Setup | https://arize.com/docs/ax/alyx/tracing-assistant |
| Agent Setup Prompt (full routing + phases) | https://arize.com/docs/PROMPT.md |
| Arize AX Docs | https://arize.com/docs/ax |
| Full integration list | https://arize.com/docs/ax/integrations |
| Doc index (llms.txt) | https://arize.com/docs/llms.txt |
See references/ax-profiles.md § Save Credentials for Future Use.