Send traces and spans to Adaline for complete AI agent observability. Use when instrumenting LLM calls, tool executions, retrieval steps, or any operation in your AI application for monitoring.
```bash
npx claudepluginhub adaline/skills --plugin skills
```

This skill uses the workspace's default tool permissions.
Adaline Logs provides structured observability for AI applications. Instrument any LLM call, tool execution, retrieval step, or custom operation and view it in the Adaline dashboard.
Span content types — each span carries a `content.type` describing what it instrumented:
| Type | Use for |
|---|---|
| Model | Standard LLM chat/completion calls |
| ModelStream | Streaming LLM calls |
| Tool | Tool/function calls made by the model |
| Retrieval | Vector search, document lookup |
| Embeddings | Embedding generation |
| Function | Custom application logic |
| Guardrail | Content moderation, safety checks |
| Other | Anything else |
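As the examples below show, a span's `content` object pairs one of these types with JSON-encoded `input` and `output` strings. A minimal sketch of building such a payload (the `make_span_content` helper is hypothetical, not part of the Adaline SDK):

```python
import json

# Span content types from the table above.
SPAN_TYPES = {"Model", "ModelStream", "Tool", "Retrieval",
              "Embeddings", "Function", "Guardrail", "Other"}

def make_span_content(span_type: str, input_obj, output_obj) -> dict:
    """Build a span `content` object; input/output are JSON-encoded strings."""
    if span_type not in SPAN_TYPES:
        raise ValueError(f"unknown span content type: {span_type}")
    return {"type": span_type,
            "input": json.dumps(input_obj),
            "output": json.dumps(output_obj)}

# Example: a Tool span for a weather lookup made by the model.
content = make_span_content("Tool",
                            {"name": "get_weather", "arguments": {"city": "Paris"}},
                            {"temperature_c": 18})
```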
Gather these values when your Adaline credentials are available:

- `ADALINE_API_KEY` — your workspace API key (from Settings > API Keys at app.adaline.ai)
- `projectId` — your project ID (from the dashboard sidebar)
- API base URL: `https://api.adaline.ai/v2`

You can start integrating before you have credentials. All code examples use placeholder values — replace them with real values when ready.
Choose one approach based on your language and preference.
Create a trace with one span in a single call:
```bash
curl -X POST https://api.adaline.ai/v2/logs/trace \
  -H "Authorization: Bearer $ADALINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "projectId": "your-project-id",
    "trace": {
      "name": "chat-request",
      "status": "success",
      "startedAt": "2024-01-15T10:00:00.000Z",
      "endedAt": "2024-01-15T10:00:01.500Z"
    },
    "spans": [
      {
        "name": "gpt-4o-call",
        "status": "success",
        "startedAt": "2024-01-15T10:00:00.100Z",
        "endedAt": "2024-01-15T10:00:01.400Z",
        "content": {
          "type": "Model",
          "input": "{\"model\":\"gpt-4o\",\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}]}",
          "output": "{\"choices\":[{\"message\":{\"role\":\"assistant\",\"content\":\"Hi there!\"}}]}"
        }
      }
    ]
  }'
```
Response: `{ "traceId": "tr_abc123", "spanIds": ["sp_def456"] }`
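The `startedAt`/`endedAt` values above are ISO-8601 UTC timestamps with millisecond precision. A sketch of producing them (the `iso_millis` helper is hypothetical, shown only to illustrate the format):

```python
from datetime import datetime, timezone

def iso_millis(dt: datetime) -> str:
    """Format an aware datetime as ISO-8601 UTC with millisecond precision,
    matching the startedAt/endedAt values in the payload above."""
    utc = dt.astimezone(timezone.utc)
    return utc.strftime("%Y-%m-%dT%H:%M:%S.") + f"{utc.microsecond // 1000:03d}Z"

started_at = iso_millis(datetime.now(timezone.utc))  # capture before the operation
# ... run the LLM call ...
ended_at = iso_millis(datetime.now(timezone.utc))    # capture after, for latency
```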
```typescript
import { Adaline } from '@adaline/client';
import type { LogSpanContent } from '@adaline/api';

const adaline = new Adaline({ apiKey: process.env.ADALINE_API_KEY });
const monitor = adaline.initMonitor({ projectId: 'your-project-id' });

// logTrace is synchronous — no await
const trace = monitor.logTrace({
  name: 'chat-request',
  status: 'unknown',
});

// ... run your LLM call ...

// logSpan is synchronous — no await
trace.logSpan({
  name: 'gpt-4o-call',
  status: 'success',
  content: {
    type: 'Model',
    input: JSON.stringify(openaiRequest),
    output: JSON.stringify(openaiResponse),
  } as LogSpanContent,
});

trace.update({ status: 'success' });
trace.end();
await monitor.flush(); // flush IS async
```
```python
import json
import os

from adaline import Adaline

adaline = Adaline(api_key=os.environ["ADALINE_API_KEY"])
monitor = adaline.init_monitor(project_id="your-project-id")

trace = monitor.log_trace(name="chat-request")
span = trace.log_span(
    name="gpt-4o-call",
    content={
        "type": "Model",
        "input": json.dumps(openai_request),
        "output": json.dumps(openai_response),
    },
)
span.end(status="success")
trace.end(status="success")
monitor.flush()
```
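When the wrapped operation fails, end the span with a failure status and put the error details in the output. A minimal sketch of that logic (the `call_with_status` helper is hypothetical, not part of the Adaline SDK):

```python
import json

def call_with_status(operation):
    """Run an operation and return (status, output_json) suitable for a span.
    On failure, status is 'failure' and the error details go into the output,
    which enables error-rate dashboards."""
    try:
        result = operation()
        return "success", json.dumps(result)
    except Exception as exc:
        return "failure", json.dumps({"error": type(exc).__name__,
                                      "message": str(exc)})
```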
One service creates a trace and adds all its spans directly. Use monitor.logTrace() then trace.logSpan() for each operation.
When multiple services contribute spans to a single logical trace:
- Create the trace with a shared `referenceId` (e.g., the request UUID)
- Use `traceReferenceId` to find the trace without knowing the internal `traceId`

```typescript
// Service A
monitor.logTrace({ name: 'orchestrator', referenceId: requestId, status: 'unknown' }); // SYNC

// Service B (later, different process)
const trace = monitor.logTrace({ name: 'retrieval-service', referenceId: requestId, status: 'unknown' });
trace.logSpan({
  name: 'retrieval-step',
  ...
});
```
Group related traces (e.g., turns in a conversation) under a sessionId:
```typescript
monitor.logTrace({
  name: 'turn-3',
  sessionId: conversationId,
  status: 'unknown',
});
```
Attach feedback after the fact using PATCH /v2/logs/trace:
```bash
curl -X PATCH https://api.adaline.ai/v2/logs/trace \
  -H "Authorization: Bearer $ADALINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "traceId": "tr_abc123",
    "attributes": {
      "operations": [
        { "operation": "create", "key": "user_feedback", "value": "thumbs_up" }
      ]
    }
  }'
```
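The same PATCH can be issued from application code. A sketch using only the Python standard library (the `attach_feedback` helper is hypothetical; the endpoint and payload shape come from the curl example above, and the prepared request is returned without being sent):

```python
import json
import urllib.request

API_URL = "https://api.adaline.ai/v2/logs/trace"

def attach_feedback(trace_id: str, key: str, value: str, api_key: str):
    """Build a PATCH request that creates a feedback attribute on a trace."""
    payload = {
        "traceId": trace_id,
        "attributes": {
            "operations": [
                {"operation": "create", "key": key, "value": value}
            ]
        },
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="PATCH",
    )

# Caller sends it with: urllib.request.urlopen(attach_feedback(...))
```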
Use parentReferenceId to create span trees within a trace:
```typescript
trace.logSpan({ name: 'agent-loop', referenceId: 'loop-1', status: 'unknown' });
trace.logSpan({ name: 'tool-call', parentReferenceId: 'loop-1', status: 'success' });
trace.logSpan({ name: 'llm-response', parentReferenceId: 'loop-1', status: 'success' });
```
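The linkage rule is simple: each `parentReferenceId` must match the `referenceId` of an earlier span in the same trace. A tiny hypothetical validator (not part of the Adaline SDK) that captures this:

```python
def validate_span_tree(spans):
    """Check that every parentReferenceId points at a referenceId
    defined by an earlier span in the same trace."""
    seen = set()
    for span in spans:
        parent = span.get("parentReferenceId")
        if parent is not None and parent not in seen:
            raise ValueError(f"unknown parentReferenceId: {parent}")
        if "referenceId" in span:
            seen.add(span["referenceId"])
    return True
```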
- JSON-stringify `input`/`output` — Adaline parses them for token counts, model names, and prompt rendering
- Set `startedAt` and `endedAt` for accurate latency measurement — capture timestamps before and after the operation
- Use `referenceId` for any distributed system where multiple services instrument the same trace
- Use `sessionId` for multi-turn conversations so you can replay full sessions in the dashboard
- Call `monitor.flush()` before process exit to ensure buffered spans are sent
- Set `status: 'failure'` and include error details in `output` when operations fail — this enables error rate dashboards

See references/api.md for the full REST API reference with all request/response schemas. See references/typescript-sdk.md for the complete TypeScript SDK reference with content type builders. See references/python-sdk.md for the complete Python SDK reference with async support.