High-level guide for integrating your AI application with Adaline. Use when starting a new Adaline integration, choosing between API/SDK approaches, or planning which Adaline features to adopt.
Install: npx claudepluginhub adaline/skills --plugin skills

This skill uses the workspace's default tool permissions.
Adaline is the single platform to iterate, evaluate, deploy, and monitor AI agents. It solves the AI Development Lifecycle (ADLC) — the end-to-end process of building, testing, shipping, and operating AI-powered applications.
Core pillars: iterate, evaluate, deploy, and monitor.
Set these environment variables when your Adaline credentials are available:
- ADALINE_API_KEY — your workspace API key (from Settings > API Keys at app.adaline.ai)
- projectId — your project ID (from the dashboard sidebar)
- Base URL — https://api.adaline.ai/v2

You can start integrating before you have credentials. All code examples use placeholder values — replace them with real values when ready.
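As a sketch of the placeholder approach, you might read the credentials from the environment with obvious fallbacks (the fallback strings here are illustrative, not real values):

```python
import os

# Read Adaline credentials from the environment, falling back to
# obvious placeholders so you can wire up the integration before
# real keys exist. Replace the placeholders when credentials arrive.
ADALINE_API_KEY = os.environ.get("ADALINE_API_KEY", "placeholder-api-key")
ADALINE_PROJECT_ID = os.environ.get("ADALINE_PROJECT_ID", "placeholder-project-id")
ADALINE_BASE_URL = "https://api.adaline.ai/v2"
```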
Choose the right skill based on your goal:
| Your goal | Skill to use | Approach |
|---|---|---|
| Send traces/spans from your app | adaline-logs | SDK (TS/Python) or REST API |
| Fetch deployed prompts at runtime | adaline-deployments | SDK or REST API |
| Create/manage prompts programmatically | adaline-prompts | REST API |
| Build evaluation datasets | adaline-datasets | REST API |
| Set up quality evaluators | adaline-evaluators | REST API |
| Run evaluations at scale | adaline-evaluations | REST API |
| Check available AI providers/models | adaline-providers | REST API |
**TypeScript SDK**

Best for: Node.js and TypeScript applications. Provides a fluent API for creating traces and spans, handles buffering and flushing, and supports distributed tracing out of the box.
Install: npm install @adaline/client
Use for: logging (adaline-logs skill), fetching deployments (adaline-deployments skill). For management operations (creating prompts, datasets, evaluators) use the REST API directly.
**Python SDK**

Best for: Python applications. Full feature parity with the TypeScript SDK. Supports both sync and async usage via asyncio.
Install: pip install adaline-client
Use for: logging (adaline-logs skill), fetching deployments (adaline-deployments skill). For management operations use the REST API directly.
**REST API**

Best for: any language, serverless environments, custom integrations, and all management operations regardless of language. The SDKs are thin wrappers around this API.
Base URL: https://api.adaline.ai/v2
Authentication: Authorization: Bearer ADALINE_API_KEY
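A minimal sketch of building an authenticated request with Python's standard library. Nothing here is sent over the network; the `/providers` path mirrors the adaline-providers skill above and the exact route should be treated as an assumption:

```python
import os
import urllib.request

ADALINE_BASE_URL = "https://api.adaline.ai/v2"

def authed_request(path: str, method: str = "GET") -> urllib.request.Request:
    """Build a request carrying the Bearer token; nothing is sent yet."""
    api_key = os.environ.get("ADALINE_API_KEY", "placeholder-api-key")
    return urllib.request.Request(
        ADALINE_BASE_URL + path,
        method=method,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# When real credentials are in place, send with:
#   urllib.request.urlopen(authed_request("/providers"))
```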
**Gateway**

Best for: quick start with zero instrumentation code. Change your LLM client's base URL to gateway.adaline.ai — Adaline intercepts the request, forwards it to the provider, and automatically creates a trace and span. No SDK required.
Use this when you want observability immediately without modifying application code.
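The base-URL swap can be pictured as follows. The provider path shown (an OpenAI-style `/chat/completions`) and the exact gateway URL shape are illustrative assumptions, not a documented contract:

```python
# Before: your client talks to the provider directly.
PROVIDER_BASE_URL = "https://api.openai.com/v1"
# After: the same client talks to the gateway, which forwards the call
# and records a trace and span automatically. (URL shape is assumed.)
GATEWAY_BASE_URL = "https://gateway.adaline.ai"

def chat_completions_url(base_url: str) -> str:
    """Same route either way -- only the host changes."""
    return f"{base_url}/chat/completions"
```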
Work through these steps to get full value from Adaline:
1. adaline-logs — instrument your application to send traces and spans. This gives you immediate observability: latency, cost, errors, and prompt inputs/outputs visible in the dashboard.
2. adaline-deployments — move prompt text out of your code and into Adaline. Fetch the active deployed prompt at runtime using GET /v2/deployments. This decouples prompt iteration from code deploys.
3. adaline-datasets — build datasets of representative inputs (and optionally expected outputs) for your use cases. These become the inputs to evaluations.
4. adaline-evaluators — define how to score prompt outputs: exact match, regex, JSON schema, or LLM-as-judge with a custom rubric.
5. adaline-evaluations — run a prompt version against a dataset using your evaluators. Use this as a quality gate before promoting a prompt to production.
6. adaline-providers — discover which LLM providers and models are configured in your Adaline workspace. Use this to avoid hardcoding provider IDs or model names.
| Variable | Required | Description |
|---|---|---|
| ADALINE_API_KEY | Required | Bearer token for all API calls |
| ADALINE_PROJECT_ID | Recommended | Project to associate logs with |
| ADALINE_PROMPT_ID | Conditional | Required when fetching a deployed prompt |
| ADALINE_DEPLOYMENT_ENVIRONMENT_ID | Conditional | Required when fetching environment-specific deployments (dev/staging/prod) |
Determine the right approach for your environment:
- For zero-instrumentation setups, route traffic through the gateway at gateway.adaline.ai
- In serverless environments, call flush() or await the POST before the function returns to avoid losing buffered spans

Every REST request requires a Bearer token in the Authorization header:
Authorization: Bearer ADALINE_API_KEY
All request and response bodies are JSON. All timestamps are ISO 8601 strings (e.g., "2024-01-15T10:00:00.000Z").
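For example, a UTC timestamp in that shape can be produced like this:

```python
from datetime import datetime, timezone

def iso_now() -> str:
    """Current UTC time as an ISO 8601 string with millisecond
    precision and a trailing Z, e.g. 2024-01-15T10:00:00.000Z."""
    now = datetime.now(timezone.utc)
    return now.strftime("%Y-%m-%dT%H:%M:%S.") + f"{now.microsecond // 1000:03d}Z"
```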
Retry strategy for REST API calls:
- Retry on 429 Too Many Requests and 5xx server errors
- Do not retry other 4xx client errors (except 429) — these indicate a problem with the request

Common errors:
| Status | Meaning | Action |
|---|---|---|
| 401 | Invalid or missing API key | Check ADALINE_API_KEY value |
| 403 | Key does not have access to this resource | Verify project membership and key permissions |
| 404 | Resource not found | Check IDs (projectId, promptId, datasetId, etc.) |
| 429 | Rate limited | Retry with exponential backoff |
| 422 | Validation error | Check request body against the skill's API reference |
| 500/502/503 | Server error | Retry with exponential backoff |
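A sketch of the retry policy above, assuming a caller-supplied `send` function that returns a `(status, body)` pair:

```python
import random
import time

RETRYABLE_STATUSES = {429, 500, 502, 503}

def call_with_retries(send, max_attempts: int = 5, base_delay: float = 0.5):
    """Retry `send()` on 429 and 5xx with exponential backoff plus jitter.
    Other 4xx errors fail immediately: they indicate a bad request."""
    for attempt in range(max_attempts):
        status, body = send()
        if status < 400:
            return body
        if status not in RETRYABLE_STATUSES or attempt == max_attempts - 1:
            raise RuntimeError(f"Adaline API error {status}: {body}")
        # Backoff: base, 2*base, 4*base, ... with up to 100 ms of jitter.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```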
See references/api-context.md for all 39 API endpoints grouped by resource with the skill that covers each. See references/typescript-sdk-context.md for the TypeScript SDK overview and key methods. See references/python-sdk-context.md for the Python SDK overview with async support.