Fetch deployed prompts from Adaline to use in your AI applications. Use when integrating prompt deployments, managing environments, setting up CI/CD for prompts, or implementing rollback procedures.
```shell
npx claudepluginhub adaline/skills --plugin skills
```

This skill uses the workspace's default tool permissions.
Adaline Deployments let you ship versioned prompt snapshots to your applications without redeploying code. You manage prompts in the Adaline editor and fetch them at runtime via API or SDK.
Key terms. Set these environment variables when your Adaline credentials are available:

- `ADALINE_API_KEY` — your workspace API key (from Settings > API Keys at app.adaline.ai)
- `promptId` — the ID of the prompt to fetch (found in the Adaline editor URL)
- `deploymentEnvironmentId` — the environment to fetch from (dev, staging, or prod environment ID)

You can start integrating before you have credentials. All code examples use placeholder values — replace them with real values when ready.
This separates prompt iteration from code releases. Changing a prompt in production requires no code deploy.
| Symptom | First Fix |
|---|---|
| Fetch returns 404 | Verify promptId and deploymentEnvironmentId are correct |
| Wrong prompt version returned | Check you are using deploymentId=latest with the correct environmentId |
| Stale prompt served after deploy | Invalidate cache on webhook receipt |
| Webhook signature invalid | Confirm you are using raw request bytes (not parsed body) for HMAC |
| Variables not substituted | Confirm all variable names in messages match the keys you pass at inference time |
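The "stale prompt served after deploy" row usually comes down to caching. A minimal sketch of an in-memory deployment cache that a webhook handler can clear — the class shape and TTL are illustrative assumptions, not part of the Adaline SDK:

```typescript
// Minimal in-memory deployment cache, invalidated when a
// create-deployment webhook arrives. Shape and TTL are illustrative.
class DeploymentCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number = 5 * 60 * 1000) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry || Date.now() > entry.expiresAt) return undefined;
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  // Call this from your webhook handler so new deployments take effect.
  invalidate(key: string): void {
    this.entries.delete(key);
  }
}
```

Keying entries by `${promptId}:${deploymentEnvironmentId}` keeps each environment's cache independent.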
```shell
# Fetch latest deployment for an environment
curl -X GET "https://api.adaline.ai/v2/deployments?promptId=PROMPT_ID&deploymentId=latest&deploymentEnvironmentId=ENV_ID" \
  -H "Authorization: Bearer $ADALINE_API_KEY"

# Fetch a specific deployment by ID
curl -X GET "https://api.adaline.ai/v2/deployments?promptId=PROMPT_ID&deploymentId=DEPLOYMENT_ID" \
  -H "Authorization: Bearer $ADALINE_API_KEY"
```
Response fields: `id`, `prompt.config` (provider, model, settings), `prompt.messages`, `prompt.tools`, `prompt.variables`.
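The same GET can be issued from application code. A sketch using the built-in `fetch` (Node 18+) — the endpoint shape comes from the curl examples above, but the URL helper and error handling here are illustrative, not from the API docs:

```typescript
// Builds the latest-deployment query URL (endpoint shape taken from
// the curl examples; this helper itself is illustrative).
function latestDeploymentUrl(promptId: string, envId: string): string {
  const params = new URLSearchParams({
    promptId,
    deploymentId: 'latest',
    deploymentEnvironmentId: envId,
  });
  return `https://api.adaline.ai/v2/deployments?${params.toString()}`;
}

async function fetchLatestDeployment(promptId: string, envId: string) {
  const res = await fetch(latestDeploymentUrl(promptId, envId), {
    headers: { Authorization: `Bearer ${process.env.ADALINE_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Adaline fetch failed: ${res.status}`);
  // Body includes id, prompt.config, prompt.messages, prompt.tools, prompt.variables
  return res.json();
}
```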
```typescript
import { Adaline } from '@adaline/client';

const adaline = new Adaline({ apiKey: process.env.ADALINE_API_KEY });

// Use getLatestDeployment for environment-based lookup (latest in an env)
const deployment = await adaline.getLatestDeployment({
  promptId: 'your-prompt-id',
  deploymentEnvironmentId: 'your-env-id',
});

// deployment.prompt.config -> provider, model, temperature, etc.
// deployment.prompt.messages -> array of { role, content } objects
// deployment.prompt.variables -> variable definitions
// deployment.prompt.tools -> tool/function definitions

// Use getDeployment to fetch a specific deployment by ID
// const specific = await adaline.getDeployment({ promptId: '...', deploymentId: 'dep_abc123' });
```
Install: `npm install @adaline/client`
See references/typescript-sdk.md for full type definitions and a complete inference example.
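One way to guard against the "variables not substituted" failure mode from the troubleshooting table is to render messages yourself and fail fast on missing keys. This sketch assumes a `{{name}}` placeholder syntax, which may differ from Adaline's actual template format:

```typescript
type Message = { role: string; content: string };

// Replace {{name}} placeholders and throw if any variable is missing.
// The {{name}} syntax is an assumption for illustration.
function renderMessages(messages: Message[], vars: Record<string, string>): Message[] {
  return messages.map((m) => ({
    role: m.role,
    content: m.content.replace(/\{\{(\w+)\}\}/g, (_match, name: string) => {
      if (!(name in vars)) throw new Error(`Missing variable: ${name}`);
      return vars[name];
    }),
  }));
}
```

Usage: `const rendered = renderMessages(deployment.prompt.messages, { city: 'Paris' });` — throwing on a missing key surfaces the mismatch at call time instead of sending a prompt with literal placeholders to the model.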
```python
import os
from adaline import Adaline

adaline = Adaline(api_key=os.environ["ADALINE_API_KEY"])

deployment = adaline.get_deployment(
    prompt_id="your-prompt-id",
    deployment_environment_id="your-env-id",
)

# deployment.prompt.config -> provider, model, temperature, etc.
# deployment.prompt.messages -> list of {"role": ..., "content": ...}
# deployment.prompt.variables -> variable definitions
# deployment.prompt.tools -> tool/function definitions
```
Install: `pip install adaline-client`
See references/python-sdk.md for full type definitions and a complete inference example.
Register a webhook URL in Adaline workspace settings. Adaline sends a POST request signed with HMAC-SHA256 on every create-deployment event.
Verify the signature:
```typescript
import crypto from 'crypto';

function verifyWebhook(rawBody: Buffer, signature: string, secret: string): boolean {
  const expected = crypto
    .createHmac('sha256', secret)
    .update(rawBody)
    .digest('hex');
  const sig = Buffer.from(signature);
  const exp = Buffer.from(expected);
  // timingSafeEqual throws if lengths differ, so compare lengths first
  return sig.length === exp.length && crypto.timingSafeEqual(sig, exp);
}
```
Use the raw request bytes — not the parsed JSON — when computing the HMAC.
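Wired into a plain Node `http` server, raw-body collection looks like the sketch below. The header name `x-adaline-signature` and the handler shape are assumptions — check references/api.md for the real header and payload format:

```typescript
import http from 'http';
import crypto from 'crypto';

function verifyWebhook(rawBody: Buffer, signature: string, secret: string): boolean {
  const expected = crypto.createHmac('sha256', secret).update(rawBody).digest('hex');
  const sig = Buffer.from(signature);
  const exp = Buffer.from(expected);
  return sig.length === exp.length && crypto.timingSafeEqual(sig, exp);
}

// Buffer the raw bytes before any JSON parsing touches the request.
function readRawBody(req: http.IncomingMessage): Promise<Buffer> {
  return new Promise((resolve, reject) => {
    const chunks: Buffer[] = [];
    req.on('data', (chunk) => chunks.push(chunk));
    req.on('end', () => resolve(Buffer.concat(chunks)));
    req.on('error', reject);
  });
}

const server = http.createServer(async (req, res) => {
  const rawBody = await readRawBody(req);
  const signature = String(req.headers['x-adaline-signature'] ?? ''); // header name is an assumption
  if (!verifyWebhook(rawBody, signature, process.env.ADALINE_WEBHOOK_SECRET ?? '')) {
    res.writeHead(401).end();
    return;
  }
  const event = JSON.parse(rawBody.toString('utf8')); // parse only after verifying
  // e.g. invalidate any cached deployment for event.promptId here
  res.writeHead(204).end();
});
```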
Trigger a deployment from your pipeline after tests pass:
```shell
# Example: deploy to staging after CI passes
curl -X POST "https://api.adaline.ai/v2/deployments" \
  -H "Authorization: Bearer $ADALINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"promptId": "PROMPT_ID", "deploymentEnvironmentId": "STAGING_ENV_ID"}'
```
Promote to production after manual approval by repeating with your production environment ID.
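If your pipeline step runs Node rather than curl, the same POST can be a small script. A sketch assuming the built-in `fetch` (Node 18+); the payload shape mirrors the curl example above, and the error handling is mine:

```typescript
// Hypothetical CI step: create a deployment after tests pass.
function deploymentPayload(promptId: string, envId: string): string {
  return JSON.stringify({ promptId, deploymentEnvironmentId: envId });
}

async function createDeployment(promptId: string, envId: string): Promise<void> {
  const res = await fetch('https://api.adaline.ai/v2/deployments', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.ADALINE_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: deploymentPayload(promptId, envId),
  });
  if (!res.ok) throw new Error(`Deployment failed: ${res.status}`);
}
```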
Best practices:

- Use separate `deploymentEnvironmentId` values for dev, staging, and prod — never share environment IDs across stages
- Listen for create-deployment events so updates take effect without a restart
- Keep `promptId` and `deploymentEnvironmentId` in environment variables, not hardcoded in source

See references/api.md for the full REST API reference and webhook payload format. See references/typescript-sdk.md for TypeScript SDK types and inference examples. See references/python-sdk.md for Python SDK types and inference examples.