Create and manage evaluation datasets in Adaline. Use when building test cases for prompt evaluation, importing CSV data, managing dataset columns and rows, or setting up dynamic columns.
npx claudepluginhub adaline/skills --plugin skills

This skill uses the workspace's default tool permissions.
Datasets in Adaline are structured tables of test cases used for prompt evaluation. Each dataset maps to a prompt through column names that match prompt variable names exactly.
Key terms (column types):
- static — values entered manually
- prompt — values computed by another prompt in Adaline
- api — values fetched from an external API

Set these environment variables when your Adaline credentials are available:
- ADALINE_API_KEY — your workspace API key (from Settings > API Keys at app.adaline.ai)
- projectId — your project ID (from the dashboard sidebar)

You can start integrating before you have credentials. All code examples use placeholder values — replace them with real values when ready.
There is no Adaline SDK for datasets. All operations use the REST API directly.
If a prompt contains {{user_question}}, the dataset column name must be user_question (no braces, exact casing). A mismatch means the dataset row values will not populate the prompt variables during evaluation.
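As a quick sanity check before running an evaluation, a small helper (names hypothetical, assuming the {{variable}} syntax described above) can extract variable names from a prompt template and flag dataset columns that do not match exactly:

```python
import re

def prompt_variables(template):
    """Extract {{variable}} names from a prompt template."""
    return set(re.findall(r"\{\{\s*(\w+)\s*\}\}", template))

def missing_columns(template, dataset_columns):
    """Prompt variables with no exactly-matching (case-sensitive) dataset column."""
    return prompt_variables(template) - set(dataset_columns)
```

For example, `missing_columns("Answer: {{user_question}}", ["User_Question"])` reports `user_question`, because matching is case-sensitive.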
| Symptom | First Fix |
|---|---|
| Row values not populating prompt | Check column name matches prompt variable exactly |
| API returns 400 on row add | Verify valuesBy=columnName param and value format { "col": { "value": "..." } } |
| More than 100 rows needed | Batch into multiple POST requests, max 100 rows each |
| Dynamic column not computing | Call POST /datasets/{id}/dynamic-columns/fetch to trigger |
| Pagination missing rows | Use cursor from previous response, keep limit consistent |
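For the 100-row limit in the table above, a minimal batching sketch (helper name hypothetical) splits a large row list so each batch fits in one POST request:

```python
def chunk_rows(rows, batch_size=100):
    """Yield successive batches no larger than the API's 100-row-per-request limit."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]
```

Each yielded batch then goes in its own POST /datasets/{id}/rows request.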
Datasets have no SDK. Use direct HTTP calls with your API key.
curl -X POST https://api.adaline.ai/v2/datasets \
-H "Authorization: Bearer $ADALINE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"projectId": "proj_abc123",
"title": "My Eval Dataset",
"description": "Test cases for the support prompt"
}'
curl -X POST https://api.adaline.ai/v2/datasets/{datasetId}/columns \
-H "Authorization: Bearer $ADALINE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "user_question",
"type": "static"
}'
curl -X POST "https://api.adaline.ai/v2/datasets/{datasetId}/rows?valuesBy=columnName" \
-H "Authorization: Bearer $ADALINE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"rows": [
{
"values": {
"user_question": { "value": "How do I reset my password?" }
}
},
{
"values": {
"user_question": { "value": "Where is my order?" }
}
}
]
}'
When using valuesBy=columnName (recommended), row values are keyed by column name:
{
"column_name": { "value": "the content" }
}
When using valuesBy=columnId (default), keys are column IDs from the dataset definition.
Always prefer valuesBy=columnName for readability and portability.
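If you build row payloads in code, a small helper (hypothetical) can wrap plain column values into the { "value": ... } shape shown above:

```python
def rows_payload(list_of_values):
    """Build the body for POST .../rows?valuesBy=columnName from plain dicts
    like {"user_question": "Where is my order?"}."""
    return {
        "rows": [
            {"values": {col: {"value": v} for col, v in values.items()}}
            for values in list_of_values
        ]
    }
```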
Dynamic columns compute their values automatically instead of requiring manual entry.
Type: prompt — another Adaline prompt generates the value for each row. Configure with:
{
"name": "expected_answer",
"type": "prompt",
"settings": {
"promptId": "prompt_xyz789"
}
}
Type: api — an external HTTP endpoint returns the value. Configure with:
{
"name": "retrieved_context",
"type": "api",
"settings": {
"method": "POST",
"url": "https://my-rag-service.com/retrieve",
"headers": { "Authorization": "Bearer {{API_KEY}}" },
"bodyTemplate": "{ \"query\": \"{{user_question}}\" }"
}
}
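How the server substitutes {{placeholders}} in bodyTemplate is not specified here; assuming simple string substitution with JSON escaping, you can preview a rendered body locally before saving the column (a sketch, all names hypothetical):

```python
import json
import re

def render_body_template(body_template, row_values):
    """Preview a bodyTemplate with {{placeholders}} filled from one row's
    values, JSON-escaping each value so the result stays valid JSON."""
    def substitute(match):
        value = str(row_values[match.group(1)])
        return json.dumps(value)[1:-1]  # escape the string, then drop the outer quotes
    return re.sub(r"\{\{(\w+)\}\}", substitute, body_template)
```

This catches values containing quotes or newlines that would otherwise break the JSON body.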
Trigger computation for all rows:
curl -X POST https://api.adaline.ai/v2/datasets/{datasetId}/dynamic-columns/fetch \
-H "Authorization: Bearer $ADALINE_API_KEY"
All list endpoints use cursor-based pagination:
GET /v2/datasets/{id}/rows?limit=50&cursor=<cursor>&sort=asc
Response includes nextCursor. Pass it as cursor in the next request. Stop when nextCursor is null.
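That loop can be sketched generically; fetch_page below is any callable you write around GET /v2/datasets/{id}/rows that returns the parsed JSON page (the stub in the test stands in for the HTTP call):

```python
def fetch_all_rows(fetch_page, limit=50):
    """Drain a cursor-paginated endpoint: keep requesting with the returned
    nextCursor until it comes back null/None."""
    rows, cursor = [], None
    while True:
        page = fetch_page(limit=limit, cursor=cursor)
        rows.extend(page["rows"])
        cursor = page.get("nextCursor")
        if cursor is None:
            return rows
```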
- Use valuesBy=columnName when adding rows — more readable, survives column reordering
- Save the datasetId and columnId values — you will need them for updates and deletes

See references/api.md for the full REST API reference with all 13 endpoints, request schemas, and response examples.