List and manage AI providers and models configured in Adaline. Use when discovering available LLM providers, listing models, or checking provider configuration.
npx claudepluginhub adaline/skills --plugin skills

This skill uses the workspace's default tool permissions.
Adaline Providers are LLM services configured in your Adaline workspace. Each provider represents a connection to an AI service (OpenAI, Anthropic, Google, Azure, Bedrock, Vertex, Groq, xAI, OpenRouter, Together AI, or a custom endpoint). Models are specific LLMs available through each configured provider.
Key terms:
- Provider — a configured LLM service, identified by an `id` and an associated `settingId`.
- Model — a specific LLM available through a provider (e.g., `gpt-4o` via OpenAI). Has an `enabled` flag controlling availability.

| Provider name | Description |
|---|---|
| openai | OpenAI GPT models |
| anthropic | Anthropic Claude models |
| google | Google Gemini models |
| xai | xAI Grok models |
| azure | Azure OpenAI Service |
| bedrock | AWS Bedrock (multiple model families) |
| vertex | Google Vertex AI |
| groq | Groq ultra-low-latency inference |
| open-router | OpenRouter multi-provider gateway |
| togetherai | Together AI open-source models |
| custom | Custom or self-hosted LLM endpoints |
Set these environment variables when your Adaline credentials are available:
- ADALINE_API_KEY — your workspace API key (from Settings > API Keys at app.adaline.ai)

The API base URL is https://api.adaline.ai/v2. You can start integrating before you have credentials. All code examples use placeholder values — replace them with real values when ready.
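As a minimal sketch, the same setup in Python; the placeholder fallback string is illustrative, not part of the API, and the header shape simply mirrors the curl examples in this document:

```python
import os

# Bearer auth against the v2 base URL, matching the curl examples below.
BASE_URL = "https://api.adaline.ai/v2"

# Fall back to a placeholder so code can be wired up before credentials exist.
api_key = os.environ.get("ADALINE_API_KEY", "YOUR_API_KEY_HERE")

headers = {"Authorization": f"Bearer {api_key}"}
providers_url = f"{BASE_URL}/providers"
```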
Typical workflow:

- List providers to discover each provider's `id` and its available models.
- Check each provider's `modelSettings` to understand which parameters (temperature, maxTokens, etc.) it supports.

Checking available providers and models before creating prompts ensures you reference valid provider/model combinations and understand which model parameters are supported.
| Symptom | First Fix |
|---|---|
| Provider not in list | Verify credentials are configured in Adaline workspace settings |
| Model not found for provider | Use GET /models?providerId=ID to list models for that specific provider |
| Model shows enabled: false | Enable the model in Adaline workspace settings |
| modelSettings field missing a param | That provider does not support that parameter — omit it from prompt config |
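The first two fixes above can be sketched as a lookup helper that distinguishes "not found" from "disabled". The helper name and sample data are illustrative; the data mirrors the GET /models response schema shown later in this document, with `gpt-4o-mini` disabled purely for demonstration:

```python
# Hypothetical helper: separate "model not found" from "model disabled",
# following the first rows of the troubleshooting table.
def find_model(models, name):
    """Return (model, problem); problem is None when the model is usable."""
    for m in models:
        if m["name"] == name:
            if not m.get("enabled", False):
                return m, "model is disabled: enable it in workspace settings"
            return m, None
    return None, "model not found for this provider"

models = [
    {"id": "mdl_111aaa", "name": "gpt-4o", "enabled": True},
    {"id": "mdl_222bbb", "name": "gpt-4o-mini", "enabled": False},
]
```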
# List all configured providers
curl -X GET "https://api.adaline.ai/v2/providers" \
-H "Authorization: Bearer $ADALINE_API_KEY"
Response:
{
"providers": [
{
"id": "prov_abc123",
"settingId": "set_xyz789",
"name": "openai",
"description": "OpenAI GPT models",
"modelSettings": {
"temperature": { "enabled": true },
"maxTokens": { "enabled": true }
}
},
{
"id": "prov_def456",
"settingId": "set_uvw012",
"name": "anthropic",
"description": "Anthropic Claude models",
"modelSettings": {
"temperature": { "enabled": true },
"maxTokens": { "enabled": true }
}
}
]
}
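A parsing sketch for the response above, mapping each provider name to its enabled parameters; `response` is the sample payload pasted inline rather than a live API call:

```python
# Sample GET /providers payload, copied from the response shown above.
response = {
    "providers": [
        {"id": "prov_abc123", "settingId": "set_xyz789", "name": "openai",
         "modelSettings": {"temperature": {"enabled": True},
                           "maxTokens": {"enabled": True}}},
        {"id": "prov_def456", "settingId": "set_uvw012", "name": "anthropic",
         "modelSettings": {"temperature": {"enabled": True},
                           "maxTokens": {"enabled": True}}},
    ]
}

# Map provider name -> sorted list of supported (enabled) parameters.
supported = {
    p["name"]: sorted(k for k, v in p["modelSettings"].items() if v.get("enabled"))
    for p in response["providers"]
}
```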
# Get a single provider by ID
curl -X GET "https://api.adaline.ai/v2/providers/prov_abc123" \
-H "Authorization: Bearer $ADALINE_API_KEY"
Response:
{
"id": "prov_abc123",
"settingId": "set_xyz789",
"name": "openai",
"description": "OpenAI GPT models",
"modelSettings": {
"temperature": { "enabled": true },
"maxTokens": { "enabled": true }
}
}
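One way to use `modelSettings` is to filter a desired parameter set down to what the provider reports as enabled before building a prompt configuration. A sketch under that assumption; `topP` is a hypothetical extra parameter included only to show the filtering, not taken from the response above:

```python
# modelSettings from the sample provider response above.
model_settings = {
    "temperature": {"enabled": True},
    "maxTokens": {"enabled": True},
}

# Desired parameters; "topP" is illustrative and not enabled for this provider.
desired = {"temperature": 0.2, "maxTokens": 1024, "topP": 0.9}

# Keep only parameters the provider reports as enabled.
prompt_config = {
    name: value for name, value in desired.items()
    if model_settings.get(name, {}).get("enabled", False)
}
```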
# All models across all providers
curl -X GET "https://api.adaline.ai/v2/models" \
-H "Authorization: Bearer $ADALINE_API_KEY"
# Models for a specific provider
curl -X GET "https://api.adaline.ai/v2/models?providerId=prov_abc123" \
-H "Authorization: Bearer $ADALINE_API_KEY"
Response:
{
"models": [
{
"id": "mdl_111aaa",
"providerId": "prov_abc123",
"provider": "openai",
"name": "gpt-4o",
"enabled": true
},
{
"id": "mdl_222bbb",
"providerId": "prov_abc123",
"provider": "openai",
"name": "gpt-4o-mini",
"enabled": true
}
]
}
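A filtering sketch over the sample response above, keeping only `enabled` models before presenting choices to users; again the payload is pasted inline rather than fetched:

```python
# Sample GET /models payload, copied from the response shown above.
response = {
    "models": [
        {"id": "mdl_111aaa", "providerId": "prov_abc123",
         "provider": "openai", "name": "gpt-4o", "enabled": True},
        {"id": "mdl_222bbb", "providerId": "prov_abc123",
         "provider": "openai", "name": "gpt-4o-mini", "enabled": True},
    ]
}

# Offer only enabled models to users.
available = [m["name"] for m in response["models"] if m["enabled"]]
```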
Best practices:

- Check `modelSettings` to determine which parameters a provider supports before including them in prompt configuration — unsupported parameters are ignored or may cause errors.
- Use the `enabled` flag on models to filter out unavailable models before presenting choices to users.

See references/api.md for the full REST API reference with request parameters, response schemas, and curl examples.