From telnyx-javascript
Access Telnyx AI inference APIs via JavaScript SDK for chat completions, speech-to-text transcription, embeddings, and call analytics.
<!-- Auto-generated from Telnyx OpenAPI specs. Do not edit. -->
npm install telnyx
import Telnyx from 'telnyx';
const client = new Telnyx({
apiKey: process.env['TELNYX_API_KEY'], // This is the default and can be omitted
});
All examples below assume client is already initialized as shown above.
All API calls can fail with network errors, rate limits (429), validation errors (422), or authentication errors (401). Always handle errors in production code:
try {
const result = await client.messages.send({ to: '+13125550001', from: '+13125550002', text: 'Hello' });
} catch (err) {
if (err instanceof Telnyx.APIConnectionError) {
console.error('Network error — check connectivity and retry');
} else if (err instanceof Telnyx.RateLimitError) {
// 429: rate limited — wait and retry with exponential backoff
const retryAfter = err.headers?.['retry-after'] || 1;
await new Promise(r => setTimeout(r, retryAfter * 1000));
} else if (err instanceof Telnyx.APIError) {
console.error(`API error ${err.status}: ${err.message}`);
if (err.status === 422) {
console.error('Validation error — check required fields and formats');
}
}
}
Common error codes: 401 invalid API key, 403 insufficient permissions,
404 resource not found, 422 validation error (check field formats),
429 rate limited (retry with exponential backoff).
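The exponential backoff recommended above can be sketched as a small helper. The delay schedule (1s base, doubling, capped at 30s) and the retry count are assumptions, not Telnyx requirements:

```javascript
// Compute exponential backoff delays in milliseconds: 1s, 2s, 4s, ... capped at capMs.
function backoffDelays(attempts, baseMs = 1000, capMs = 30000) {
  return Array.from({ length: attempts }, (_, i) => Math.min(baseMs * 2 ** i, capMs));
}

// Retry an async request function until it succeeds or attempts are exhausted.
async function withRetries(fn, attempts = 3) {
  const delays = backoffDelays(attempts);
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err; // out of attempts: surface the error
      await new Promise((r) => setTimeout(r, delays[i]));
    }
  }
}
```

Usage sketch: `await withRetries(() => client.ai.retrieveModels())`. In production you would typically retry only on connection and 429 errors, not on 4xx validation failures.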
Use `for await (const item of result) { ... }` to iterate through all pages automatically.

Transcribe speech to text. This endpoint is consistent with the OpenAI Transcription API and may be used with the OpenAI JS or Python SDK.
POST /ai/audio/transcriptions
const response = await client.ai.audio.transcribe({ model: 'distil-whisper/distil-large-v2' });
console.log(response.text);
Returns: duration (number), segments (array[object]), text (string)
Chat with a language model. This endpoint is consistent with the OpenAI Chat Completions API and may be used with the OpenAI JS or Python SDK.
POST /ai/chat/completions — Required: messages
Optional: api_key_ref (string), best_of (integer), early_stopping (boolean), enable_thinking (boolean), frequency_penalty (number), guided_choice (array[string]), guided_json (object), guided_regex (string), length_penalty (number), logprobs (boolean), max_tokens (integer), min_p (number), model (string), n (number), presence_penalty (number), response_format (object), stream (boolean), temperature (number), tool_choice (enum: none, auto, required), tools (array[object]), top_logprobs (integer), top_p (number), use_beam_search (boolean)
const response = await client.ai.chat.createCompletion({
messages: [
{ role: 'system', content: 'You are a friendly chatbot.' },
{ role: 'user', content: 'Hello, world!' },
],
});
console.log(response);
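When `stream: true` is set among the optional parameters above, the completion arrives as chunks rather than a single response. Assuming OpenAI-style chunks (text deltas under `choices[0].delta.content`), a small accumulator makes the pattern explicit:

```javascript
// Concatenate the text deltas from an OpenAI-style streaming chat response.
async function collectStream(stream) {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices?.[0]?.delta?.content ?? '';
  }
  return text;
}

// Usage sketch (requires a configured client):
// const stream = await client.ai.chat.createCompletion({
//   messages: [{ role: 'user', content: 'Hello, world!' }],
//   stream: true,
// });
// console.log(await collectStream(stream));
```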
Retrieve a list of all AI conversations configured by the user. Supports PostgREST-style query parameters for filtering. Examples are included for the standard metadata fields, but you can filter on any field in the metadata JSON object.
GET /ai/conversations
const conversations = await client.ai.conversations.list();
console.log(conversations.data);
Returns: created_at (date-time), id (uuid), last_message_at (date-time), metadata (object), name (string)
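PostgREST-style filters take the form `field=operator.value`. The helper below, and the metadata field name it is used with, are illustrative assumptions rather than part of the SDK:

```javascript
// Build a PostgREST-style filter parameter, e.g. pgFilter('name', 'eq', 'support-bot')
// yields { name: 'eq.support-bot' }.
function pgFilter(field, op, value) {
  return { [field]: `${op}.${value}` };
}

// Usage sketch (requires a configured client; the query key and metadata field are assumptions):
// const conversations = await client.ai.conversations.list({
//   query: pgFilter('metadata->assistant_id', 'eq', 'assistant-123'),
// });
```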
Create a new AI Conversation.
POST /ai/conversations
Optional: metadata (object), name (string)
const conversation = await client.ai.conversations.create();
console.log(conversation.id);
Returns: created_at (date-time), id (uuid), last_message_at (date-time), metadata (object), name (string)
Get all insight groups
GET /ai/conversations/insight-groups
// Automatically fetches more pages as needed.
for await (const insightTemplateGroup of client.ai.conversations.insightGroups.retrieveInsightGroups()) {
console.log(insightTemplateGroup.id);
}
Returns: created_at (date-time), description (string), id (uuid), insights (array[object]), name (string), webhook (string)
Create a new insight group
POST /ai/conversations/insight-groups — Required: name
Optional: description (string), webhook (string)
const insightTemplateGroupDetail = await client.ai.conversations.insightGroups.insightGroups({
name: 'my-resource',
});
console.log(insightTemplateGroupDetail.data);
Returns: created_at (date-time), description (string), id (uuid), insights (array[object]), name (string), webhook (string)
Get insight group by ID
GET /ai/conversations/insight-groups/{group_id}
const insightTemplateGroupDetail = await client.ai.conversations.insightGroups.retrieve(
'182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
);
console.log(insightTemplateGroupDetail.data);
Returns: created_at (date-time), description (string), id (uuid), insights (array[object]), name (string), webhook (string)
Update an insight template group
PUT /ai/conversations/insight-groups/{group_id}
Optional: description (string), name (string), webhook (string)
const insightTemplateGroupDetail = await client.ai.conversations.insightGroups.update(
'182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
);
console.log(insightTemplateGroupDetail.data);
Returns: created_at (date-time), description (string), id (uuid), insights (array[object]), name (string), webhook (string)
Delete insight group by ID
DELETE /ai/conversations/insight-groups/{group_id}
await client.ai.conversations.insightGroups.delete('182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e');
Assign an insight to a group
POST /ai/conversations/insight-groups/{group_id}/insights/{insight_id}/assign
await client.ai.conversations.insightGroups.insights.assign(
'182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
{ group_id: '182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e' },
);
Remove an insight from a group
DELETE /ai/conversations/insight-groups/{group_id}/insights/{insight_id}/unassign
await client.ai.conversations.insightGroups.insights.deleteUnassign(
'182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
{ group_id: '182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e' },
);
Get all insights
GET /ai/conversations/insights
// Automatically fetches more pages as needed.
for await (const insightTemplate of client.ai.conversations.insights.list()) {
console.log(insightTemplate.id);
}
Returns: created_at (date-time), id (uuid), insight_type (enum: custom, default), instructions (string), json_schema (object), name (string), webhook (string)
Create a new insight
POST /ai/conversations/insights — Required: instructions, name
Optional: json_schema (object), webhook (string)
const insightTemplateDetail = await client.ai.conversations.insights.create({
instructions: 'You are a helpful assistant.',
name: 'my-resource',
});
console.log(insightTemplateDetail.data);
Returns: created_at (date-time), id (uuid), insight_type (enum: custom, default), instructions (string), json_schema (object), name (string), webhook (string)
Get insight by ID
GET /ai/conversations/insights/{insight_id}
const insightTemplateDetail = await client.ai.conversations.insights.retrieve(
'182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
);
console.log(insightTemplateDetail.data);
Returns: created_at (date-time), id (uuid), insight_type (enum: custom, default), instructions (string), json_schema (object), name (string), webhook (string)
Update an insight template
PUT /ai/conversations/insights/{insight_id}
Optional: instructions (string), json_schema (object), name (string), webhook (string)
const insightTemplateDetail = await client.ai.conversations.insights.update(
'182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
);
console.log(insightTemplateDetail.data);
Returns: created_at (date-time), id (uuid), insight_type (enum: custom, default), instructions (string), json_schema (object), name (string), webhook (string)
Delete insight by ID
DELETE /ai/conversations/insights/{insight_id}
await client.ai.conversations.insights.delete('182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e');
Retrieve a specific AI conversation by its ID.
GET /ai/conversations/{conversation_id}
const conversation = await client.ai.conversations.retrieve('550e8400-e29b-41d4-a716-446655440000');
console.log(conversation.data);
Returns: created_at (date-time), id (uuid), last_message_at (date-time), metadata (object), name (string)
Update metadata for a specific conversation.
PUT /ai/conversations/{conversation_id}
Optional: metadata (object)
const conversation = await client.ai.conversations.update('550e8400-e29b-41d4-a716-446655440000');
console.log(conversation.data);
Returns: created_at (date-time), id (uuid), last_message_at (date-time), metadata (object), name (string)
Delete a specific conversation by its ID.
DELETE /ai/conversations/{conversation_id}
await client.ai.conversations.delete('550e8400-e29b-41d4-a716-446655440000');
Retrieve insights for a specific conversation
GET /ai/conversations/{conversation_id}/conversations-insights
const response = await client.ai.conversations.retrieveConversationsInsights('550e8400-e29b-41d4-a716-446655440000');
console.log(response.data);
Returns: conversation_insights (array[object]), created_at (date-time), id (string), status (enum: pending, in_progress, completed, failed)
Add a new message to the conversation. Used to manually insert a new message into a conversation (without using the chat endpoint).
POST /ai/conversations/{conversation_id}/message — Required: role
Optional: content (string), metadata (object), name (string), sent_at (date-time), tool_call_id (string), tool_calls (array[object]), tool_choice (object)
await client.ai.conversations.addMessage('182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e', { role: 'user' });
Retrieve messages for a specific conversation, including tool calls made by the assistant.
GET /ai/conversations/{conversation_id}/messages
const messages = await client.ai.conversations.messages.list('550e8400-e29b-41d4-a716-446655440000');
console.log(messages.data);
Returns: created_at (date-time), role (enum: user, assistant, tool), sent_at (date-time), text (string), tool_calls (array[object])
Retrieve tasks for the user whose status is queued, processing, failed, success, or partial_success, based on the query string. Defaults to queued and processing.
GET /ai/embeddings
const embeddings = await client.ai.embeddings.list();
console.log(embeddings.data);
Returns: bucket (string), created_at (date-time), finished_at (date-time), status (enum: queued, processing, success, failure, partial_success), task_id (string), task_name (string), user_id (string)
Perform embedding on a Telnyx Storage Bucket using an embedding model. The currently supported file types are:
POST /ai/embeddings — Required: bucket_name
Optional: document_chunk_overlap_size (integer), document_chunk_size (integer), embedding_model (object), loader (object)
const embeddingResponse = await client.ai.embeddings.create({ bucket_name: 'bucket_name' });
console.log(embeddingResponse.data);
Returns: created_at (string), finished_at (string | null), status (string), task_id (uuid), task_name (string), user_id (uuid)
Get all embedding buckets for a user.
GET /ai/embeddings/buckets
const buckets = await client.ai.embeddings.buckets.list();
console.log(buckets.data);
Returns: buckets (array[string])
Get all embedded files for a given user bucket, including their processing status.
GET /ai/embeddings/buckets/{bucket_name}
const bucket = await client.ai.embeddings.buckets.retrieve('bucket_name');
console.log(bucket.data);
Returns: created_at (date-time), error_reason (string), filename (string), last_embedded_at (date-time), status (string), updated_at (date-time)
Deletes an entire bucket's embeddings and disables the bucket for AI-use, returning it to normal storage pricing.
DELETE /ai/embeddings/buckets/{bucket_name}
await client.ai.embeddings.buckets.delete('bucket_name');
Perform a similarity search on a Telnyx Storage Bucket, returning the num_docs document chunks most similar to the query. Currently the only available distance metric is cosine similarity, which returns a distance between 0 and 1. The lower the distance, the more similar the returned document chunks are to the query.
POST /ai/embeddings/similarity-search — Required: bucket_name, query
Optional: num_of_docs (integer)
const response = await client.ai.embeddings.similaritySearch({
bucket_name: 'bucket_name',
query: 'What is Telnyx?',
});
console.log(response.data);
Returns: distance (number), document_chunk (string), metadata (object)
Embed website content from a specified URL, including child pages up to 5 levels deep within the same domain. The process crawls and loads content from the main URL and its linked pages into a Telnyx Cloud Storage bucket.
POST /ai/embeddings/url — Required: url, bucket_name
const embeddingResponse = await client.ai.embeddings.url({
bucket_name: 'bucket_name',
url: 'https://example.com/resource',
});
console.log(embeddingResponse.data);
Returns: created_at (string), finished_at (string | null), status (string), task_id (uuid), task_name (string), user_id (uuid)
Check the status of a current embedding task. Will be one of the following:
queued - Task is waiting to be picked up by a worker
processing - The embedding task is running
success - Task completed successfully and the bucket is embedded
failure - Task failed and no files were embedded successfully
partial_success - Some files were embedded successfully, but at least one failed
GET /ai/embeddings/{task_id}
const embedding = await client.ai.embeddings.retrieve('task_id');
console.log(embedding.data);
Returns: created_at (string), finished_at (string), status (enum: queued, processing, success, failure, partial_success), task_id (uuid), task_name (string)
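Since the statuses above include both in-flight and terminal states, a common pattern is to poll this endpoint until a terminal status is reached. The polling interval below is an arbitrary choice:

```javascript
// Statuses after which an embedding task will not change again.
const TERMINAL_STATUSES = new Set(['success', 'failure', 'partial_success']);

function isTerminal(status) {
  return TERMINAL_STATUSES.has(status);
}

// Poll an embedding task until it reaches a terminal status
// (sketch; requires a configured client).
async function waitForEmbedding(client, taskId, intervalMs = 5000) {
  for (;;) {
    const embedding = await client.ai.embeddings.retrieve(taskId);
    if (isTerminal(embedding.data.status)) return embedding.data;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```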
Retrieve a list of all fine tuning jobs created by the user.
GET /ai/fine_tuning/jobs
const jobs = await client.ai.fineTuning.jobs.list();
console.log(jobs.data);
Returns: created_at (integer), finished_at (integer | null), hyperparameters (object), id (string), model (string), organization_id (string), status (enum: queued, running, succeeded, failed, cancelled), trained_tokens (integer | null), training_file (string)
Create a new fine tuning job.
POST /ai/fine_tuning/jobs — Required: model, training_file
Optional: hyperparameters (object), suffix (string)
const fineTuningJob = await client.ai.fineTuning.jobs.create({
model: 'openai/gpt-4o',
training_file: 'training_file',
});
console.log(fineTuningJob.id);
Returns: created_at (integer), finished_at (integer | null), hyperparameters (object), id (string), model (string), organization_id (string), status (enum: queued, running, succeeded, failed, cancelled), trained_tokens (integer | null), training_file (string)
Retrieve a fine tuning job by job_id.
GET /ai/fine_tuning/jobs/{job_id}
const fineTuningJob = await client.ai.fineTuning.jobs.retrieve('job_id');
console.log(fineTuningJob.id);
Returns: created_at (integer), finished_at (integer | null), hyperparameters (object), id (string), model (string), organization_id (string), status (enum: queued, running, succeeded, failed, cancelled), trained_tokens (integer | null), training_file (string)
Cancel a fine tuning job.
POST /ai/fine_tuning/jobs/{job_id}/cancel
const fineTuningJob = await client.ai.fineTuning.jobs.cancel('job_id');
console.log(fineTuningJob.id);
Returns: created_at (integer), finished_at (integer | null), hyperparameters (object), id (string), model (string), organization_id (string), status (enum: queued, running, succeeded, failed, cancelled), trained_tokens (integer | null), training_file (string)
This endpoint returns a list of Open Source and OpenAI models that are available for use. Note: model IDs take the form {source}/{model_name}, for example openai/gpt-4 or mistralai/Mistral-7B-Instruct-v0.1, consistent with Hugging Face naming conventions.
GET /ai/models
const response = await client.ai.retrieveModels();
console.log(response.data);
Returns: created (integer), id (string), object (string), owned_by (string)
Creates an embedding vector representing the input text. This endpoint is compatible with the OpenAI Embeddings API and may be used with the OpenAI JS or Python SDK by setting the base URL to https://api.telnyx.com/v2/ai/openai.
POST /ai/openai/embeddings — Required: input, model
Optional: dimensions (integer), encoding_format (enum: float, base64), user (string)
const response = await client.ai.openai.embeddings.createEmbeddings({
input: 'The quick brown fox jumps over the lazy dog',
model: 'thenlper/gte-large',
});
console.log(response.data);
Returns: data (array[object]), model (string), object (string), usage (object)
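Because the endpoint is OpenAI-compatible, the official `openai` package can be pointed at Telnyx by overriding the base URL, as the description above notes. The sketch below assumes the `openai` npm package is installed:

```javascript
// Base URL that makes the OpenAI SDK talk to Telnyx's OpenAI-compatible endpoints.
const TELNYX_OPENAI_BASE_URL = 'https://api.telnyx.com/v2/ai/openai';

// Sketch, assuming the `openai` npm package is available:
// import OpenAI from 'openai';
// const openai = new OpenAI({
//   apiKey: process.env['TELNYX_API_KEY'],
//   baseURL: TELNYX_OPENAI_BASE_URL,
// });
// const response = await openai.embeddings.create({
//   input: 'The quick brown fox jumps over the lazy dog',
//   model: 'thenlper/gte-large',
// });
```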
Returns a list of available embedding models. This endpoint is compatible with the OpenAI Models API format.
GET /ai/openai/embeddings/models
const response = await client.ai.openai.embeddings.listEmbeddingModels();
console.log(response.data);
Returns: created (integer), id (string), object (string), owned_by (string)
Generate a summary of a file's contents. Supports the following text formats:
Supports the following media formats (billed for both the transcription and summary):
POST /ai/summarize — Required: bucket, filename
Optional: system_prompt (string)
const response = await client.ai.summarize({ bucket: 'my-bucket', filename: 'data.csv' });
console.log(response.data);
Returns: summary (string)
Retrieves all Speech to Text batch report requests for the authenticated user
GET /legacy/reporting/batch_detail_records/speech_to_text
const speechToTexts = await client.legacy.reporting.batchDetailRecords.speechToText.list();
console.log(speechToTexts.data);
Returns: created_at (date-time), download_link (string), end_date (date-time), id (string), record_type (string), start_date (date-time), status (enum: PENDING, COMPLETE, FAILED, EXPIRED)
Creates a new Speech to Text batch report request with the specified filters
POST /legacy/reporting/batch_detail_records/speech_to_text — Required: start_date, end_date
const speechToText = await client.legacy.reporting.batchDetailRecords.speechToText.create({
end_date: '2020-07-01T00:00:00-06:00',
start_date: '2020-07-01T00:00:00-06:00',
});
console.log(speechToText.data);
Returns: created_at (date-time), download_link (string), end_date (date-time), id (string), record_type (string), start_date (date-time), status (enum: PENDING, COMPLETE, FAILED, EXPIRED)
Retrieves a specific Speech to Text batch report request by ID
GET /legacy/reporting/batch_detail_records/speech_to_text/{id}
const speechToText = await client.legacy.reporting.batchDetailRecords.speechToText.retrieve(
'182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
);
console.log(speechToText.data);
Returns: created_at (date-time), download_link (string), end_date (date-time), id (string), record_type (string), start_date (date-time), status (enum: PENDING, COMPLETE, FAILED, EXPIRED)
Deletes a specific Speech to Text batch report request by ID
DELETE /legacy/reporting/batch_detail_records/speech_to_text/{id}
const speechToText = await client.legacy.reporting.batchDetailRecords.speechToText.delete(
'182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e',
);
console.log(speechToText.data);
Returns: created_at (date-time), download_link (string), end_date (date-time), id (string), record_type (string), start_date (date-time), status (enum: PENDING, COMPLETE, FAILED, EXPIRED)
Generate and fetch speech to text usage report synchronously. This endpoint will both generate and fetch the speech to text report over a specified time period.
GET /legacy/reporting/usage_reports/speech_to_text
const response = await client.legacy.reporting.usageReports.retrieveSpeechToText();
console.log(response.data);
Returns: data (object)
Generate synthesized speech audio from text input. Returns audio in the requested format (binary audio stream, base64-encoded JSON, or an audio URL for later retrieval). Authentication is provided via the standard Authorization: Bearer header.
POST /text-to-speech/speech
Optional: aws (object), azure (object), disable_cache (boolean), elevenlabs (object), language (string), minimax (object), output_type (enum: binary_output, base64_output), provider (enum: aws, telnyx, azure, elevenlabs, minimax, rime, resemble), resemble (object), rime (object), telnyx (object), text (string), text_type (enum: text, ssml), voice (string), voice_settings (object)
const response = await client.textToSpeech.generate();
console.log(response.base64_audio);
Returns: base64_audio (string)
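With `output_type: 'base64_output'`, the response's `base64_audio` field can be decoded to bytes and written to disk. The request parameters and the output filename in the sketch are assumptions drawn from the optional-parameter list above:

```javascript
// Decode a base64 audio payload into a Node.js Buffer.
function decodeBase64Audio(base64Audio) {
  return Buffer.from(base64Audio, 'base64');
}

// Usage sketch (requires a configured client; file extension is an assumption):
// import { writeFileSync } from 'node:fs';
// const response = await client.textToSpeech.generate({
//   text: 'Hello from Telnyx',
//   output_type: 'base64_output',
// });
// writeFileSync('speech.mp3', decodeBase64Audio(response.base64_audio));
```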
Retrieve a list of available voices from one or all TTS providers. When provider is specified, returns voices for that provider only. Otherwise, returns voices from all providers.
GET /text-to-speech/voices
const response = await client.textToSpeech.listVoices();
console.log(response.voices);
Returns: voices (array[object])