From telnyx-python
Access Telnyx AI inference APIs via Python SDK: OpenAI-compatible chat completions, speech transcription, embeddings, and call analytics.
<!-- Auto-generated from Telnyx OpenAPI specs. Do not edit. -->
pip install telnyx
import os
from telnyx import Telnyx
client = Telnyx(
    api_key=os.environ.get("TELNYX_API_KEY"),  # This is the default and can be omitted
)
All examples below assume client is already initialized as shown above.
All API calls can fail with network errors, rate limits (429), validation errors (422), or authentication errors (401). Always handle errors in production code:
import time

import telnyx

try:
    result = client.messages.send(to="+13125550001", from_="+13125550002", text="Hello")
except telnyx.APIConnectionError:
    print("Network error — check connectivity and retry")
except telnyx.RateLimitError:
    # 429: rate limited — wait and retry with exponential backoff
    time.sleep(1)  # Check the Retry-After header for the actual delay
except telnyx.APIStatusError as e:
    print(f"API error {e.status_code}: {e.message}")
    if e.status_code == 422:
        print("Validation error — check required fields and formats")
Common error codes: 401 invalid API key, 403 insufficient permissions,
404 resource not found, 422 validation error (check field formats),
429 rate limited (retry with exponential backoff).
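The backoff guidance above can be sketched as a small retry helper. The delay schedule here is illustrative, not a Telnyx-mandated policy; in real code you would pass retryable=(telnyx.RateLimitError, telnyx.APIConnectionError) and honor the Retry-After header when present.

```python
import time


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff schedule: base * 2^attempt seconds, capped at `cap`."""
    return min(base * (2 ** attempt), cap)


def call_with_retry(fn, retryable=(Exception,), max_attempts=5, sleep=time.sleep):
    """Invoke fn(), retrying the given retryable errors with exponential backoff.

    `fn` is any zero-argument callable, e.g.
    lambda: client.messages.send(to=..., from_=..., text=...).
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            sleep(backoff_delay(attempt))
```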
Use for item in page_result: to iterate through all pages automatically.
Transcribe speech to text. This endpoint is consistent with the OpenAI Transcription API and may be used with the OpenAI JS or Python SDK.
POST /ai/audio/transcriptions
response = client.ai.audio.transcribe(
    model="distil-whisper/distil-large-v2",
)
print(response.text)
Returns: duration (number), segments (array[object]), text (string)
Chat with a language model. This endpoint is consistent with the OpenAI Chat Completions API and may be used with the OpenAI JS or Python SDK.
POST /ai/chat/completions — Required: messages
Optional: api_key_ref (string), best_of (integer), early_stopping (boolean), enable_thinking (boolean), frequency_penalty (number), guided_choice (array[string]), guided_json (object), guided_regex (string), length_penalty (number), logprobs (boolean), max_tokens (integer), min_p (number), model (string), n (number), presence_penalty (number), response_format (object), stream (boolean), temperature (number), tool_choice (enum: none, auto, required), tools (array[object]), top_logprobs (integer), top_p (number), use_beam_search (boolean)
response = client.ai.chat.create_completion(
    messages=[{
        "role": "system",
        "content": "You are a friendly chatbot.",
    }, {
        "role": "user",
        "content": "Hello, world!",
    }],
)
print(response)
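The optional tools and tool_choice parameters follow the OpenAI function-calling shape. A minimal sketch of assembling such a request as keyword arguments; get_weather is a hypothetical tool definition for illustration, not part of the Telnyx API:

```python
def build_chat_request(user_text: str) -> dict:
    """Build kwargs for an OpenAI-compatible tool-calling chat completion."""
    return {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_text},
        ],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Get current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }


# The dict unpacks straight into the SDK call:
# response = client.ai.chat.create_completion(**build_chat_request("Weather in Chicago?"))
```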
Retrieve a list of all AI conversations configured by the user. Supports PostgREST-style query parameters for filtering. Examples are included for the standard metadata fields, but you can filter on any field in the metadata JSON object.
GET /ai/conversations
conversations = client.ai.conversations.list()
print(conversations.data)
Returns: created_at (date-time), id (uuid), last_message_at (date-time), metadata (object), name (string)
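PostgREST-style filtering means filters travel as plain query parameters, with the operator embedded in the value. A sketch of building one such filter on a metadata field; forwarding it via extra_query is an assumption to verify against the SDK's per-request options:

```python
def metadata_filter(field: str, value: str, op: str = "eq") -> dict:
    """Build a PostgREST-style filter on a metadata JSON field.

    Example: metadata_filter("assistant_id", "abc")
             -> {"metadata->assistant_id": "eq.abc"}
    """
    return {f"metadata->{field}": f"{op}.{value}"}


# Assumed usage (check the telnyx-python request options for the exact hook):
# conversations = client.ai.conversations.list(
#     extra_query=metadata_filter("assistant_id", "abc"),
# )
```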
Create a new AI Conversation.
POST /ai/conversations
Optional: metadata (object), name (string)
conversation = client.ai.conversations.create()
print(conversation.id)
Returns: created_at (date-time), id (uuid), last_message_at (date-time), metadata (object), name (string)
Get all insight groups
GET /ai/conversations/insight-groups
page = client.ai.conversations.insight_groups.retrieve_insight_groups()
first = page.data[0]
print(first.id)
Returns: created_at (date-time), description (string), id (uuid), insights (array[object]), name (string), webhook (string)
Create a new insight group
POST /ai/conversations/insight-groups — Required: name
Optional: description (string), webhook (string)
insight_template_group_detail = client.ai.conversations.insight_groups.insight_groups(
name="my-resource",
)
print(insight_template_group_detail.data)
Returns: created_at (date-time), description (string), id (uuid), insights (array[object]), name (string), webhook (string)
Get insight group by ID
GET /ai/conversations/insight-groups/{group_id}
insight_template_group_detail = client.ai.conversations.insight_groups.retrieve(
"182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(insight_template_group_detail.data)
Returns: created_at (date-time), description (string), id (uuid), insights (array[object]), name (string), webhook (string)
Update an insight template group
PUT /ai/conversations/insight-groups/{group_id}
Optional: description (string), name (string), webhook (string)
insight_template_group_detail = client.ai.conversations.insight_groups.update(
group_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(insight_template_group_detail.data)
Returns: created_at (date-time), description (string), id (uuid), insights (array[object]), name (string), webhook (string)
Delete insight group by ID
DELETE /ai/conversations/insight-groups/{group_id}
client.ai.conversations.insight_groups.delete(
"182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
Assign an insight to a group
POST /ai/conversations/insight-groups/{group_id}/insights/{insight_id}/assign
client.ai.conversations.insight_groups.insights.assign(
insight_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
group_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
Remove an insight from a group
DELETE /ai/conversations/insight-groups/{group_id}/insights/{insight_id}/unassign
client.ai.conversations.insight_groups.insights.delete_unassign(
insight_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
group_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
Get all insights
GET /ai/conversations/insights
page = client.ai.conversations.insights.list()
first = page.data[0]
print(first.id)
Returns: created_at (date-time), id (uuid), insight_type (enum: custom, default), instructions (string), json_schema (object), name (string), webhook (string)
Create a new insight
POST /ai/conversations/insights — Required: instructions, name
Optional: json_schema (object), webhook (string)
insight_template_detail = client.ai.conversations.insights.create(
instructions="You are a helpful assistant.",
name="my-resource",
)
print(insight_template_detail.data)
Returns: created_at (date-time), id (uuid), insight_type (enum: custom, default), instructions (string), json_schema (object), name (string), webhook (string)
Get insight by ID
GET /ai/conversations/insights/{insight_id}
insight_template_detail = client.ai.conversations.insights.retrieve(
"182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(insight_template_detail.data)
Returns: created_at (date-time), id (uuid), insight_type (enum: custom, default), instructions (string), json_schema (object), name (string), webhook (string)
Update an insight template
PUT /ai/conversations/insights/{insight_id}
Optional: instructions (string), json_schema (object), name (string), webhook (string)
insight_template_detail = client.ai.conversations.insights.update(
insight_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(insight_template_detail.data)
Returns: created_at (date-time), id (uuid), insight_type (enum: custom, default), instructions (string), json_schema (object), name (string), webhook (string)
Delete insight by ID
DELETE /ai/conversations/insights/{insight_id}
client.ai.conversations.insights.delete(
"182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
Retrieve a specific AI conversation by its ID.
GET /ai/conversations/{conversation_id}
conversation = client.ai.conversations.retrieve(
"conversation_id",
)
print(conversation.data)
Returns: created_at (date-time), id (uuid), last_message_at (date-time), metadata (object), name (string)
Update metadata for a specific conversation.
PUT /ai/conversations/{conversation_id}
Optional: metadata (object)
conversation = client.ai.conversations.update(
conversation_id="550e8400-e29b-41d4-a716-446655440000",
)
print(conversation.data)
Returns: created_at (date-time), id (uuid), last_message_at (date-time), metadata (object), name (string)
Delete a specific conversation by its ID.
DELETE /ai/conversations/{conversation_id}
client.ai.conversations.delete(
"conversation_id",
)
Retrieve insights for a specific conversation
GET /ai/conversations/{conversation_id}/conversations-insights
response = client.ai.conversations.retrieve_conversations_insights(
"conversation_id",
)
print(response.data)
Returns: conversation_insights (array[object]), created_at (date-time), id (string), status (enum: pending, in_progress, completed, failed)
Add a new message to the conversation. Used to insert a new message into a conversation manually (without using the chat endpoint).
POST /ai/conversations/{conversation_id}/message — Required: role
Optional: content (string), metadata (object), name (string), sent_at (date-time), tool_call_id (string), tool_calls (array[object]), tool_choice (object)
client.ai.conversations.add_message(
conversation_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
role="user",
)
Retrieve messages for a specific conversation, including tool calls made by the assistant.
GET /ai/conversations/{conversation_id}/messages
messages = client.ai.conversations.messages.list(
"conversation_id",
)
print(messages.data)
Returns: created_at (date-time), role (enum: user, assistant, tool), sent_at (date-time), text (string), tool_calls (array[object])
Retrieve embedding tasks for the user that are queued, processing, failed, success, or partial_success, based on the query string. Defaults to queued and processing.
GET /ai/embeddings
embeddings = client.ai.embeddings.list()
print(embeddings.data)
Returns: bucket (string), created_at (date-time), finished_at (date-time), status (enum: queued, processing, success, failure, partial_success), task_id (string), task_name (string), user_id (string)
Perform embedding on a Telnyx Storage Bucket using an embedding model. Several common document file types are supported.
POST /ai/embeddings — Required: bucket_name
Optional: document_chunk_overlap_size (integer), document_chunk_size (integer), embedding_model (object), loader (object)
embedding_response = client.ai.embeddings.create(
bucket_name="my-bucket",
)
print(embedding_response.data)
Returns: created_at (string), finished_at (string | null), status (string), task_id (uuid), task_name (string), user_id (uuid)
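document_chunk_size and document_chunk_overlap_size control how each document is split before embedding. A sketch of the sliding-window arithmetic these parameters imply; Telnyx's actual chunker may differ in details such as token boundaries:

```python
def chunk_spans(n_tokens: int, size: int = 512, overlap: int = 64):
    """Return (start, end) spans of a sliding-window chunker.

    Each chunk is `size` tokens; consecutive chunks share `overlap`
    tokens, so each new chunk starts (size - overlap) after the last.
    """
    step = size - overlap
    spans = []
    start = 0
    while start < n_tokens:
        spans.append((start, min(start + size, n_tokens)))
        if start + size >= n_tokens:
            break  # final chunk reached the end of the document
        start += step
    return spans
```

Larger overlap improves recall at chunk boundaries at the cost of more stored vectors.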
Get all embedding buckets for a user.
GET /ai/embeddings/buckets
buckets = client.ai.embeddings.buckets.list()
print(buckets.data)
Returns: buckets (array[string])
Get all embedded files for a given user bucket, including their processing status.
GET /ai/embeddings/buckets/{bucket_name}
bucket = client.ai.embeddings.buckets.retrieve(
"bucket_name",
)
print(bucket.data)
Returns: created_at (date-time), error_reason (string), filename (string), last_embedded_at (date-time), status (string), updated_at (date-time)
Deletes an entire bucket's embeddings and disables the bucket for AI-use, returning it to normal storage pricing.
DELETE /ai/embeddings/buckets/{bucket_name}
client.ai.embeddings.buckets.delete(
"bucket_name",
)
Perform a similarity search on a Telnyx Storage Bucket, returning the most similar num_docs document chunks to the query. Currently the only available distance metric is cosine similarity which will return a distance between 0 and 1. The lower the distance, the more similar the returned document chunks are to the query.
POST /ai/embeddings/similarity-search — Required: bucket_name, query
Optional: num_of_docs (integer)
response = client.ai.embeddings.similarity_search(
bucket_name="my-bucket",
query="What is Telnyx?",
)
print(response.data)
Returns: distance (number), document_chunk (string), metadata (object)
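Since a lower cosine distance means a closer match, picking the best chunks from a similarity-search response is a sort ascending by distance. A small sketch over the fields listed in the Returns line:

```python
def top_matches(chunks: list, k: int = 3) -> list:
    """Return the k most similar document_chunk strings (lowest distance first)."""
    ranked = sorted(chunks, key=lambda c: c["distance"])
    return [c["document_chunk"] for c in ranked[:k]]
```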
Embed website content from a specified URL, including child pages up to 5 levels deep within the same domain. The process crawls and loads content from the main URL and its linked pages into a Telnyx Cloud Storage bucket.
POST /ai/embeddings/url — Required: url, bucket_name
embedding_response = client.ai.embeddings.url(
bucket_name="my-bucket",
url="https://example.com/resource",
)
print(embedding_response.data)
Returns: created_at (string), finished_at (string | null), status (string), task_id (uuid), task_name (string), user_id (uuid)
Check the status of a current embedding task. Status will be one of the following:
queued - Task is waiting to be picked up by a worker
processing - The embedding task is running
success - Task completed successfully and the bucket is embedded
failure - Task failed and no files were embedded successfully
partial_success - Some files were embedded successfully, but at least one failed
GET /ai/embeddings/{task_id}
embedding = client.ai.embeddings.retrieve(
"task_id",
)
print(embedding.data)
Returns: created_at (string), finished_at (string), status (enum: queued, processing, success, failure, partial_success), task_id (uuid), task_name (string)
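Embedding is asynchronous, so callers typically poll this endpoint until the status is terminal. A generic polling sketch; the real fetch would be something like lambda: client.ai.embeddings.retrieve(task_id).data.status (that attribute path is an assumption to check against the SDK's response model):

```python
import time

TERMINAL = {"success", "failure", "partial_success"}


def wait_for_task(fetch_status, poll_interval=5.0, timeout=600.0,
                  sleep=time.sleep, clock=time.monotonic):
    """Poll `fetch_status()` (a zero-arg callable returning the status
    string) until the embedding task reaches a terminal status."""
    deadline = clock() + timeout
    while True:
        status = fetch_status()
        if status in TERMINAL:
            return status
        if clock() >= deadline:
            raise TimeoutError(f"task still {status} after {timeout}s")
        sleep(poll_interval)
```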
Retrieve a list of all fine tuning jobs created by the user.
GET /ai/fine_tuning/jobs
jobs = client.ai.fine_tuning.jobs.list()
print(jobs.data)
Returns: created_at (integer), finished_at (integer | null), hyperparameters (object), id (string), model (string), organization_id (string), status (enum: queued, running, succeeded, failed, cancelled), trained_tokens (integer | null), training_file (string)
Create a new fine tuning job.
POST /ai/fine_tuning/jobs — Required: model, training_file
Optional: hyperparameters (object), suffix (string)
fine_tuning_job = client.ai.fine_tuning.jobs.create(
model="openai/gpt-4o",
training_file="training-data.jsonl",
)
print(fine_tuning_job.id)
Returns: created_at (integer), finished_at (integer | null), hyperparameters (object), id (string), model (string), organization_id (string), status (enum: queued, running, succeeded, failed, cancelled), trained_tokens (integer | null), training_file (string)
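The training_file is a JSONL dataset. The exact schema Telnyx fine-tuning expects is not spelled out here; this sketch assumes the common OpenAI-style chat format, one conversation per line:

```python
import json


def training_line(user: str, assistant: str) -> str:
    """One JSONL line in OpenAI-style chat format (assumed schema)."""
    return json.dumps({"messages": [
        {"role": "user", "content": user},
        {"role": "assistant", "content": assistant},
    ]})


def write_training_file(path: str, pairs) -> None:
    """Write (user, assistant) pairs to a JSONL training file."""
    with open(path, "w") as f:
        for user, assistant in pairs:
            f.write(training_line(user, assistant) + "\n")
```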
Retrieve a fine tuning job by job_id.
GET /ai/fine_tuning/jobs/{job_id}
fine_tuning_job = client.ai.fine_tuning.jobs.retrieve(
"job_id",
)
print(fine_tuning_job.id)
Returns: created_at (integer), finished_at (integer | null), hyperparameters (object), id (string), model (string), organization_id (string), status (enum: queued, running, succeeded, failed, cancelled), trained_tokens (integer | null), training_file (string)
Cancel a fine tuning job.
POST /ai/fine_tuning/jobs/{job_id}/cancel
fine_tuning_job = client.ai.fine_tuning.jobs.cancel(
"job_id",
)
print(fine_tuning_job.id)
Returns: created_at (integer), finished_at (integer | null), hyperparameters (object), id (string), model (string), organization_id (string), status (enum: queued, running, succeeded, failed, cancelled), trained_tokens (integer | null), training_file (string)
This endpoint returns a list of Open Source and OpenAI models that are available for use. Note: model IDs take the form {source}/{model_name}, for example openai/gpt-4 or mistralai/Mistral-7B-Instruct-v0.1, consistent with Hugging Face naming conventions.
GET /ai/models
response = client.ai.retrieve_models()
print(response.data)
Returns: created (integer), id (string), object (string), owned_by (string)
Creates an embedding vector representing the input text. This endpoint is compatible with the OpenAI Embeddings API and may be used with the OpenAI JS or Python SDK by setting the base URL to https://api.telnyx.com/v2/ai/openai.
POST /ai/openai/embeddings — Required: input, model
Optional: dimensions (integer), encoding_format (enum: float, base64), user (string)
response = client.ai.openai.embeddings.create_embeddings(
input="The quick brown fox jumps over the lazy dog",
model="thenlper/gte-large",
)
print(response.data)
Returns: data (array[object]), model (string), object (string), usage (object)
Returns a list of available embedding models. This endpoint is compatible with the OpenAI Models API format.
GET /ai/openai/embeddings/models
response = client.ai.openai.embeddings.list_embedding_models()
print(response.data)
Returns: created (integer), id (string), object (string), owned_by (string)
Generate a summary of a file's contents. Both text and media formats are supported; media files are billed for both the transcription and the summary.
POST /ai/summarize — Required: bucket, filename
Optional: system_prompt (string)
response = client.ai.summarize(
bucket="my-bucket",
filename="data.csv",
)
print(response.data)
Returns: summary (string)
Retrieves all Speech to Text batch report requests for the authenticated user
GET /legacy/reporting/batch_detail_records/speech_to_text
speech_to_texts = client.legacy.reporting.batch_detail_records.speech_to_text.list()
print(speech_to_texts.data)
Returns: created_at (date-time), download_link (string), end_date (date-time), id (string), record_type (string), start_date (date-time), status (enum: PENDING, COMPLETE, FAILED, EXPIRED)
Creates a new Speech to Text batch report request with the specified filters
POST /legacy/reporting/batch_detail_records/speech_to_text — Required: start_date, end_date
from datetime import datetime

speech_to_text = client.legacy.reporting.batch_detail_records.speech_to_text.create(
    end_date=datetime.fromisoformat("2020-07-01T00:00:00-06:00"),
    start_date=datetime.fromisoformat("2020-07-01T00:00:00-06:00"),
)
print(speech_to_text.data)
Returns: created_at (date-time), download_link (string), end_date (date-time), id (string), record_type (string), start_date (date-time), status (enum: PENDING, COMPLETE, FAILED, EXPIRED)
Retrieves a specific Speech to Text batch report request by ID
GET /legacy/reporting/batch_detail_records/speech_to_text/{id}
speech_to_text = client.legacy.reporting.batch_detail_records.speech_to_text.retrieve(
"182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(speech_to_text.data)
Returns: created_at (date-time), download_link (string), end_date (date-time), id (string), record_type (string), start_date (date-time), status (enum: PENDING, COMPLETE, FAILED, EXPIRED)
Deletes a specific Speech to Text batch report request by ID
DELETE /legacy/reporting/batch_detail_records/speech_to_text/{id}
speech_to_text = client.legacy.reporting.batch_detail_records.speech_to_text.delete(
"182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(speech_to_text.data)
Returns: created_at (date-time), download_link (string), end_date (date-time), id (string), record_type (string), start_date (date-time), status (enum: PENDING, COMPLETE, FAILED, EXPIRED)
Generate and fetch speech to text usage report synchronously. This endpoint will both generate and fetch the speech to text report over a specified time period.
GET /legacy/reporting/usage_reports/speech_to_text
response = client.legacy.reporting.usage_reports.retrieve_speech_to_text()
print(response.data)
Returns: data (object)
Generate synthesized speech audio from text input. Returns audio in the requested format (binary audio stream, base64-encoded JSON, or an audio URL for later retrieval). Authentication is provided via the standard Authorization: Bearer header.
POST /text-to-speech/speech
Optional: aws (object), azure (object), disable_cache (boolean), elevenlabs (object), language (string), minimax (object), output_type (enum: binary_output, base64_output), provider (enum: aws, telnyx, azure, elevenlabs, minimax, rime, resemble), resemble (object), rime (object), telnyx (object), text (string), text_type (enum: text, ssml), voice (string), voice_settings (object)
response = client.text_to_speech.generate()
print(response.base64_audio)
Returns: base64_audio (string)
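With output_type="base64_output" the response carries the audio as a base64 string, so playback or storage requires decoding it back to bytes. A short sketch of persisting response.base64_audio to disk:

```python
import base64


def save_base64_audio(b64_audio: str, path: str) -> int:
    """Decode a base64 audio payload and write it to `path`; returns byte count."""
    data = base64.b64decode(b64_audio)
    with open(path, "wb") as f:
        f.write(data)
    return len(data)


# Usage: save_base64_audio(response.base64_audio, "speech.wav")
```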
Retrieve a list of available voices from one or all TTS providers. When provider is specified, returns voices for that provider only. Otherwise, returns voices from all providers.
GET /text-to-speech/voices
response = client.text_to_speech.list_voices()
print(response.voices)
Returns: voices (array[object])