Provides Ruby SDK examples for Telnyx AI APIs: speech-to-text transcription, chat completions, embeddings, and call insights analytics.
<!-- Auto-generated from Telnyx OpenAPI specs. Do not edit. -->
gem install telnyx
require "telnyx"
client = Telnyx::Client.new(
api_key: ENV["TELNYX_API_KEY"], # This is the default and can be omitted
)
All examples below assume client is already initialized as shown above.
All API calls can fail with network errors, rate limits (429), validation errors (422), or authentication errors (401). Always handle errors in production code:
begin
result = client.messages.send_(to: "+13125550001", from: "+13125550002", text: "Hello")
rescue Telnyx::Errors::APIConnectionError
puts "Network error — check connectivity and retry"
rescue Telnyx::Errors::RateLimitError
# 429: rate limited — wait and retry with exponential backoff
sleep(1) # Check Retry-After header for actual delay
rescue Telnyx::Errors::APIStatusError => e
puts "API error #{e.status}: #{e.message}"
if e.status == 422
puts "Validation error — check required fields and formats"
end
end
Common error codes: 401 invalid API key, 403 insufficient permissions,
404 resource not found, 422 validation error (check field formats),
429 rate limited (retry with exponential backoff).
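The backoff advice above can be sketched as a small retry helper. This is a generic sketch, not part of the Telnyx SDK: it rescues StandardError so it runs standalone, whereas production code would rescue Telnyx::Errors::RateLimitError and honor the Retry-After header.

```ruby
# Generic exponential backoff sketch: retries a block up to
# max_attempts times, doubling the delay after each failure.
def with_backoff(max_attempts: 4, base_delay: 0.01)
  attempts = 0
  begin
    attempts += 1
    yield attempts
  rescue StandardError
    raise if attempts >= max_attempts
    sleep(base_delay * (2**(attempts - 1))) # 0.01s, 0.02s, 0.04s, ...
    retry
  end
end

# Simulate a call that is rate limited twice before succeeding.
result = with_backoff do |n|
  raise "rate limited" if n < 3
  "ok after #{n} tries"
end
```

In real code the block body would be the Telnyx API call, e.g. client.ai.chat.create_completion(...).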
List endpoints return a page object; call .auto_paging_each for automatic iteration: page.auto_paging_each { |item| puts item.id }.
Transcribe speech to text. This endpoint is consistent with the OpenAI Transcription API and may be used with the OpenAI JS or Python SDK.
POST /ai/audio/transcriptions
response = client.ai.audio.transcribe(model: :"distil-whisper/distil-large-v2")
puts(response)
Returns: duration (number), segments (array[object]), text (string)
Chat with a language model. This endpoint is consistent with the OpenAI Chat Completions API and may be used with the OpenAI JS or Python SDK.
POST /ai/chat/completions — Required: messages
Optional: api_key_ref (string), best_of (integer), early_stopping (boolean), enable_thinking (boolean), frequency_penalty (number), guided_choice (array[string]), guided_json (object), guided_regex (string), length_penalty (number), logprobs (boolean), max_tokens (integer), min_p (number), model (string), n (number), presence_penalty (number), response_format (object), stream (boolean), temperature (number), tool_choice (enum: none, auto, required), tools (array[object]), top_logprobs (integer), top_p (number), use_beam_search (boolean)
response = client.ai.chat.create_completion(
messages: [{content: "You are a friendly chatbot.", role: :system}, {content: "Hello, world!", role: :user}]
)
puts(response)
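The guided_json option listed above constrains the model's reply to a JSON schema. A sketch of such a request payload; the schema, field names, and prompts are illustrative, not from the Telnyx docs, and the commented-out call shows where the payload would be sent:

```ruby
# JSON schema the model's reply must conform to (illustrative).
schema = {
  "type" => "object",
  "properties" => {
    "sentiment" => {"type" => "string", "enum" => %w[positive negative neutral]}
  },
  "required" => ["sentiment"]
}

params = {
  messages: [
    {role: :system, content: "Classify the sentiment of the user's message."},
    {role: :user, content: "I love this product!"}
  ],
  guided_json: schema,
  temperature: 0.0
}

# response = client.ai.chat.create_completion(**params)
```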
Retrieve a list of all AI conversations configured by the user. Supports PostgREST-style query parameters for filtering. Examples are included for the standard metadata fields, but you can filter on any field in the metadata JSON object.
GET /ai/conversations
conversations = client.ai.conversations.list
puts(conversations)
Returns: created_at (date-time), id (uuid), last_message_at (date-time), metadata (object), name (string)
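The PostgREST-style filters mentioned above are key=operator.value pairs in the query string. A sketch of how such filters compose; the field names and operator values below are illustrative examples of the convention, not an exhaustive list:

```ruby
require "uri"

# PostgREST-style filters: <field>=<operator>.<value>.
# Metadata fields are addressed with the -> accessor.
filters = {
  "name" => "eq.support-call",
  "metadata->assistant_id" => "eq.assistant-123",
  "created_at" => "gte.2024-01-01T00:00:00Z"
}
query = URI.encode_www_form(filters) # append to GET /ai/conversations?...
```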
Create a new AI Conversation.
POST /ai/conversations
Optional: metadata (object), name (string)
conversation = client.ai.conversations.create
puts(conversation)
Returns: created_at (date-time), id (uuid), last_message_at (date-time), metadata (object), name (string)
Get all insight groups
GET /ai/conversations/insight-groups
page = client.ai.conversations.insight_groups.retrieve_insight_groups
puts(page)
Returns: created_at (date-time), description (string), id (uuid), insights (array[object]), name (string), webhook (string)
Create a new insight group
POST /ai/conversations/insight-groups — Required: name
Optional: description (string), webhook (string)
insight_template_group_detail = client.ai.conversations.insight_groups.insight_groups(name: "my-resource")
puts(insight_template_group_detail)
Returns: created_at (date-time), description (string), id (uuid), insights (array[object]), name (string), webhook (string)
Get insight group by ID
GET /ai/conversations/insight-groups/{group_id}
insight_template_group_detail = client.ai.conversations.insight_groups.retrieve("182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e")
puts(insight_template_group_detail)
Returns: created_at (date-time), description (string), id (uuid), insights (array[object]), name (string), webhook (string)
Update an insight template group
PUT /ai/conversations/insight-groups/{group_id}
Optional: description (string), name (string), webhook (string)
insight_template_group_detail = client.ai.conversations.insight_groups.update("182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e")
puts(insight_template_group_detail)
Returns: created_at (date-time), description (string), id (uuid), insights (array[object]), name (string), webhook (string)
Delete insight group by ID
DELETE /ai/conversations/insight-groups/{group_id}
result = client.ai.conversations.insight_groups.delete("182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e")
puts(result)
Assign an insight to a group
POST /ai/conversations/insight-groups/{group_id}/insights/{insight_id}/assign
result = client.ai.conversations.insight_groups.insights.assign(
"182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
group_id: "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e"
)
puts(result)
Remove an insight from a group
DELETE /ai/conversations/insight-groups/{group_id}/insights/{insight_id}/unassign
result = client.ai.conversations.insight_groups.insights.delete_unassign(
"182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
group_id: "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e"
)
puts(result)
Get all insights
GET /ai/conversations/insights
page = client.ai.conversations.insights.list
puts(page)
Returns: created_at (date-time), id (uuid), insight_type (enum: custom, default), instructions (string), json_schema (object), name (string), webhook (string)
Create a new insight
POST /ai/conversations/insights — Required: instructions, name
Optional: json_schema (object), webhook (string)
insight_template_detail = client.ai.conversations.insights.create(instructions: "You are a helpful assistant.", name: "my-resource")
puts(insight_template_detail)
Returns: created_at (date-time), id (uuid), insight_type (enum: custom, default), instructions (string), json_schema (object), name (string), webhook (string)
Get insight by ID
GET /ai/conversations/insights/{insight_id}
insight_template_detail = client.ai.conversations.insights.retrieve("182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e")
puts(insight_template_detail)
Returns: created_at (date-time), id (uuid), insight_type (enum: custom, default), instructions (string), json_schema (object), name (string), webhook (string)
Update an insight template
PUT /ai/conversations/insights/{insight_id}
Optional: instructions (string), json_schema (object), name (string), webhook (string)
insight_template_detail = client.ai.conversations.insights.update("182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e")
puts(insight_template_detail)
Returns: created_at (date-time), id (uuid), insight_type (enum: custom, default), instructions (string), json_schema (object), name (string), webhook (string)
Delete insight by ID
DELETE /ai/conversations/insights/{insight_id}
result = client.ai.conversations.insights.delete("182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e")
puts(result)
Retrieve a specific AI conversation by its ID.
GET /ai/conversations/{conversation_id}
conversation = client.ai.conversations.retrieve("550e8400-e29b-41d4-a716-446655440000")
puts(conversation)
Returns: created_at (date-time), id (uuid), last_message_at (date-time), metadata (object), name (string)
Update metadata for a specific conversation.
PUT /ai/conversations/{conversation_id}
Optional: metadata (object)
conversation = client.ai.conversations.update("550e8400-e29b-41d4-a716-446655440000", metadata: {topic: "billing"})
puts(conversation)
Returns: created_at (date-time), id (uuid), last_message_at (date-time), metadata (object), name (string)
Delete a specific conversation by its ID.
DELETE /ai/conversations/{conversation_id}
result = client.ai.conversations.delete("550e8400-e29b-41d4-a716-446655440000")
puts(result)
Retrieve insights for a specific conversation
GET /ai/conversations/{conversation_id}/conversations-insights
response = client.ai.conversations.retrieve_conversations_insights("550e8400-e29b-41d4-a716-446655440000")
puts(response)
Returns: conversation_insights (array[object]), created_at (date-time), id (string), status (enum: pending, in_progress, completed, failed)
Add a new message to the conversation. Used to manually insert a message into a conversation (without using the chat endpoint).
POST /ai/conversations/{conversation_id}/message — Required: role
Optional: content (string), metadata (object), name (string), sent_at (date-time), tool_call_id (string), tool_calls (array[object]), tool_choice (object)
result = client.ai.conversations.add_message("182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e", role: "user")
puts(result)
Retrieve messages for a specific conversation, including tool calls made by the assistant.
GET /ai/conversations/{conversation_id}/messages
messages = client.ai.conversations.messages.list("550e8400-e29b-41d4-a716-446655440000")
puts(messages)
Returns: created_at (date-time), role (enum: user, assistant, tool), sent_at (date-time), text (string), tool_calls (array[object])
Retrieve embedding tasks for the user whose status is queued, processing, failed, success, or partial_success, depending on the query string. Defaults to queued and processing.
GET /ai/embeddings
embeddings = client.ai.embeddings.list
puts(embeddings)
Returns: bucket (string), created_at (date-time), finished_at (date-time), status (enum: queued, processing, success, failure, partial_success), task_id (string), task_name (string), user_id (string)
Perform embedding on a Telnyx Storage Bucket using an embedding model. The current supported file types are:
POST /ai/embeddings — Required: bucket_name
Optional: document_chunk_overlap_size (integer), document_chunk_size (integer), embedding_model (object), loader (object)
embedding_response = client.ai.embeddings.create(bucket_name: "my-bucket")
puts(embedding_response)
Returns: created_at (string), finished_at (string | null), status (string), task_id (uuid), task_name (string), user_id (uuid)
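To illustrate what document_chunk_size and document_chunk_overlap_size control, here is a toy character-based chunker. It mirrors the idea (fixed-size windows that share an overlap), not the service's actual implementation:

```ruby
# Split text into windows of `size` characters where consecutive
# windows share `overlap` characters.
def chunk(text, size:, overlap:)
  step = size - overlap
  (0...text.length).step(step).map { |i| text[i, size] }
end

chunks = chunk("abcdefghij", size: 4, overlap: 2)
# chunks => ["abcd", "cdef", "efgh", "ghij", "ij"]
```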
Get all embedding buckets for a user.
GET /ai/embeddings/buckets
buckets = client.ai.embeddings.buckets.list
puts(buckets)
Returns: buckets (array[string])
Get all embedded files for a given user bucket, including their processing status.
GET /ai/embeddings/buckets/{bucket_name}
bucket = client.ai.embeddings.buckets.retrieve("bucket_name")
puts(bucket)
Returns: created_at (date-time), error_reason (string), filename (string), last_embedded_at (date-time), status (string), updated_at (date-time)
Deletes an entire bucket's embeddings and disables the bucket for AI-use, returning it to normal storage pricing.
DELETE /ai/embeddings/buckets/{bucket_name}
result = client.ai.embeddings.buckets.delete("bucket_name")
puts(result)
Perform a similarity search on a Telnyx Storage Bucket, returning the most similar num_docs document chunks to the query. Currently the only available distance metric is cosine similarity which will return a distance between 0 and 1. The lower the distance, the more similar the returned document chunks are to the query.
POST /ai/embeddings/similarity-search — Required: bucket_name, query
Optional: num_of_docs (integer)
response = client.ai.embeddings.similarity_search(bucket_name: "my-bucket", query: "What is Telnyx?")
puts(response)
Returns: distance (number), document_chunk (string), metadata (object)
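Since distance is cosine distance in [0, 1] and lower means more similar, ranking the returned chunks is a simple sort. The chunk data below is fabricated to keep the sketch self-contained:

```ruby
# Stand-in for the similarity_search response body.
chunks = [
  {"document_chunk" => "Telnyx is a communications platform.", "distance" => 0.12},
  {"document_chunk" => "Unrelated text about cooking.", "distance" => 0.87},
  {"document_chunk" => "Telnyx provides AI inference APIs.", "distance" => 0.19}
]

ranked = chunks.sort_by { |c| c["distance"] } # most similar first
best = ranked.first
```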
Embed website content from a specified URL, including child pages up to 5 levels deep within the same domain. The process crawls and loads content from the main URL and its linked pages into a Telnyx Cloud Storage bucket.
POST /ai/embeddings/url — Required: url, bucket_name
embedding_response = client.ai.embeddings.url(bucket_name: "my-bucket", url: "https://example.com/resource")
puts(embedding_response)
Returns: created_at (string), finished_at (string | null), status (string), task_id (uuid), task_name (string), user_id (uuid)
Check the status of a current embedding task. Will be one of the following:
queued - Task is waiting to be picked up by a worker
processing - The embedding task is running
success - Task completed successfully and the bucket is embedded
failure - Task failed and no files were embedded successfully
partial_success - Some files were embedded successfully, but at least one failed
GET /ai/embeddings/{task_id}
embedding = client.ai.embeddings.retrieve("task_id")
puts(embedding)
Returns: created_at (string), finished_at (string), status (enum: queued, processing, success, failure, partial_success), task_id (uuid), task_name (string)
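Because an embedding task is asynchronous, callers typically poll GET /ai/embeddings/{task_id} until it reaches a terminal status. A generic polling sketch; the block stands in for the client.ai.embeddings.retrieve call, and the status sequence is simulated:

```ruby
TERMINAL_STATUSES = %w[success failure partial_success].freeze

# Call the block repeatedly until it returns a terminal status.
def wait_for_task(interval: 0.01)
  loop do
    status = yield
    return status if TERMINAL_STATUSES.include?(status)
    sleep(interval)
  end
end

# Simulated status sequence; real code would fetch the task's
# status from client.ai.embeddings.retrieve(task_id).
fake_statuses = %w[queued processing processing success]
final_status = wait_for_task { fake_statuses.shift }
```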
Retrieve a list of all fine tuning jobs created by the user.
GET /ai/fine_tuning/jobs
jobs = client.ai.fine_tuning.jobs.list
puts(jobs)
Returns: created_at (integer), finished_at (integer | null), hyperparameters (object), id (string), model (string), organization_id (string), status (enum: queued, running, succeeded, failed, cancelled), trained_tokens (integer | null), training_file (string)
Create a new fine tuning job.
POST /ai/fine_tuning/jobs — Required: model, training_file
Optional: hyperparameters (object), suffix (string)
fine_tuning_job = client.ai.fine_tuning.jobs.create(model: "openai/gpt-4o", training_file: "training-data.jsonl")
puts(fine_tuning_job)
Returns: created_at (integer), finished_at (integer | null), hyperparameters (object), id (string), model (string), organization_id (string), status (enum: queued, running, succeeded, failed, cancelled), trained_tokens (integer | null), training_file (string)
Retrieve a fine tuning job by job_id.
GET /ai/fine_tuning/jobs/{job_id}
fine_tuning_job = client.ai.fine_tuning.jobs.retrieve("job_id")
puts(fine_tuning_job)
Returns: created_at (integer), finished_at (integer | null), hyperparameters (object), id (string), model (string), organization_id (string), status (enum: queued, running, succeeded, failed, cancelled), trained_tokens (integer | null), training_file (string)
Cancel a fine tuning job.
POST /ai/fine_tuning/jobs/{job_id}/cancel
fine_tuning_job = client.ai.fine_tuning.jobs.cancel("job_id")
puts(fine_tuning_job)
Returns: created_at (integer), finished_at (integer | null), hyperparameters (object), id (string), model (string), organization_id (string), status (enum: queued, running, succeeded, failed, cancelled), trained_tokens (integer | null), training_file (string)
This endpoint returns a list of Open Source and OpenAI models that are available for use. Note: model IDs are of the form {source}/{model_name}, for example openai/gpt-4 or mistralai/Mistral-7B-Instruct-v0.1, consistent with Hugging Face naming conventions.
GET /ai/models
response = client.ai.retrieve_models
puts(response)
Returns: created (integer), id (string), object (string), owned_by (string)
Creates an embedding vector representing the input text. This endpoint is compatible with the OpenAI Embeddings API and may be used with the OpenAI JS or Python SDK by setting the base URL to https://api.telnyx.com/v2/ai/openai.
POST /ai/openai/embeddings — Required: input, model
Optional: dimensions (integer), encoding_format (enum: float, base64), user (string)
response = client.ai.openai.embeddings.create_embeddings(
input: "The quick brown fox jumps over the lazy dog",
model: "thenlper/gte-large"
)
puts(response)
Returns: data (array[object]), model (string), object (string), usage (object)
Returns a list of available embedding models. This endpoint is compatible with the OpenAI Models API format.
GET /ai/openai/embeddings/models
response = client.ai.openai.embeddings.list_embedding_models
puts(response)
Returns: created (integer), id (string), object (string), owned_by (string)
Generate a summary of a file's contents. Supports the following text formats:
Supports the following media formats (billed for both the transcription and summary):
POST /ai/summarize — Required: bucket, filename
Optional: system_prompt (string)
response = client.ai.summarize(bucket: "my-bucket", filename: "data.csv")
puts(response)
Returns: summary (string)
Retrieves all Speech to Text batch report requests for the authenticated user
GET /legacy/reporting/batch_detail_records/speech_to_text
speech_to_texts = client.legacy.reporting.batch_detail_records.speech_to_text.list
puts(speech_to_texts)
Returns: created_at (date-time), download_link (string), end_date (date-time), id (string), record_type (string), start_date (date-time), status (enum: PENDING, COMPLETE, FAILED, EXPIRED)
Creates a new Speech to Text batch report request with the specified filters
POST /legacy/reporting/batch_detail_records/speech_to_text — Required: start_date, end_date
speech_to_text = client.legacy.reporting.batch_detail_records.speech_to_text.create(
end_date: "2020-07-01T00:00:00-06:00",
start_date: "2020-07-01T00:00:00-06:00"
)
puts(speech_to_text)
Returns: created_at (date-time), download_link (string), end_date (date-time), id (string), record_type (string), start_date (date-time), status (enum: PENDING, COMPLETE, FAILED, EXPIRED)
Retrieves a specific Speech to Text batch report request by ID
GET /legacy/reporting/batch_detail_records/speech_to_text/{id}
speech_to_text = client.legacy.reporting.batch_detail_records.speech_to_text.retrieve(
"182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e"
)
puts(speech_to_text)
Returns: created_at (date-time), download_link (string), end_date (date-time), id (string), record_type (string), start_date (date-time), status (enum: PENDING, COMPLETE, FAILED, EXPIRED)
Deletes a specific Speech to Text batch report request by ID
DELETE /legacy/reporting/batch_detail_records/speech_to_text/{id}
speech_to_text = client.legacy.reporting.batch_detail_records.speech_to_text.delete("182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e")
puts(speech_to_text)
Returns: created_at (date-time), download_link (string), end_date (date-time), id (string), record_type (string), start_date (date-time), status (enum: PENDING, COMPLETE, FAILED, EXPIRED)
Generate and fetch speech to text usage report synchronously. This endpoint will both generate and fetch the speech to text report over a specified time period.
GET /legacy/reporting/usage_reports/speech_to_text
response = client.legacy.reporting.usage_reports.retrieve_speech_to_text
puts(response)
Returns: data (object)
Generate synthesized speech audio from text input. Returns audio in the requested format (binary audio stream, base64-encoded JSON, or an audio URL for later retrieval). Authentication is provided via the standard Authorization: Bearer header.
POST /text-to-speech/speech
Optional: aws (object), azure (object), disable_cache (boolean), elevenlabs (object), language (string), minimax (object), output_type (enum: binary_output, base64_output), provider (enum: aws, telnyx, azure, elevenlabs, minimax, rime, resemble), resemble (object), rime (object), telnyx (object), text (string), text_type (enum: text, ssml), voice (string), voice_settings (object)
response = client.text_to_speech.generate(text: "Hello from Telnyx")
puts(response)
Returns: base64_audio (string)
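With output_type set to base64_output the response carries base64_audio, which decodes to raw audio bytes. A sketch using a fabricated payload in place of the real response; the filename and extension are illustrative and depend on the audio format you request:

```ruby
require "base64"

# Stand-in for the response's base64_audio field.
fake_base64_audio = Base64.strict_encode64("fake-audio-bytes")

audio_bytes = Base64.strict_decode64(fake_base64_audio)
File.binwrite("speech_output.mp3", audio_bytes)
```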
Retrieve a list of available voices from one or all TTS providers. When provider is specified, returns voices for that provider only. Otherwise, returns voices from all providers.
GET /text-to-speech/voices
response = client.text_to_speech.list_voices
puts(response)
Returns: voices (array[object])