openai
pixeltable.functions.openai
Pixeltable UDFs that wrap various endpoints of the OpenAI API. To use them, you must first `pip install openai` and configure your OpenAI credentials, as described in the Working with OpenAI tutorial.
chat_completions
async
chat_completions(
messages: Json,
*,
model: String,
frequency_penalty: Optional[Float] = None,
logit_bias: Optional[Json] = None,
logprobs: Optional[Bool] = None,
top_logprobs: Optional[Int] = None,
max_completion_tokens: Optional[Int] = None,
max_tokens: Optional[Int] = None,
n: Optional[Int] = None,
presence_penalty: Optional[Float] = None,
reasoning_effort: Optional[String] = None,
response_format: Optional[Json] = None,
seed: Optional[Int] = None,
stop: Optional[Json] = None,
temperature: Optional[Float] = None,
tools: Optional[Json] = None,
tool_choice: Optional[Json] = None,
top_p: Optional[Float] = None,
user: Optional[String] = None,
timeout: Optional[Float] = None
) -> Json
Creates a model response for the given chat conversation.
Equivalent to the OpenAI `chat/completions` API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/chat-completions
Request throttling: Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available request and token capacity. No configuration is necessary.
Requirements:
pip install openai
Parameters:
- `messages` (`Json`) – A list of messages to use for chat completion, as described in the OpenAI API documentation.
- `model` (`String`) – The model to use for chat completion.

For details on the other parameters, see: https://platform.openai.com/docs/api-reference/chat
Returns:
- `Json` – A dictionary containing the response and other metadata.
Examples:
Add a computed column that applies the model `gpt-4o-mini` to an existing Pixeltable column `tbl.prompt` of the table `tbl`:
>>> messages = [
...     {'role': 'system', 'content': 'You are a helpful assistant.'},
...     {'role': 'user', 'content': tbl.prompt}
... ]
>>> tbl.add_computed_column(response=chat_completions(messages, model='gpt-4o-mini'))
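The returned `Json` value mirrors the OpenAI response schema, so the assistant's reply can be extracted with an ordinary path expression (e.g. in a second computed column). As a sketch, using an illustrative response dict rather than a real API call:

```python
# Illustrative response in the shape returned by chat_completions
# (mirrors the OpenAI chat/completions schema; fields abbreviated).
response = {
    'id': 'chatcmpl-abc123',
    'model': 'gpt-4o-mini',
    'choices': [
        {
            'index': 0,
            'message': {'role': 'assistant', 'content': 'Hello! How can I help?'},
            'finish_reason': 'stop',
        }
    ],
    'usage': {'prompt_tokens': 20, 'completion_tokens': 7, 'total_tokens': 27},
}

# Extract the assistant's reply text by walking the dict.
reply = response['choices'][0]['message']['content']
print(reply)
```

In Pixeltable, the same path can be expressed directly on the response column, e.g. `tbl.response.choices[0].message.content`.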
embeddings
async
embeddings(
input: String,
*,
model: String,
dimensions: Optional[Int] = None,
user: Optional[String] = None,
timeout: Optional[Float] = None
) -> Array[(None,), Float]
Creates an embedding vector representing the input text.
Equivalent to the OpenAI `embeddings` API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/embeddings
Request throttling: Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available request and token capacity. No configuration is necessary.
Requirements:
pip install openai
Parameters:
- `input` (`String`) – The text to embed.
- `model` (`String`) – The model to use for the embedding.
- `dimensions` (`Optional[Int]`, default: `None`) – The vector length of the embedding. If not specified, Pixeltable will use a default value based on the model.

For details on the other parameters, see: https://platform.openai.com/docs/api-reference/embeddings
Returns:
- `Array[(None,), Float]` – An array containing the embedding of `input`.
Examples:
Add a computed column that applies the model `text-embedding-3-small` to an existing Pixeltable column `tbl.text` of the table `tbl`:
>>> tbl.add_computed_column(embed=embeddings(tbl.text, model='text-embedding-3-small'))
Add an embedding index to an existing column `text`, using the model `text-embedding-3-small`:
>>> tbl.add_embedding_index('text', embedding=embeddings.using(model='text-embedding-3-small'))
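An embedding index ranks rows by vector similarity between the stored embeddings and a query embedding. As background (not part of the Pixeltable API), cosine similarity, a common choice of metric, can be sketched in plain Python:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real models return hundreds of dimensions).
v1 = [1.0, 0.0, 1.0]
v2 = [1.0, 0.0, 1.0]   # same direction as v1 -> similarity 1.0
v3 = [0.0, 1.0, 0.0]   # orthogonal to v1 -> similarity 0.0

print(cosine_similarity(v1, v2))
print(cosine_similarity(v1, v3))
```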
image_generations
async
image_generations(
prompt: String,
*,
model: String = "dall-e-2",
quality: Optional[String] = None,
size: Optional[String] = None,
style: Optional[String] = None,
user: Optional[String] = None,
timeout: Optional[Float] = None
) -> Image
Creates an image given a prompt.
Equivalent to the OpenAI `images/generations` API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/images
Request throttling: Applies the rate limit set in the config (section `openai.rate_limits`; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
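As a sketch of such an override (assuming a TOML-style config file, which is how Pixeltable configuration is typically stored; the exact file location depends on your installation), a 300 RPM limit for `dall-e-2` would look like:

```toml
[openai.rate_limits]
# Key is the model id; value is the requests-per-minute limit.
dall-e-2 = 300
```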
Requirements:
pip install openai
Parameters:
- `prompt` (`String`) – Prompt for the image.
- `model` (`String`, default: `'dall-e-2'`) – The model to use for the generations.

For details on the other parameters, see: https://platform.openai.com/docs/api-reference/images/create
Returns:
- `Image` – The generated image.
Examples:
Add a computed column that applies the model `dall-e-2` to an existing Pixeltable column `tbl.text` of the table `tbl`:
>>> tbl.add_computed_column(gen_image=image_generations(tbl.text, model='dall-e-2'))
invoke_tools
invoke_tools(tools: Tools, response: Expr) -> InlineDict
Converts an OpenAI response dict to Pixeltable tool invocation format and calls `tools._invoke()`.
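`invoke_tools` consumes the raw `chat_completions` response. For orientation, an OpenAI response that requests tool calls carries them under `choices[0].message.tool_calls`, with JSON-encoded arguments. A minimal sketch of parsing that shape (the dict below is illustrative, not produced by a real API call):

```python
import json

# Illustrative OpenAI response containing one tool call (schema abbreviated).
response = {
    'choices': [
        {
            'message': {
                'role': 'assistant',
                'content': None,
                'tool_calls': [
                    {
                        'id': 'call_1',
                        'type': 'function',
                        'function': {
                            'name': 'get_weather',
                            'arguments': '{"city": "Paris"}',
                        },
                    }
                ],
            }
        }
    ]
}

# Each tool call names a function and JSON-encodes its arguments.
for call in response['choices'][0]['message']['tool_calls']:
    name = call['function']['name']
    args = json.loads(call['function']['arguments'])
    print(name, args)
```

`invoke_tools` performs this unpacking for you and dispatches each call to the matching tool registered in `tools`.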
moderations
async
moderations(input: String, *, model: String = 'omni-moderation-latest') -> Json
Classifies if text is potentially harmful.
Equivalent to the OpenAI `moderations` API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/moderation
Request throttling: Applies the rate limit set in the config (section `openai.rate_limits`; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- `input` (`String`) – Text to analyze with the moderations model.
- `model` (`String`, default: `'omni-moderation-latest'`) – The model to use for moderations.

For details on the other parameters, see: https://platform.openai.com/docs/api-reference/moderations
Returns:
- `Json` – Details of the moderations results.
Examples:
Add a computed column that applies the model `text-moderation-stable` to an existing Pixeltable column `tbl.text` of the table `tbl`:
>>> tbl.add_computed_column(moderations=moderations(tbl.text, model='text-moderation-stable'))
speech
async
speech(
input: String,
*,
model: String,
voice: String,
response_format: Optional[String] = None,
speed: Optional[Float] = None,
timeout: Optional[Float] = None
) -> Audio
Generates audio from the input text.
Equivalent to the OpenAI `audio/speech` API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/text-to-speech
Request throttling: Applies the rate limit set in the config (section `openai.rate_limits`; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- `input` (`String`) – The text to synthesize into speech.
- `model` (`String`) – The model to use for speech synthesis.
- `voice` (`String`) – The voice profile to use for speech synthesis. Supported options include: `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`.

For details on the other parameters, see: https://platform.openai.com/docs/api-reference/audio/createSpeech
Returns:
- `Audio` – An audio file containing the synthesized speech.
Examples:
Add a computed column that applies the model `tts-1` to an existing Pixeltable column `tbl.text` of the table `tbl`:
>>> tbl.add_computed_column(audio=speech(tbl.text, model='tts-1', voice='nova'))
transcriptions
async
transcriptions(
audio: Audio,
*,
model: String,
language: Optional[String] = None,
prompt: Optional[String] = None,
temperature: Optional[Float] = None,
timeout: Optional[Float] = None
) -> Json
Transcribes audio into the input language.
Equivalent to the OpenAI `audio/transcriptions` API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Request throttling: Applies the rate limit set in the config (section `openai.rate_limits`; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- `audio` (`Audio`) – The audio to transcribe.
- `model` (`String`) – The model to use for speech transcription.

For details on the other parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranscription
Returns:
- `Json` – A dictionary containing the transcription and other metadata.
Examples:
Add a computed column that applies the model `whisper-1` to an existing Pixeltable column `tbl.audio` of the table `tbl`:
>>> tbl.add_computed_column(transcription=transcriptions(tbl.audio, model='whisper-1', language='en'))
translations
async
translations(
audio: Audio,
*,
model: String,
prompt: Optional[String] = None,
temperature: Optional[Float] = None,
timeout: Optional[Float] = None
) -> Json
Translates audio into English.
Equivalent to the OpenAI `audio/translations` API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Request throttling: Applies the rate limit set in the config (section `openai.rate_limits`; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- `audio` (`Audio`) – The audio to translate.
- `model` (`String`) – The model to use for speech transcription and translation.

For details on the other parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranslation
Returns:
- `Json` – A dictionary containing the translation and other metadata.
Examples:
Add a computed column that applies the model `whisper-1` to an existing Pixeltable column `tbl.audio` of the table `tbl`:
>>> tbl.add_computed_column(translation=translations(tbl.audio, model='whisper-1'))
vision
async
vision(
prompt: String,
image: Image,
*,
model: String,
max_completion_tokens: Optional[Int] = None,
max_tokens: Optional[Int] = None,
n: Optional[Int] = 1,
timeout: Optional[Float] = None
) -> String
Analyzes an image with the OpenAI vision capability. This is a convenience function that takes an image and prompt, and constructs a chat completion request that utilizes OpenAI vision.
For additional details, see: https://platform.openai.com/docs/guides/vision
Request throttling: Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available request and token capacity. No configuration is necessary.
Requirements:
pip install openai
Parameters:
- `prompt` (`String`) – A prompt for the OpenAI vision request.
- `image` (`Image`) – The image to analyze.
- `model` (`String`) – The model to use for OpenAI vision.
Returns:
- `String` – The response from the OpenAI vision API.
Examples:
Add a computed column that applies the model `gpt-4o-mini` to an existing Pixeltable column `tbl.image` of the table `tbl`:
>>> tbl.add_computed_column(response=vision("What's in this image?", tbl.image, model='gpt-4o-mini'))
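Under the hood, a vision request embeds the image in a chat message, typically as a base64 data URL per OpenAI's documented image-input format. A sketch of that encoding using only the standard library (the placeholder bytes below stand in for real image file contents):

```python
import base64

# A real image would be read from a file; a few raw PNG header bytes stand in.
image_bytes = b'\x89PNG\r\n\x1a\n'
b64 = base64.b64encode(image_bytes).decode('ascii')

# Chat message combining a text prompt with an inline image, in the shape
# of the OpenAI image-input format that a vision request uses.
message = {
    'role': 'user',
    'content': [
        {'type': 'text', 'text': "What's in this image?"},
        {'type': 'image_url', 'image_url': {'url': f'data:image/png;base64,{b64}'}},
    ],
}
print(message['content'][1]['image_url']['url'][:22])
```

The `vision` UDF builds this message for you from its `prompt` and `image` arguments, so you never handle the encoding directly.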