openai
pixeltable.functions.openai
Pixeltable UDFs that wrap various endpoints from the OpenAI API. In order to use them, you must first pip install openai and configure your OpenAI credentials, as described in the Working with OpenAI tutorial.
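As a minimal setup sketch, assuming credentials are supplied via the OPENAI_API_KEY environment variable (the tutorial covers the full set of configuration options):
>>> import os
>>> os.environ['OPENAI_API_KEY'] = 'sk-...'  # assumption: env-var based credentials
>>> from pixeltable.functions.openai import chat_completions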
chat_completions
chat_completions(
messages: Json,
*,
model: String,
frequency_penalty: Optional[Float] = None,
logit_bias: Optional[Json] = None,
logprobs: Optional[Bool] = None,
top_logprobs: Optional[Int] = None,
max_tokens: Optional[Int] = None,
n: Optional[Int] = None,
presence_penalty: Optional[Float] = None,
response_format: Optional[Json] = None,
seed: Optional[Int] = None,
stop: Optional[Json] = None,
temperature: Optional[Float] = None,
top_p: Optional[Float] = None,
tools: Optional[Json] = None,
tool_choice: Optional[Json] = None,
user: Optional[String] = None
) -> Json
Creates a model response for the given chat conversation.
Equivalent to the OpenAI chat/completions API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/chat-completions
Requirements:
pip install openai
Parameters:
- messages (Json) – A list of messages to use for chat completion, as described in the OpenAI API documentation.
- model (String) – The model to use for chat completion.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/chat
Returns:
- Json – A dictionary containing the response and other metadata.
Examples:
Add a computed column that applies the model gpt-4o-mini to an existing Pixeltable column tbl.prompt of the table tbl:
>>> messages = [
...     {'role': 'system', 'content': 'You are a helpful assistant.'},
...     {'role': 'user', 'content': tbl.prompt}
... ]
>>> tbl['response'] = chat_completions(messages, model='gpt-4o-mini')
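Because the UDF returns the full API response as Json, the message text can be pulled out with a further computed column. A sketch, assuming the standard OpenAI response shape (choices[0].message.content):
>>> tbl['answer'] = tbl.response['choices'][0]['message']['content']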
embeddings
embeddings(
input: String,
*,
model: String,
dimensions: Optional[Int] = None,
user: Optional[String] = None
) -> Array[(None,), Float]
Creates an embedding vector representing the input text.
Equivalent to the OpenAI embeddings API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/embeddings
Requirements:
pip install openai
Parameters:
- input (String) – The text to embed.
- model (String) – The model to use for the embedding.
- dimensions (Optional[Int], default: None) – The vector length of the embedding. If not specified, Pixeltable will use a default value based on the model.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/embeddings
Returns:
- Array[(None,), Float] – An array representing the application of the given embedding to input.
Examples:
Add a computed column that applies the model text-embedding-3-small to an existing Pixeltable column tbl.text of the table tbl:
>>> tbl['embed'] = embeddings(tbl.text, model='text-embedding-3-small')
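The same UDF can also back a similarity index on the column. A sketch, assuming Pixeltable's add_embedding_index API and its using() mechanism for binding keyword arguments:
>>> tbl.add_embedding_index('text', string_embed=embeddings.using(model='text-embedding-3-small'))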
image_generations
image_generations(
prompt: String,
*,
model: Optional[String] = None,
quality: Optional[String] = None,
size: Optional[String] = None,
style: Optional[String] = None,
user: Optional[String] = None
) -> Image
Creates an image given a prompt.
Equivalent to the OpenAI images/generations API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/images
Requirements:
pip install openai
Parameters:
- prompt (String) – Prompt for the image.
- model (Optional[String], default: None) – The model to use for the generations.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/images/create
Returns:
- Image – The generated image.
Examples:
Add a computed column that applies the model dall-e-2 to an existing Pixeltable column tbl.text of the table tbl:
>>> tbl['gen_image'] = image_generations(tbl.text, model='dall-e-2')
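The optional parameters pass through to the API. For example, a sketch requesting a specific size and quality (legal values depend on the chosen model, per the OpenAI documentation):
>>> tbl['gen_image_hd'] = image_generations(
...     tbl.text, model='dall-e-3', size='1024x1024', quality='hd'
... )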
moderations
moderations(input: String, *, model: Optional[String] = None) -> Json
Classifies if text is potentially harmful.
Equivalent to the OpenAI moderations API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/moderation
Requirements:
pip install openai
Parameters:
- input (String) – Text to analyze with the moderations model.
- model (Optional[String], default: None) – The model to use for moderations.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/moderations
Returns:
- Json – Details of the moderations results.
Examples:
Add a computed column that applies the model text-moderation-stable to an existing Pixeltable column tbl.input of the table tbl:
>>> tbl['moderations'] = moderations(tbl.input, model='text-moderation-stable')
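The returned Json mirrors the OpenAI moderations response, so individual fields can be extracted in follow-up computed columns. A sketch, assuming the standard results[0].flagged shape of that response:
>>> tbl['flagged'] = tbl.moderations['results'][0]['flagged']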
speech
speech(
input: String,
*,
model: String,
voice: String,
response_format: Optional[String] = None,
speed: Optional[Float] = None
) -> Audio
Generates audio from the input text.
Equivalent to the OpenAI audio/speech API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/text-to-speech
Requirements:
pip install openai
Parameters:
- input (String) – The text to synthesize into speech.
- model (String) – The model to use for speech synthesis.
- voice (String) – The voice profile to use for speech synthesis. Supported options include: alloy, echo, fable, onyx, nova, and shimmer.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/audio/createSpeech
Returns:
- Audio – An audio file containing the synthesized speech.
Examples:
Add a computed column that applies the model tts-1 to an existing Pixeltable column tbl.text of the table tbl:
>>> tbl['audio'] = speech(tbl.text, model='tts-1', voice='nova')
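The optional parameters are forwarded to the endpoint. For instance, a sketch that selects an output format and playback speed (values as documented by OpenAI):
>>> tbl['audio_fast'] = speech(
...     tbl.text, model='tts-1', voice='alloy', response_format='mp3', speed=1.25
... )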
transcriptions
transcriptions(
audio: Audio,
*,
model: String,
language: Optional[String] = None,
prompt: Optional[String] = None,
temperature: Optional[Float] = None
) -> Json
Transcribes audio into the input language.
Equivalent to the OpenAI audio/transcriptions API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Requirements:
pip install openai
Parameters:
- audio (Audio) – The audio to transcribe.
- model (String) – The model to use for speech transcription.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranscription
Returns:
- Json – A dictionary containing the transcription and other metadata.
Examples:
Add a computed column that applies the model whisper-1 to an existing Pixeltable column tbl.audio of the table tbl:
>>> tbl['transcription'] = transcriptions(tbl.audio, model='whisper-1', language='en')
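The transcribed text itself sits inside the returned Json. A sketch extracting it, assuming the standard text field of the OpenAI transcription response:
>>> tbl['transcription_text'] = tbl.transcription['text']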
translations
translations(
audio: Audio,
*,
model: String,
prompt: Optional[String] = None,
temperature: Optional[Float] = None
) -> Json
Translates audio into English.
Equivalent to the OpenAI audio/translations API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Requirements:
pip install openai
Parameters:
- audio (Audio) – The audio to translate.
- model (String) – The model to use for speech transcription and translation.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranslation
Returns:
- Json – A dictionary containing the translation and other metadata.
Examples:
Add a computed column that applies the model whisper-1 to an existing Pixeltable column tbl.audio of the table tbl:
>>> tbl['translation'] = translations(tbl.audio, model='whisper-1')
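As with transcriptions, the translated text can be extracted from the Json result. A sketch, assuming the standard text field of the OpenAI translation response:
>>> tbl['translation_text'] = tbl.translation['text']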
vision
vision(prompt: String, image: Image, *, model: String) -> String
Analyzes an image with the OpenAI vision capability. This is a convenience function that takes an image and a prompt, and constructs a chat completion request that utilizes OpenAI vision.
For additional details, see: https://platform.openai.com/docs/guides/vision
Requirements:
pip install openai
Parameters:
- prompt (String) – A prompt for the OpenAI vision request.
- image (Image) – The image to analyze.
- model (String) – The model to use for OpenAI vision.
Returns:
- String – The response from the OpenAI vision API.
Examples:
Add a computed column that applies the model gpt-4o-mini to an existing Pixeltable column tbl.image of the table tbl:
>>> tbl['response'] = vision("What's in this image?", tbl.image, model='gpt-4o-mini')
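The prompt argument can also be a column reference rather than a string literal, so each row supplies its own question. A sketch, assuming a hypothetical tbl.question column:
>>> tbl['answer'] = vision(tbl.question, tbl.image, model='gpt-4o-mini')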