openai

pixeltable.functions.openai

Pixeltable UDFs that wrap various endpoints from the OpenAI API. In order to use them, you must first pip install openai and configure your OpenAI credentials, as described in the Working with OpenAI tutorial.
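A minimal setup sketch, assuming credentials are supplied via the OPENAI_API_KEY environment variable (the standard variable the OpenAI client reads; see the tutorial for Pixeltable-specific configuration options):
>>> import os
>>> os.environ['OPENAI_API_KEY'] = 'sk-...'  # placeholder; in practice, export this in your shell
>>> import pixeltable as pxt
>>> from pixeltable.functions import openai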
chat_completions
chat_completions(
messages: JsonT,
*,
model: str,
frequency_penalty: Optional[float] = None,
logit_bias: Optional[JsonT] = None,
logprobs: Optional[bool] = None,
top_logprobs: Optional[int] = None,
max_tokens: Optional[int] = None,
n: Optional[int] = None,
presence_penalty: Optional[float] = None,
response_format: Optional[JsonT] = None,
seed: Optional[int] = None,
stop: Optional[JsonT] = None,
temperature: Optional[float] = None,
top_p: Optional[float] = None,
tools: Optional[JsonT] = None,
tool_choice: Optional[JsonT] = None,
user: Optional[str] = None
) -> JsonT
Creates a model response for the given chat conversation. Equivalent to the OpenAI chat/completions API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/chat-completions
Requirements:
pip install openai
Parameters:
- messages (JsonT) – A list of messages to use for chat completion, as described in the OpenAI API documentation.
- model (str) – The model to use for chat completion.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/chat
Returns:
- JsonT – A dictionary containing the response and other metadata.
Examples:
Add a computed column that applies the model gpt-4o-mini to an existing Pixeltable column tbl.prompt of the table tbl:
>>> messages = [
...     {'role': 'system', 'content': 'You are a helpful assistant.'},
...     {'role': 'user', 'content': tbl.prompt}
... ]
>>> tbl['response'] = chat_completions(messages, model='gpt-4o-mini')
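The returned JSON mirrors the OpenAI response object, so the generated text can be pulled out with a further computed column. A sketch, assuming the standard chat/completions response shape (choices[0].message.content) and Pixeltable's JSON path indexing:
>>> tbl['answer'] = tbl.response['choices'][0]['message']['content']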
embeddings
embeddings(
input: str,
*,
model: str,
dimensions: Optional[int] = None,
user: Optional[str] = None
) -> ArrayT
Creates an embedding vector representing the input text. Equivalent to the OpenAI embeddings API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/embeddings
Requirements:
pip install openai
Parameters:
- input (str) – The text to embed.
- model (str) – The model to use for the embedding.
- dimensions (Optional[int], default: None) – The vector length of the embedding. If not specified, Pixeltable will use a default value based on the model.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/embeddings
Returns:
- ArrayT – An array representing the application of the given embedding to input.
Examples:
Add a computed column that applies the model text-embedding-3-small to an existing Pixeltable column tbl.text of the table tbl:
>>> tbl['embed'] = embeddings(tbl.text, model='text-embedding-3-small')
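For models that support it, the dimensions parameter shortens the returned vector. A sketch, assuming text-embedding-3-small (which accepts reduced dimensions per OpenAI's embeddings guide):
>>> tbl['embed_small'] = embeddings(tbl.text, model='text-embedding-3-small', dimensions=256)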
image_generations
image_generations(
prompt: str,
*,
model: Optional[str] = None,
quality: Optional[str] = None,
size: Optional[str] = None,
style: Optional[str] = None,
user: Optional[str] = None
) -> ImageT
Creates an image given a prompt. Equivalent to the OpenAI images/generations API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/images
Requirements:
pip install openai
Parameters:
- prompt (str) – Prompt for the image.
- model (Optional[str], default: None) – The model to use for the generations.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/images/create
Returns:
- ImageT – The generated image.
Examples:
Add a computed column that applies the model dall-e-2 to an existing Pixeltable column tbl.text of the table tbl:
>>> tbl['gen_image'] = image_generations(tbl.text, model='dall-e-2')
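The optional quality and size arguments are passed through to the endpoint. A sketch, assuming the dall-e-3 model and its documented 1024x1024 size and hd quality options:
>>> tbl['gen_image_hd'] = image_generations(
...     tbl.text, model='dall-e-3', size='1024x1024', quality='hd'
... )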
moderations
moderations(input: str, *, model: Optional[str] = None) -> JsonT
Classifies if text is potentially harmful. Equivalent to the OpenAI moderations API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/moderation
Requirements:
pip install openai
Parameters:
- input (str) – Text to analyze with the moderations model.
- model (Optional[str], default: None) – The model to use for moderations.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/moderations
Returns:
- JsonT – Details of the moderations results.
Examples:
Add a computed column that applies the model text-moderation-stable to an existing Pixeltable column tbl.input of the table tbl:
>>> tbl['moderations'] = moderations(tbl.input, model='text-moderation-stable')
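Individual verdicts can then be read out of the JSON result. A sketch, assuming the standard moderations response shape (results[0].flagged):
>>> tbl['flagged'] = tbl.moderations['results'][0]['flagged']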
speech
speech(
input: str,
*,
model: str,
voice: str,
response_format: Optional[str] = None,
speed: Optional[float] = None
) -> AudioT
Generates audio from the input text. Equivalent to the OpenAI audio/speech API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/text-to-speech
Requirements:
pip install openai
Parameters:
- input (str) – The text to synthesize into speech.
- model (str) – The model to use for speech synthesis.
- voice (str) – The voice profile to use for speech synthesis. Supported options include: alloy, echo, fable, onyx, nova, and shimmer.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/audio/createSpeech
Returns:
- AudioT – An audio file containing the synthesized speech.
Examples:
Add a computed column that applies the model tts-1 to an existing Pixeltable column tbl.text of the table tbl:
>>> tbl['audio'] = speech(tbl.text, model='tts-1', voice='nova')
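The optional response_format and speed arguments are forwarded to the endpoint. A sketch, assuming the documented mp3 output format and a 1.25x playback rate:
>>> tbl['audio_fast'] = speech(
...     tbl.text, model='tts-1', voice='alloy', response_format='mp3', speed=1.25
... )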
transcriptions
transcriptions(
audio: AudioT,
*,
model: str,
language: Optional[str] = None,
prompt: Optional[str] = None,
temperature: Optional[float] = None
) -> JsonT
Transcribes audio into the input language. Equivalent to the OpenAI audio/transcriptions API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Requirements:
pip install openai
Parameters:
- audio (AudioT) – The audio to transcribe.
- model (str) – The model to use for speech transcription.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranscription
Returns:
- JsonT – A dictionary containing the transcription and other metadata.
Examples:
Add a computed column that applies the model whisper-1 to an existing Pixeltable column tbl.audio of the table tbl:
>>> tbl['transcription'] = transcriptions(tbl.audio, model='whisper-1', language='en')
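The transcript itself lives under the text key of the returned JSON. A sketch, assuming the standard transcription response shape:
>>> tbl['transcript_text'] = tbl.transcription['text']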
translations
translations(
audio: AudioT,
*,
model: str,
prompt: Optional[str] = None,
temperature: Optional[float] = None
) -> JsonT
Translates audio into English. Equivalent to the OpenAI audio/translations API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Requirements:
pip install openai
Parameters:
- audio (AudioT) – The audio to translate.
- model (str) – The model to use for speech transcription and translation.
For details on the other parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranslation
Returns:
- JsonT – A dictionary containing the translation and other metadata.
Examples:
Add a computed column that applies the model whisper-1 to an existing Pixeltable column tbl.audio of the table tbl (the target language is always English, so there is no language parameter):
>>> tbl['translation'] = translations(tbl.audio, model='whisper-1')
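As with transcriptions, the translated text sits under the text key. A sketch, assuming the same response shape:
>>> tbl['translation_text'] = tbl.translation['text']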
vision
vision(prompt: str, image: ImageT, *, model: str) -> str
Analyzes an image with the OpenAI vision capability. This is a convenience function that takes an image and prompt, and constructs a chat completion request that utilizes OpenAI vision.
For additional details, see: https://platform.openai.com/docs/guides/vision
Requirements:
pip install openai
Parameters:
- prompt (str) – A prompt for the OpenAI vision request.
- image (ImageT) – The image to analyze.
- model (str) – The model to use for OpenAI vision.
Returns:
- str – The response from the OpenAI vision API.
Examples:
Add a computed column that applies the model gpt-4o-mini to an existing Pixeltable column tbl.image of the table tbl:
>>> tbl['response'] = vision("What's in this image?", tbl.image, model='gpt-4o-mini')
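Because vision returns a plain string rather than JSON, the result can be queried directly. A sketch using Pixeltable's select/collect query interface (assumed; not part of this module):
>>> tbl.select(tbl.image, tbl.response).collect()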