
pixeltable.functions.together

Pixeltable UDFs that wrap various endpoints of the Together AI API. To use them, you must first pip install together and configure your Together AI credentials, as described in the Working with Together AI tutorial.

chat_completions async

chat_completions(
    messages: Json,
    *,
    model: String,
    max_tokens: Optional[Int] = None,
    stop: Optional[Json] = None,
    temperature: Optional[Float] = None,
    top_p: Optional[Float] = None,
    top_k: Optional[Int] = None,
    repetition_penalty: Optional[Float] = None,
    logprobs: Optional[Int] = None,
    echo: Optional[Bool] = None,
    n: Optional[Int] = None,
    safety_model: Optional[String] = None,
    response_format: Optional[Json] = None,
    tools: Optional[Json] = None,
    tool_choice: Optional[Json] = None
) -> Json

Generate chat completions for a given conversation using a specified model.

Equivalent to the Together AI chat/completions API endpoint. For additional details, see: https://docs.together.ai/reference/chat-completions-1

Request throttling: Applies the rate limit set in the config (section together.rate_limits, key chat). If no rate limit is configured, uses a default of 600 RPM.
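For reference, a custom limit can be set in Pixeltable's TOML config file. A hedged sketch, using the section and key names stated above (the file location and value shown are assumptions; verify against your Pixeltable installation):

```toml
# In Pixeltable's config file (typically ~/.pixeltable/config.toml):
[together.rate_limits]
chat = 300  # requests per minute for chat_completions (default: 600)
```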

Requirements:

  • pip install together

Parameters:

  • messages (Json) –

    A list of messages comprising the conversation so far.

  • model (String) –

    The name of the model to query.

For details on the other parameters, see: https://docs.together.ai/reference/chat-completions-1

Returns:

  • Json

    A dictionary containing the response and other metadata.

Examples:

Add a computed column that applies the model mistralai/Mixtral-8x7B-v0.1 to an existing Pixeltable column tbl.prompt of the table tbl:

>>> messages = [{'role': 'user', 'content': tbl.prompt}]
>>> tbl.add_computed_column(response=chat_completions(messages, model='mistralai/Mixtral-8x7B-v0.1'))
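The stored value follows the Together AI chat-completions response schema. A minimal sketch of pulling the assistant's reply out of such a response dict (field layout assumed from the Together API docs, not guaranteed by Pixeltable itself):

```python
def extract_reply(response: dict) -> str:
    # Field layout assumed from the Together chat/completions schema:
    # choices -> [ { message: { role, content } } ]
    return response['choices'][0]['message']['content']

# A dict shaped like the Together API's output, for illustration:
mock = {'choices': [{'message': {'role': 'assistant', 'content': 'Hello!'}}]}
print(extract_reply(mock))  # Hello!
```

The same path can also be applied directly to the computed column as a Pixeltable JSON expression, e.g. tbl.response['choices'][0]['message']['content'].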

completions async

completions(
    prompt: String,
    *,
    model: String,
    max_tokens: Optional[Int] = None,
    stop: Optional[Json] = None,
    temperature: Optional[Float] = None,
    top_p: Optional[Float] = None,
    top_k: Optional[Int] = None,
    repetition_penalty: Optional[Float] = None,
    logprobs: Optional[Int] = None,
    echo: Optional[Bool] = None,
    n: Optional[Int] = None,
    safety_model: Optional[String] = None
) -> Json

Generate completions based on a given prompt using a specified model.

Equivalent to the Together AI completions API endpoint. For additional details, see: https://docs.together.ai/reference/completions-1

Request throttling: Applies the rate limit set in the config (section together.rate_limits, key chat). If no rate limit is configured, uses a default of 600 RPM.

Requirements:

  • pip install together

Parameters:

  • prompt (String) –

    A string providing context for the model to complete.

  • model (String) –

    The name of the model to query.

For details on the other parameters, see: https://docs.together.ai/reference/completions-1

Returns:

  • Json

    A dictionary containing the response and other metadata.

Examples:

Add a computed column that applies the model mistralai/Mixtral-8x7B-v0.1 to an existing Pixeltable column tbl.prompt of the table tbl:

>>> tbl.add_computed_column(response=completions(tbl.prompt, model='mistralai/Mixtral-8x7B-v0.1'))
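As with chat_completions, the stored value follows the Together AI completions response schema. A sketch of extracting the generated text (field layout assumed from the Together API docs):

```python
def extract_text(response: dict) -> str:
    # Field layout assumed from the Together completions schema:
    # choices -> [ { text: ... } ]
    return response['choices'][0]['text']

# A dict shaped like the Together API's output, for illustration:
mock = {'choices': [{'text': 'a completion'}]}
print(extract_text(mock))  # a completion
```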

embeddings async

embeddings(input: String, *, model: String) -> Array[(None,), Float]

Query an embedding model for a given string of text.

Equivalent to the Together AI embeddings API endpoint. For additional details, see: https://docs.together.ai/reference/embeddings-2

Request throttling: Applies the rate limit set in the config (section together.rate_limits, key embeddings). If no rate limit is configured, uses a default of 600 RPM.

Requirements:

  • pip install together

Parameters:

  • input (String) –

    A string providing the text for the model to embed.

  • model (String) –

    The name of the embedding model to use.

Returns:

  • Array[(None,), Float]

    An array containing the embedding of input.

Examples:

Add a computed column that applies the model togethercomputer/m2-bert-80M-8k-retrieval to an existing Pixeltable column tbl.text of the table tbl:

>>> tbl.add_computed_column(response=embeddings(tbl.text, model='togethercomputer/m2-bert-80M-8k-retrieval'))
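The stored embeddings are plain float arrays, so downstream similarity computations need no Together-specific code. A sketch of cosine similarity between two such vectors (NumPy assumed available; it is a Pixeltable dependency):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # cos(theta) = (a . b) / (|a| * |b|)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 1.0, 0.0])
print(cosine_similarity(a, a))  # 1.0 for identical directions
print(cosine_similarity(a, b))  # 0.0 for orthogonal vectors
```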

image_generations async

image_generations(
    prompt: String,
    *,
    model: String,
    steps: Optional[Int] = None,
    seed: Optional[Int] = None,
    height: Optional[Int] = None,
    width: Optional[Int] = None,
    negative_prompt: Optional[String] = None
) -> Image

Generate images based on a given prompt using a specified model.

Equivalent to the Together AI images/generations API endpoint. For additional details, see: https://docs.together.ai/reference/post_images-generations

Request throttling: Applies the rate limit set in the config (section together.rate_limits, key images). If no rate limit is configured, uses a default of 600 RPM.

Requirements:

  • pip install together

Parameters:

  • prompt (String) –

    A description of the desired images.

  • model (String) –

    The model to use for image generation.

For details on the other parameters, see: https://docs.together.ai/reference/post_images-generations

Returns:

  • Image

    The generated image.

Examples:

Add a computed column that applies the model stabilityai/stable-diffusion-xl-base-1.0 to an existing Pixeltable column tbl.prompt of the table tbl:

>>> tbl.add_computed_column(
...     response=image_generations(tbl.prompt, model='stabilityai/stable-diffusion-xl-base-1.0')
... )