ollama

Pixeltable integrates with the popular Ollama model server. To use these functions, you need either an Ollama server running locally or an Ollama host explicitly specified in your Pixeltable configuration. To specify an explicit host, either set the OLLAMA_HOST environment variable or add a host entry to the ollama section of your $PIXELTABLE_HOME/config.toml configuration file.
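
For example, the config.toml entry might look like the following (the host URL is a placeholder; point it at your own Ollama server):

```toml
# $PIXELTABLE_HOME/config.toml
[ollama]
host = "http://localhost:11434"  # placeholder URL; replace with your server's address
```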

pixeltable.functions.ollama

chat

chat(
    messages: Json,
    *,
    model: String,
    tools: Optional[Json] = None,
    format: Optional[String] = None,
    options: Optional[Json] = None
) -> Json

Generate the next message in a chat with a provided model.

Parameters:

  • messages (Json) –

    The messages of the chat.

  • model (String) –

    The model name.

  • tools (Optional[Json], default: None ) –

    Tools for the model to use.

  • format (Optional[String], default: None ) –

    The format of the response; must be either 'json' or None.

  • options (Optional[Json], default: None ) –

    Additional options to pass to the chat call, such as num_predict, temperature, top_p, and top_k. For details, see the Valid Parameters and Values section of the Ollama documentation.
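
A minimal usage sketch, assuming a running Ollama server with the llama3.2 model pulled; the table and column names here are hypothetical:

```python
import pixeltable as pxt
from pixeltable.functions import ollama

# Hypothetical table with one user prompt per row.
t = pxt.create_table('ollama_chat_demo', {'prompt': pxt.String})

# Build the chat messages from the prompt column, then add a computed
# column that invokes the model for every row.
messages = [{'role': 'user', 'content': t.prompt}]
t.add_computed_column(response=ollama.chat(messages=messages, model='llama3.2'))

# Per Ollama's chat response format, the reply text is nested in the JSON.
t.add_computed_column(answer=t.response['message']['content'])

t.insert(prompt='What is the capital of France?')
print(t.select(t.answer).collect())
```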

embed

embed(
    input: String,
    *,
    model: String,
    truncate: Bool = True,
    options: Optional[Json] = None
) -> Array[(None,), Float]

Generate embeddings from a model.

Parameters:

  • input (String) –

    The input text to generate embeddings for.

  • model (String) –

    The model name.

  • truncate (Bool, default: True ) –

    Truncates the end of each input to fit within the model's context length. If False, an error is returned when the context length is exceeded.

  • options (Optional[Json], default: None ) –

    Additional options to pass to the embed call. For details, see the Valid Parameters and Values section of the Ollama documentation.
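
Since embed returns a float array, it can populate a computed embedding column directly. A sketch, assuming the nomic-embed-text model has been pulled locally (table and column names are hypothetical):

```python
import pixeltable as pxt
from pixeltable.functions import ollama

docs = pxt.create_table('ollama_embed_demo', {'text': pxt.String})

# One embedding vector per row; the vector length depends on the model.
docs.add_computed_column(
    embedding=ollama.embed(input=docs.text, model='nomic-embed-text')
)

docs.insert(text='Pixeltable manages multimodal data declaratively.')
```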

generate

generate(
    prompt: String,
    *,
    model: String,
    suffix: String = "",
    system: String = "",
    template: String = "",
    context: Optional[Json] = None,
    raw: Bool = False,
    format: Optional[String] = None,
    options: Optional[Json] = None
) -> Json

Generate a response for a given prompt with a provided model.

Parameters:

  • prompt (String) –

    The prompt to generate a response for.

  • model (String) –

    The model name.

  • suffix (String, default: '' ) –

    The text after the model response.

  • system (String, default: '' ) –

    System message.

  • template (String, default: '' ) –

    Prompt template to use.

  • context (Optional[Json], default: None ) –

    The context parameter returned from a previous call to generate().

  • raw (Bool, default: False ) –

    If True, no formatting will be applied to the prompt.

  • format (Optional[String], default: None ) –

    The format of the response; must be either 'json' or None.

  • options (Optional[Json], default: None ) –

    Additional options to pass to the generate call, such as num_predict, temperature, top_p, and top_k. For details, see the Valid Parameters and Values section of the Ollama documentation.
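
As with chat, generate slots into a computed column. A sketch assuming a local llama3.2 model; per Ollama's response format, the completion text lives under the response key (the options shown are illustrative):

```python
import pixeltable as pxt
from pixeltable.functions import ollama

t = pxt.create_table('ollama_generate_demo', {'prompt': pxt.String})

# One completion per row; options are passed through to Ollama.
t.add_computed_column(
    completion=ollama.generate(
        prompt=t.prompt,
        model='llama3.2',
        options={'temperature': 0.2, 'num_predict': 128},
    )
)

# Extract the completion text from the response JSON.
t.add_computed_column(text=t.completion['response'])
```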