ollama
Pixeltable integrates with the popular Ollama model server. To use these endpoints, you must either have an Ollama server running locally or explicitly specify an Ollama host in your Pixeltable configuration.
To specify an explicit host, either set the OLLAMA_HOST environment variable, or add an entry for host in the
ollama section of your $PIXELTABLE_HOME/config.toml configuration file.
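For example, to point Pixeltable at an Ollama server on a specific host, you could either run `export OLLAMA_HOST=http://localhost:11434` before starting Python, or add a section like the following to config.toml (the host address below is a placeholder; http://localhost:11434 is Ollama's default):

```toml
# $PIXELTABLE_HOME/config.toml
[ollama]
host = "http://localhost:11434"
```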
pixeltable.functions.ollama
Pixeltable UDFs for Ollama local models.
Provides integration with Ollama for running large language models locally, including chat completions and embeddings.
chat
chat(
messages: Json,
*,
model: String,
tools: Json | None = None,
format: String | None = None,
options: Json | None = None
) -> Json
Generate the next message in a chat with a provided model.
Parameters:
- messages (Json) – The messages of the chat.
- model (String) – The model name.
- tools (Json | None, default: None) – Tools for the model to use.
- format (String | None, default: None) – The format of the response; must be either 'json' or None.
- options (Json | None, default: None) – Additional options to pass to the chat call, such as max_tokens, temperature, top_p, and top_k. For details, see the Valid Parameters and Values section of the Ollama documentation.
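As a sketch of how chat might be used: the messages and options arguments are ordinary Python structures. The table name, column name, and model tag below are hypothetical, and a running Ollama server is required for the computed column to evaluate, so the Pixeltable calls are shown as comments.

```python
# The Pixeltable usage itself needs a live Ollama server, so it appears
# here only as a comment (names and model tag are hypothetical):
#
#   import pixeltable as pxt
#   from pixeltable.functions.ollama import chat
#
#   t = pxt.create_table('demo', {'prompt': pxt.String})
#   t.add_computed_column(
#       response=chat(messages=messages, model='llama3.2', options=options)
#   )

# A chat is a list of role/content messages, in the usual Ollama format:
messages = [
    {'role': 'system', 'content': 'You are a concise assistant.'},
    {'role': 'user', 'content': 'Why is the sky blue?'},
]

# Optional generation parameters, passed through to the chat call:
options = {'temperature': 0.2, 'top_p': 0.9}
```

Note that in a computed column, the user message's content would typically reference a table column rather than a literal string.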
embed
embed(
input: String,
*,
model: String,
truncate: Bool = True,
options: Json | None = None
) -> Array[(None,), Float]
Generate embeddings from a model.
Parameters:
- input (String) – The input text to generate embeddings for.
- model (String) – The model name.
- truncate (Bool, default: True) – Truncate the end of each input to fit within the model's context length. If False and the context length is exceeded, an error is returned.
- options (Json | None, default: None) – Additional options to pass to the embed call. For details, see the Valid Parameters and Values section of the Ollama documentation.
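Since embed returns a Float array, its output can be consumed like any other vector. A sketch follows; the Pixeltable portion needs a running Ollama server, so it is shown as comments (the table, column, and model names are hypothetical), while the runnable portion illustrates a typical downstream use of the resulting vectors.

```python
import math

# Hypothetical Pixeltable setup (requires a live Ollama server):
#
#   import pixeltable as pxt
#   from pixeltable.functions.ollama import embed
#
#   docs = pxt.create_table('docs', {'text': pxt.String})
#   docs.add_computed_column(
#       embedding=embed(input=docs.text, model='nomic-embed-text')
#   )

# Downstream, embeddings are plain float vectors; cosine similarity is a
# common way to compare them:
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for model output:
assert abs(cosine([1.0, 0.0], [1.0, 0.0]) - 1.0) < 1e-9
```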
generate
generate(
prompt: String,
*,
model: String,
suffix: String = "",
system: String = "",
template: String = "",
context: Json | None = None,
raw: Bool = False,
format: String | None = None,
options: Json | None = None
) -> Json
Generate a response for a given prompt with a provided model.
Parameters:
- prompt (String) – The prompt to generate a response for.
- model (String) – The model name.
- suffix (String, default: '') – The text after the model response.
- system (String, default: '') – System message.
- template (String, default: '') – Prompt template to use.
- context (Json | None, default: None) – The context parameter returned from a previous call to generate().
- raw (Bool, default: False) – If True, no formatting will be applied to the prompt.
- format (String | None, default: None) – The format of the response; must be either 'json' or None.
- options (Json | None, default: None) – Additional options to pass to the generate call, such as max_tokens, temperature, top_p, and top_k. For details, see the Valid Parameters and Values section of the Ollama documentation.
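Unlike chat, generate takes a single prompt string rather than a message list. A sketch of the call shape follows; the Pixeltable lines require a running Ollama server and are shown as comments (table, column, and model names are hypothetical), while the options payload is ordinary Python.

```python
# Hypothetical Pixeltable setup (requires a live Ollama server):
#
#   import pixeltable as pxt
#   from pixeltable.functions.ollama import generate
#
#   t = pxt.create_table('completions', {'prompt': pxt.String})
#   t.add_computed_column(
#       result=generate(t.prompt, model='llama3.2', options=options)
#   )
#
# The returned Json includes a context value that can be passed back as
# the context parameter of a later generate() call.

# Optional generation parameters, passed through to the generate call:
options = {'temperature': 0.0, 'top_k': 40}
```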