Ollama

Ollama is a lightweight model-inference platform that simplifies deploying large language models (LLMs). It streamlines interaction with LLMs by eliminating the need to write or run scripts, provides access to a wide variety of models, and reduces the resources required to run them by downloading quantized versions.

Running Ollama

  1. Ollama will not run on a login node. Request a compute node with a GPU:
$ interactive -t 30 -G 1 -p htc
  2. Load the ollama module (run module avail ollama to see the available versions):
$ module load ollama/0.3.12
  3. Start the Ollama server in the background:
$ ollama-start
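Optionally, confirm that the server is up before moving on. Assuming the ollama-start wrapper serves on Ollama's default port (11434), either of the following should respond:
$ ollama list
$ curl http://localhost:11434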
  4. Run the model. You can find a list of available models in the Ollama model library (https://ollama.com/library). The first time a model is run, Ollama automatically performs an ollama pull and downloads it. If the model is already downloaded, Ollama loads it into memory and starts the chat.
$ ollama run llama3.2
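Besides the interactive chat, you can pass a single prompt on the command line or query the server's REST API directly. The examples below assume the default port (11434) and that llama3.2 has already been pulled:
$ ollama run llama3.2 "Summarize what model quantization does in one sentence."
$ curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}'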
  5. To stop the model, type /bye at the prompt:
>>> /bye
  6. To stop the Ollama server:
$ ollama-stop
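The same steps can also be wrapped in a batch job. The script below is a minimal sketch: the #SBATCH resources mirror the interactive request above, and it assumes the ollama-start and ollama-stop wrappers behave the same way inside a batch job.

#!/bin/bash
#SBATCH -t 30    # 30-minute walltime, as in the interactive example
#SBATCH -G 1     # one GPU
#SBATCH -p htc   # htc partition

module load ollama/0.3.12
ollama-start                                  # start the Ollama server in the background
sleep 5                                       # give the server a moment to come up (adjust as needed)
ollama run llama3.2 "Why is the sky blue?"    # one-shot prompt instead of an interactive chat
ollama-stop                                   # shut the server down

Submit it with sbatch, for example sbatch ollama_job.sh (the filename here is just an example).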