Get up and running with large language models locally.
Windows support is coming soon. On macOS, download the app from ollama.ai. To install on Linux and WSL2:

```
curl https://ollama.ai/install.sh | sh
```
The official Ollama Docker image `ollama/ollama` is available on Docker Hub.
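For example, the server can be run in a container along these lines (a sketch based on the image's Docker Hub instructions; the named volume persists downloaded models, and 11434 is Ollama's default port):

```
# start the Ollama server in the background
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# chat with a model inside the running container
docker exec -it ollama ollama run llama2
```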
To run and chat with Llama 2:

```
ollama run llama2
```
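This opens an interactive chat session. A session looks roughly like the following (the model output here is illustrative; `/bye` exits the prompt):

```
>>> Why is the sky blue?
The sky appears blue because sunlight is scattered by molecules in the
atmosphere, and blue light is scattered more than other colors.
>>> /bye
```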
Ollama supports a list of open-source models available on ollama.ai/library
Here are some example open-source models that can be downloaded:
| Model | Parameters | Size | Download | 
|---|---|---|---|
| Mistral | 7B | 4.1GB | ollama run mistral | 
| Llama 2 | 7B | 3.8GB | ollama run llama2 | 
| Code Llama | 7B | 3.8GB | ollama run codellama | 
| Llama 2 Uncensored | 7B | 3.8GB | ollama run llama2-uncensored | 
| Llama 2 13B | 13B | 7.3GB | ollama run llama2:13b | 
| Llama 2 70B | 70B | 39GB | ollama run llama2:70b | 
| Orca Mini | 3B | 1.9GB | ollama run orca-mini | 
| Vicuna | 7B | 3.8GB | ollama run vicuna | 
Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
Ollama supports importing GGUF models in the Modelfile:
1. Create a file named `Modelfile`, with a `FROM` instruction pointing to the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama:

   ```
   ollama create example -f Modelfile
   ```

3. Run the model:

   ```
   ollama run example
   ```
See the guide on importing models for more information.
Models from the Ollama library can be customized with a prompt. For example, to customize the llama2 model:
```
ollama pull llama2
```
Create a Modelfile:
```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```
Next, create and run the model:
```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```
For more examples, see the examples directory. For more information on working with a Modelfile, see the Modelfile documentation.
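As a further sketch of what a Modelfile can express (the parameter names below come from the Modelfile documentation; the values are illustrative, not recommendations):

```
FROM llama2

# sampling temperature (illustrative value)
PARAMETER temperature 0.8
# context window size, in tokens (illustrative value)
PARAMETER num_ctx 4096
# stop generating when this sequence is produced
PARAMETER stop "User:"

SYSTEM """
You are a concise technical assistant.
"""
```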
`ollama create` is used to create a model from a `Modelfile`.
To pull a model from the library:

```
ollama pull llama2
```

This command can also be used to update a local model; only the diff will be pulled.
To remove a model:

```
ollama rm llama2
```
To copy a model:

```
ollama cp llama2 my-llama2
```
For multiline input, you can wrap text with """:
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
The prompt can also be passed in as an argument:

```
$ ollama run llama2 "summarize this file:" "$(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```
To list models on your computer:

```
ollama list
```
`ollama serve` is used when you want to start Ollama without running the desktop application.
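For example (a sketch; the server listens on Ollama's default port 11434, matching the API example below):

```
# start the server in one terminal
ollama serve

# in another terminal, check that it is up
curl http://localhost:11434
```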
Install `cmake` and `go`:

```
brew install cmake go
```
Then generate dependencies and build:
```
go generate ./...
go build .
```
Next, start the server:
```
./ollama serve
```
Finally, in a separate shell, run a model:
```
./ollama run llama2
```
Ollama has a REST API for running and managing models. For example, to generate text from a model:
```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
```
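By default this endpoint streams back newline-delimited JSON objects. As a sketch (the response shown is illustrative and trimmed), setting `"stream": false` returns a single reply instead:

```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

{"model":"llama2","created_at":"...","response":"The sky is blue because ...","done":true}
```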
See the API documentation for all endpoints.
- LangChain and LangChain.js, with an example
- LlamaIndex
- Raycast extension
- Discollama (Discord bot inside the Ollama Discord channel)
- Continue
- Obsidian Ollama plugin
- Dagger Chatbot
- LiteLLM
- Discord AI Bot
- Chatbot UI
- HTML UI
- Typescript UI
- Dumbar
- Emacs client
- oterm
- Ellama Emacs client
- OllamaSharp for .NET
- Minimalistic React UI for Ollama Models
- Ollama-rs for Rust