Welcome to the Ollama Chatbot, a private AI-powered chatbot built with LLaMA 3.0 and Django. It runs entirely offline using Ollama’s local LLM engine, making it fast, secure, and ideal for personal or experimental use.
This project combines:
- ⚙️ Ollama (LLaMA 3.0) – Local LLM runtime for intelligent text generation
- 🌐 Django – Backend framework to manage logic and routes
- 🎨 HTML/CSS/JavaScript – For a lightweight frontend chat interface
- 🐍 Python – The programming core of both backend and integration
- 📴 Runs 100% Offline – No API keys or internet required
- 🔐 Private by Design – Your conversations never leave your device
- ⚡ Fast & Local – Powered by LLaMA 3.0 via Ollama
- 🧩 Simple & Modular – Easy to extend with new features
```
CHATBOT/
├── chat/
│   ├── templates/chat/        # HTML for chat UI
│   │   └── index.html
│   ├── migrations/
│   ├── __init__.py
│   ├── admin.py
│   ├── apps.py
│   ├── models.py
│   ├── urls.py
│   └── views.py
│
├── chatbot_project/           # Django project settings
│   ├── __init__.py
│   ├── asgi.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
│
├── db.sqlite3                 # SQLite database
├── manage.py                  # Django management tool
└── myvenv/                    # Virtual environment
```
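Given the layout above, the `chat` app most likely exposes a single route for the chat UI. The following is a plausible sketch of `chat/urls.py`, not the file's verified contents; the view name `index` is an assumption:

```python
# chat/urls.py -- a minimal routing sketch (assumed, not the actual file)
from django.urls import path
from . import views

urlpatterns = [
    # Serve the chat UI (templates/chat/index.html) at the app root
    path("", views.index, name="index"),
]
```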
- Visit https://ollama.com and download Ollama for Windows
- During setup, download the LLaMA 3.0 model
- After installation, run the following in CMD/PowerShell:

  ```
  ollama run llama3
  ```

- Open the project folder in VS Code
- Create and activate a virtual environment:

  ```
  python -m venv myvenv
  myvenv\Scripts\activate
  ```

- Install Django:

  ```
  pip install django
  ```

- Run database migrations:

  ```
  python manage.py migrate
  ```

- Start the Django development server:

  ```
  python manage.py runserver
  ```

- Run the LLaMA 3.0 model (in a separate terminal):

  ```
  ollama run llama3
  ```

- Open your browser and go to http://127.0.0.1:8000
- The user sends a message via the web chat interface
- Django captures the message and sends it to the local Ollama API
- LLaMA 3.0 processes the message and returns a response
- The response is displayed back in the browser
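The round trip above can be sketched as a small helper that posts to Ollama's local REST API (its default endpoint is `http://localhost:11434/api/generate`). The function names here are illustrative, not taken from the project's `views.py`:

```python
# Sketch of the Django-to-Ollama round trip (illustrative names, not the
# project's actual views.py). Assumes Ollama is serving on its default port.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(prompt, model="llama3"):
    # "stream": False asks Ollama for one complete JSON reply instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}

def ask_llama(prompt):
    """Send the user's message to the local LLaMA 3.0 model and return its reply."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Inside a Django view, the handler would roughly do:
#   reply = ask_llama(request.POST.get("message", ""))
#   return JsonResponse({"reply": reply})
```

With streaming disabled, `/api/generate` returns a single JSON object whose `response` field holds the generated text, which Django can then send back to the browser.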
This project is licensed under the MIT License.
