Simple, practical setup scripts for common developer environments.
What Brev already provides: NVIDIA drivers, CUDA toolkit, Docker, NVIDIA Container Toolkit
```bash
cd python-dev && bash setup.sh
```

Installs: pyenv, Python 3.11, Jupyter Lab, common packages (pandas, numpy, matplotlib)
Time: ~3-5 minutes
```bash
cd nodejs-dev && bash setup.sh
```

Installs: nvm, Node LTS, pnpm, TypeScript, ESLint, Prettier
Time: ~2-3 minutes
```bash
cd terminal-setup && bash setup.sh
```

Installs: zsh, oh-my-zsh, fzf, ripgrep, bat, eza (modern CLI tools)
Time: ~2-3 minutes
Note: Automatically switches to zsh when complete
```bash
cd k8s-local && bash setup.sh
```

Installs: microk8s, kubectl, helm, k9s, GPU operator
Time: ~3-5 minutes
Note: All tools work immediately, no group membership or logout needed
```bash
cd ml-quickstart && bash setup.sh
```

Installs: Miniconda, PyTorch with CUDA, Jupyter Lab, transformers
Time: ~5-8 minutes (PyTorch is large)
```bash
cd rapids && bash setup.sh
```

Installs: GPU-accelerated pandas (cuDF), scikit-learn (cuML), example notebooks
Time: ~8-12 minutes
Note: 10-50x faster data processing on GPU. Requires NVIDIA GPU
```bash
cd ollama && bash setup.sh
```

Installs: Ollama with GPU support, llama3.2 model (pre-downloaded)
Time: ~3-5 minutes
Port: 11434/tcp for API access
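The API on port 11434 can be exercised from Python with nothing but the standard library. A sketch against Ollama's `/api/generate` endpoint, using the model installed above (prints a fallback message if the server is not running):

```python
# Query the local Ollama server over its HTTP API (port 11434).
# Standard-library sketch; no client package required.
import json
import urllib.request

def generate_payload(model: str, prompt: str) -> bytes:
    """Build a non-streaming /api/generate request body."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_ollama(prompt: str, model: str = "llama3.2") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=generate_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return json.loads(resp.read())["response"]
    except OSError:
        return "Ollama not reachable on localhost:11434"

print(ask_ollama("Why is the sky blue?"))
```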
```bash
cd unsloth && bash setup.sh
```

Installs: Unsloth for fast fine-tuning, PyTorch with CUDA, LoRA/QLoRA support
Time: ~5-8 minutes
Note: Requires NVIDIA GPU
```bash
cd litellm && bash setup.sh
```

Installs: Universal LLM proxy (use any LLM with OpenAI API format)
Time: ~1-2 minutes
Port: 4000/tcp for API access
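Any OpenAI-compatible client can point at the proxy. Here is a standard-library sketch of the chat-completions wire format it speaks on port 4000; the model name is a placeholder, use whatever your proxy config exposes:

```python
# Talk to the LiteLLM proxy using the OpenAI chat-completions wire
# format. Stdlib-only sketch; the model name is a placeholder.
import json
import urllib.request

def chat_payload(model: str, content: str) -> bytes:
    """Build an OpenAI-format chat completion request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode()

def chat(content: str, model: str = "gpt-3.5-turbo") -> str:
    req = urllib.request.Request(
        "http://localhost:4000/v1/chat/completions",
        data=chat_payload(model, content),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return json.loads(resp.read())["choices"][0]["message"]["content"]
    except OSError:
        return "LiteLLM proxy not reachable on localhost:4000"

print(chat("Hello!"))
```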
```bash
cd qdrant && bash setup.sh
```

Installs: Vector database for RAG and semantic search
Time: ~1-2 minutes
Port: 6333/tcp for API + dashboard
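For a quick smoke test without installing a client library, Qdrant's REST API on port 6333 can be driven directly; the collection name `demo`, vector size, and payload below are illustrative:

```python
# Create a collection and upsert one vector via Qdrant's REST API
# (port 6333). Stdlib-only sketch; names and sizes are illustrative.
import json
import urllib.request

BASE = "http://localhost:6333"

def put(path: str, body: dict) -> dict:
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

collection = {"vectors": {"size": 4, "distance": "Cosine"}}
points = {"points": [{"id": 1, "vector": [0.1, 0.2, 0.3, 0.4],
                      "payload": {"doc": "hello"}}]}
try:
    put("/collections/demo", collection)
    print(put("/collections/demo/points", points))
except OSError:
    print("Qdrant not reachable on localhost:6333")
```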
```bash
cd comfyui && bash setup.sh
```

Installs: Node-based UI for Stable Diffusion, SD 1.5 model
Time: ~5-10 minutes
Port: 8188/tcp for web interface
Note: Requires NVIDIA GPU
```bash
cd databases && bash setup.sh
```

Installs: PostgreSQL 16, Redis 7 (in Docker containers)
Time: ~1-2 minutes
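A minimal liveness check for the Redis container needs no client library at all: Redis answers the inline `PING` command over a plain socket. A stdlib sketch (assumes the default port 6379):

```python
# Quick liveness check for the Redis container: speak the protocol
# directly over a socket (PING -> +PONG). Stdlib only.
import socket

def redis_ping(host: str = "localhost", port: int = 6379) -> str:
    try:
        with socket.create_connection((host, port), timeout=2) as s:
            s.sendall(b"PING\r\n")
            return s.recv(16).decode().strip()
    except OSError:
        return "Redis not reachable"

print(redis_ping())
```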
```bash
cd marimo && bash setup.sh
```

Installs: Marimo reactive notebooks as systemd service
Time: ~2-3 minutes
Port: 8080/tcp for web access
```bash
cd earlyoom && bash setup.sh
```

Installs: Early OOM daemon to prevent system freezes
Time: ~1-2 minutes
Note: Monitors memory and swap and kills the largest process before an out-of-memory condition hangs the system
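The idea can be sketched in a few lines: earlyoom watches `MemAvailable` and `SwapFree` in `/proc/meminfo` and acts when both drop below its thresholds (around 10% by default). A stdlib sketch of that check, Linux-only:

```python
# Sketch of the check earlyoom performs: read available memory and
# free swap from /proc/meminfo and compare against a threshold
# (earlyoom's defaults are around 10% for both). Linux-only.
def meminfo() -> dict:
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])  # values are in kB
    return info

def low_memory(threshold: float = 0.10) -> bool:
    m = meminfo()
    mem_ratio = m["MemAvailable"] / m["MemTotal"]
    # On a machine without swap, let memory alone decide.
    swap_ratio = m["SwapFree"] / m["SwapTotal"] if m["SwapTotal"] else 0.0
    return mem_ratio < threshold and swap_ratio < threshold

print("would act now:", low_memory())
```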
Pick what you need:
```bash
# Python ML developer
cd ml-quickstart && bash setup.sh

# Web developer
cd nodejs-dev && bash setup.sh
cd databases && bash setup.sh

# Terminal power user
cd terminal-setup && bash setup.sh

# Kubernetes developer
cd k8s-local && bash setup.sh
```

Each script is:
- ✅ Simple - One purpose, no complexity
- ✅ Short - Under 150 lines each
- ✅ Fast - Takes 2-8 minutes
- ✅ Standalone - No dependencies between scripts
- ✅ Practical - Installs what developers actually use
We don't:
- ❌ Install what Brev already provides (NVIDIA drivers, CUDA, Docker)
- ❌ Add complex GPU detection logic
- ❌ Support multi-node/HPC scenarios
- ❌ Over-engineer solutions
Python data science:

```bash
cd python-dev && bash setup.sh
# Then:
ipython
jupyter lab --ip=0.0.0.0
```

Machine learning with GPU:
```bash
cd ml-quickstart && bash setup.sh
# Then:
conda activate ml
python gpu_check.py
```

GPU-accelerated data science with RAPIDS:
```bash
cd rapids && bash setup.sh
# Then:
conda activate rapids
python ~/rapids-examples/benchmark.py  # See 20x+ speedup!
```

Local LLM with Ollama:
```bash
cd ollama && bash setup.sh
# Then:
ollama run llama3.2
ollama list
```

Fast LLM fine-tuning with Unsloth:
```bash
cd unsloth && bash setup.sh
# Then:
conda activate unsloth
python ~/unsloth-examples/test_install.py
```

Universal LLM proxy with LiteLLM:
```bash
cd litellm && bash setup.sh
# Then use any LLM with the OpenAI SDK:
# openai.api_base = "http://localhost:4000"
```

Vector database with Qdrant:
```bash
cd qdrant && bash setup.sh
# Then:
pip install qdrant-client
python ~/qdrant_example.py
```

Image generation with ComfyUI:
```bash
cd comfyui && bash setup.sh
# Then open: http://localhost:8188
```

Modern terminal:
```bash
cd terminal-setup && bash setup.sh
# Automatically drops you into zsh, then:
ll            # Better ls
cat file.txt  # Syntax highlighting
fzf           # Fuzzy finder
```

Local database:
```bash
cd databases && bash setup.sh
# Then:
docker exec -it postgres psql -U postgres
docker exec -it redis redis-cli
```

OOM protection with earlyoom:
```bash
cd earlyoom && bash setup.sh
# Then:
sudo systemctl status earlyoom
sudo journalctl -u earlyoom -f  # Watch memory monitoring
```

```
brev-setup-scripts/
├── README.md            # This file
├── python-dev/
│   ├── setup.sh         # Python development environment
│   └── README.md
├── nodejs-dev/
│   ├── setup.sh         # Node.js development environment
│   └── README.md
├── terminal-setup/
│   ├── setup.sh         # Modern terminal with zsh
│   └── README.md
├── k8s-local/
│   ├── setup.sh         # Local Kubernetes
│   └── README.md
├── ml-quickstart/
│   ├── setup.sh         # PyTorch ML environment
│   └── README.md
├── ollama/
│   ├── setup.sh         # Ollama LLM inference
│   └── README.md
├── unsloth/
│   ├── setup.sh         # Unsloth fast fine-tuning
│   └── README.md
├── litellm/
│   ├── setup.sh         # Universal LLM proxy
│   └── README.md
├── qdrant/
│   ├── setup.sh         # Vector database
│   └── README.md
├── comfyui/
│   ├── setup.sh         # ComfyUI for Stable Diffusion
│   └── README.md
├── databases/
│   ├── setup.sh         # PostgreSQL + Redis
│   └── README.md
├── marimo/
│   ├── setup.sh         # Marimo reactive notebooks
│   └── README.md
├── earlyoom/
│   ├── setup.sh         # Early OOM daemon
│   └── README.md
└── rapids/
    ├── setup.sh         # RAPIDS GPU-accelerated data science
    └── README.md
```
Want to add a script? Keep it simple:
- One purpose - Install one thing well
- Short - Under 150 lines
- Fast - Completes in < 10 minutes
- Verify - Include a verification step
- Document - Show quick start commands
Apache 2.0