Run interactive Python notebooks with Marimo on high-performance NVIDIA GPU instances powered by Brev. Perfect for AI/ML experimentation, model training, and data science workflows that require GPU acceleration.
Marimo is a modern, reactive Python notebook that runs as an interactive web app. Unlike traditional notebooks:
- Reactive - Cells automatically re-run when dependencies change
- Pure Python - Notebooks are `.py` files that can be versioned and imported (a minimal example of the format follows this list)
- Production-ready - Deploy notebooks as apps with a single command
- Interactive - Rich UI components and real-time visualizations
- Reproducible - No hidden state, guaranteed execution order
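Because a marimo notebook is just a Python module made of decorated cell functions, it diffs cleanly in git and can be imported like any other file. A minimal, illustrative example of the format (a hypothetical notebook, not one shipped with this environment; the exact boilerplate marimo generates may differ slightly):

```python
# minimal_notebook.py - hypothetical marimo notebook, shown only to illustrate the .py format
import marimo

app = marimo.App()


@app.cell
def _():
    import marimo as mo
    return (mo,)


@app.cell
def _(mo):
    # A UI element; any cell that reads `slider.value` re-runs when it changes.
    slider = mo.ui.slider(1, 10, value=5, label="n")
    slider
    return (slider,)


@app.cell
def _(slider):
    # Reactive: re-executes automatically whenever the slider above moves.
    squares = [i**2 for i in range(slider.value)]
    squares
    return (squares,)


if __name__ == "__main__":
    app.run()
```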
- Instant GPU Access - Launch NVIDIA GPU instances with one click
- Pre-configured Environment - Python, CUDA drivers, and ML libraries ready to go
- Cost Effective - Pay only for what you use
- Powerful Hardware - Access to L40S, H100, H200, B200 and other high-end GPUs
- Interactive Development - Experiment with models and visualize results in real-time
Perfect for:
- Training and fine-tuning ML models
- Running inference at scale
- Computer vision and image processing
- LLM experimentation and deployment
- Data analysis with GPU-accelerated libraries
Deploy Marimo with GPU access instantly using these pre-configured environments:
Each deployment includes:
- Marimo notebook server (running on port 8080)
- NVIDIA GPU drivers and CUDA toolkit
- GPU validation notebook
- Example notebooks from marimo-team/examples
- Pre-installed ML/AI libraries (PyTorch, TensorFlow, etc.)
- Data science toolkit (pandas, numpy, polars, altair, plotly)
- No password authentication for ease of use
- Choose your GPU configuration - Click the Deploy Now button for your desired environment from the table above
- Review and deploy - On the launchable page, click Deploy Launchable
- Sign in - Create an account or log in to Brev with your email (NVIDIA account required)
- Monitor deployment - Click Go to Instance Page to watch your environment spin up
- Wait for completion - Watch for three green status indicators:
  - Running - Instance is live
  - Built - Environment setup complete
  - Completed - Post-install script finished (typically 2-3 minutes)
- Access Marimo - Navigate to the Access tab and click the secure link for port 8080
- Authenticate - Log in to Marimo using your Brev account email
- Start building - You're ready to experiment with GPU-accelerated notebooks!
Once inside Marimo:
- Validate GPU - Open `gpu_validation.py` to verify your GPU is detected and working
- Run the benchmark - Click the GPU test button to see CPU vs GPU performance (a minimal sketch of such a check follows this list)
- Explore examples - Browse the notebook directory for inspiration
- Create your own - Click Create a new notebook to start experimenting
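The bundled `gpu_validation.py` is created by the setup script, so its exact contents may differ, but a rough sketch of the kind of CPU-vs-GPU check it performs (plain PyTorch, illustrative only) looks like this:

```python
# Illustrative GPU sanity check - not the bundled gpu_validation.py.
import time
import torch

# List the CUDA devices PyTorch can see.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No CUDA device detected")

def time_matmul(device: str, n: int = 4096) -> float:
    """Time one n x n matrix multiplication on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish allocation/transfer before timing
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the kernel before stopping the clock
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```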
After deployment, your environment will be organized as follows:
```
~/marimo-examples/           # All notebooks (examples + GPU validation)
├── gpu_validation.py        # GPU testing and monitoring notebook
├── youtube_summary/         # Example: YouTube video summarization
├── nlp_span_comparison/     # Example: NLP analysis
└── ...                      # More example notebooks
```
Marimo runs automatically as a systemd service and serves notebooks from ~/marimo-examples/.
- Checks GPU availability and specifications for every installed GPU
- Real-time GPU metrics (utilization, memory, temperature) that auto-refresh every 2 seconds - seamless updates, no user interaction needed (a sketch of this polling pattern follows this list)
- Smooth CSS transitions - progress bars animate fluidly, no layout jumps
- Modern card-based design with gradient progress bars
- Shows metrics for all GPUs in multi-GPU systems
- Timestamp shows exact time of last update
- Industry-standard gpu-burn stress test with toggle switch
- Uses the actual gpu-burn tool (not a custom implementation!)
- Automatically installs gpu-burn on first use (compiled from source)
- Automatically stresses ALL GPUs simultaneously
- Runs as a background process - metrics update in real time!
- Turn on/off to start/stop GPU stress (clean process management)
- Uses 95% of GPU memory plus double-precision operations
- Battle-tested tool used in datacenters worldwide
- Watch metrics auto-refresh and see GPUs hit 100% utilization live
- See temperature, utilization, and memory spike across all GPUs
- Shows process ID (PID) for monitoring
- nvidia-smi output (collapsed by default)
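The notebook ships with the instance, so the code below is only a sketch of the auto-refresh pattern described above: marimo's `mo.ui.refresh` element re-runs dependent cells on a timer, and each run polls `nvidia-smi` for current metrics. The element, its `default_interval` argument, and the query fields are assumptions based on marimo's and nvidia-smi's public interfaces, not the notebook's actual code:

```python
# Sketch of auto-refreshing GPU metrics across two marimo cells (illustrative only).
import subprocess
import marimo as mo

# --- cell 1: a timer element; cells that reference it re-run on every tick ---
refresh = mo.ui.refresh(default_interval="2s")
refresh

# --- cell 2: references `refresh`, so it re-executes every 2 seconds ---
refresh
result = subprocess.run(
    [
        "nvidia-smi",
        "--query-gpu=index,name,utilization.gpu,memory.used,memory.total,temperature.gpu",
        "--format=csv,noheader,nounits",
    ],
    capture_output=True, text=True, check=True,
)
rows = [line.split(", ") for line in result.stdout.strip().splitlines()]
mo.md("\n\n".join(
    f"GPU {idx} ({name}): {util}% util, {used}/{total} MiB, {temp}°C"
    for idx, name, util, used, total, temp in rows
))
```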
Includes curated notebooks from marimo-team/examples:
- LLM and AI workflows
- Data visualization
- Interactive dashboards
- SQL and database integration
- And more...
Marimo runs as a systemd service and starts automatically:
```bash
# Check service status
sudo systemctl status marimo

# View logs
sudo journalctl -u marimo -f

# Restart the service
sudo systemctl restart marimo
```

Set these environment variables before running the setup script:
| Variable | Description | Default |
|---|---|---|
| `MARIMO_REPO_URL` | Git repository URL for notebooks | `https://github.com/marimo-team/examples.git` |
| `MARIMO_NOTEBOOKS_DIR` | Directory name for notebooks | `marimo-examples` |
| `MARIMO_PORT` | Port for Marimo server | `8080` |
```bash
export MARIMO_REPO_URL="https://github.com/your-username/your-notebooks.git"
bash setup.sh
```

The environment includes (a quick import check follows this list):
- Data manipulation: `polars`, `pandas`, `numpy`, `scipy`, `pyarrow`
- Visualization: `altair`, `plotly`, `matplotlib`, `seaborn`
- Machine learning: `scikit-learn`, `torch`, `tensorflow`
- AI/LLM: `openai`, `anthropic`, `instructor`, `openai-whisper`
- Database: `marimo[sql]`, `duckdb`, `sqlalchemy`
- Media processing: `opencv-python`, `yt-dlp`
- Utilities: `requests`, `beautifulsoup4`, `pillow`, `python-dotenv`
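As a quick way to confirm the stack is present (package set taken from the list above; exact versions depend on the image), you can run a short import check from a notebook cell or the terminal:

```python
# Sanity check: import the preinstalled stack and confirm GPU visibility.
import numpy, pandas, polars, pyarrow, scipy        # data manipulation
import altair, matplotlib, plotly, seaborn          # visualization
import sklearn, tensorflow, torch                   # machine learning
import duckdb, sqlalchemy                           # databases

print("torch", torch.__version__, "- CUDA available:", torch.cuda.is_available())
print("tensorflow", tensorflow.__version__, "- GPUs:",
      tensorflow.config.list_physical_devices("GPU"))
```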
```bash
# Check service status
sudo systemctl status marimo

# View logs
sudo journalctl -u marimo -n 50

# Restart
sudo systemctl restart marimo
```

```bash
# Check NVIDIA driver
nvidia-smi

# Check CUDA
nvcc --version

# Verify PyTorch GPU access
python3 -c "import torch; print(torch.cuda.is_available())"
```

- Ensure port 8080 is open in your firewall
- Check if marimo is running: `sudo systemctl status marimo`
- View logs for errors: `sudo journalctl -u marimo -f`
If you want to use this setup script in your own repo:
```bash
# Download the setup script
curl -O https://raw.githubusercontent.com/brevdev/setup-scripts/main/marimo/setup.sh
chmod +x setup.sh

# Run it
bash setup.sh
```

The setup script is maintained in the brevdev/setup-scripts repository.
Have ideas for improving this setup or want to add more GPU examples? Contributions are welcome!
- Setup script improvements: github.com/brevdev/setup-scripts
- GPU notebooks and examples: github.com/brevdev/marimo