
Marimo for GPU Experimentation on Brev

Run interactive Python notebooks with Marimo on high-performance NVIDIA GPU instances powered by Brev. Perfect for AI/ML experimentation, model training, and data science workflows that require GPU acceleration.

What is Marimo?

Marimo is a modern, reactive Python notebook that runs as an interactive web app. Unlike traditional notebooks:

  • 🔄 Reactive - Cells automatically re-run when dependencies change
  • 🐍 Pure Python - Notebooks are .py files that can be versioned and imported (see the sketch after this list)
  • 🚀 Production-ready - Deploy notebooks as apps with a single command
  • 🎨 Interactive - Rich UI components and real-time visualizations
  • 🔒 Reproducible - No hidden state, guaranteed execution order
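
To make the pure-Python point concrete, here is a minimal sketch of what a marimo notebook file can look like (the exact boilerplate marimo generates may differ slightly; the slider and markdown cells are only illustrative):

import marimo

app = marimo.App()

@app.cell
def _():
    import marimo as mo
    return (mo,)

@app.cell
def _(mo):
    # A UI element; the cell's last expression is rendered as its output.
    slider = mo.ui.slider(1, 10, value=3)
    slider
    return (slider,)

@app.cell
def _(mo, slider):
    # This cell depends on `slider`, so it re-runs automatically whenever the slider moves.
    mo.md(f"{slider.value} squared is {slider.value ** 2}")
    return

if __name__ == "__main__":
    app.run()

Because the notebook is an ordinary .py file, it diffs cleanly in git and can be imported or executed like any other module.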

Why Marimo + GPU on Brev?

  • Instant GPU Access - Launch NVIDIA GPU instances with one click
  • Pre-configured Environment - Python, CUDA drivers, and ML libraries ready to go
  • Cost Effective - Pay only for what you use
  • Powerful Hardware - Access to L40S, H100, H200, B200 and other high-end GPUs
  • Interactive Development - Experiment with models and visualize results in real-time

Perfect for:

  • Training and fine-tuning ML models
  • Running inference at scale
  • Computer vision and image processing
  • LLM experimentation and deployment
  • Data analysis with GPU-accelerated libraries

🚀 Quick Deploy - GPU Launchables

Deploy Marimo with GPU access instantly using these pre-configured environments:

GPU Configuration | vRAM  | Use Case                                                                   | Deploy
1x L4             | 24GB  | Entry-Level, Learning, Small Models, Cost-Efficient Experimentation       | Click here to deploy.
2x L4             | 48GB  | Budget Multi-GPU Learning, Distributed Training Basics, Affordable Dual GPU | Click here to deploy.
4x L4             | 96GB  | Affordable 4-Way Parallelism, Budget Advanced Distributed Training        | Click here to deploy.
8x L4             | 192GB | Maximum Affordable Multi-GPU, Full-Scale Budget Distributed Training      | Click here to deploy.
1x L40S           | 48GB  | General ML, Training, Inference                                            | Click here to deploy.
2x L40S           | 96GB  | Cost-Effective Multi-GPU, Dual Workloads, Medium-Scale Training            | Click here to deploy.
4x L40S           | 192GB | Budget-Friendly 4-Way Training, Cost-Optimized Distributed Workloads       | Click here to deploy.
8x L40S           | 384GB | Full-Scale Budget Training, Maximum Cost-Efficient Multi-GPU               | Click here to deploy.
1x A100 80GB      | 80GB  | High-Performance Training, Research, Large Models                          | Click here to deploy.
2x A100 80GB      | 160GB | Multi-GPU Training, Model Parallelism, Distributed Workloads               | Click here to deploy.
4x A100 80GB      | 320GB | Advanced Distributed Training, Large Model Development                     | Click here to deploy.
8x A100 80GB      | 640GB | Full-Scale Distributed Training, Production LLM Training                   | Click here to deploy.
1x H100           | 80GB  | Latest Architecture, Maximum Single-GPU Performance, Fastest Training      | Click here to deploy.
4x H100           | 320GB | Advanced Next-Gen Training, Extreme Performance Multi-GPU                  | Click here to deploy.
8x H100           | 640GB | Next-Gen Performance, Maximum Throughput, Cutting-Edge Workloads           | Click here to deploy.
1x H200           | 141GB | Newest Architecture, Maximum Memory, Flagship Single-GPU                   | Click here to deploy.
8x H200           | 1.13TB | Ultimate Configuration, Maximum Memory & Performance, Absolute Peak       | Click here to deploy.
8x B200           | 1.44TB | Next-Gen Blackwell Architecture, Future-Proof, Maximum Innovation         | Click here to deploy.

What's Included

Each deployment includes:

  • ✅ Marimo notebook server (running on port 8080)
  • ✅ NVIDIA GPU drivers and CUDA toolkit
  • ✅ GPU validation notebook
  • ✅ Example notebooks from marimo-team/examples
  • ✅ Pre-installed ML/AI libraries (PyTorch, TensorFlow, etc.)
  • ✅ Data science toolkit (pandas, numpy, polars, altair, plotly)
  • ✅ No password authentication for ease of use

Getting Started

Deploying Your Environment

  1. Choose your GPU configuration - Click the Deploy Now button for your desired environment from the table above
  2. Review and deploy - On the launchable page, click Deploy Launchable
  3. Sign in - Create an account or log in to Brev with your email (NVIDIA account required)
  4. Monitor deployment - Click Go to Instance Page to watch your environment spin up
  5. Wait for completion - Watch for three green status indicators:
    • ✅ Running - Instance is live
    • ✅ Built - Environment setup complete
    • ✅ Completed - Post-install script finished (typically 2-3 minutes)
  6. Access Marimo - Navigate to the Access tab and click the secure link for port 8080
  7. Authenticate - Log in to Marimo using your Brev account email
  8. Start building - You're ready to experiment with GPU-accelerated notebooks!

First Steps

Once inside Marimo:

  1. Validate GPU - Open gpu_validation.py to verify your GPU is detected and working
  2. Run the benchmark - Click the GPU test button to see CPU vs GPU performance (an illustrative comparison is sketched after this list)
  3. Explore examples - Browse the notebook directory for inspiration
  4. Create your own - Click Create a new notebook to start experimenting
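
If you are curious what such a CPU-vs-GPU comparison boils down to, the sketch below times a large matrix multiplication on both devices with PyTorch. It is only an illustration, not the code inside the bundled gpu_validation.py:

import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    # Time a single n x n matrix multiplication on the given device.
    x = torch.randn(n, n, device=device)
    y = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure setup work has finished
    start = time.perf_counter()
    _ = x @ y
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU kernel to complete
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU ({torch.cuda.get_device_name(0)}): {time_matmul('cuda'):.3f}s")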

Environment Layout

After deployment, your environment will be organized as follows:

~/marimo-examples/           # All notebooks (examples + GPU validation)
  ├── gpu_validation.py      # GPU testing and monitoring notebook
  ├── youtube_summary/       # Example: YouTube video summarization
  ├── nlp_span_comparison/   # Example: NLP analysis
  └── ...                    # More example notebooks

Marimo runs automatically as a systemd service and serves notebooks from ~/marimo-examples/.

Example Notebooks

GPU Validation (gpu_validation.py)

  • Check GPU availability and specifications for all GPUs
  • Real-time auto-refreshing GPU metrics (utilization, memory, temperature); a sketch of the underlying query appears after this list
    • Auto-refreshes every 2 seconds - seamless updates, no user interaction needed
    • Smooth CSS transitions - progress bars animate fluidly, no layout jumps
    • Modern card-based design with gradient progress bars
    • Shows metrics for all GPUs in multi-GPU systems
    • Timestamp shows exact time of last update
  • Industry-standard gpu-burn stress test with toggle switch
    • Uses the actual gpu-burn tool (not a custom implementation!)
    • Automatically installs gpu-burn on first use (via source compile)
    • Automatically stresses ALL GPUs simultaneously
    • Runs as background process - metrics update in real-time!
    • Turn on/off to start/stop GPU stress (clean process management)
    • Uses 95% GPU memory + double-precision operations
    • Battle-tested tool used in datacenters worldwide
    • Watch metrics auto-refresh and see GPUs hit 100% utilization live
    • See temperature, utilization, and memory spike across all GPUs
    • Shows process ID (PID) for monitoring
  • nvidia-smi output (collapsed by default)
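
For reference, the per-GPU numbers behind a dashboard like this can be collected with a single nvidia-smi query. The sketch below shows only that underlying query; the bundled notebook wraps it in marimo's auto-refreshing UI, and its actual implementation may differ:

import subprocess

def gpu_metrics() -> list[dict]:
    # Ask nvidia-smi for utilization, memory, and temperature of every GPU.
    fields = "index,name,utilization.gpu,memory.used,memory.total,temperature.gpu"
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    keys = fields.split(",")
    return [dict(zip(keys, line.split(", "))) for line in out.strip().splitlines()]

for gpu in gpu_metrics():
    print(gpu)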

Marimo Examples

Includes curated notebooks from marimo-team/examples:

  • LLM and AI workflows
  • Data visualization
  • Interactive dashboards
  • SQL and database integration
  • And more...

Advanced Usage

Service Management

Marimo runs as a systemd service and starts automatically:

# Check service status
sudo systemctl status marimo

# View logs
sudo journalctl -u marimo -f

# Restart the service
sudo systemctl restart marimo

Customization

Set these environment variables before running the setup script:

Variable             | Description                      | Default
MARIMO_REPO_URL      | Git repository URL for notebooks | https://github.com/marimo-team/examples.git
MARIMO_NOTEBOOKS_DIR | Directory name for notebooks     | marimo-examples
MARIMO_PORT          | Port for Marimo server           | 8080

Use Your Own Notebooks

export MARIMO_REPO_URL="https://github.com/your-username/your-notebooks.git"
bash setup.sh

Pre-installed Packages

The environment includes the following libraries (a short usage sketch follows the list):

  • Data manipulation: polars, pandas, numpy, scipy, pyarrow
  • Visualization: altair, plotly, matplotlib, seaborn
  • Machine learning: scikit-learn, torch, tensorflow
  • AI/LLM: openai, anthropic, instructor, openai-whisper
  • Database: marimo[sql], duckdb, sqlalchemy
  • Media processing: opencv-python, yt-dlp
  • Utilities: requests, beautifulsoup4, pillow, python-dotenv
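
As a small, purely illustrative example of these libraries working together in a marimo cell, the sketch below builds a fake loss curve with polars and numpy and renders it with altair:

import altair as alt
import numpy as np
import polars as pl

# Fake "training loss" data, purely for illustration.
df = pl.DataFrame({
    "step": np.arange(100),
    "loss": np.exp(-np.arange(100) / 30) + np.random.rand(100) * 0.05,
})

# In a marimo cell, the last expression is rendered as the cell's output.
alt.Chart(df.to_pandas()).mark_line().encode(x="step", y="loss")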

Troubleshooting

Service Issues

# Check service status
sudo systemctl status marimo

# View logs
sudo journalctl -u marimo -n 50

# Restart
sudo systemctl restart marimo

GPU Not Detected

# Check NVIDIA driver
nvidia-smi

# Check CUDA
nvcc --version

# Verify PyTorch GPU access
python3 -c "import torch; print(torch.cuda.is_available())"

Can't Access Marimo

  • Ensure port 8080 is open in your firewall
  • Check if marimo is running: sudo systemctl status marimo
  • View logs for errors: sudo journalctl -u marimo -f

Manual Setup

If you want to use this setup script in your own repo:

# Download the setup script
curl -O https://raw.githubusercontent.com/brevdev/setup-scripts/main/marimo/setup.sh
chmod +x setup.sh

# Run it
bash setup.sh

The setup script is maintained in the brevdev/setup-scripts repository.

Contributing

Have ideas for improving this setup or want to add more GPU examples? Contributions are welcome!
