MAX Agentic Cookbook

This repo contains a modern fullstack cookbook app showcasing the agentic AI capabilities of Modular MAX as a complete LLM serving solution. It's built with FastAPI (Python) and React (TypeScript), chosen for developer familiarity and flexibility.

[Screenshot: the MAX Agentic Cookbook web application showing the multi-turn chat recipe demo.]

πŸ“¦ Looking for legacy recipes? Older standalone recipes have been moved to the archive branch. These are provided as-is for historical reference only and are no longer maintained.

Requirements

  • Python 3.11 or higher; we recommend uv 0.7+ for working with Python
  • Node.js 22.x or higher; we recommend pnpm 10.17+ for working with Node.js

Quick Start

Clone the repo

git clone https://github.com/modular/max-agentic-cookbook.git
cd max-agentic-cookbook

Set up your LLM endpoint

cp backend/.sample.env backend/.env.local

Open backend/.env.local in your favorite text editor and supply a valid MAX or OpenAI-compatible endpoint:

COOKBOOK_ENDPOINTS='[
  {
    "id": "max-local",
    "baseUrl": "http://localhost:8000/v1",
    "apiKey": "EMPTY"
  }
]'
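
To confirm the endpoint is reachable before starting the app, you can smoke-test it with the openai Python client (a minimal sketch; it assumes a MAX or OpenAI-compatible server is already running at the baseUrl configured above):

from openai import OpenAI

# Point the client at the same endpoint configured in backend/.env.local
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# List the models the server exposes; any successful response means
# the endpoint is live
for model in client.models.list():
    print(model.id)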

Install dependencies

cd backend && uv sync
cd ..
cd frontend && npm install

Run the app

Run with VS Code

The Cookbook contains a VS Code configuration in .vscode, preconfigured for full-stack debugging.

  1. Open the max-agentic-cookbook folder in VS Code
  2. Open the Run & Debug panel
  3. Choose Full-Stack Debug

Run in the terminal

You can run the backend and frontend separately in two terminal sessions.

Terminal 1 (Python Backend):

cd backend
uv run dev

You will find the FastAPI backend server running at http://localhost:8010.

Terminal 2 (React Frontend):

cd frontend
npm run dev

The React frontend app will be available in your browser at http://localhost:5173. (Note: Vite may serve the frontend on a port other than 5173; always refer to your actual terminal output.)

Run with Docker

You can run the complete stack (MAX model serving, backend, and frontend) in a single container.

Note: For the best experience, we recommend running on a machine with a compatible GPU. Visit MAX Builds to see which MAX models are available for GPU and CPU.

# Build
docker build -t max-cookbook .

# Run (NVIDIA GPU)
docker run --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -e "HF_TOKEN=your-huggingface-token" \
    -e "MAX_MODEL=mistral-community/pixtral-12b" \
    -p 8000:8000 -p 8010:8010 \
    max-cookbook

# Run (AMD GPU)
docker run \
    --group-add keep-groups \
    --device /dev/kfd --device /dev/dri \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -e "HF_TOKEN=your-huggingface-token" \
    -e "MAX_MODEL=mistral-community/pixtral-12b" \
    -p 8000:8000 -p 8010:8010 \
    max-cookbook

Once up and running, visit http://localhost:8010 to use the app.
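
If you'd rather script the readiness check than refresh your browser, you can poll the backend's health route until the container finishes starting (a sketch, assuming the requests package and the default port mapping above):

import time
import requests

# Poll the backend health route until the stack is ready
for _ in range(60):
    try:
        if requests.get("http://localhost:8010/api/health", timeout=2).ok:
            print("Cookbook is up")
            break
    except requests.ConnectionError:
        pass
    time.sleep(5)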

Architecture

The following is a summary of the Cookbook's architecture. See the Contributing Guide for more details about how its recipe system works.

Python Backend

backend/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ main.py                 # FastAPI entry point
β”‚   β”œβ”€β”€ core/                   # Config, utilities
β”‚   β”‚   β”œβ”€β”€ endpoints.py        # Endpoint management
β”‚   β”‚   β”œβ”€β”€ models.py           # Model listing
β”‚   β”‚   └── code_reader.py      # Source code reader
β”‚   └── recipes/                # Recipe routers
β”‚       β”œβ”€β”€ multiturn_chat.py   # Multi-turn chat
β”‚       └── image_captioning.py # Image captioning
└── pyproject.toml              # Python dependencies

Backend Features & Technologies

  • FastAPI - Modern Python web framework
  • uvicorn - ASGI server
  • uv - Fast Python package manager
  • openai - OpenAI Python client for LLM proxying
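
The last item is the heart of each recipe: the backend receives a request from the React app and forwards it to the configured LLM endpoint. Below is a stripped-down sketch of that proxy pattern; the route and payload names are hypothetical, not the repo's actual code:

from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

class ChatRequest(BaseModel):  # hypothetical request shape
    model: str
    prompt: str

@app.post("/api/recipes/example-chat")  # hypothetical route for illustration
def example_chat(req: ChatRequest):
    # Forward the prompt to the LLM endpoint and relay the completion
    completion = client.chat.completions.create(
        model=req.model,
        messages=[{"role": "user", "content": req.prompt}],
    )
    return {"reply": completion.choices[0].message.content}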

Backend Routes

  • GET /api/health - Health check
  • GET /api/recipes - List available recipe slugs
  • GET /api/endpoints - List configured LLM endpoints
  • GET /api/models?endpointId=xxx - List models for endpoint
  • POST /api/recipes/multiturn-chat - Multi-turn chat endpoint
  • POST /api/recipes/batch-text-classification - Batch text classification endpoint
  • POST /api/recipes/image-captioning - Image captioning endpoint
  • GET /api/recipes/{slug}/code - Get recipe backend source code
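
Once the backend is running, the discovery routes can be exercised directly (a sketch using the requests package; it assumes JSON responses and the "max-local" endpoint id configured in backend/.env.local earlier):

import requests

BASE = "http://localhost:8010"

# Health check and recipe discovery
print(requests.get(f"{BASE}/api/health").json())
print(requests.get(f"{BASE}/api/recipes").json())

# List models for the endpoint configured in COOKBOOK_ENDPOINTS
print(requests.get(f"{BASE}/api/models", params={"endpointId": "max-local"}).json())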

React Frontend

frontend/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ recipes/                # Recipe components + data
β”‚   β”‚   β”œβ”€β”€ registry.ts         # Recipe metadata (pure data)
β”‚   β”‚   β”œβ”€β”€ components.ts       # React component mapping
β”‚   β”‚   β”œβ”€β”€ multiturn-chat/     # Multi-turn chat UI
β”‚   β”‚   └── image-captioning/   # Image captioning UI
β”‚   β”œβ”€β”€ components/             # Shared UI (Header, Navbar, etc.)
β”‚   β”œβ”€β”€ routing/                # Routing infrastructure
β”‚   β”œβ”€β”€ lib/                    # Custom hooks, API, types
β”‚   └── App.tsx                 # Entry point
└── package.json                # Frontend dependencies

Frontend Features & Technologies

  • React 18 + TypeScript - Type-safe component development
  • Vite - Lightning-fast dev server and optimized production builds
  • React Router v7 - Auto-generated routing with lazy loading
  • Mantine v7 - Comprehensive UI component library with dark/light themes
  • SWR - Lightweight data fetching with automatic caching
  • Vercel AI SDK - Streaming chat UI with token-by-token responses
  • MDX - Markdown documentation with JSX support
  • Recipe Registry - Single source of truth for all recipes (pure data + React components)

Frontend Routes

  • / - Recipe index
  • /:slug - Recipe demo (interactive UI)
  • /:slug/readme - Recipe documentation
  • /:slug/code - Recipe source code view

Documentation

See the Contributing Guide for details on how the Cookbook's recipe system works.

License

Apache-2.0 WITH LLVM-exception

See LICENSE for details.