A high-performance HTTP proxy server for IPTV content with true live proxying, per-client connection management, and seamless failover support. Built with FastAPI and optimized for efficiency.
- Pure HTTP Proxy: Zero transcoding, direct byte-for-byte streaming
- Per-Client Connections: Each client gets an independent provider connection
- Truly Ephemeral: Provider connections open only while a client is consuming
- HLS Support: Optimized playlist and segment handling (.m3u8)
- Continuous Streams: Direct proxy for .ts, .mp4, .mkv, .webm, .avi files
- Real-time URL Rewriting: Automatic playlist modification for proxied content
- Full VOD Support: Byte-range requests, seeking, multiple positions
- uvloop Integration: 2-4x faster async I/O operations
- Seamless Failover: <100ms transparent URL switching per client
- Immediate Cleanup: Connections close instantly when a client stops
- FFmpeg Integration: Built-in hardware-accelerated video processing
- GPU Acceleration: Automatic detection of NVIDIA, Intel, and AMD GPUs
- VAAPI Support: Intel/AMD hardware encoding (3-8x faster than CPU)
- NVENC Support: NVIDIA hardware encoding (10-20x faster than CPU)
- Auto-Configuration: Zero-config hardware acceleration setup
- Multiple Codecs: H.264, H.265/HEVC, VP8, VP9, AV1 support
- Client Tracking: Individual client sessions and bandwidth monitoring
- Real-time Statistics: Live metrics on streams, clients, and data usage
- Stream Type Detection: Automatic HLS/VOD/Live detection
- Automatic Cleanup: Inactive streams and clients auto-removed
- Event System: Real-time events and webhook notifications
- Health Checks: Built-in health endpoints for monitoring
- Custom Metadata: Attach arbitrary key/value pairs to streams for identification
Use the example below to run the precompiled Docker Hub image.
You can also replace latest with dev or experimental to try another branch.
services:
  m3u-proxy:
    image: sparkison/m3u-proxy:latest
    container_name: m3u-proxy
    ports:
      - "8085:8085"
    # Hardware acceleration (optional)
    devices:
      - /dev/dri:/dev/dri # Intel/AMD GPU support
    # For NVIDIA GPUs, use this instead:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]
    environment:
      # Server Configuration
      - M3U_PROXY_HOST=0.0.0.0
      - M3U_PROXY_PORT=8085
      - LOG_LEVEL=INFO
      # Base path (default: /m3u-proxy for m3u-editor integration)
      # Set to empty string if not using reverse proxy: ROOT_PATH=
      - ROOT_PATH=/m3u-proxy
      # Hardware acceleration (optional)
      - LIBVA_DRIVER_NAME=i965 # For older Intel GPUs
      # - LIBVA_DRIVER_NAME=iHD # For newer Intel GPUs
      # Timeouts (optional)
      - CLIENT_TIMEOUT=300
      - CLEANUP_INTERVAL=60
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8085/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

To run from source instead, make sure python (>=3.10) and pip (>=23) are installed on your system:
git clone https://github.com/sparkison/m3u-proxy.git && cd m3u-proxy
pip install -r requirements.txt
python main.py --debug

Server will start on http://localhost:8085
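Once the server is up, a quick way to confirm it is reachable is the /health endpoint (documented below). A minimal check in Python, assuming the default host and port:

```python
# Minimal health check against a locally running proxy (default host/port assumed).
import requests

resp = requests.get("http://localhost:8085/health", timeout=5)
resp.raise_for_status()
print(resp.status_code, resp.text)  # expect 200 and the health payload
```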
# HLS stream with custom user agent
curl -X POST "http://localhost:8085/streams" \
-H "Content-Type: application/json" \
-d '{"url": "https://your-stream.m3u8", "user_agent": "MyApp/1.0"}'
# Direct IPTV stream with failover
curl -X POST "http://localhost:8085/streams" \
-H "Content-Type: application/json" \
-d '{
"url": "http://server.com/stream.ts",
"failover_urls": ["http://backup.com/stream.ts"],
"user_agent": "VLC/3.0.18"
}'
# Using the CLI client
python m3u_client.py create "https://your-stream.m3u8" --user-agent "MyApp/1.0"
python m3u_client.py create "http://server.com/movie.mkv" --failover "http://backup.com/movie.mkv"POST /streams
Content-Type: application/json
{
"url": "stream_url",
"failover_urls": ["backup_url1", "backup_url2"],
"user_agent": "Custom User Agent String"
}

GET /streams
GET /streams/{stream_id}
DELETE /streams/{stream_id}
POST /streams/{stream_id}/failover
GET /stats
GET /health
GET /clients
GET /clients/{client_id}

The included CLI client (m3u_client.py) provides easy access to all proxy features:
# Create a stream with failover
python m3u_client.py create "https://primary.m3u8" --failover "https://backup1.m3u8" "https://backup2.m3u8"
# List all active streams
python m3u_client.py list
# View comprehensive statistics
python m3u_client.py stats
# Monitor in real-time (updates every 5 seconds)
python m3u_client.py monitor
# Check health status
python m3u_client.py health
# Get detailed stream information
python m3u_client.py info <stream_id>
# Trigger manual failover
python m3u_client.py failover <stream_id>
# Delete a stream
python m3u_client.py delete <stream_id>

# Server configuration
M3U_PROXY_HOST=0.0.0.0
M3U_PROXY_PORT=8085
# Base path for API routes (useful for reverse proxy integration)
# Default: /m3u-proxy (optimized for m3u-editor integration)
# Set to empty string for root path
ROOT_PATH=/m3u-proxy
# API Authentication (optional)
# Set API_TOKEN to require authentication for management endpoints
# Leave unset or empty to disable authentication
API_TOKEN=your_secret_token_here
# Client timeout (seconds)
CLIENT_TIMEOUT=300
# Cleanup interval (seconds)
CLEANUP_INTERVAL=60

When API_TOKEN is set in the environment, all management endpoints require authentication via the X-API-Token header. This includes:

- / - Root endpoint
- /streams - Create, list, get, delete streams
- /stats/* - All statistics endpoints
- /clients - Client management
- /health - Health check endpoint
- /webhooks - Webhook management
- /streams/{stream_id}/failover - Failover control
- /hls/{stream_id}/clients/{client_id} - Client disconnect
Stream endpoints (the actual streaming URLs) do NOT require authentication since they are accessed by media players that identify streams via stream_id.
Example with authentication:
# Set your API token
export API_TOKEN="my_secret_token"
# Method 1: Using header (recommended for API calls)
curl -X POST "http://localhost:8085/streams" \
-H "Content-Type: application/json" \
-H "X-API-Token: my_secret_token" \
-d '{"url": "https://your-stream.m3u8"}'
# Method 2: Using query parameter (useful for browser access)
curl -X POST "http://localhost:8085/streams?api_token=my_secret_token" \
-H "Content-Type: application/json" \
-d '{"url": "https://your-stream.m3u8"}'
# Browser access example
# Visit: http://localhost:8085/stats?api_token=my_secret_token
# Without token - will get 401 error
curl -X POST "http://localhost:8085/streams" \
-H "Content-Type: application/json" \
-d '{"url": "https://your-stream.m3u8"}'To disable authentication, simply leave API_TOKEN unset or set it to an empty string.
m3u-proxy includes comprehensive hardware acceleration support for video transcoding operations using FFmpeg with GPU acceleration.
- NVIDIA GPUs: CUDA, NVENC, NVDEC (10-20x faster than CPU)
- Intel GPUs: VAAPI, QuickSync (QSV) (3-8x faster than CPU)
- AMD GPUs: VAAPI acceleration (3-5x faster than CPU)
- CPU Fallback: Software encoding when no GPU available
The container automatically detects available hardware on startup:
Running hardware acceleration check...
Device /dev/dri/renderD128 is accessible.
Intel GPU: Intel GPU (Device ID: 0x041e)
FFmpeg VAAPI acceleration: AVAILABLE
For older Intel GPUs, try: LIBVA_DRIVER_NAME=i965
services:
  m3u-proxy:
    image: sparkison/m3u-proxy:latest
    devices:
      - /dev/dri:/dev/dri
    environment:
      - LIBVA_DRIVER_NAME=i965 # For older Intel GPUs
      # - LIBVA_DRIVER_NAME=iHD # For newer Intel GPUs

services:
  m3u-proxy:
    image: sparkison/m3u-proxy:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

# Intel/AMD GPU
docker run -d --name m3u-proxy \
--device /dev/dri:/dev/dri \
-e LIBVA_DRIVER_NAME=i965 \
-p 8085:8085 \
sparkison/m3u-proxy:latest
# NVIDIA GPU
docker run -d --name m3u-proxy \
--gpus all \
-p 8085:8085 \
sparkison/m3u-proxy:latest

Hardware acceleration is also available through Python APIs:
import subprocess

from hwaccel import get_ffmpeg_hwaccel_args, is_hwaccel_available

# Check if hardware acceleration is available
if is_hwaccel_available():
    # Get optimized FFmpeg arguments
    hwaccel_args = get_ffmpeg_hwaccel_args("h264")

    # Example: hardware-accelerated transcoding
    cmd = ["ffmpeg"] + hwaccel_args + [
        "-i", "input_stream.m3u8",
        "-c:v", "h264_vaapi",  # Hardware encoder
        "-preset", "fast",
        "-b:v", "2M",
        "output_stream.mp4",
    ]
    subprocess.run(cmd, check=True)

| Hardware | Encoding Speed | Concurrent Streams | CPU Usage Reduction |
|---|---|---|---|
| NVIDIA GPU | 10-20x faster | 4-8 streams | 95%+ |
| Intel GPU | 3-8x faster | 2-4 streams | 90%+ |
| AMD GPU | 3-5x faster | 2-3 streams | 85%+ |
| CPU Only | Baseline | 1 stream | N/A |
- H.264/AVC: High compatibility, universal support
- H.265/HEVC: Better compression, 4K/8K content
- VP8/VP9: WebM containers, streaming optimized
- AV1: Next-gen codec, best compression
- MJPEG: Low latency, surveillance applications
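As an illustration of hardware encoding with one of these codecs, the sketch below builds a standalone FFmpeg VAAPI command for H.265/HEVC. It does not use the project's hwaccel helper; the device path, input, and bitrate are placeholders to adjust for your system.

```python
# Sketch: standalone FFmpeg VAAPI command for H.265/HEVC encoding.
# Assumes an Intel/AMD GPU exposed at /dev/dri/renderD128.
import subprocess

cmd = [
    "ffmpeg",
    "-vaapi_device", "/dev/dri/renderD128",  # VAAPI render node
    "-i", "input_stream.m3u8",               # source (file or stream URL)
    "-vf", "format=nv12,hwupload",           # upload frames to the GPU
    "-c:v", "hevc_vaapi",                    # hardware HEVC encoder
    "-b:v", "4M",
    "output_stream.mp4",
]
subprocess.run(cmd, check=True)
```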
For detailed hardware acceleration setup and troubleshooting, see Hardware Acceleration Guide.
# Main server with all features
python main.py
# With custom options
python main.py --port 8002 --debug --reload

Direct Per-Client Proxy - Each client gets an independent provider connection. No shared buffers, no buffering at all. A truly ephemeral architecture: provider connections exist only while actively serving a client.
- Stream Manager (src/stream_manager.py)
  - Per-Client Direct Proxy: Independent provider connection per client
  - Stream Type Detection: Automatic HLS vs continuous stream identification
  - Seamless Failover: <100ms transparent URL switching with connection handoff
  - Connection Pooling: httpx client with optimized keepalive (20 connections)
  - Automatic Cleanup: Instant connection closure on client disconnect

- Stream Handling Approaches

  Continuous Streams (.ts, .mp4, .mkv direct files) - see the sketch after this list:
  - Each client gets a separate provider connection
  - Direct byte-for-byte streaming (StreamingResponse)
  - Zero buffering, zero shared state
  - Failover per-client without affecting others
  - Connection closes immediately when client stops

  HLS Streams (playlists and segments):
  - Playlist parsing and URL rewriting
  - Segment proxying with connection pooling
  - Efficient small-file handling
  - Real-time playlist modification

- FastAPI Application (src/api.py)
  - RESTful endpoints for all operations
  - Client tracking and bandwidth monitoring
  - Statistics aggregation and reporting
  - Event emission for external monitoring
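To make the per-client model concrete, here is a minimal sketch of the approach described above: a FastAPI route opens its own upstream httpx connection per request and relays bytes with StreamingResponse. This is an illustration under simplified assumptions (hypothetical route path and in-memory registry), not the project's actual implementation in src/stream_manager.py.

```python
# Minimal sketch of per-client direct proxying (illustration only).
# Each incoming request opens its own upstream connection, closed when the client stops reading.
import httpx
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

# Shared connection pool with keepalive, as described above.
client = httpx.AsyncClient(limits=httpx.Limits(max_keepalive_connections=20), timeout=None)

# Hypothetical in-memory registry mapping stream_id -> provider URL.
STREAMS: dict[str, str] = {"abc123": "http://provider.example/stream.ts"}


@app.get("/stream/{stream_id}")
async def proxy_stream(stream_id: str):
    url = STREAMS[stream_id]

    async def relay():
        # Open an upstream connection for THIS client only and relay bytes as they arrive.
        async with client.stream("GET", url) as upstream:
            async for chunk in upstream.aiter_bytes():
                yield chunk
        # Leaving the generator (client disconnect or stream end) closes the upstream connection.

    return StreamingResponse(relay(), media_type="video/mp2t")
```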
# Client tracking
ClientInfo(
client_id: str,
stream_id: Optional[str],
user_agent: str,
ip_address: str,
first_seen: datetime,
last_seen: datetime,
bytes_served: int,
segments_served: int
)
# Stream information
StreamInfo(
stream_id: str,
original_url: str,
current_url: str,
failover_urls: List[str],
client_count: int,
total_bytes_served: int,
total_segments_served: int,
error_count: int,
created_at: datetime,
last_access: datetime
)

- Stream Won't Load
  - Check original URL accessibility
  - Verify CORS headers if accessing from a browser
  - Check server logs for detailed errors

- High Memory Usage
  - Reduce CLIENT_TIMEOUT for faster cleanup
  - Monitor client connections and clean up inactive ones
  - Consider horizontal scaling for high loads

- Failover Not Working
  - Verify failover URLs are accessible
  - Check failover trigger conditions in logs
  - Test manual failover via API
# Enable detailed logging
export LOG_LEVEL=DEBUG
python main.py --debug

<video controls>
  <source src="http://localhost:8085/hls/{stream_id}/playlist.m3u8" type="application/x-mpegURL">
</video>

ffplay "http://localhost:8085/hls/{stream_id}/playlist.m3u8"

vlc "http://localhost:8085/hls/{stream_id}/playlist.m3u8"

The proxy includes a comprehensive event system for monitoring and integration:
# Add webhook to receive events
curl -X POST "http://localhost:8085/webhooks" \
-H "Content-Type: application/json" \
-d '{
"url": "https://your-server.com/webhook",
"events": ["stream_started", "client_connected", "failover_triggered"],
"timeout": 10,
"retry_attempts": 3
}'

- stream_started - New stream created
- stream_stopped - Stream ended
- client_connected - Client joined stream
- client_disconnected - Client left stream
- failover_triggered - Switched to backup URL
{
"event_id": "uuid",
"event_type": "stream_started",
"stream_id": "abc123",
"timestamp": "2025-09-25T22:38:34.392830",
"data": {
"primary_url": "http://example.com/stream.m3u8",
"user_agent": "MyApp/1.0"
}
}

# Try the event system demo
python demo_events.py

Full Documentation: See EVENT_SYSTEM.md for the complete webhook integration guide.
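For reference, a minimal receiver for the payload shown above could look like the following sketch. The field names mirror the example event; the route path is whatever URL you registered via /webhooks.

```python
# Sketch: minimal webhook receiver for proxy events (field names follow the example payload).
from fastapi import FastAPI, Request

app = FastAPI()


@app.post("/webhook")
async def receive_event(request: Request):
    event = await request.json()
    # React to events of interest, e.g. failovers
    if event.get("event_type") == "failover_triggered":
        print(f"Stream {event.get('stream_id')} failed over at {event.get('timestamp')}")
    return {"ok": True}
```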
- Architecture Overview - System design and components
- Event System - Webhook notifications and events
- Testing Guide - Test suite and development
- Authentication - API token authentication
├── src/
│   ├── stream_manager.py    # v2.0 Core: Per-client direct proxy
│   ├── api.py               # FastAPI server application
│   ├── models.py            # Data models and schemas
│   ├── config.py            # Configuration management
│   └── events.py            # Event system with webhooks
├── docs/
│   ├── ARCHITECTURE.md      # Architecture design overview
│   ├── EVENT_SYSTEM.md      # Webhook integration guide
│   └── TESTING.md           # Testing documentation
├── tests/                   # Test suite
│   ├── integration/         # Integration tests
│   └── test_*.py            # Unit tests
├── tools/                   # Utility scripts and tools
│   ├── performance_test.py  # Performance testing
│   ├── m3u_client.py        # CLI client
│   ├── demo_events.py       # Event system demo
│   └── run_tests.py         # Enhanced test runner
├── main.py                  # Server entry point (uvloop support)
└── README.md                # This file
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
MIT License - see LICENSE file for details.
Built with FastAPI and inspired by MediaFlow Proxy. Designed for production IPTV streaming with emphasis on efficiency, correctness, and zero transcoding.
For issues, feature requests, or questions, please open a GitHub issue.