An experiment in streaming WebRTC media sources such as capture devices, screen capture, MKV files, and RTMP/RTSP sources using a simple signaling mechanism (see api). It is also compatible with the WHEP interface.
*** Notice *** The live demo is stopped until I migrate to a European web host.
- Packages are available from https://github.com/mpromonet/webrtc-streamer/releases/latest
- Container images are available from https://hub.docker.com/r/mpromonet/webrtc-streamer
Usage:

```
./webrtc-streamer [OPTION...] [urls...]

General options:
  -h, --help               Print help
  -V, --version            Print version
  -v, --verbose            Verbosity level (use multiple times for more verbosity)
  -C, --config arg         Load urls from JSON config file
  -n, --name arg           Register a stream with name
  -u, --video arg          Video URL for the named stream
  -U, --audio arg          Audio URL for the named stream

HTTP options:
  -H, --http arg           HTTP server binding (default 0.0.0.0:8000)
  -w, --webroot arg        Path to get static files
  -c, --cert arg           Path to private key and certificate for HTTPS
  -N, --threads arg        Number of threads for HTTP server
  -A, --passwd arg         Password file for HTTP server access
  -D, --domain arg         Authentication domain for HTTP server access
                           (default: mydomain.com)
  -X, --disable-xframe     Disable X-Frame-Options header
  -B, --base-path arg      Base path for HTTP server

WebRTC options:
  -m, --maxpc arg          Maximum number of peer connections
  -I, --ice-transport arg  Set ICE transport type
  -T, --turn-server [=arg(=turn:turn@0.0.0.0:3478)]
                           Start embedded TURN server
  -t, --turn arg           Use an external TURN relay server
  -S, --stun-server [=arg(=0.0.0.0:3478)]
                           Start embedded STUN server bound to address
  -s, --stun [=arg(=0.0.0.0:3478)]
                           Use an external STUN server
  -R, --udp-range arg      Set the WebRTC UDP port range
  -W, --trials arg         Set the WebRTC trials fields
  -a, --audio-layer [=arg(=)]
                           Specify audio capture layer to use (omit value for dummy audio)
  -q, --publish-filter arg Specify publish filter
  -o, --null-codec         Use null codec (keep frames encoded)
  -b, --plan-b             Use SDP plan-B (default is unifiedPlan)
```

Arguments of '-H' are forwarded to the civetweb option `listening_ports`, allowing use of the civetweb syntax like `-H8000,9000` or `-H8080r,8443s`.
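For example, a sketch combining the civetweb listener syntax with HTTPS (the certificate path is a placeholder):

```shell
# civetweb port suffixes: 'r' redirects HTTP requests to the SSL port,
# 's' marks the HTTPS (SSL) listener.
# cert.pem is a hypothetical file holding the private key and certificate (-c).
./webrtc-streamer -H8080r,8443s -c cert.pem
```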
Using -o allows storing the compressed frame data coming from the backend stream using webrtc::VideoFrameBuffer::Type::kNative. This hacks the webrtc::VideoFrameBuffer structure, storing the data in an override of the i420 buffer. This allows forwarding H264 frames from a V4L2 device or an RTSP stream to the WebRTC stream. It uses less CPU, but offers fewer features (resize, codec selection, and bandwidth control are disabled).
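For instance, a sketch forwarding H264 frames from a V4L2 device without re-encoding (the device path is a placeholder; check your system for the actual device):

```shell
# Keep the H264 frames from /dev/video0 encoded end-to-end (null codec)
./webrtc-streamer -o v4l2:///dev/video0
```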
Options for the WebRTC stream name:
- an alias defined using the `-n` argument; the corresponding `-u` argument will then be used to create the capturer
- an "rtsp://" URL that will be opened using an RTSP capturer based on live555
- a "file://" URL that will be opened using an MKV capturer based on live555
- an "rtmp://" URL that will be opened using an RTMP capturer based on librtmp
- a "screen://" URL that will be opened by webrtc::DesktopCapturer::CreateScreenCapturer
- a "window://" URL that will be opened by webrtc::DesktopCapturer::CreateWindowCapturer
- a "v4l2://" URL that will capture H264 frames and store them using the webrtc::VideoFrameBuffer::Type::kNative type (not supported on Windows)
- a "videocap://" URL giving a video capture device name
- an "audiocap://" URL giving an audio capture device name
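Named streams can also be declared in the JSON config file loaded with `-C`. A minimal sketch, assuming the file maps stream names to their video/audio URLs as the `-n`/`-u`/`-U` options do (check the project's sample config.json for the exact schema):

```json
{
  "urls": {
    "Bunny": {
      "video": "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov"
    }
  }
}
```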
```
./webrtc-streamer -C config.json
```

We can access the WebRTC stream using webrtcstreamer.html. For instance:
- webrtcstreamer.html?rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov
- webrtcstreamer.html?Bunny
An example displaying a grid of WebRTC streams is available using the option `layout=<lines>x<columns>`.

You can start the application using the docker image:

```
docker run -p 8000:8000 -it mpromonet/webrtc-streamer
```

You can expose V4L2 devices from your host using:

```
docker run --device=/dev/video0 -p 8000:8000 -it mpromonet/webrtc-streamer
```

The container entry point is the webrtc-streamer application, so you can:

- view all commands:

```
docker run -p 8000:8000 -it mpromonet/webrtc-streamer --help
```

- run the container registering an RTSP URL:

```
docker run -p 8000:8000 -it mpromonet/webrtc-streamer -n raspicam -u rtsp://pi2.local:8554/unicast
```

- run the container giving a config.json file:

```
docker run -p 8000:8000 -v $PWD/config.json:/usr/local/share/webrtc-streamer/config.json mpromonet/webrtc-streamer
```

- run the container using the host network:

```
docker run --net host mpromonet/webrtc-streamer
```
It is possible to start an embedded STUN and/or TURN server and publish its URL:

```
./webrtc-streamer --stun-server=0.0.0.0:3478 --stun=$(curl -s ifconfig.me):3478
./webrtc-streamer --stun=- --turn-server=0.0.0.0:3478 -tturn:turn@$(curl -s ifconfig.me):3478
./webrtc-streamer --stun-server=0.0.0.0:3478 --stun=$(curl -s ifconfig.me):3478 --turn-server=0.0.0.0:3479 --turn=turn:turn@$(curl -s ifconfig.me):3479
```

The command `curl -s ifconfig.me` gets the public IP; it could also be given as a static parameter.
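For example, a sketch using a static public IP instead of querying ifconfig.me (the address below is a placeholder):

```shell
# 203.0.113.10 is a placeholder; replace with your actual public IP
PUBLIC_IP=203.0.113.10
./webrtc-streamer --stun-server=0.0.0.0:3478 --stun=$PUBLIC_IP:3478
```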
In order to configure the NAT rules using the UPnP feature of the router, it is possible to use upnpc like this:

```
upnpc -r 8000 tcp 3478 tcp 3478 udp
```

adapting to the HTTP port, STUN port, and TURN port.
Instead of using the internal HTTP server, it is easy to display a WebRTC stream in an HTML page served by another HTTP server. The URL of the webrtc-streamer to use should be given when creating the WebRtcStreamer instance:

```
var webRtcServer = new WebRtcStreamer(<video tag>, <webrtc-streamer url>);
```

A short sample HTML page using webrtc-streamer running locally on port 8000:
```html
<html>
  <head>
    <script src="libs/adapter.min.js" ></script>
    <script src="webrtcstreamer.js" ></script>
    <script>
      var webRtcServer = null;
      window.onload = function() {
        webRtcServer = new WebRtcStreamer("video", location.protocol + "//" + location.hostname + ":8000");
        webRtcServer.connect("rtsp://196.21.92.82/axis-media/media.amp", "", "rtptransport=tcp&timeout=60");
      }
      window.onbeforeunload = function() { webRtcServer.disconnect(); }
    </script>
  </head>
  <body>
    <video id="video" muted playsinline></video>
  </body>
</html>
```

WebRTC-streamer provides its own Web Component as an alternative way to display a WebRTC stream in an HTML page. For example:
```html
<html>
  <head>
    <script type="module" src="webrtc-streamer-element.js"></script>
  </head>
  <body>
    <webrtc-streamer url="rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov"></webrtc-streamer>
  </body>
</html>
```

Using the web component with a stream selector:
Using the web component over a Google map:
It allows streaming using the WHEP draft standard, so a WHEP-capable WebRTC player can display the stream from webrtc-streamer. A minimal example:
```html
<html>
  <head>
    <script src="https://unpkg.com/@eyevinn/whep-video-component@latest/dist/whep-video.component.js"></script>
  </head>
  <body>
    <whep-video id="video" muted autoplay></whep-video>
    <script>
      video.setAttribute('src', `${location.origin}/api/whep?url=Asahi&options=rtptransport%3dtcp%26timeout%3d60`);
    </script>
  </body>
</html>
```

A simple way to publish a WebRTC stream to a Janus Gateway Video Room is to use the JanusVideoRoom interface:
```
var janus = new JanusVideoRoom(<janus url>, <webrtc-streamer url>)
```

A short sample publishing WebRTC streams to a Janus Video Room could be:
```html
<html>
  <head>
    <script src="janusvideoroom.js" ></script>
    <script>
      var janus = new JanusVideoRoom("https://janus.conf.meetecho.com/janus", null);
      janus.join(1234, "rtsp://pi2.local:8554/unicast", "pi2");
      janus.join(1234, "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov", "media");
    </script>
  </head>
</html>
```

This way the communication between the Janus API and the WebRTC-streamer API is implemented in JavaScript running in the browser.
The same logic could be implemented in Node.js using the same JS API:

```js
global.request = require("then-request");
var JanusVideoRoom = require("./html/janusvideoroom.js");
var janus = new JanusVideoRoom(
    "http://192.168.0.15:8088/janus",
    "http://192.168.0.15:8000",
);
janus.join(1234, "videocap://0", "video");
```

A simple way to publish a WebRTC stream to a Jitsi Video Room is to use the XMPPVideoRoom interface:
```
var xmpp = new XMPPVideoRoom(<xmpp server url>, <webrtc-streamer url>)
```

A short sample publishing WebRTC streams to a Jitsi Video Room could be:
```html
<html>
  <head>
    <script src="libs/strophe.min.js" ></script>
    <script src="libs/strophe.muc.min.js" ></script>
    <script src="libs/strophe.disco.min.js" ></script>
    <script src="libs/strophe.jingle.sdp.js"></script>
    <script src="libs/jquery-3.5.1.min.js"></script>
    <script src="xmppvideoroom.js" ></script>
    <script>
      var xmpp = new XMPPVideoRoom("meet.jit.si", null);
      xmpp.join("testroom", "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov", "Bunny");
    </script>
  </head>
</html>
```

This package depends on the following packages:
- the WebRTC Native Code Package (see license https://webrtc.github.io/webrtc-org/license)
- civetweb for the HTTP server (see license https://github.com/civetweb/civetweb/blob/master/LICENSE.md)
- live555 for the RTSP/MKV sources (see license http://www.live555.com/liveMedia/faq.html#copyright-and-license)
The following steps are required to build the project and will install the dependencies above:

- Install the Chromium depot tools:

```
pushd ..
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=$PATH:`realpath depot_tools`
popd
```

- Download WebRTC:

```
mkdir ../webrtc
pushd ../webrtc
fetch webrtc
popd
```

- Build WebRTC Streamer:

```
cmake . && make
```

It is possible to specify the cmake parameters WEBRTCROOT and WEBRTCDESKTOPCAPTURE:

- WEBRTCROOT: $WEBRTCROOT/src should contain the WebRTC source (default is $(pwd)/../webrtc)
- WEBRTCDESKTOPCAPTURE: enables desktop capture if available (default is ON)
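For instance, a sketch overriding both parameters (the WebRTC checkout path is a placeholder):

```shell
# /path/to/webrtc is a placeholder; it must contain a src/ subdirectory
cmake -DWEBRTCROOT=/path/to/webrtc -DWEBRTCDESKTOPCAPTURE=OFF . && make
```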
There are pipelines on CircleCI, Cirrus CI, and GitHub CI for the following architectures:
- x86_64 on Ubuntu
- armv7 cross-compiled (this build runs on Raspberry Pi 2 and NanoPi NEO)
- armv6+vfp cross-compiled (this build runs on Raspberry Pi B and should run on a Raspberry Pi Zero)
- arm64 cross-compiled
- Windows x64 built with clang
- macOS





