I wanted a system that uses multicast exclusively to obtain on-demand streams.

Endpoints signal their interest in a stream via MLDv2 or IGMP reports, and the source can react to that to start or stop streaming. Intermediate routers aggregate this and feed it back upstream, since group membership signalling is link-local.

The usual system for transport of content is RTP over UDP, operating on port 5004. It has a payload type field with well-known meanings for only a few values; most streams use the dynamic range from 96 upwards, which requires the receiver either to see an SDP message that defines it, or to pre-agree it via an SDP exchange over SIP.

I discovered that SAP on UDP 9875 can carry the SDP that this needs. For now I send the SAP to the media's RTP multicast address instead of the well-known SAP address FF0X:0:0:0:0:0:2:7FFE, although with source-specific multicast it would be possible to use the well-known address.
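As a sketch of what such a SAP announcement would carry, here is a minimal SDP body. The group address, origin host, session name and the choice of dynamic payload type 96 mapped to Opus are illustrative assumptions, not values from a real deployment:

```python
# Build a minimal SDP body of the kind a SAP announcement would carry.
# All concrete values (host, codec, names) are made-up examples.

def make_sdp(group, port=5004, payload_type=96,
             codec="opus/48000/2", name="Example radio"):
    return "\r\n".join([
        "v=0",                                     # protocol version
        "o=- 0 0 IN IP6 2001:db8::1",              # origin (placeholder host)
        "s=" + name,                               # session name
        "c=IN IP6 " + group,                       # connection data: the multicast group
        "t=0 0",                                   # unbounded session
        "m=audio %d RTP/AVP %d" % (port, payload_type),  # media line on the RTP port
        "a=rtpmap:%d %s" % (payload_type, codec),  # defines the dynamic payload type
    ]) + "\r\n"

sdp = make_sdp("ff3e:40:2001:db8:d0be:f00d:8000:0")
```

The a=rtpmap line is what spares the receiver from having to pre-agree the payload type over SIP.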

Reasons to send SAP to the RTP group

SAP has some dedicated group assignments, including FF0X::2:7FFE, though any system design has to consider scale: subscribing to these implies receiving the metadata of every live stream in existence at once, although doing so may still work with source-specific multicast.

Instead, have a block of multicast groups, one per service; the sender detects when the number of subscribers of a group rises above or falls back to zero.

The sequence of events is like this:

  1. The first client indicates it wants a multicast group such as ff3e:40:2001:db8:d0be:f00d:8000:0000 via MLDv2 (or IGMPv3 for IPv4).
  2. The MLDv2 request propagates back over the network to the headend.
  3. The headend notices the request and looks up the group in the configured stream directories; an HTTP connection to the Internet radio station is initiated, or a DVB-S2 multiplex is tuned in.
  4. The headend eventually gets a stream, works out which codec it is using, and then sends RTP on UDP port 5004 and SAP on UDP port 9875 to that same multicast group.
  5. Other clients join the group, and the headend will notice that too, provided they are on the same IP segment.
  6. Clients reply with RTCP on UDP 5005 to the group; these reports are seen and aggregated by the headend.
  7. When the last client leaves the group, the headend shuts the stream down.
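The zero-crossing logic in the sequence above can be sketched as a per-group reference count. The hook names and the way events arrive here are hypothetical, not the real headend code:

```python
# Sketch of the headend's subscriber tracking: start a stream when a
# group's membership rises from zero, stop it when it falls back to zero.
# start_stream/stop_stream are hypothetical hooks for steps 3-4 and 7.

class GroupTracker:
    def __init__(self, start_stream, stop_stream):
        self.count = {}               # group -> current subscriber count
        self.start_stream = start_stream
        self.stop_stream = stop_stream

    def join(self, group):            # an MLDv2/IGMPv3 join was seen
        n = self.count.get(group, 0)
        if n == 0:
            self.start_stream(group)  # first listener: fetch and send RTP+SAP
        self.count[group] = n + 1

    def leave(self, group):           # a leave/done was seen
        n = self.count.get(group, 0) - 1
        if n <= 0:
            self.count.pop(group, None)
            self.stop_stream(group)   # last listener gone: shut down
        else:
            self.count[group] = n

events = []
t = GroupTracker(lambda g: events.append(("start", g)),
                 lambda g: events.append(("stop", g)))
t.join("ff3e::8000:0"); t.join("ff3e::8000:0")
t.leave("ff3e::8000:0"); t.leave("ff3e::8000:0")
```

Only the first join and the last leave trigger the hooks; joins and leaves in between are absorbed by the count.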

I have used FFmpeg for the headend and FFplay for the client, with experimental alterations as needed. Here are the issues encountered, addressed or not:

Metadata Mapping

  ICY StreamTitle                                                     → i=
  ICY StreamUrl                                                       → u=
  HTTP icy-name                                                       → s=
  Offset into current song, possibly determined by new ICY metadata   → t=
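A sketch of applying that mapping; the function name and the use of a plain dict are my own illustration, not the actual FFmpeg patch:

```python
# Translate ICY/HTTP metadata into the SDP fields named in the mapping
# above. Illustrative sketch only; real ICY parsing is more involved.

def icy_to_sdp_fields(icy):
    fields = {}
    if "icy-name" in icy:        # HTTP icy-name header -> s= (session name)
        fields["s"] = icy["icy-name"]
    if "StreamTitle" in icy:     # ICY StreamTitle -> i= (session information)
        fields["i"] = icy["StreamTitle"]
    if "StreamUrl" in icy:       # ICY StreamUrl -> u= (URI)
        fields["u"] = icy["StreamUrl"]
    return fields

fields = icy_to_sdp_fields({"icy-name": "Example FM",
                            "StreamTitle": "Artist - Song"})
```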

Multicast routing table issues

Whilst the preferred method for issuing multicast addresses is to supply the intended ingress/egress interface with % notation, if the app does not support that it may look in the regular routing tables. Thus ip -6 route show table local | grep ^ff00 gives:

  ff00::/8 dev first table local metric 256 pref medium
  ff00::/8 dev second table local metric 256 pref medium
  ff00::/8 dev third table local metric 256 pref medium

Interface first is picked for multicast, which might not be what is wanted, and the order of this list changes across restarts.

Open access stream directories

For internet streaming, there are not many directories that I know of that offer programmatic access without registration.

  1. Offered for programmatic usage, so now the first choice.
  2. They have the for the directory.
  3. TuneIn, maybe; details of the API were available in the past, though programmatic access could not be found on their current site.


Initially I used mumudvb and later switched to dvblast, as it works faster, especially on this elderly server, and allows switching between services without interrupting the stream. It does not provide SAP (9875) or RTCP (5005) like mumudvb does, but these could be added back via another program.

DVBlast also supports switching services on a carrier, so there is no more need for tc qdisc tricks to do that.

With DVB, each transport stream has a selection of 16-bit service numbers, leaving the other 15 bits to select DVB and select a carrier. These rarely change, so we can have a static mapping in the server.

Therefore, I have a primary file of dvblast commands to select each transmission from Astra 28.2E.

Nearly all the transmissions are listed in the "NIT" that can be obtained, except two:

dvblast -f 11426500 -v 18 -5 DVBS -s 27500000 -F 23 (the DAB transmissions)

dvblast -f 12441000 -v 13 -5 DVBS2 -s 29500000 -F 89 -m QPSK (the Astra UHD test)

I had to recompile dvblast to make it accept "-F 89".

It is very desirable to have a static mapping for the multicast number. I noticed that the carrier frequencies were spaced in 250 kHz units, and it is possible to transmit on the same carrier frequency with both vertical and horizontal polarisation.

Designing service numbers

There is effectively a 31-bit service number in an IPv6 multicast group ID, so map the services onto that. The top bit is meant to be set.

To select internet streams, map the low bits to enjoyable music services in the directory, and the high bits to select between directories.
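A sketch of that mapping under stated assumptions: the bit split (7 bits of directory, 24 bits of service, top bit always set) is my own illustration, and the prefix is the example one used earlier:

```python
# Pack a directory number and a service index into the 31-bit service
# number and append it to the IPv6 multicast prefix as the group ID.
# The 7/24 bit split between directory and service is an assumption.

def service_group(prefix, directory, service):
    sid = 0x80000000 | (directory << 24) | service   # top bit must be set
    return "%s:%04x:%04x" % (prefix, sid >> 16, sid & 0xFFFF)

g = service_group("ff3e:40:2001:db8:d0be:f00d", 0, 0)
```

Directory 0, service 0 yields the same group used in step 1 of the sequence above.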

The next problem is that HTTP can return media, redirects, various playlists, and even web pages; no surprise, as HTTP is meant for hypertext transport.

Until a URL is invoked it is not known which it will be. It would likely be unfair on FFmpeg to handle every possibility, so it is patched to take a file-descriptor handoff from a full-featured HTTP response handler, although this is further complicated by TLS.

Playlists are really a kind of multi target redirect.
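A sketch of the dispatch such a response handler has to make, keyed on status code and Content-Type; the type lists are examples, not exhaustive, and the real handler would then recurse or hand over a file descriptor:

```python
# Decide what an HTTP response to a stream URL actually is, so that
# redirects and playlists can be resolved before FFmpeg ever sees a fd.
# The media/playlist type lists here are illustrative, not complete.

PLAYLIST_TYPES = {"audio/x-mpegurl", "application/vnd.apple.mpegurl",
                  "audio/x-scpls"}
MEDIA_TYPES = {"audio/mpeg", "audio/aac", "application/ogg"}

def classify(status, content_type):
    if 300 <= status < 400:
        return "redirect"        # follow the Location header
    ct = content_type.split(";")[0].strip().lower()
    if ct in PLAYLIST_TYPES:
        return "playlist"        # multi-target redirect: pick an entry, recurse
    if ct in MEDIA_TYPES:
        return "media"           # hand the connection's fd to FFmpeg
    return "other"               # e.g. an HTML page; give up
```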

UPnP bridge between networks

smcroute was chosen as multicast forwarding then does not depend on correct use of IGMP (IPv4) or MLD (IPv6) to function; I generally do on-demand multicast.

apt-get install smcroute sipcalc

To assure us that both ethernets get equal multicast preference even in the presence of any kernel faults, multicast addresses originated remotely are carved out of the main routing table, though for now multicast destinations we originate locally have routes installed there until kernel forwarding for that is fixed.

This lets the Sonos controller on Android mobile phones and tablets find the ZonePlayers.

I use smcroute to flood UPnP between the IEEE 802.3 int0 and IEEE 802.11 ext0 networks. These are bridges, to allow ebtables to function; each has one interface, so they do not need STP.

So a plain ping to a multicast group produces connect: Network is unreachable

However, ping -I int0 -r -b produces responses, or nothing if no device answers the request.

Some UPnP systems like Sonos also use local broadcast, so we convert it to multicast using Linux DNAT; it can then be forwarded to the other networks on our site.

iptables -t nat -A PREROUTING -i int0 -d -p udp -m multiport --dports 1900,6969 -j DNAT --to-destination
iptables -t nat -A PREROUTING -i ext0 -d -p udp -m multiport --dports 1900,6969 -j DNAT --to-destination

It may also be necessary to disable IGMP-controlled multicast filtering in the bridges used, as the implementation may be broken and prevent multicast functioning properly.

echo 0 > /sys/devices/virtual/net/int0/bridge/multicast_snooping
echo 0 > /sys/devices/virtual/net/ext0/bridge/multicast_snooping

If the system is a border firewall, we also need ip6tables rules to let the traffic smcroute forwards pass. Here are examples; notice that the input and output interface is the same.

ip6tables -A FORWARD -s 2001:db8:1337:2::/64 -d ff00::/8 -i int0 -o int0 -j ACCEPT
ip6tables -A FORWARD -s 2001:db8:1337:1::/64 -d ff00::/8 -i ext0 -o ext0 -j ACCEPT

For /etc/smcroute/

We also add some static groups in casts()

# This script is executed at startup by /etc/init.d/smcroute
# Add your calls to smcroute to setup your multicast routes here

range () {
        RANGE=`ip -4 -o addr show dev $1 primary | tr -s " " | cut -d " " -f4`
        echo `sipcalc -s 32 ${RANGE} | grep ^Network | cut -d"-" -f 2`
}

address () {
        echo `ip -6 -o addr show dev $1 scope global | tr -s " " | cut -d" " -f4 | cut -d"/" -f1`
}

casts () {
        for N in `seq $((0x80000000)) $((0x8000000F))` `seq $((0x90000000)) $((0x9000000F))` $((0xd0bef00d))
        do
                N1=`printf %08x $N | cut -c1-4`
                N2=`printf %08x $N | cut -c5-8`
                printf "%s " `ip -6 -o addr show dev $1 scope global | tr -s " " | cut -d" " -f4 | cut -d"/" -f1 | cut -d":" -f1-4 \
                | sed -s s/^/ff3e:40:/g | sed -s s/$/:$N1:$N2/g`
        done
}

# if you get MRT_ADD_MFC errors, reinstall a fixed smcroute, or even recompile it

for D in
do
	# probably not needed unless using a multicast aware switch: smcroute -j int0 $D
	for N in `range int0`
	do
		smcroute -a int0 $N $D ext0
	done
	for N in `range ext0`
	do
		smcroute -a ext0 $N $D int0
	done
done

# ff0e:0:0:0:0:0:2:7ffe = SAP
# ff02::16 = MLD
for DST in ff0e:0:0:0:0:0:2:7ffe `casts int0`
do
        for N in `address int0`
        do
                smcroute -a int0 $N $DST ext0
        done
        for N in `address ext0`
        do
                smcroute -a ext0 $N $DST int0
        done
done

If smcroute does not install routes, or if you get MRT_ADD_MFC errors, reinstall a fixed smcroute, or even recompile it; this is due to changes in the mfcctl structure in the C source.

In normal operation, expect tens of lines from cat /proc/net/ip_mr_cache && ip mroute show; these show the actual multicast forwarding entries.

What about zeroconf multicast dns?

Although this is not supposed to be routed between networks, when dealing with semi-proprietary hardware or software from tech empires that like to wontfix hackers' wishlist items, there are only guidelines; do what works, and treat strict standards compliance as a nice-to-have.

Instead, avahi can be used on routers to re-advertise mDNS .local entries. One problem is that it has no built-in isolation, so less trusted internet-of-things items cannot be kept away from the mDNS used in higher trust VLANs; the next best thing is to run "untrusted" instances of avahi for that:

As a first attempt, do this manually. It may be best to use the system avahi instance as the "trusted" one for the management VLAN and, as below, copy the avahi config file for each new "untrusted" instance.

unshare -m                               # unshare the mount namespace so we can mount things
mount -t ramfs none /run/dbus            # overlay a private dbus dir
mount -t ramfs none /run/avahi-daemon    # overlay a private avahi dir
# avahi wants to talk to the system dbus; for isolation reasons connecting
# to the "real" one is not wanted, so we start another
/usr/bin/dbus-daemon --system --address=unix:path=/run/dbus/system_bus_socket --nofork --nopidfile
# now launch avahi; the network interfaces in the avahi config are set to bridge the less trusted networks
/usr/sbin/avahi-daemon --no-drop-root --no-chroot --debug -s -f /usr/local/etc/avahi/avahi-daemon-example.conf

The next step may be per-user avahi instances to explore untrusted mDNS safely; in proper user territory the network tends to be unshared as well, to protect against more "accidents".

I'm using a function to join a VLAN from userspace; it assumes systemd-networkd is running on the host and attaches the veth to the host bridge. As I also have /etc/resolv.conf pointing to /run/systemd/resolve/resolv.conf host-side, overmounting that allows different nameservers to be used in the container, such as the guest having internet access when the host does not.

Rather than use a random MAC address, generate one from the network namespace number; this appears to guarantee no clashes on a single host. If multiple hosts are involved, it may be necessary to vary the example prefix, such as the fe:ff.

container ()
{
unshare -ncm --keep-caps /bin/sh -c '\
VLAN=$1
NS=$(printf "%08x" $(( $(ps -p $$ -o netns=) ^ 0xf0000000 )));\
MAC=$(echo -n $NS | fold -w 2 | tr "\n" ":");\
ip link add br${VLAN} address fe:ff:${MAC} mtu 65535 up type veth peer name rt${VLAN}_${NS} netns 1;\
mount -t sysfs none /sys;\
mount -t ramfs none /run/systemd/resolve/;\
mount -t ramfs none /var/lib/dhcp/;\
mount -t ramfs none /etc/bird/;\
mount -t ramfs none /run/bird/;\
/bin/bash
' "${0}" "${@}"
}

After setting up the container and running dhclient to get an IPv4 address:

mount -t ramfs none /run/dbus            # overlay a private dbus dir
mount -t ramfs none /run/avahi-daemon    # overlay a private avahi dir
cp /etc/passwd /run/dbus/passwd; cp /etc/group /run/dbus/group
# writable copies of passwd and group, to set avahi's uid/gid to ${UID}
mount --bind /run/dbus/passwd /etc/passwd; mount --bind /run/dbus/group /etc/group
# overlay the host files with the copied and modified ones
/usr/bin/dbus-daemon --system --address=unix:path=/run/dbus/system_bus_socket --nofork --nopidfile
# launch a private "system" dbus as before
/usr/sbin/avahi-daemon --no-drop-root --no-chroot --debug -s -f ~/.local/etc/avahi/avahi-daemon-example.conf
# avahi launched; the passwd/group trick stops it failing to chown /run/avahi-daemon.
# With it backgrounded, avahi-browse can explore mDNS, possibly for automation
# to cope with dynamic tcp/udp ports


Multiple avahi instances are an ideal application for systemd-nspawn, so I generated an nspawn unit to run a distrusted avahi instance.

[Exec]
Boot=on
#PrivateUsers=3502112768
PrivateUsers=no
Capability=all
# using veth to connect guests in separate netns is great, however for this application we do not need it
[Network]
Private=off
#VirtualEthernet=on
#Bridge=br
[Files]
#BindReadOnly=/:/ - nspawn refuses to do this but can be done with host systemd and fstab
Bind=/var/local/lib/machines/example.avahi/etc:/etc
BindReadOnly=/etc/dbus-1:/etc/dbus-1
BindReadOnly=/etc/pam.d:/etc/pam.d
BindReadOnly=/etc/passwd:/etc/passwd
BindReadOnly=/etc/group:/etc/group
BindReadOnly=/etc/shadow:/etc/shadow
BindReadOnly=/etc/gshadow:/etc/gshadow
#PrivateUsersChown=off
#BindReadOnly=/tmp/.X11-unix
#PrivateUsersChown=on
#Bind=/dev/ppp
#Overlay=/

In fstab we have / /var/lib/machines/example.avahi none noauto,fail,x-systemd.automount,bind,ro

Inside the nspawn container a second instance of systemd starts; this uses the host filesystem as a base and overlays parts of it with config from /var/local/lib/machines/example.avahi.

All monitoring over multicast

I found that the Netgear GS108Tv2 can be set to send NTP requests, SNMP traps and syslog all to IPv4 multicast addresses.

This is useful because only one IP address can be listed first: admission to the management VLAN lets a machine become the switch's first server without reconfiguring either the switch or the host IP address, and it reduces MAC re-learning on changeover.

It is also faster: the source generally does not bother with ARP to discover a destination MAC address, and skips directly to sending to the precomputed address.
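That precomputed address comes from the standard IPv4 multicast-to-Ethernet mapping: the fixed prefix 01:00:5e followed by the low 23 bits of the group address. A sketch:

```python
# Map an IPv4 multicast group to its Ethernet destination MAC:
# fixed OUI 01:00:5e, then the low 23 bits of the IP address.
import ipaddress

def mcast_mac(group):
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (low23 >> 16,
                                        (low23 >> 8) & 0xFF,
                                        low23 & 0xFF)

mac = mcast_mac("224.0.1.1")   # IANA's NTP group, as an example input
```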

NTP has an allocated multicast group, so we use that, whereas SNMP and syslog have no assigned group, so for now we derive one from the unicast prefix: take the first three octets of a private address and put 234 in front of them, thus we have 234.192.168 for 192.168.
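A sketch of that derivation; the function name is mine, and keeping exactly the first three octets follows the 234.192.168 example:

```python
# Derive an IPv4 multicast group from a private unicast address, in the
# style of unicast-prefix-based multicast under 234/8: keep the first
# three octets of the address and put 234 in front of them.

def prefix_group(private_addr):
    octets = private_addr.split(".")[:3]   # e.g. 192.168.1.20 -> 192.168.1
    return ".".join(["234"] + octets)

g = prefix_group("192.168.1.20")
```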

The switch itself kept the factory ipv4 address until we had more than one of them.

The remaining issue is that ntpd, snmptrapd and rsyslog need some help from iptables to consume the messages.

This trick also works with gigaset dect equipment.