# The problem

Our extender-nodes can't bridge their adhoc wifi interface to their ethernet interface, since bridging an adhoc interface is not possible. A layer 3 pseudo-bridge using relayd should work, but both Babel and mDNS rely on multicast, which relayd does not handle.

# Solutions

It looks like we have two options:

1. Selectively forward Babel and mDNS multicast messages (both IPv4 and IPv6) between interfaces, based on the well-known multicast addresses of these protocols.
2. Have extender-nodes forward/route multicast traffic in general (pimd/pim6sd).

Running babeld on the extender-nodes is a solution, but not an ideal one: it makes Babel treat extender-nodes as real nodes, adding signalling traffic to the network that should be unnecessary, and it complicates our network graphs.

## Selective multicast routing/forwarding

mDNS has existing solutions available, such as:

* avahi with the enable-reflector option (kinda bulky)
* mdns-repeater (no openwrt package exists)

For Babel we are not so lucky. Stripping babeld down so it is only a dumb forwarder for Babel multicast traffic and then running that on the extender-nodes might be a way to go.

Another option is to use a general-purpose IPv4/IPv6 multicast reflector/forwarder that can be configured to relay or route traffic for a set of multicast addresses and ports.

RFC 6621 (Simplified Multicast Forwarding), as implemented in nrlsmf, is something like that, but:

* It is a userland process that sniffs packets in promiscuous mode (shit performance)
* It forwards _all_ multicast traffic (which would be fine for our use case if it weren't doing this in userland)
* Its source is available, but it has no specified license

mcproxy (similar to igmpproxy but more modern) is not too far from what we need, and it seems likely that it could fairly easily be modified for selective forwarding. mcproxy is nice because it doesn't intercept data in userland in order to forward it; it uses the kernel's multicast routing features instead. Currently it listens for multicast subscriptions from the downstream network and then begins forwarding multicast traffic for the subscribed group addresses to the downstream interface.

What we need is a simpler daemon that doesn't listen for subscriptions but instead forwards in both directions based on the following configuration file options:

* IP version / protocol (IGMPv2, IGMPv3, MLDv1, etc.)
* Multicast IP to forward for (e.g. 224.0.0.111)
* Interfaces to forward between

So you could tell it, e.g.:

  "forward all 224.0.0.111 and ff02::1:6 traffic between eth0.1 and adhoc0" (babel traffic)

or:

  "forward all 224.0.0.251 and ff02::fb traffic between eth0.1 and adhoc0" (mDNS traffic)

# General-purpose multicast routing/forwarding

Instead of selectively forwarding we could run real multicast routing daemons and simply configure them to forward all multicast traffic. The two standard packages for this seem to be pimd for IPv4 and pim6sd for IPv6.

Attitude Adjustment has a pim6sd package but seems to be missing a package for pimd. Even worse, it looks like Chaos Calmer drops the pim6sd package as well (I really wish there were a central place where one could read about all packages OpenWRT has dropped/added, and the reasons for doing so).

We could try to run these two daemons, but we'd have to make a package for pimd and figure out why they're dropping pim6sd. Doesn't seem like the worst chore.

If we wanted to avoid pimd, we could disable mDNS on IPv4, which shouldn't present a problem since all mesh-connected devices will have IPv6 addresses anyway. It's probably unlikely that babeld can function on both IPv4 and IPv6 without IPv4 multicast (unless we change a bunch of code), but it's totally worth checking whether it already has that ability, since avoiding IPv4 multicast would be a bonus.

# Tricks

If you run "mcproxy -c" it will check whether all kernel config options relevant to IPv4 and IPv6 multicast routing are enabled. It looks like a default Attitude Adjustment build has all but one feature enabled. Here's the output:

```
# mcproxy -c
Check the currently available kernel features.
 - root privileges: Ok!

 - ipv4 multicast: Ok!
 - ipv4 multiple routing tables: Ok!
 - ipv4 routing tables: Ok!

 - ipv4 mcproxy was able to join 40+ groups successfully (no limit found)
 - ipv4 mcproxy was able to set 40+ filters successfully (no limit found)

 - ipv6 multicast: Ok!
ERROR: failed to set kernel table! Error: Protocol not available errno: 99
 - ipv6 multiple routing tables: Failed!
 - ipv6 routing tables: Ok!

 - ipv6 mcproxy was able to join 40+ groups successfully (no limit found)
 - ipv6 mcproxy was able to set 40+ filters successfully (no limit found)
```

It's unclear whether we need support for multiple IPv6 routing tables (probably not), but if we do, it's probably trivial to enable the kernel option (likely CONFIG_IPV6_MROUTE_MULTIPLE_TABLES).
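For reference, a quick way to check a kernel for these features is to parse its config. A small sketch, assuming the relevant option names are CONFIG_IPV6_MROUTE and CONFIG_IPV6_MROUTE_MULTIPLE_TABLES (my best guess) and that the running kernel exposes its config at /proc/config.gz:

```python
import gzip
import os

WANTED = ("CONFIG_IPV6_MROUTE", "CONFIG_IPV6_MROUTE_MULTIPLE_TABLES")

def missing_options(config_text):
    """Return the options from WANTED that are not enabled (=y or =m)."""
    enabled = set()
    for line in config_text.splitlines():
        line = line.strip()
        # disabled options appear as "# CONFIG_FOO is not set"
        if "=" in line and not line.startswith("#"):
            name, value = line.split("=", 1)
            if value in ("y", "m"):
                enabled.add(name)
    return [opt for opt in WANTED if opt not in enabled]

def read_running_config():
    """Read the running kernel's config, if it is exposed via procfs."""
    if os.path.exists("/proc/config.gz"):
        with gzip.open("/proc/config.gz", "rt") as f:
            return f.read()
    return None
```

On a build machine you could just as well feed missing_options() the buildroot's `.config` instead of the running kernel's.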

# Conclusion

I propose that we run babeld on the extender-nodes and use avahi as a reflector for now. We can then revisit this for e.g. version 0.4.

What do you all say?

# Ponderings

Based on this preliminary research, it looks like the long-term solution requiring the least work is probably running pimd and pim6sd. This is gratifying, since it would be really cool to have a mesh that does real, actual multicast routing.

I'm especially excited about this since the film-maker collective at Omni (Optik Allusions) seems to be growing, and because I have been working hard on video streaming solutions for Omni. It would be really cool if we could multicast event video over the mesh and to other local radical spaces. E.g. having places like the Long Haul or other hackerspaces function as overflow seating/participation for workshops/lectures and vice-versa, or streaming performances into local pubs instead of whatever corrupt pro sports they normally offer!

--
marc/juul