[mesh-dev] Multicast forwarding for layer 3 pseudo-bridges

Dave Taht dave.taht at gmail.com
Wed Jun 24 10:02:50 PDT 2015


On Wed, Jun 24, 2015 at 9:55 AM, Dave Taht <dave.taht at gmail.com> wrote:
> On Wed, Jun 24, 2015 at 4:07 AM, Marc Juul <juul at labitat.dk> wrote:
>> # The problem
>>
>> Our extender-nodes can't bridge their adhoc wifi interface to their ethernet
>> interface, since bridging an adhoc interface is not possible. A layer 3
>> pseudo-bridge using relayd should work, but both Babel and mDNS rely on
>> multicast, which relayd does not handle.
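>>
>> For reference, the relayd pseudo-bridge is configured on OpenWrt roughly
>> like this (interface/network names illustrative):
>>
>> ```
>> # /etc/config/network -- relay two networks at layer 3
>> config interface 'stabridge'
>>         option proto 'relay'
>>         option network 'lan wwan'
>> ```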
>>
>> # Solutions
>>
>> It looks like we have two options:
>>
>> 1. Selectively forward Babel and mDNS multicast messages (IPv4 and IPv6)
>> between interfaces, based on the multicast addresses of these protocols.
>> 2. Have extender-nodes forward/route multicast traffic in general
>> (pimd/pim6sd).
>
> pimd is horribly ill-maintained. I have not seen it work right in many cases.
>
>>
>> Running babeld is a solution, but not ideal: it makes Babel treat
>> extender-nodes as real nodes, adding signalling traffic to the network that
>> should be unnecessary, and it complicates our network graphs.
>
> The signalling traffic is trivial (especially compared to bridging
> multicast?).
>
> The complications are possibly beneficial. I have gone through hell
> finding weird bridged networks in the path.
>
>> ## Selective multicast routing/forwarding
>>
>> mDNS has existing solutions available, such as:
>>
>> * avahi with enable-reflector option (kinda bulky)
>> * mdns-repeater (no openwrt package exists)
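>>
>> For reference, avahi's reflector mode is a one-line switch in
>> avahi-daemon.conf (the rest of the stock config can stay as shipped):
>>
>> ```
>> # /etc/avahi/avahi-daemon.conf -- repeat mDNS between all interfaces
>> [reflector]
>> enable-reflector=yes
>> ```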
>
> The in-progress solution to unicast mDNS discovery is the mDNS hybrid
> proxy work spearheaded by Stuart Cheshire at the IETF:
>
> https://tools.ietf.org/html/draft-cheshire-dnssd-hybrid-01
>
> Code: https://github.com/sbyx/ohybridproxy
>
>> For Babel we are not so lucky. Stripping babeld down so it is only a dumb
>> forwarder for Babel multicast traffic and then running that on the
>> extender-nodes might be a way to go.
>>
>> Another option is to use a general-purpose IPv4/IPv6 multicast
>> reflector/forwarder that can be configured to relay or route traffic for a
>> set of multicast addresses and ports.
>>
>> RFC 6621, which is implemented in nrlsmf, is something like that, but:
>>
>> * It is a userland process that sniffs packets in promiscuous mode (shit
>> performance)
>> * It forwards _all_ multicast traffic (which would be fine for our use case
>> if it wasn't doing this in userland)
>> * It has no specified license though source is available
>>
>> mcproxy (similar to igmpproxy but more modern) is not too far from what we
>> need. It seems likely that mcproxy could be fairly easily modified for
>> selective forwarding. mcproxy is nice because it doesn't intercept data in
>> userland in order to forward (it uses the kernel multicast routing
>> features). Currently it listens for multicast subscriptions from the
>> downstream network and then begins forwarding multicast traffic on the
>> subscribed IP to the downstream interface.
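>>
>> For reference, a minimal mcproxy config might look something like this (my
>> reading of the mcproxy docs; interface names illustrative):
>>
>> ```
>> # mcproxy.conf -- one MLDv2 proxy instance, upstream ==> downstream
>> protocol MLDv2;
>> pinstance mesh: eth0.1 ==> adhoc0;
>> ```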
>>
>> What we need is a simpler daemon that doesn't listen for subscriptions but
>> instead forwards in both directions based on the following configuration
>> file options:
>>
>> * IP version / protocol (IGMPv2, IGMPv3, MLDv1, etc.)
>> * Multicast IP to forward (e.g. 224.0.0.111)
>> * Interfaces to forward between
>>
>> So you could tell it, e.g:
>>
>>   "forward all 224.0.0.111 and ff02::1:6 traffic between eth0.1 and adhoc0"
>> (babel traffic)
>>
>> or:
>>
>>   "forward all 224.0.0.251 and ff02::fb traffic between eth0.1 and adhoc0"
>> (mDNS traffic)
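>>
>> A purely hypothetical config for such a daemon might look like this
>> (nothing implementing this syntax exists yet; it just mirrors the three
>> options above):
>>
>> ```
>> # hypothetical selective-forwarder config -- babel traffic
>> protocol MLDv2
>> group ff02::1:6
>> interfaces eth0.1 adhoc0
>>
>> protocol IGMPv3
>> group 224.0.0.111
>> interfaces eth0.1 adhoc0
>> ```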
>>
>> # General-purpose multicast routing/forwarding
>>
>> Instead of selectively forwarding we could run real multicast routing
>> daemons and simply configure them to forward all multicast traffic. The two
>> standard packages for this seem to be pimd for IPv4 and pim6sd for IPv6.
>>
>> Attitude Adjustment has a pim6sd package but seems to be missing a package
>> for plain pimd. Even worse, it looks like Chaos Calmer drops the pim6sd
>> package as well (I really wish there were a central place where one could
>> read about all packages OpenWRT drops or adds, and the reasons for doing
>> so).
>>
>> We could try to run these two daemons, but we'd have to make a package for
>> pimd and figure out why they're dropping pim6sd. Doesn't seem like the
>> worst chore.
>
> I have pimd in ceropackages. Does not work well.
>
> Quagga recently gained PIM support. That codebase looked promising.

OpenWrt at one point stripped all PIM support from the kernel. Not sure
if that is still the case.
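
A quick way to check on a running image (assuming the kernel was built
with /proc/config.gz support; otherwise grep the buildroot .config):

```
# look for pim and multicast-routing support in the running kernel;
# expect CONFIG_IP_MROUTE / CONFIG_IP_PIMSM_V2 and the ipv6
# equivalents if pim is usable
zcat /proc/config.gz | grep -E 'PIMSM|MROUTE'
```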

>> If we wanted to avoid pimd, we could disable mDNS on IPv4, which shouldn't
>> present a problem since all mesh-connected devices will have IPv6 addresses
>> anyway. But it seems unlikely that babeld can function over both IPv4 and
>> IPv6 without IPv4 multicast (unless we change a bunch of code). Totally
>> worth checking whether it already has that ability though, since avoiding
>> IPv4 multicast would be a bonus.
>
> Babel does not use ipv4 at all. All multicast is ipv6. Babel can carry
> both ipv4 and ipv6 routes in its ipv6 packets.
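>
> For concreteness, a babeld invocation is just a list of interfaces; all
> of its signalling rides on IPv6 link-local multicast (group ff02::1:6,
> UDP port 6696):
>
> ```
> # announces both ipv4 and ipv6 routes over ipv6 multicast
> babeld -d 1 adhoc0 eth0.1
> ```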

A common trick is to just filter out ipv4 from a given node's interface
(where, for example, ipv6 is routable but ipv4 is natted):

http://www.ietf.org/mail-archive/web/homenet/current/msg05225.html
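
A minimal sketch of that filter with plain iptables (interface name
illustrative; ip6tables has its own tables, so ipv6 keeps flowing):

```
# drop all forwarded ipv4 on the mesh interface; babel's ipv6
# signalling and ipv6 data traffic are unaffected
iptables -A FORWARD -i adhoc0 -j DROP
iptables -A FORWARD -o adhoc0 -j DROP
```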


>
>
>>
>> # Tricks
>>
>> If you run "mcproxy -c" it will check if all relevant kernel config options
>> are enabled for IPv4 and IPv6 multicast routing. It looks like default
>> Attitude Adjustment has all but one feature enabled. Here's the output:
>>
>> ```
>> # mcproxy -c
>> Check the currently available kernel features.
>>  - root privileges: Ok!
>>
>>  - ipv4 multicast: Ok!
>>  - ipv4 multiple routing tables: Ok!
>>  - ipv4 routing tables: Ok!
>>
>>  - ipv4 mcproxy was able to join 40+ groups successfully (no limit found)
>>  - ipv4 mcproxy was able to set 40+ filters successfully (no limit found)
>>
>>  - ipv6 multicast: Ok!
>> ERROR: failed to set kernel table! Error: Protocol not available errno: 99
>>  - ipv6 multiple routing tables: Failed!
>>  - ipv6 routing tables: Ok!
>>
>>  - ipv6 mcproxy was able to join 40+ groups successfully (no limit found)
>>  - ipv6 mcproxy was able to set 40+ filters successfully (no limit found)
>> ```
>>
>> It's unclear whether we need support for multiple IPv6 routing tables
>> (probably not), but it should be trivial to enable the kernel option if we
>> do.
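>>
>> (If we do need it, the relevant kernel symbol is presumably
>> CONFIG_IPV6_MROUTE_MULTIPLE_TABLES, alongside CONFIG_IPV6_MROUTE; an
>> assumption based on the errno above:)
>>
>> ```
>> # kernel config fragment (assumed symbols)
>> CONFIG_IPV6_MROUTE=y
>> CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
>> ```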
>
> Babel, when using source-specific routing, falls back to multiple ipv6
> routing tables when ipv6_subtrees is not available. ipv6_subtrees is
> enabled by default in BB and later.
>
>> # Conclusion
>>
>> I propose that we run babeld on the extender nodes and use avahi as a
>> reflector for now. We can then revisit this for e.g. version 0.4.
>>
>> What do you all say?
>
> I shipped avahi as a reflector in cerowrt. It caused broadcast storms
> with more than three nodes interconnected. It also went nuts when a
> device was on more than one subnet and announced itself, doing things
> like renumbering nodea-1 to nodea-2,3,4,5,6. (I don't think avahi is RFC
> compliant here.)
>
> You are SOL.
>
>>
>> # Ponderings
>>
>> Based on this preliminary research it looks like the long-term solution
>> involving the smallest amount of work is probably running pim6sd and pimd.
>> This is gratifying since it would be really cool to have a mesh that does
>> real actual multicast routing.
>
> I would like pimd to work well also. However the code path in the
> kernel is underutilized, horribly undertested, and I think you are
> dreaming.
>
>>
>> I'm especially excited about this since the film-maker collective at Omni
>> (Optik Allusions) seems to be growing and because I have been working hard
>> on video streaming solutions for Omni. It would be really cool if we could
>> multicast event video over the mesh and to other local radical spaces, e.g.
>> having places like the Long Haul or other hackerspaces function as overflow
>> seating/participation for workshops/lectures and vice versa, or streaming
>> performances into local pubs instead of whatever corrupt pro sports they
>> normally offer!

I am sorry, but I regard routed multicast as wet paint that only works in a few
very specific situations, and does not scale as intended.

People keep trying to make it work, most recently with ccnx, and always failing
at some point or another on the path, big time.

Well-chosen reflector points, using a reliable backbone transport,
will yield better results, particularly for video, where packet loss
is annoying and often hard to conceal.

Probably the best-implemented multicast file transfer protocol I have
yet seen - with ipv6 support - is uftp. If you can make that work over
whatever daemons and kernels you choose, maybe you can make a
different app work.
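
A minimal smoke test, assuming uftp's default multicast group and port:

```
# on every receiver, run the client daemon
uftpd
# on the sender, push a file to all listening receivers
uftp /tmp/testfile
```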


>>
>> --
>> marc/juul
>>
>> _______________________________________________
>> mesh-dev mailing list
>> mesh-dev at lists.sudoroom.org
>> https://sudoroom.org/lists/listinfo/mesh-dev
>>
>
>
>
> --
> Dave Täht
> worldwide bufferbloat report:
> http://www.dslreports.com/speedtest/results/bufferbloat
> And:
> What will it take to vastly improve wifi for everyone?
> https://plus.google.com/u/0/explore/makewififast



-- 
Dave Täht
worldwide bufferbloat report:
http://www.dslreports.com/speedtest/results/bufferbloat
And:
What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast


