On Wed, Jun 24, 2015 at 9:55 AM, Dave Taht <dave.taht(a)gmail.com> wrote:
On Wed, Jun 24, 2015 at 4:07 AM, Marc Juul <juul(a)labitat.dk> wrote:
# The problem
Our extender-nodes can't bridge their adhoc wifi interface to their ethernet interface, since bridging an adhoc interface is not possible. A layer 3 pseudo-bridge using relayd should work, but both Babel and mDNS rely on multicast, which is not handled by relayd.
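For reference, relayd's pseudo-bridge on OpenWrt is normally set up as a relay-protocol interface in /etc/config/network; a minimal sketch (interface names and the address here are placeholders, not our actual config):

```
# /etc/config/network (fragment; names and address are placeholders)
config interface 'stabridge'
        option proto    'relay'
        option ipaddr   '10.0.0.1'
        list network    'lan'      # wired side
        list network    'wwan'     # wifi side
```

This relays unicast traffic between the two networks, but, as noted, multicast (Babel, mDNS) does not survive the relay.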
# Solutions
It looks like we have two options:
1. Forward Babel and mDNS multicast messages (both IPv4 and IPv6) between interfaces selectively, based on the well-known multicast addresses of these protocols.
2. Have extender-nodes forward/route multicast traffic in general (pimd/pim6sd).
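To make option 1 concrete: a selective forwarder would match only the well-known groups and ports of the two protocols (Babel: group ff02::1:6, UDP port 6696, per RFC 6126; mDNS: groups 224.0.0.251 and ff02::fb, UDP port 5353, per RFC 6762). A rough Python sketch of just the filter decision (the function name is mine, not from any existing tool; the actual packet copying between interfaces is not shown):

```python
import ipaddress

# Well-known groups/ports of the two protocols we'd need to forward:
# Babel (RFC 6126): ff02::1:6, UDP 6696
# mDNS (RFC 6762): 224.0.0.251 and ff02::fb, UDP 5353
BABEL_GROUP = ipaddress.ip_address("ff02::1:6")
BABEL_PORT = 6696
MDNS_GROUPS = {
    ipaddress.ip_address("224.0.0.251"),
    ipaddress.ip_address("ff02::fb"),
}
MDNS_PORT = 5353

def should_relay(dst, port):
    """Return True if a packet to (dst, port) is Babel or mDNS
    multicast that a selective forwarder would copy between
    interfaces; everything else is left alone."""
    dst = ipaddress.ip_address(dst)
    if dst == BABEL_GROUP and port == BABEL_PORT:
        return True
    if dst in MDNS_GROUPS and port == MDNS_PORT:
        return True
    return False
```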
pimd is horribly ill-maintained. I have not seen it work right in many
cases.
Well, that sucks, but good to know. It looks like it has an active maintainer though?
https://github.com/troglobit/pimd
Running babeld is a solution, but not ideal, since it makes Babel treat extender-nodes as real nodes, adding signalling traffic to the network which should be unnecessary, and complicating our network graphs.
The signalling traffic is trivial (especially compared to bridging multicast?).
The complications are possibly beneficial. I have gone through hell finding weird bridged networks in the path.
I agree. I see now that running babeld is the best solution. I was trying to make a router act as basically an extra radio for another router, and got hung up on making it a dumb bridge even when that was looking more complicated than just running babeld.
## Selective multicast routing/forwarding
mDNS has existing solutions available, such as:
* avahi with enable-reflector option (kinda bulky)
* mdns-repeater (no OpenWrt package exists)
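The avahi option mentioned above is a one-line switch in avahi's own config; for reference:

```
# /etc/avahi/avahi-daemon.conf (relevant fragment)
[reflector]
enable-reflector=yes
```

With this set, avahi-daemon re-announces mDNS records seen on one interface out the others, which is why it pulls in the whole avahi daemon ("kinda bulky").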
The in-progress solution to unicast mDNS discovery is the mDNS hybrid proxy work spearheaded by Stuart Cheshire at the IETF.
https://tools.ietf.org/html/draft-cheshire-dnssd-hybrid-01
Code:
https://github.com/sbyx/ohybridproxy
Interesting. It seems a bit more centralized than what we're looking for, but we'll have a look at the code.
I have pimd in ceropackages. It does not work well. Quagga recently gained PIM support; that codebase looked promising.
Interesting.
If we wanted to avoid pimd, then we could disable mDNS on IPv4, which shouldn't present a problem since all mesh-connected devices will have IPv6 addresses anyway. But it's probably unlikely that babeld can function on both IPv4 and IPv6 without IPv4 multicast (unless we change a bunch of code). It's totally worth checking if it already has that ability though, since avoiding IPv4 multicast would be a bonus.
Babel does not use IPv4 at all; all of its multicast is IPv6. Babel can carry both IPv4 and IPv6 routes in its IPv6 packets.
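Going the babeld route, the extender-node config could be as small as listing its interfaces. A sketch of a possible /etc/babeld.conf (interface names are placeholders, and the filter lines are my assumption about what we'd want, not a tested config):

```
# /etc/babeld.conf (sketch; interface names are placeholders)
interface adhoc0
interface eth0

# announce only routes local to this node, drop everything else
redistribute local allow
redistribute deny
```

A single babeld instance then speaks Babel over IPv6 link-local multicast on both interfaces, carrying IPv4 and IPv6 routes alike.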
Good to know!
Babel uses source-specific routing, and falls back to multiple IPv6 routing tables when ipv6_subtrees is not available. ipv6_subtrees is enabled by default in BB (Barrier Breaker) and later.
You are answering all my questions!
# Conclusion
I propose that we run babeld on the extender nodes and use avahi as a
reflector for now. We can then revisit this for e.g. version 0.4.
What do you all say?
I shipped avahi as a reflector in CeroWrt. It caused broadcast storms with more than three nodes interconnected. It also went nuts when a device was on more than one subnet and announced itself, doing things like renumbering nodea-1 to nodea-2,3,4,5,6. (I don't think avahi is RFC compliant here.)
You are SOL.
Aw dang. Thanks for saving us from learning the hard way. I guess we'll
look at mdns-repeater and maybe modifying it to prevent packet storms.
Definitely pushing this issue to a later version then.
Based on this preliminary research, it looks like the long-term solution involving the smallest amount of work is probably running pim6sd and pimd. This is gratifying, since it would be really cool to have a mesh that does real actual multicast routing.
I would like pimd to work well also. However the code path in the
kernel is underutilized, horribly undertested, and I think you are
dreaming.
But we must dream of a better future! :) We'll have to revisit this next
year then.
Thank you for the insightful feedback!
--
marc/juul