At Omni we're using TP-Link dual-band home routers (N750) and modifying
them for PoE.
These routers have gigabit ethernet.
Unfortunately gigabit ethernet and PoE at the same time require special
ethernet transformers, and these routers don't have that type of transformer.
The reason is that 100 mbit ethernet only uses two of the four pairs, so a
passive PoE mod can put power on the spare pairs, while gigabit signals on
all four pairs. This means that the routers drop to 100 mbit when we modify
them for PoE (but only on the port we modify).
I was looking into how to get around this problem.
I thought maybe there was a way to do gigabit half-duplex on two or three
pairs, but it seems like that's not possible.
BUT! It looks like TP-Link makes a PoE splitter which can apparently handle
gigabit:
The TL-POE10R
Here it is for $12:
http://www.ebay.com/itm/TP-LINK-TL-POE10R-Gigabit-PoE-Splitter-Adapter-IEEE…
I ordered one so we can try it out. If it works we should just do that to
all of the routers. No reason not to have gigabit if it's only $12 extra
per router. It's still only $77 total for router + PoE splitter.
Oh, and btw: someone is selling dirt-cheap Ubiquiti UniFi UAP devices:
http://www.ebay.com/itm/New-Ubiquiti-UniFi-UAP-802-11n-300-Mbps-Wireless-Ac…
Caution: They are 2.4 GHz only.
I ordered one for the mesh, just so we have it to test our firmware on.
--
marc/juul
Here's my shitty first attempt at a diagram to illustrate how it all hangs
together with VLANs, bridging and tunnels:
https://i.imgur.com/B0jzM9h.jpg
Hopefully this can inform a better diagram later on.
The big thing in the middle is a home node. The chunky things sticking out
of it to the left are antennas. The two things on the right are extender
nodes.
The numbers are VLAN IDs. As you can see there is one VLAN for the open
network (VLAN 10), one VLAN for the private network (VLAN 11) and then one
VLAN per extender node for the mesh network (VLANs 1 and 2).
You can see that the private and public network interfaces are connected
together between wifi, ethernet, home and extender nodes by putting the
interfaces on the same VLAN (for ethernet) and then bridging to the wifi
interfaces.
The mesh network interfaces are connected together by enabling babeld on
all of them, or in the case of the connection between home and extender
nodes they are connected by being on the same VLAN.
In this diagram all three networks are available on all radios on both home
and extender nodes. This is not yet fully implemented, but it is a small
enough change that we should be able to finish it this week.
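To make the VLAN-plus-bridging part concrete, here's a minimal sketch of
what it looks like in OpenWRT UCI terms. The names, ports and addresses are
illustrative assumptions, not our actual config:
```
# /etc/config/network sketch on a home node
# (names, ports and addresses are illustrative, not our real config)

# tag the open network's VLAN (10) onto the switch port facing an
# extender node
config switch_vlan
        option device 'switch0'
        option vlan '10'
        option ports '0t 4t'

# bridge the VLAN sub-interface; the open-SSID wifi interface joins
# this bridge via "option network 'open'" in /etc/config/wireless
config interface 'open'
        option type 'bridge'
        option ifname 'eth0.10'
        option proto 'static'
        option ipaddr '10.0.0.1'
        option netmask '255.255.255.0'
```
The same pattern repeats for the private network on VLAN 11 and for the
per-extender mesh VLANs 1 and 2.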
--
marc/juul
The mesh will soon be receiving two Flutter boards since we funded their
kickstarter.
These are 900 MHz, fairly long-range, low-bandwidth (I think up to 1 mbps?)
boards that can be programmed with a modified Arduino IDE.
These would be pretty good for a low-bandwidth disaster recovery mesh, or
for sensor reporting. I'm sending this mail to start people thinking about
possible uses.
Here's their kickstarter:
https://www.kickstarter.com/projects/flutterwireless/flutter-20-wireless-ar…
Here's my old write-up for the DisasterRadio project for inspiration
(currently in suspended animation):
https://sudoroom.org/wiki/DisasterRadio
--
marc/juul
Not a great deal of progress tonight.
Flashed the newly received nanostation m2 and picostation m2 and
investigated whether any special configuration would be needed to support
them as extender nodes. Surprisingly, it looks like the answer is no, even
though the nanostation m2 has a built-in switch.
Information about supporting these devices is logged here:
https://github.com/sudomesh/sudowrt-firmware/issues/46
Started a build of the latest sudowrt on room.sudoroom.org; you can follow
it with:
```
tail -f /home/juul/sudowrt-firmware/built_firmware/builder.ar71xx/build.log
```
Since we won't have to do any hook-script re-writing for nanostation I am
hopeful that we can complete the tasks I listed for 0.2 this week:
https://github.com/sudomesh/sudowrt-firmware/milestones/0.2%20-%20initial%2…
--
marc/juul
Here's what went down.
# Added functionality to our babeld fork
* -x for dynamically removing interfaces (we only had -a to add them)
* -F to enable the dynamic functionality (fungible mode)
* -i to print the "kill -USR1" information for the running babeld
Also, babeld no longer requires any interfaces to be specified when
initially started if fungible mode is enabled.
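Roughly how I expect this to be used, sketched from the flag descriptions
above (the exact invocations and runtime behavior should be double-checked
against the fork):
```
# start babeld in fungible mode; no interfaces listed up front
babeld -F

# dynamically add an interface to the running daemon
babeld -a adhoc0

# dynamically remove it again
babeld -x adhoc0

# print the "kill -USR1" information for the running babeld
babeld -i
```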
# Switched our firmwares to using our babeld fork
Here it is:
https://github.com/sudomesh/sudowrt-packages/tree/master/net/babeld-sudowrt
We were only using it on the VPuN (exit) server before.
I haven't tried to recompile the firmware with this package added. Maybe
someone else can test that this compiles correctly?
max: be aware that VPuN servers will now have to start new versions with -F
to get the dynamic functionality
# Completed extender-node functionality
Everything now works as expected with babeld running on the extender nodes.
The extender nodes come up automatically and both the open and adhoc
networks work.
Based on feedback from Dave Taht, I abandoned adding avahi-daemon as a
reflector on the extender nodes and pushed forwarding of mDNS traffic to
the milestone for a future release.
The one thing left to do for the extender nodes is to re-compile both
firmwares from scratch, flash two nodes and test that it all comes up as
expected. I've tried hard to bring the repositories in line with the
working configuration on my two test nodes, but I may have missed something.
# Added milestones and issues on github
Milestones:
https://github.com/sudomesh/sudowrt-firmware/milestones
Issues for upcoming version 0.2:
https://github.com/sudomesh/sudowrt-firmware/milestones/0.2%20-%20initial%2…
Please add any issues I may have missed. Also, please change things if you
disagree :) I just did what I thought made sense but I'm not married to
anything.
Yay progress!
--
marc/juul
# The problem
Our extender-nodes can't bridge their adhoc wifi interface to their
ethernet interface, since bridging an adhoc (IBSS) interface is not
possible. A layer 3 pseudo-bridge using relayd should work, but both Babel
and mDNS rely on multicast, which is not handled by relayd.
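For reference, the relayd pseudo-bridge would be the standard OpenWRT
proto 'relay' setup, something like this (network names illustrative); it
handles unicast fine but won't carry the multicast we need:
```
# /etc/config/network - relayd layer 3 pseudo-bridge
# (standard OpenWRT usage; network names are illustrative)
config interface 'stabridge'
        option proto 'relay'
        option network 'lan adhoc'
        option ipaddr '10.0.0.10'
```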
# Solutions
It looks like we have two options:
1. Forward both babel and mDNS IPv4 and IPv6 multicast messages between
interfaces selectively based on the multicast addresses of these protocols
2. Have extender-nodes forward / route multicast traffic in general
(pimd/pim6sd).
Running babeld on the extender-nodes is a solution, but not an ideal one,
since it makes Babel treat extender-nodes as real nodes, adding signalling
traffic to the network that should be unnecessary and complicating our
network graphs.
## Selective multicast routing/forwarding
mDNS has existing solutions available, such as:
* avahi with enable-reflector option (kinda bulky)
* mdns-repeater (no openwrt package exists)
For Babel we are not so lucky. Stripping babeld down so it is only a dumb
forwarder for Babel multicast traffic and then running that on the
extender-nodes might be a way to go.
Another option is to use a general-purpose IPv4/IPv6 multicast
reflector/forwarder that can be configured to relay or route traffic for a
set of multicast addresses and ports.
RFC 6621, which is implemented in nrlsmf, is something like that, but:
* It is a userland process that sniffs packets in promiscuous mode (shit
performance)
* It forwards _all_ multicast traffic (which would be fine for our use case
if it wasn't doing this in userland)
* It has no specified license, though source is available
mcproxy (similar to igmpproxy but more modern) is not too far from what we
need. It seems likely that mcproxy could be fairly easily modified for
selective forwarding. mcproxy is nice because it doesn't intercept data in
userland in order to forward (it uses the kernel multicast routing
features). Currently it listens for multicast subscriptions from the
downstream network and then begins forwarding multicast traffic on the
subscribed IP to the downstream interface.
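For context, a stock mcproxy instance is configured as an
upstream/downstream pair, roughly like this (config syntax quoted from
memory, so verify against mcproxy's docs):
```
# mcproxy.conf sketch (syntax from memory; verify before use)
protocol IGMPv3;

# forward multicast from upstream eth0.1 to subscribers on adhoc0
pinstance mesh: eth0.1 ==> adhoc0;
```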
What we need is a simpler daemon that doesn't listen for subscriptions but
instead forwards in both directions based on the following configuration
file options:
* IP version / protocol (IGMPv2, IGMPv3, MLDv1, etc.)
* Multicast IP to forward for (e.g. 224.0.0.111)
* Interfaces to forward between
So you could tell it, e.g.:
"forward all 224.0.0.111 and ff02::1:6 traffic between eth0.1 and adhoc0"
(babel traffic)
or:
"forward all 224.0.0.251 and ff02::fb traffic between eth0.1 and adhoc0"
(mDNS traffic)
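In config-file form that might look something like this (the format is
entirely invented for illustration):
```
# hypothetical config for the daemon described above
# (format invented for illustration)

# babel traffic
forward {
    group 224.0.0.111
    group ff02::1:6
    interfaces eth0.1 adhoc0
}

# mDNS traffic
forward {
    group 224.0.0.251
    group ff02::fb
    interfaces eth0.1 adhoc0
}
```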
## General-purpose multicast routing/forwarding
Instead of selectively forwarding we could run real multicast routing
daemons and simply configure them to forward all multicast traffic. The two
standard packages for this seem to be pimd for IPv4 and pim6sd for IPv6.
Attitude Adjustment has a pim6sd package but seems to be missing one for
plain pimd. Even worse, it looks like Chaos Calmer drops the pim6sd package
as well (I really wish there was a central place where one could read about
all dropped/added OpenWRT packages and the reasons for doing so).
We could try to run these two daemons, but we'd have to make a package for
pimd and figure out why they're dropping pim6sd. Doesn't seem like the
worst chore.
If we wanted to avoid pimd, then we could disable mDNS on IPv4, which
shouldn't present a problem since all mesh-connected devices will have IPv6
addresses anyway. However, it's unlikely that babeld can function on both
IPv4 and IPv6 without IPv4 multicast (unless we change a bunch of code).
It's totally worth checking whether it already has that ability though,
since avoiding IPv4 multicast would be a bonus.
# Tricks
If you run "mcproxy -c" it will check if all relevant kernel config options
are enabled for IPv4 and IPv6 multicast routing. It looks like default
Attitude Adjustment has all but one feature enabled. Here's the output:
```
# mcproxy -c
Check the currently available kernel features.
- root privileges: Ok!
- ipv4 multicast: Ok!
- ipv4 multiple routing tables: Ok!
- ipv4 routing tables: Ok!
- ipv4 mcproxy was able to join 40+ groups successfully (no limit found)
- ipv4 mcproxy was able to set 40+ filters successfully (no limit found)
- ipv6 multicast: Ok!
ERROR: failed to set kernel table! Error: Protocol not available errno: 99
- ipv6 multiple routing tables: Failed!
- ipv6 routing tables: Ok!
- ipv6 mcproxy was able to join 40+ groups successfully (no limit found)
- ipv6 mcproxy was able to set 40+ filters successfully (no limit found)
```
It's unclear if we need support for multiple IPv6 routing tables (probably
not), but it's probably trivial to enable the kernel option if we do.
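If it turns out we do need it, the missing feature is presumably the
CONFIG_IPV6_MULTIPLE_TABLES kernel option, which could be flipped in the
buildroot along these lines (untested):
```
# in the OpenWRT buildroot (untested sketch)
make kernel_menuconfig
# then enable, under Networking support -> Networking options ->
# The IPv6 protocol:
#   IPv6: Multiple Routing Tables  (CONFIG_IPV6_MULTIPLE_TABLES=y)
# and rebuild the firmware
```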
# Conclusion
I propose that we run babeld on the extender nodes and use avahi as a
reflector for now. We can then revisit this for e.g. version 0.4.
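For reference, the reflector is a one-line switch in avahi's standard
config:
```
# /etc/avahi/avahi-daemon.conf
[reflector]
enable-reflector=yes
```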
What do you all say?
# Ponderings
Based on this preliminary research it looks like the long-term solution
involving the smallest amount of work is probably running pim6sd and pimd.
This is gratifying since it would be really cool to have a mesh that does
real actual multicast routing.
I'm especially excited about this since the film-maker collective at Omni
(Optik Allusions) seems to be growing and because I have been working hard
on video streaming solutions for Omni. It would be really cool if we could
multicast event video over the mesh and to other local radical spaces, e.g.
having places like the Long Haul or other hackerspaces function as overflow
seating/participation for workshops/lectures and vice-versa, or streaming
performances into local pubs instead of whatever corrupt pro sports they
normally offer!
--
marc/juul
As I am rolling out a bunch of new babel nodes, I decided to get a
cluster (2 nanos and a pico) up in the lab, where I have good
connectivity to the rest of the network, to replace an aging cluster
by the pool.
So I booted it up and configured it for the right channels and a new
set of IP addresses... it didn't have good LED support at all (RSSI does
not seem to do anything)...
I got the blinkenlights to sort of work, and they were lit up, kind of
solid, for some reason... [1]
...people started wandering by to complain about the network...
naturally I didn't notice, because I was even closer to the exit points
than anyone else...
...only to discover that I was offering the shortest path to the exit
nodes, and had thus bypassed the two existing ~50mbit links into the lab
with links that were located indoors, going through a thousand+ meters
of trees, and barely doing a megabit with 800+ms of delay.
(channel diversity not working did not help either)
After that experience, I decided that I would make the firmware for
unconfigured nodes export a 512 metric, and use a high rxcost, until
they are fully configured AND in place. I might disable ipv4 entirely
in favor of the autoconfigured ULA openwrt has, and just start
configuring stuff based on the appearance of new ULAs in the network.
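In babeld config terms that could presumably look something like this (a
sketch assuming stock babeld filter and per-interface syntax; untested):
```
# babeld.conf sketch for an unconfigured node
# (assumes stock babeld filter/interface syntax; untested)

# advertise all routes with an extra metric of 512 so nobody
# prefers an unconfigured node as a path
out metric 512

# make receiving over any interface expensive until the node is
# configured and in place
default rxcost 1000
```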
[1] if you come up with a useful LED config for nanostations and
picostations, let me know.
--
Dave Täht
worldwide bufferbloat report:
http://www.dslreports.com/speedtest/results/bufferbloat
And:
What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast