Did anyone have trouble getting the Ethernet on the Meraki
Mini to work under OpenWRT? I'm able to bypass the MTD
and boot into Barrier Breaker
over TFTP, but get no eth0.
Snippet from boot:
[ 1.690000] eth0: Atheros AR231x: 00:18:0a:50:65:40, irq 4
[ 1.890000] libphy: ar231x_eth_mii: probed
[ 1.900000] adm6996: eth0: ADM6996FC model PHY found.
[ 1.900000] eth0: Could not attach to PHY
[ 1.910000] eth0: mdiobus_probe failed
By the way, I have learned that AR231x support is not in
the mainline kernel, but is an out-of-tree OpenWRT addition.
I think it has since been merged into linux-mti (the MIPS tree)
as ATH25.
Alex
One big issue that we will have to deal with before really launching is the
fact that several home routers use 10.0.0.0
I honestly don't know if any of them use 10.0.0.0/8 or if they're all
10.0.0.0/24, but either way it's a problem.
There are two possible solutions.
= We don't use 10.x.x.x for the mesh =
There _are_ alternatives, but only one of them looks great. Here are some
I've looked at.
== 44.0.0.0/8 ==
This is the HAM or AMPRNet subnet. It looks like it's scarcely used,
if it's really used at all. It's for HAM packet radio and experimentation.
On the one hand I don't want to piss off the HAMs, but on the other hand
their entire subnet is in violation of net neutrality, since they don't
allow commercial traffic on it. This is definitely the easy
solution.
Here's an overview of their allocations:
https://portal.ampr.org/networks.php
I'm sure if we did a ping scan of the address space we'd see only a few
hosts. Anyone wanna take a stab at that?
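For anyone who wants to try, something like this should be a starting point
(untested; -sn is a ping-only scan, -n skips DNS lookups, --max-rate keeps it
polite, and the output filename is just a placeholder -- a full /8 will still
take a long time):

nmap -sn -n --max-rate 100 -oG ampr-scan.txt 44.0.0.0/8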
== 238.0.0.0/8 ==
Using multicast address space as unicast unfortunately does not work.
== 240.0.0.0/8 ==
All of 240.0.0.0 and above is designated for "future use". However, an IETF
proposal to put it into use was rejected, apparently in part because
many IP stacks outright reject or ignore any packets from this address
space. We'd need an overview of which systems are affected, but I don't
really think this is a viable option.
= We use 10.x.x.x for the mesh =
If we are to use 10.x.x.x for the mesh then we will have to do something
clever/ugly.
A solution would contain the following parts:
* The DHCP client would have to remap 10/8 DHCP responses on eth0 to a
different subnet (this could be 240/8) such that the interface takes on an
address different from the one provided by the DHCP server.
* ARP spoofing would have to be enabled on eth0 such that the node will
respond to ARP requests for the address it was assigned via DHCP.
* All incoming traffic on eth0 with a destination in 10/8 would have to be
remapped to 240/8 before routing happens
* All outgoing traffic on eth0 with a source in 240/8 would have to be
remapped back to 10/8 after routing happens
To accomplish the DHCP fakery, a modification to the openwrt dhcp client
would have to be written. I don't foresee that being very difficult.
Depending on the ARP spoofing difficulty, it could be as simple as adding
support for a hook script that runs on dhcp lease acceptance.
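As a rough sketch of the hook idea (untested; this assumes busybox udhcpc,
which passes the event as $1 and exports $interface, $ip and $subnet to its
script):

#!/bin/sh
# hypothetical udhcpc script fragment: remap a 10/8 lease to 240/8
# before configuring the interface
case "$1" in
  bound|renew)
    case "$ip" in
      10.*) ip="240.${ip#10.}" ;;  # 10.a.b.c -> 240.a.b.c
    esac
    ifconfig "$interface" "$ip" netmask "$subnet"
    ;;
esac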
I'm not sure how best to do the ARP spoofing. There may be mechanisms built
into the Linux kernel. It may be that we can actually just assign the
10.x.x.x address we got from the DHCP server to eth0 in addition to the
240.x.x.x address, and just ensure that no mention of the 10.x.x.x address
appears in the routing table and that the kernel is configured _not_ to
respond to ARP requests on interfaces other than the one they are
inquiring about (the default is to always answer).
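For reference, the kernel behavior I'm describing is the arp_ignore sysctl:
0 (the default) answers ARP requests for any local address on any interface,
while 1 only answers if the target address is configured on the interface the
request arrived on. So part of the solution might just be:

sysctl -w net.ipv4.conf.all.arp_ignore=1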
It seems that the address remapping might already be possible with the
NETMAP target, which rewrites destination addresses in PREROUTING and
source addresses in POSTROUTING. I haven't yet tested if it works as
expected, but the following commands seem to be what we'd need:
iptables -t nat -A PREROUTING -i eth0 -d 10.0.0.0/8 -j NETMAP --to 240.0.0.0/8
iptables -t nat -A POSTROUTING -o eth0 -s 240.0.0.0/8 -j NETMAP --to 10.0.0.0/8
I'll try to set up a little experiment later tonight to see if this
remapping works as expected. Honestly though, using 44.0.0.0/8 seems really
attractive to me at this point.
--
marc/juul
Hey so I was looking into the ability to broadcast mDNS across subnets, so
that a client of a node could broadcast a service across the whole
10.0.0.0/8 mesh subnet.
If I'm reading the page correctly, it looks like avahi-daemon supports this
functionality with:
*enable-reflector=yes*
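For the record, that option lives in the [reflector] section of
/etc/avahi/avahi-daemon.conf, so a minimal config might look something like
this (the interface names are just my guess; substitute whatever the
firmware actually uses):

[server]
use-ipv4=yes
allow-interfaces=br-lan,mesh0

[reflector]
enable-reflector=yes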
That being said, it seems that setting this up on openwrt requires the
following packages: avahi-daemon, libavahi, libexpat, and
libdbus, which total ~250kB
I know that Marc was working on a mini-mdns package, but I assume that
mDNS reflection might be a more involved task (I'm also not sure how stable
we could get it).
I'm going to install it on a couple nodes and test it out, but I guess we
can keep an eye on space/memory constraints.
Max
Hey mesh-dev,
I'm new. Greetings. Don't think I'm going to make the meeting tonight, so I
thought I'd post this here.
I've been poking around in tunneldigger today per this issue:
<https://github.com/sudomesh/tunneldigger/issues/2>. I wanted to mention
that there are two relevant things happening when the tunneldigger client
starts up:
1. Broker selection - attempts to resolve each broker address
asynchronously, flags the first successful one and asks it for a tunnel.
Churns forever if none of the brokers respond.
2. Tunnel creation - Churns for 30s then returns to (1).
I think the issue Max created refers to the former. We can probably solve
that as noted in the issue by sleeping for a little while when we can't
connect to any brokers.
I think the second issue might be worth addressing as well, though. Maybe
just sleep in between attempts to create the tunnel or before restarting
broker selection? Any thoughts or opinions?
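One more concrete thought: even a dumb backoff wrapper around the client
would help while we wait on a proper fix. A sketch (the client invocation
below is hypothetical; substitute the real command and flags):

delay=5
while true; do
    # hypothetical client invocation -- replace with the real one
    tunneldigger_client
    sleep "$delay"
    delay=$(( delay * 2 ))
    [ "$delay" -gt 300 ] && delay=300   # cap the backoff at 5 minutes
done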
Thanks,
Oren
I migrated the alpha monitor server to babel routing + tunneldigger and
installed the first babel test node at the omni:
http://192.241.217.196/smokeping/smokeping.cgi?target=Omni.SudomeshAlpha
I'm going to walk through the process of flashing our firmware and
configuring a new node with makenode on Thursday. I've also got to see if
cacti is reporting SNMP correctly, but I think we're making good headway.
Over and out,
Max
Hello,
I spoke with LMI regarding an uplink to the Omni.
The person I spoke with wasn't very familiar with their wireless
deployment, but assured me that their best offering for us would
be VDSL (he said that "there's been a change in the technology
they can offer us"). The CO is apparently behind Genova Deli
near 49th. This means we could hope for about 80Mbps.
Their rate (on the Business plan) is $59.99 + tax, with $99 for
gear and $60 for activation. They also offer line bonding (to
double capacity).
This is a reasonable option for the Omni, but I think it would
only serve us (peoplesopen/sudomesh) temporarily. Also, I
have always had poor latency with DSL, even with reasonable
speeds.
Discuss?
Alex
It was suggested that 44.0.0.0/8 is an old HAM radio subnet that isn't
really used for anything anymore:
http://bgp.he.net/net/44.0.0.0/8#_netinfo
Just a thought.....
After playing with bmx6 for a while I thought I'd look more into babel.
I've been trying to figure out what we'd need in order to use babel for the
firmware. Here are my thoughts:
Each node has an IPv4 subnet and an IPv6 subnet.
The IPv4 subnet is assigned using makenode.
The IPv6 subnet is also assigned by makenode, generated randomly using
generate-ipv6-address from
http://www.pps.univ-paris-diderot.fr/~jch/software/files/
The IPs and subnets are statically configured in openwrt config for each
node just like they are in the old sudomesh firmware, and each node assigns
IPv4 and IPv6 addresses to clients using dhcp. All nodes have their own
subnets and run dhcp servers (as opposed to the old firmware where only
internet-connected nodes ran dhcp).
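In UCI terms, makenode would just be writing something like the following
(a sketch only; the section and interface names are made up, and the
addresses are per-node):

uci set network.mesh=interface
uci set network.mesh.ifname='eth0.1'
uci set network.mesh.proto='static'
uci set network.mesh.ipaddr='10.42.7.1'
uci set network.mesh.netmask='255.255.255.192'
uci commit network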
If tunneldigger succeeds in establishing a tunnel, the node becomes an
internet gateway and announces its route to the internet using:
babeld -C 'redistribute if eth0 metric 128' mesh0
To begin with, maybe we can base the metric on something like the average of
the node's upload and download bandwidth limits? I'm not exactly sure
how this metric stuff works for these manual route announcements. I assume
the specified metric is the base metric and then normal babel metric
calculations happen on top of that?
In the longer term we could create a babel extension that attaches
information to route announcements about delay to the vpn server and
currently available internet/tunnel bandwidth (averaged since the last
route announcement). We can see how the babel diversity routing extension
is implemented and base it on that:
https://tools.ietf.org/html/draft-chroboczek-babel-diversity-routing-00
I'm not sure how we'd measure available bandwidth on a live connection
without saturating it. Maybe we can instead measure whether the bandwidth
is currently saturated by looking at how many packets are getting
queued/dropped? We could also do something even simpler and look at current
bandwidth usage vs. the user-selected bandwidth limit. If we do this, then
for nodes with no bandwidth limit we'd have to measure the available
bandwidth, e.g. the first time the node finds an internet connection. We
could just skip this feature in the first firmware version and require a
user-selected bandwidth limit, but then we run the risk of the user
selecting a limit that's higher than their available connection speed.
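For the queued/dropped idea, the kernel already exposes per-qdisc counters,
so checking for saturation could be as simple as periodically polling:

tc -s qdisc show dev eth0

and watching how the dropped/overlimits numbers change between samples.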
In the future we can also replace tunneldigger with something using
foo-over-udp, but I think tunneldigger with the "only try to connect when
a ping succeeds" addition is good enough for now.
I must say that the more I look at it the more I'm liking babel. It seems
to me that bmx6 has several oddities:
* Can't do IPv4 and IPv6 at the same time.
* Can't have several nodes announcing the same route (e.g. to the internet).
* Instead it uses tunnel announcements for internet gateway selection, even
though this adds the extra overhead of having a tunnel from each
non-internet-connected node to an internet-connected node for no apparent
reason. The overhead might not be bad, but the plurality of tunnel
interfaces makes debugging kinda confusing and it just seems... dirty.
* The weirdness with different nodes ending up on different IPv6 subnets.
Are there any advantages to using bmx6? The only one I can think of right
now is that it already has some internet bandwidth metric stuff
implemented, but on the other hand babel has its diversity routing
extension, which is probably going to be important for us in the future if
we don't want to lose a lot of bandwidth on multi-hop routes.
Does babel have any problems I haven't considered?
Thoughts? Comments? Anything I overlooked?
--
marc/juul
On Sun, Jan 11, 2015 at 9:46 AM, April Glaser <april.glaser(a)riseup.net>
wrote:
>
> hey!! could you all fill out what dates work for you on this poll to have
> a strategy meeting to assess where we are now and what our next steps will
> be for people's open?
>
Planning? I think we should spend some time talking about direction, but we
shouldn't spend a whole 1-2 days on that. We should spend most of the time
actually getting work done. Here's what I'm thinking about the work we need
to do to get to a launched mesh:
1. finish firmware with new protocol (at least enough that the basics are
working)
2. moar tall nodes mounted!
3. link 500+ mbit internet connection to mesh (optional, but nice to have)
4. make ordering system for pre-flashed nodes (optional, but nice to have)
5. run beta test with smaller mesh
6. reach out to wider community while test is running in prep of launch
7. complete intro material for new node owners
8. launch crowdfunding campaign where we sell pre-flashed routers as perks
9. have large working mesh
10. everything works as expected with absolutely no surprises
Not all of these block each other (e.g. #1 doesn't block #7), but #1 and #2
do block #9.
Thanks for setting up the dudle April.
btw, who filled out the dudle anonymously?
--
marc/juul