Hey folks,
We were talking a while ago about having some sort of development tracker
so that we could better organize the tasks we wanted to accomplish. I was
interested in fulcrum, so I went ahead and deployed an instance of it:
http://sudomeshtrax.herokuapp.com/
I'm not sure that it's exactly what we need, but I'm currently using it to
keep track of a few things. Also - it seems to have a couple weird display
bugs, so we can try to tackle them if they end up being an issue.
It should be open to registration, although I'd ask that you actually be
working on peoplesopen.net development if you're using it.
I'm still personally a fan of trac, but I think it's a much larger project
to take on than I'm interested in at the moment (especially upkeep, but we
can see).
Thanks!
Max
Did anyone have trouble getting the Ethernet on the Meraki
Mini to work under OpenWRT? I'm able to bypass the MTD and
boot into Barrier Breaker over TFTP, but I get no eth0.
Snippet from boot:
[ 1.690000] eth0: Atheros AR231x: 00:18:0a:50:65:40, irq 4
[ 1.890000] libphy: ar231x_eth_mii: probed
[ 1.900000] adm6996: eth0: ADM6996FC model PHY found.
[ 1.900000] eth0: Could not attach to PHY
[ 1.910000] eth0: mdiobus_probe failed
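In case anyone else wants to poke at the same thing, these are the generic
checks I'd run on the booted system (nothing Meraki-specific, just standard
sysfs/dmesg poking, so treat it as a sketch):

# check whether an adm6996 driver was loaded as a module (built-in drivers won't show up here)
grep -i adm /proc/modules
# see what phylib actually registered on the MDIO bus
ls /sys/bus/mdio_bus/devices/ /sys/bus/mdio_bus/drivers/
# pull all the PHY/eth-related boot messages, not just the snippet above
dmesg | grep -iE 'eth0|phy|adm|mdio'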
By the way, I have learned that the AR231x support is not in
the mainline kernel, but is an out-of-tree OpenWRT addition.
I think it has since been merged into linux-mti (the MIPS tree)
as ATH25.
Alex
One big issue that we will have to deal with before really launching is the
fact that several home routers use 10.0.0.0. I honestly don't know if any
of them use 10.0.0.0/8 or if they're all 10.0.0.0/24, but either way it's a
problem.
There are two possible solutions.
= We don't use 10.x.x.x for the mesh =
There _are_ alternatives, but none of them look great. Here are some I've
looked at.
== 44.0.0.0/8 ==
This is the HAM or AMPRnet subnet. It looks like it's very scarcely used,
if it's really used at all. It's for HAM packet radio and experimentation.
On the one hand I don't want to piss off the HAMs, but on the other hand
their entire subnet is in violation of net neutrality, since they don't
allow commercial traffic on it. This is definitely the easy
solution.
Here's an overview of their allocations:
https://portal.ampr.org/networks.php
I'm sure if we did a ping scan of the address space we'd see only very few
hosts. Anyone wanna take a stab at that?
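For what it's worth, a scan could look something like the sketch below; the
rate cap and output file are just guesses, so adjust as needed:

# ICMP-echo-only sweep of AMPRNet: -sn skips port scanning, -n skips DNS
nmap -sn -n -PE --max-rate 2000 -oG ampr-ping-scan.txt 44.0.0.0/8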
== 238.0.0.0/8 ==
Using multicast address space as unicast unfortunately does not work.
== 240.0.0.0/8 ==
All of 240.0.0.0 and above is designated as "future use". However, an IETF
proposal to put it into use was rejected, apparently in part because
many IP stacks just outright reject or ignore any packets from this address
space. We'd need an overview of which systems are affected, but I don't
really think this is a viable option.
= We use 10.x.x.x for the mesh =
If we are to use 10.x.x.x for the mesh then we will have to do something
clever/ugly.
A solution would contain the following parts:
* The DHCP client would have to remap 10/8 DHCP responses on eth0 to a
different subnet (this could be 240/8) such that the interface takes on an
address different from the one provided by the DHCP server.
* ARP spoofing would have to be enabled on eth0 such that the node will
respond to ARP requests for the address it was assigned via DHCP.
* All incoming traffic on eth0 with a destination of 10/8 would have to be
remapped to 240/8 before routing happens
* All outgoing traffic on eth0 with a destination of 240/8 would have to be
remapped to 10/8 after routing happens
To accomplish the DHCP fakery, a modification to the openwrt dhcp client
would have to be written. I don't foresee that being very difficult.
Depending on the ARP spoofing difficulty, it could be as simple as adding
support for a hook script that runs on dhcp lease acceptance.
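If it does end up being a hook script, I imagine something along these
lines, assuming the client is busybox udhcpc and that its usual environment
variables ($1, $ip, $router) are available; the hook mechanism itself and
the exact variable handling are made up for illustration:

#!/bin/sh
# hypothetical hook sketch: rewrite a 10/8 lease into 240/8 before the stock
# lease-handling code configures the interface
case "$1" in
    bound|renew)
        case "$ip" in
            10.*)
                ip="240.${ip#10.}"            # e.g. 10.0.0.123 -> 240.0.0.123
                router="240.${router#10.}"    # assumes the gateway is also in 10/8
                ;;
        esac
        ;;
esac
# ...then hand the remapped values off to the normal lease handling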
I'm not sure how best to do the ARP spoofing. There may be mechanisms built
into the Linux kernel. It may be that we can actually just assign the
10.x.x.x address gotten from the DHCP server to eth0 in addition to the
240.x.x.x address, and just ensure that no mention of the 10.x.x.x address
appears in the routing table and that the kernel is configured to _not_
respond to ARP requests on interfaces other than the one they are
inquiring about (the default is to always answer).
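If the kernel-only approach works, I think the knobs involved are the
arp_ignore sysctl plus iproute2's noprefixroute flag (assuming the kernel on
our nodes supports it for IPv4); completely untested, and the address below
is a placeholder:

# only answer ARP if the target address is configured on the interface the
# request came in on (default 0 = answer for any local address)
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.eth0.arp_ignore=1
# add the DHCP-assigned 10.x address alongside the 240.x one, without
# installing a connected route for it
ip addr add 10.0.0.123/24 dev eth0 noprefixroute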
It seems that the address remapping might already be possible. I haven't
yet tested if it works as expected, but the following commands seem to be
what we'd need:
iptables -t nat -A PREROUTING -i eth0 -s 10.0.0.0/8 -j NETMAP --to 240.0.0.0/8
iptables -t nat -A POSTROUTING -o eth0 -d 240.0.0.0/8 -j NETMAP --to 10.0.0.0/8
I'll try to set up a little experiment later tonight to see if this
remapping works as expected. Honestly though, using 44.0.0.0/8 seems really
attractive to me at this point.
--
marc/juul
Hey so I was looking into the ability to broadcast mDNS across subnets, so
that a client of a node could broadcast a service across the whole
10.0.0.0/8 mesh subnet.
If I'm reading the page correctly, it looks like avahi-daemon supports this
functionality with:
*enable-reflector=yes*
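For reference, the relevant bits of /etc/avahi/avahi-daemon.conf would look
something like this (the allow-interfaces values are placeholders for
whatever bridges/interfaces our nodes end up using):

[server]
use-ipv4=yes
# limit reflection to the interfaces we actually want bridged
allow-interfaces=br-lan,mesh0

[reflector]
enable-reflector=yes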
That being said, it seems that running avahi-daemon on openwrt requires the
following packages: avahi-daemon, libavahi, libexpat, and libdbus, which
total ~250kB.
I know that Marc was working on a mini-mdns package, but I assume that mDNS
reflection might be a more involved task (I'm also not sure how stable we
could get it).
I'm going to install it on a couple nodes and test it out, but I guess we
can keep an eye on space/memory constraints.
Max
Hey mesh-dev,
I'm new. Greetings. Don't think I'm going to make the meeting tonight, so I
thought I'd post this here.
I've been poking around in the tunneldigger today per this
<https://github.com/sudomesh/tunneldigger/issues/2> issue. Wanted to
mention that there are two relevant things happening when the tunneldigger
client starts up:
1. Broker selection - attempts to resolve each broker address
asynchronously, flags the first successful one and asks it for a tunnel.
Churns forever if none of the brokers respond.
2. Tunnel creation - churns for 30s, then returns to (1).
I think the issue Max created refers to the former. We can probably solve
that, as noted in the issue, by sleeping for a little while if we can't
connect to any brokers.
I think the second issue might be worth addressing as well, though. Maybe
just sleep in between attempts to create the tunnel or before restarting
broker selection? Any thoughts or opinions?
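To make the second idea concrete, the behaviour I have in mind is roughly
this (just a shell illustration of the retry/sleep logic, not tunneldigger's
actual code or CLI; the client invocation is a placeholder):

delay=5
while true; do
    # placeholder for however the client gets started; assume it exits
    # non-zero when broker selection or tunnel creation gives up
    /path/to/tunneldigger-client && break
    sleep "$delay"
    delay=$((delay * 2))
    [ "$delay" -gt 60 ] && delay=60    # cap the backoff at 60 seconds
done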
Thanks,
Oren
I migrated the alpha monitor server to babel routing + tunneldigger and
installed the first babel test node at the omni:
http://192.241.217.196/smokeping/smokeping.cgi?target=Omni.SudomeshAlpha
I'm going to walk through the process of flashing our firmware and
configuring a new node with makenode on Thursday. I've also got to see if
cacti is reporting SNMP correctly, but I think we're making good headway.
Over and out,
Max
Hello,
I spoke with LMI regarding an uplink to the OMNI.
The person I spoke with wasn't very familiar with their wireless
deployment, but assured me that their best offering for us would
be VDSL (he said that "there's been a change in the technology
they can offer us"). The CO is apparently behind Genova Deli
near 49th. This means we could hope for about 80Mbps.
Their rate (on the Business plan) is $59.99 + tax, with $99 for
gear and $60 for activation. They also offer line bonding (to
double capacity).
This is a reasonable option for the Omni, but I think it would
only serve us (peoplesopen/sudomesh) temporarily. Also, I
have always had poor latency with DSL, even with reasonable
speeds.
Discuss?
Alex