I thought I understood how route selection works: the most specific route is
always selected.
So if you have a route for 10.0.0.0/8 and another for 10.0.0.0/24, then the
/24 will be selected when the destination falls within the /24.
This works as expected.
However, if I add a /32 route, e.g. 10.0.0.42/32, then it gets prioritized
_lower_ than the /8 and /24.
Why does this happen and is there anything I can do about it?
Example:
> ip route add 10.0.0.42/32 src 10.0.0.1 dev eth0
> ip route add 10.0.0.0/24 src 10.0.0.1 dev eth0
> ip route
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.1
10.0.0.0/8 dev eth0 proto kernel scope link src 10.0.0.1
10.0.0.42 dev eth0 proto kernel scope link src 10.0.0.1
Aaaa! Why is the /32 all the way at the bottom?
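For what it's worth, the kernel's forwarding decision is longest-prefix match regardless of the order in which `ip route` prints table entries, and `ip route get 10.0.0.42` will show which route would actually be chosen for a given destination. Here is a minimal Python sketch of that selection rule, using only the stdlib `ipaddress` module and a made-up table mirroring the example above:

```python
import ipaddress

# Hypothetical routing table mirroring the example above.
routes = [
    ipaddress.ip_network("10.0.0.0/24"),
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("10.0.0.42/32"),
]

def select_route(dest, table):
    """Return the matching route with the longest prefix (most specific)."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in table if addr in net]
    return max(matches, key=lambda net: net.prefixlen, default=None)

print(select_route("10.0.0.42", routes))  # → 10.0.0.42/32
print(select_route("10.0.0.7", routes))   # → 10.0.0.0/24
print(select_route("10.1.2.3", routes))   # → 10.0.0.0/8
```

If `ip route get` on the box disagrees with this, something other than plain prefix-length selection (a policy rule, a metric, a different table) is involved.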
In other news: Made some progress on extender node stuff tonight. I decided
to use VLANs to put both the open and adhoc networks on the extender nodes
and have updated the configs for tp-link and extender node accordingly.
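For the archives, the VLAN half of that change looks roughly like the fragment below. The VLAN IDs and interface names here are made up for illustration; the real values live in the sudowrt configs.

```
# Hypothetical OpenWrt /etc/config/network fragment: carry the open
# and adhoc networks as two tagged VLANs on the extender node's port.
config interface 'open'
        option ifname 'eth0.10'   # VLAN 10 (illustrative)
        option proto 'static'

config interface 'adhoc'
        option ifname 'eth0.11'   # VLAN 11 (illustrative)
        option proto 'static'
```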
--
marc/juul
Hey so I'm trying to debug some slightly strange tunneldigger behaviour and
thought I'd check to see if anyone here has any thoughts.
This page shows ping times to a few mesh nodes from a VPS monitor server:
http://192.241.217.196/smokeping/smokeping.cgi?target=Mesh
Both MaxbMyNet1 and MaxbMyNet2 show a consistent increase in ping times
starting Monday (5-25-15) at like 11am or so.
MaxbMyNet1 has a direct ethernet connection to the internet and is
tunnelling to the exit server, while MaxbMyNet2 does not have any ethernet
connection and is instead connecting to the internet through MaxbMyNet1.
If I ssh into MaxbMyNet1, I can see that the l2tp0 tunnel is correctly
set up and that tunneldigger seems to be working correctly:
root@my:~# ps | grep tunneldigger
9538 root 5296 S /usr/bin/tunneldigger -u Sudomesh-MyNet-2 -i
l2tp0 -t 1 -b 104.236.181.226 8942 -L 20000kbit -s /opt/mesh/tunnel_hook -I
eth0.1
root@my:~# ip addr show l2tp0
18: l2tp0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1438 qdisc htb state
UNKNOWN group default qlen 1000
link/ether da:d8:46:b7:d7:9b brd ff:ff:ff:ff:ff:ff
inet 100.64.3.1/32 scope global l2tp0
valid_lft forever preferred_lft forever
inet6 fe80::d8d8:46ff:feb7:d79b/64 scope link
valid_lft forever preferred_lft forever
root@my:~# ip addr show eth0.1
11: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP group default
link/ether 00:90:a9:0b:73:cb brd ff:ff:ff:ff:ff:ff
inet 192.168.13.37/24 brd 192.168.13.255 scope global eth0.1
valid_lft forever preferred_lft forever
inet 192.168.0.102/24 brd 192.168.0.255 scope global eth0.1
valid_lft forever preferred_lft forever
inet6 fe80::290:a9ff:fe0b:73cb/64 scope link
valid_lft forever preferred_lft forever
Even more strangely, I can ping the world-routable IP of the exit server
and get back ping times consistent with the lower line of the graph:
root@my:~# ping 104.236.181.226
PING 104.236.181.226 (104.236.181.226): 56 data bytes
64 bytes from 104.236.181.226: seq=0 ttl=52 time=14.670 ms
64 bytes from 104.236.181.226: seq=1 ttl=52 time=14.264 ms
64 bytes from 104.236.181.226: seq=2 ttl=52 time=13.241 ms
64 bytes from 104.236.181.226: seq=3 ttl=52 time=13.949 ms
64 bytes from 104.236.181.226: seq=4 ttl=52 time=13.626 ms
64 bytes from 104.236.181.226: seq=5 ttl=52 time=18.133 ms
64 bytes from 104.236.181.226: seq=6 ttl=52 time=13.531 ms
And if I manually specify that ping packets go over the eth0.1 interface
and NOT the l2tp0 interface, they have low ping times:
root@my:~# ping -I eth0.1 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=55 time=21.834 ms
64 bytes from 8.8.8.8: seq=1 ttl=55 time=16.872 ms
64 bytes from 8.8.8.8: seq=2 ttl=55 time=19.764 ms
64 bytes from 8.8.8.8: seq=3 ttl=55 time=17.265 ms
64 bytes from 8.8.8.8: seq=4 ttl=55 time=16.989 ms
64 bytes from 8.8.8.8: seq=5 ttl=55 time=18.188 ms
However, if I ping over the tunnel and through the exit server I get the
slower times:
root@my:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=56 time=28.958 ms
64 bytes from 8.8.8.8: seq=1 ttl=56 time=29.211 ms
64 bytes from 8.8.8.8: seq=2 ttl=56 time=28.965 ms
64 bytes from 8.8.8.8: seq=3 ttl=56 time=29.022 ms
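To put a number on the gap, here's a quick throwaway script that averages the `time=` values out of ping output like the runs above (plain stdlib, nothing node-specific; the sample strings are just the quoted runs abbreviated):

```python
import re
from statistics import mean

def mean_rtt(ping_output):
    """Average the time= values (in ms) from ping's output text."""
    times = [float(t) for t in re.findall(r"time=([\d.]+) ms", ping_output)]
    return mean(times) if times else None

# The two runs quoted above, abbreviated:
over_tunnel = """64 bytes from 8.8.8.8: seq=0 ttl=56 time=28.958 ms
64 bytes from 8.8.8.8: seq=1 ttl=56 time=29.211 ms"""
via_eth = """64 bytes from 8.8.8.8: seq=0 ttl=55 time=21.834 ms
64 bytes from 8.8.8.8: seq=1 ttl=55 time=16.872 ms"""

print(mean_rtt(over_tunnel) - mean_rtt(via_eth))  # the gap, in ms
```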
And then, weirdly, restarting tunneldigger on the MyNet seems to have fixed
it (look for the new line that will probably start around 16:00 on Monday,
at the lower time).
Thoughts? I'll keep taking a look at it. It's possible it has something
to do with our up hook on the exit server, which adds the new l2tp interface
to babel, but I wanted to put it out there in case anyone had any ideas.
Max
So I built the tunneldigger package that Alex was working on. He simply
added an option for binding to a specific interface when attempting
to create a tunnel. I built it and tested it on the two MyNets in my house
(one with a direct ethernet connection to the internet and one without) and
they are both working exactly the way I expected! Hooray!
I had to make a couple of changes to our nodewatcher-firmware-packages (the
repo of openwrt package makefiles for wlanslovenija code) and do a few
other repo-architecture things. It shouldn't break anything, but
technically the sudowrt-firmware repo will need a rebuild. Should be an
excellent opportunity to test out the "rebuild" script, eh?
For Tunneldigger we're still going to need a more permanent fix once we
want to use a domain name instead of IP addresses. The DNS lookup is what
was causing us all of that churn, so we'll want to rewrite it to avoid
that particular step.
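Not the actual tunneldigger fix, just a sketch of the shape I'd expect it to take: resolve the broker hostname once (or at most once per TTL) and reuse the cached address across reconnect attempts, instead of hitting DNS every time. The resolver function is injected here so the sketch is testable without any network; the hostname is made up.

```python
import time

class CachedResolver:
    """Resolve a hostname at most once per `ttl` seconds (sketch)."""
    def __init__(self, resolve, ttl=300, clock=time.monotonic):
        self._resolve = resolve   # e.g. socket.gethostbyname
        self._ttl = ttl
        self._clock = clock
        self._cache = {}          # host -> (address, expiry time)

    def lookup(self, host):
        entry = self._cache.get(host)
        now = self._clock()
        if entry is None or now >= entry[1]:
            self._cache[host] = (self._resolve(host), now + self._ttl)
        return self._cache[host][0]

# Usage with a fake resolver that counts how often DNS is actually hit:
calls = []
r = CachedResolver(lambda h: calls.append(h) or "104.236.181.226", ttl=300)
r.lookup("exit.example.org")   # hypothetical broker hostname
r.lookup("exit.example.org")   # served from cache, no second resolve
print(len(calls))              # → 1
```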
Lemme know if I missed anything?
Thanks!
Max
It's 5 am and I'm just about to sleep.
I've been tweaking sudowrt-firmware and makenode and I now have a working
setup with a tp-link wdr3500 running the home node firmware and a bullet m2
running the extender node firmware that are talking to each other with
notdhcp correctly, without any manual configuration after flashing.
I took the liberty of adding max's ssh pub key (same one as sudoroom.org)
to the extender node firmware since there's really no other way to get into
it during development. Alex feel free to add your own as well.
The home nodes have a "for development only" IP of 172.22.0.1 and the
extender nodes have 172.22.0.2.
SO! Just some notdhcp hook script wrangling left and then I'll get the new
web gui finalized enough that it can replace the luci gui, and maybe
someone can package it into sudowrt-packages? Then that's it! Ready to go
for a usable 0.2.
Oh btw, I was wondering how we should parcel up the ethernet ports. Here's
one option:
Port 0: The mesh
Port 1: The private network
Port 2: An extender node
Port 3: An extender node
Or we could put both ports 0 and 1 on the mesh? Whaddaya think? We can make
them dynamically switchable from the web gui in the next version.
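If the first option wins, the switch side could be sketched like the fragment below. Port and VLAN numbers are purely illustrative, and port 5 is assumed to be the CPU port (tagged); the real numbering depends on the router's switch.

```
# Hypothetical /etc/config/network switch fragment for option one.
config switch_vlan            # Port 0: the mesh
        option device 'switch0'
        option vlan '1'
        option ports '0 5t'

config switch_vlan            # Port 1: the private network
        option device 'switch0'
        option vlan '2'
        option ports '1 5t'

config switch_vlan            # Ports 2-3: extender nodes
        option device 'switch0'
        option vlan '3'
        option ports '2 3 5t'
```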
--
marc/juul
Hey so is anyone planning on being at sudo this Sunday? There are probably
some things we can do, though it will depend a bit on whether Will will be
there or not.
I went to the ham store, but they definitely didn't have any affordable
mounts that fit our purposes. That being said, I actually think it'd be
pretty trivial to make a mount ourselves with a bit of angle iron and a few
bolts. I don't imagine we'll have the time this Sunday to do that, but if
anyone were to be excited about it, some group could go in that direction.
There is plenty of cabling and setting up routers to do, though. Who's
coming with me....?
I changed the default IP in sudowrt-firmware and makenode
All instances of 192.168.13.37 have been changed to 172.22.0.1
Netmask is still /24
It was getting annoying to work with the routers when on a 192.168.0.0/16
network (such as the one in sudo room).
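A change like this is mechanical; for anyone making a similar sweep in their own checkout, here's a throwaway sketch that rewrites every occurrence under a directory (purely illustrative, not committed to either repo):

```python
import pathlib

def replace_in_tree(root, old, new):
    """Rewrite every occurrence of `old` to `new` in text files under `root`."""
    changed = 0
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text()
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        if old in text:
            path.write_text(text.replace(old, new))
            changed += 1
    return changed  # number of files rewritten

# e.g.: replace_in_tree("sudowrt-firmware", "192.168.13.37", "172.22.0.1")
```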
--
marc/juul
Meshmap is one of the last things remaining on the old server. Both
Nodeshot and its Django backend are running code from 2012 with local
modifications. Can't update any single thing without making the rest
incompatible.
Anyone else wanna take this one? :)
On Thu, May 14, 2015 at 4:02 PM, yar <yardenack(a)gmail.com> wrote:
> Please don't do anything important until the DNS finishes propagating.
> Avoid any important mailing list messages, wiki edits, blog posts,
> etc. I will send an update when we're done. Thanks!
Ok, all should be well now. If you have any trouble please email
sudo-sys(a)lists.sudoroom.org. Thanks!
* I made a rebuild script for sudowrt-firmware but I haven't tested it yet.
* Minor fixes to build script
* notdhcp now compiles with its new library dependencies both as part of
OpenWRT and using the cross_compile_env script. There are options to
compile with and without integrated switch support so you can still compile
it on a non-openwrt system and those features will simply be disabled. The
code for the actual switch port polling is still not done.
--
marc/juul