[mesh-dev] Slightly strange tunneldigger behavior?

max b maxb.personal at gmail.com
Mon May 25 17:05:38 PDT 2015


Yeah we had this conversation ;)

I went and fixed the repo recently so that we had a cleaner line between
the two. Shouldn't be too much of a difference if we're developing on
"master" and keep a clean "upstream" branch though right?

I can go through maybe this week and put together a pull request. Thanks
for the reminder.
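For anyone following along, the branch layout Mitar describes below can be played out in a throwaway repo. This is only a sketch: the branch names ("sudoroom", "fix-l2tp-hook") and repo paths are illustrative, not the actual sudomesh branches.

```shell
# Sketch of the fork layout: master mirrors upstream, local changes live
# on a "sudoroom" branch, and feature branches (the ones that become pull
# requests) start from the clean master.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the upstream tunneldigger repository.
git -c init.defaultBranch=master init -q upstream
git -C upstream -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "upstream history"

# The fork: master stays a clean mirror of upstream; day-to-day changes
# live on "sudoroom" (which GitHub can show as the default branch).
git clone -q upstream fork
cd fork
git config user.name dev
git config user.email dev@example.com
git checkout -q -b sudoroom
git commit -q --allow-empty -m "sudoroom-specific configuration"

# Feature branches start from the clean master, so a pull request made
# from one contains only the feature commits, never local config.
git checkout -q -b fix-l2tp-hook master
git commit -q --allow-empty -m "fix: l2tp up hook"
git log --oneline master..fix-l2tp-hook
```

Because "fix-l2tp-hook" branches from master rather than from "sudoroom", merging it upstream never drags the local configuration commits along.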

On Mon, May 25, 2015 at 4:59 PM, Mitar <mitar at tnode.com> wrote:

> Hi!
>
> BTW, make pull requests with your changes upstream. ;-)
>
> Also, committing directly to master is not upstream-friendly. It would be
> much better to have a "sudoroom" branch inside your fork (you can
> configure it as the default branch on GitHub) and keep your working
> version of tunneldigger there. Then you make feature branches and open
> pull requests from those.
>
> The problem now is that upstream has to merge exactly the commits you
> made. If we do not do that, and for example merge a rebased commit
> instead, then from that point on you will have a different master branch
> in your git history, which will interfere with any future pull requests.
>
>
> Mitar
>
> > Hi!
> >
> > We have a general development mailing list and a nodewatcher mailing
> > list. Since those are the two main projects, the development list is
> > mostly about tunneldigger.
> >
> > But you will not die if you also get the occasional e-mail about how
> > people are connecting sensors to nodes, or some other mesh-related
> > news like a new cheap router on the market. :-)
> >
> > Our mailing lists are pretty quiet; we use real-time chat for most
> > work. It would probably be easiest to just jump into our channel. That
> > is the most effective way.
> >
> > http://dev.wlan-si.net/wiki/Skype
> >
> > Yea, closed source. ;-)
> >
> > For development we mostly use our Trac and its tickets. We prefer
> > having discussions about particular things there, so that they stay
> > nicely documented and searchable. You can also open a ticket there and
> > attach your logs and such. That is in fact even more preferred than
> > Skype or the mailing lists, but it seems people prefer real-time chat
> > over longer-term, more useful tickets. :-)
> >
> >
> > Mitar
> >
> >> Oh wait Mitar, there isn't a specific tunneldigger list is there? Just
> >> the wlan-si dev right?
> >>
> >> On Mon, May 25, 2015 at 4:25 PM, Mitar <mitar at tnode.com> wrote:
> >>
> >>> Hi!
> >>>
> >>> Interesting.
> >>>
> >>> Such things would be useful to cross-post also to tunneldigger
> >>> development mailing list:
> >>>
> >>> https://wlan-si.net/lists/info/development
> >>>
> >>> Maybe also some other users encountered that.
> >>>
> >>>
> >>> Mitar
> >>>
> >>>> Hey so I'm trying to debug some slightly strange tunneldigger
> >>>> behaviour and thought I'd check to see if anyone here has any
> >>>> thoughts.
> >>>>
> >>>> This page shows ping times to a few mesh nodes from a VPS monitor
> >>>> server:
> >>>>
> >>>> http://192.241.217.196/smokeping/smokeping.cgi?target=Mesh
> >>>>
> >>>> Both MaxbMyNet1 and MaxbMyNet2 show a consistent increase in ping
> >>>> times starting Monday (5-25-15) at like 11am or so.
> >>>>
> >>>> MaxbMyNet1 has a direct ethernet connection to the internet and is
> >>>> tunnelling to the exit server, while MaxbMyNet2 does not have any
> >>>> ethernet connection and is instead connecting to the internet
> >>>> through MaxbMyNet1.
> >>>>
> >>>> If I ssh into MaxbMyNet1, I can see that the l2tp0 tunnel is
> >>>> correctly set up and that tunneldigger seems to be working correctly:
> >>>>
> >>>> root at my:~# ps | grep tunneldigger
> >>>>  9538 root      5296 S    /usr/bin/tunneldigger -u Sudomesh-MyNet-2 -i
> >>>> l2tp0 -t 1 -b 104.236.181.226 8942 -L 20000kbit -s /opt/mesh/tunnel_hook
> >>>> -I eth0.1
> >>>>
> >>>> root at my:~# ip addr show l2tp0
> >>>> 18: l2tp0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1438 qdisc htb state
> >>>> UNKNOWN group default qlen 1000
> >>>>     link/ether da:d8:46:b7:d7:9b brd ff:ff:ff:ff:ff:ff
> >>>>     inet 100.64.3.1/32 scope global l2tp0
> >>>>        valid_lft forever preferred_lft forever
> >>>>     inet6 fe80::d8d8:46ff:feb7:d79b/64 scope link
> >>>>        valid_lft forever preferred_lft forever
> >>>> root at my:~# ip addr show eth0.1
> >>>> 11: eth0.1 at eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> >>>> noqueue state UP group default
> >>>>     link/ether 00:90:a9:0b:73:cb brd ff:ff:ff:ff:ff:ff
> >>>>     inet 192.168.13.37/24 brd 192.168.13.255 scope global eth0.1
> >>>>        valid_lft forever preferred_lft forever
> >>>>     inet 192.168.0.102/24 brd 192.168.0.255 scope global eth0.1
> >>>>        valid_lft forever preferred_lft forever
> >>>>     inet6 fe80::290:a9ff:fe0b:73cb/64 scope link
> >>>>        valid_lft forever preferred_lft forever
> >>>>
> >>>> Even more strangely, I can ping the world-routable IP of the exit
> >>>> server and get back ping times consistent with the lower line of the
> >>>> graph:
> >>>>
> >>>> root at my:~# ping 104.236.181.226
> >>>> PING 104.236.181.226 (104.236.181.226): 56 data bytes
> >>>> 64 bytes from 104.236.181.226: seq=0 ttl=52 time=14.670 ms
> >>>> 64 bytes from 104.236.181.226: seq=1 ttl=52 time=14.264 ms
> >>>> 64 bytes from 104.236.181.226: seq=2 ttl=52 time=13.241 ms
> >>>> 64 bytes from 104.236.181.226: seq=3 ttl=52 time=13.949 ms
> >>>> 64 bytes from 104.236.181.226: seq=4 ttl=52 time=13.626 ms
> >>>> 64 bytes from 104.236.181.226: seq=5 ttl=52 time=18.133 ms
> >>>> 64 bytes from 104.236.181.226: seq=6 ttl=52 time=13.531 ms
> >>>>
> >>>> And if I manually specify ping packets to go over the eth0.1
> >>>> interface and NOT the l2tp0 interface, they have low ping times:
> >>>>
> >>>> root at my:~# ping -I eth0.1 8.8.8.8
> >>>> PING 8.8.8.8 (8.8.8.8): 56 data bytes
> >>>> 64 bytes from 8.8.8.8: seq=0 ttl=55 time=21.834 ms
> >>>> 64 bytes from 8.8.8.8: seq=1 ttl=55 time=16.872 ms
> >>>> 64 bytes from 8.8.8.8: seq=2 ttl=55 time=19.764 ms
> >>>> 64 bytes from 8.8.8.8: seq=3 ttl=55 time=17.265 ms
> >>>> 64 bytes from 8.8.8.8: seq=4 ttl=55 time=16.989 ms
> >>>> 64 bytes from 8.8.8.8: seq=5 ttl=55 time=18.188 ms
> >>>>
> >>>>
> >>>> However, if I ping over the tunnel and through the exit server I get
> >>>> the slower times:
> >>>>
> >>>> root at my:~# ping 8.8.8.8
> >>>> PING 8.8.8.8 (8.8.8.8): 56 data bytes
> >>>> 64 bytes from 8.8.8.8: seq=0 ttl=56 time=28.958 ms
> >>>> 64 bytes from 8.8.8.8: seq=1 ttl=56 time=29.211 ms
> >>>> 64 bytes from 8.8.8.8: seq=2 ttl=56 time=28.965 ms
> >>>> 64 bytes from 8.8.8.8: seq=3 ttl=56 time=29.022 ms
> >>>>
> >>>>
> >>>> And then, weirdly, restarting tunneldigger on the MyNet seems to
> >>>> have fixed it (look for the new line that will probably start around
> >>>> 16:00 on Monday, which will be at the lower time).
> >>>>
> >>>> Thoughts? I'll keep taking a look at it, and it's possible it has
> >>>> something to do with our up hook on the exit server, which adds the
> >>>> new l2tp interface to babel, but wanted to put it out there in case
> >>>> anyone had any ideas.
> >>>>
> >>>>
> >>>> Max
> >>>>
> >>>>
> >>>>
> >>>> _______________________________________________
> >>>> mesh-dev mailing list
> >>>> mesh-dev at lists.sudoroom.org
> >>>> https://sudoroom.org/lists/listinfo/mesh-dev
> >>>>
> >>>
> >>> --
> >>> http://mitar.tnode.com/
> >>> https://twitter.com/mitar_m
> >>>
> >>
> >
>
> --
> http://mitar.tnode.com/
> https://twitter.com/mitar_m
>