Writing

Software, technology, sysadmin war stories, and more.
Saturday, June 29, 2013

TCP simultaneous open and peer to peer networking

Many years ago, I picked up a copy of a book which taught me a great many things about networking as we use it on the Internet today. While I haven't had a reason to refer to it recently, there was a time when it never left my desk. In the days when I was actively playing with packet drivers and tcpdump and ka9q and things like that, it was essential to understanding what actually happened on the wire.

One thing which stuck with me from back then was an interesting quirk about the way TCP operated. The book uses a bunch of great timeline diagrams to show which packets are emitted by hosts and when, and this explains things like a three-way handshake or how you tear down a TCP connection. After it gets done with that, it then goes on to describe the "simultaneous open". I thought it was fascinating.

In that situation, both hosts somehow know to open connections to each other at the same time. Each one binds to a specific, agreed-upon local port rather than letting the stack pick an ephemeral one. The exact nature of how they figure out the port numbers to use and how they know when to connect to each other isn't given, but that's not important. What's interesting is what happens when they do this.

At a glance, you might think this results in two connections: one from A to B, and the other from B to A. However, because there's an explicit specification for how to handle this (RFC 793 covers the case), it instead collapses into a single connection. The SYNs which cross each other on the network wind up (eventually) bringing it up.
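The state transitions involved can be seen in a toy model. This is not a packet-level implementation, just the RFC 793 handshake states, and the `Endpoint` class and event names are my own invention for illustration, not any real API:

```python
# Toy model of the RFC 793 connection-setup states, showing how a
# simultaneous open collapses into one connection: both sides go
# SYN_SENT -> SYN_RECEIVED -> ESTABLISHED instead of the usual
# client/server split.

class Endpoint:
    def __init__(self, name):
        self.name = name
        self.state = "CLOSED"

    def active_open(self):
        # Sending our SYN moves us from CLOSED to SYN_SENT.
        self.state = "SYN_SENT"
        return "SYN"

    def receive(self, segment):
        if self.state == "SYN_SENT" and segment == "SYN":
            # Simultaneous open: our SYN crossed theirs in flight,
            # so we answer with a SYN+ACK instead of failing.
            self.state = "SYN_RECEIVED"
            return "SYN+ACK"
        if self.state == "SYN_RECEIVED" and segment == "SYN+ACK":
            self.state = "ESTABLISHED"
            return None
        raise ValueError(f"unexpected {segment} in {self.state}")

a, b = Endpoint("A"), Endpoint("B")

# Both sides send their SYN before seeing the other's: the
# segments cross on the network.
syn_from_a = a.active_open()
syn_from_b = b.active_open()

# Each side then receives the other's SYN and answers SYN+ACK...
synack_from_a = a.receive(syn_from_b)
synack_from_b = b.receive(syn_from_a)

# ...and receiving that SYN+ACK completes the single connection.
a.receive(synack_from_b)
b.receive(synack_from_a)

print(a.state, b.state)  # ESTABLISHED ESTABLISHED
```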

If you try to do this with two hosts on the same network, it can be hard to pull off. If one of the hosts gets going just a bit too early, its SYN will probably hit the other end before that one is ready to do anything. That host won't be expecting a connection on that port, and will fire off a RST. The too-quick host will then get that, assume nobody's listening, and give up. Then when the slower host "dials out", the same thing may happen in reverse.
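That RST behavior is easy to see for yourself on a single machine: a socket which is bound to a port but neither listening nor connecting answers an incoming SYN with a RST, which the caller sees as "connection refused".

```python
# Demo of the "too soon" failure: connect to a port that is occupied
# but not expecting anything, and watch the RST come back as a
# ConnectionRefusedError.
import socket

# Bind to an OS-chosen port but never call listen() or connect(),
# so the port is held yet not ready for anyone.
quiet = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
quiet.bind(("127.0.0.1", 0))
port = quiet.getsockname()[1]

try:
    # This SYN arrives before the other side is "dialing out", so
    # the kernel fires off a RST and the connect gives up.
    socket.create_connection(("127.0.0.1", port), timeout=2)
    refused = False
except ConnectionRefusedError:
    refused = True

quiet.close()
print("refused:", refused)  # refused: True
```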

It's easier to make this happen if there's some distance between the machines, but it also helps if they aren't going to emit a RST when a connection fails. It just turns out that ordinary consumer grade Internet connections happen to provide both of those traits most of the time. You're probably not "right next to" anyone of interest, and your "router" (plastic consumer-grade NAT box) probably will ignore any packets which don't match an established connection.

This means if one packet (from A) should make it to the other end (B) "too soon", then the NAT box at B will probably just drop it. Moments later, host B will start its own connection attempt, and that will add an entry to the NAT box's table of valid connections. When A retries and retransmits shortly after that, it will match and will be let through.
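That sequence can be sketched with a stateful-filter model. The `Nat` class and the peer addresses below are made up for illustration; real NAT boxes also rewrite addresses and ports, which this model skips to keep the dropped-then-delivered timeline visible:

```python
# Sketch of the stateful filtering described above: outbound traffic
# adds an entry to the box's table, and inbound packets are dropped
# unless they match an existing entry.

class Nat:
    def __init__(self):
        self.table = set()  # (inside, outside) flows we've seen

    def outbound(self, inside, outside):
        # A host behind the box dials out: remember the flow.
        self.table.add((inside, outside))

    def inbound(self, outside, inside):
        # Deliver only if an outbound packet already made the entry.
        return (inside, outside) in self.table

nat_b = Nat()                      # the plastic box in front of B

a = ("a.example", 4000)            # hypothetical peers and ports
b = ("b.example", 5000)

# A's first SYN arrives "too soon": B hasn't dialed out yet, so the
# box at B has no matching entry and silently drops it.
first_syn_delivered = nat_b.inbound(a, b)

# Moments later, B starts its own connection attempt toward A, which
# installs the entry in its own NAT box's table.
nat_b.outbound(b, a)

# When A retries and retransmits, it matches and is let through.
retransmit_delivered = nat_b.inbound(a, b)

print(first_syn_delivered, retransmit_delivered)  # False True
```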

When this works, it means you can take two hosts which both cannot receive incoming TCP connections from the outside world and manage to connect them to each other. You just have to have all of the details right: addresses, ports, and timing, plus nobody in the middle flinging RSTs at you.

So that's what I learned from this book: a weird little trick which can be used to do what might seem impossible at first glance. Sure, there's the matter of synchronization, but that's what third parties are for.

I've been disappointed that nothing seems to use a simultaneous TCP open to get things done. There are plenty of ways for it to go wrong, like if there are nefarious devices in-between which mangle port numbers or worse. I guess it just wouldn't be sufficiently reliable for someone to build a service on top of it.

If it could be made to work reliably, it could be quite a thing. Hosts could connect to a shared server somewhere to find each other, and then agree on specifics and bang away at their connection attempts until they succeeded with a simultaneous open. Then there would be a path between the two hosts which did not involve the third party and they could communicate with somewhat better privacy.
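The "bang away at their connection attempts" step might look something like the helper below, run on both sides once the shared server has handed out addresses, ports, and a start time. The `punch()` function is a hypothetical sketch, not a real NAT traversal library, and plenty of middleboxes would still defeat it:

```python
# Sketch of one side of the punching loop: keep connecting from a
# fixed local port to the peer's address until something sticks.
import socket
import time

def punch(local_port, peer_addr, deadline=10.0):
    """Repeatedly try to connect from a fixed local port to the peer.

    Every failed attempt still sends a SYN, which keeps a hole open
    in our own NAT box; eventually our SYN and the peer's line up
    and a single connection comes up (via simultaneous open, or a
    plain connect if the peer turns out to be reachable)."""
    end = time.monotonic() + deadline
    while time.monotonic() < end:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # SO_REUSEADDR lets us re-bind the same local port after a
        # failed attempt, so the peer always has one target to hit.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", local_port))
        s.settimeout(1.0)
        try:
            s.connect(peer_addr)
            return s
        except OSError:
            # RST, timeout, or drop: close up and try again.
            s.close()
            time.sleep(0.05)
    raise TimeoutError("no connection before deadline")
```

In the easy case, where the peer is directly reachable, this degenerates into a plain connect with retries; the interesting case is when both sides run it at each other across their NAT boxes.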

I had this vision of a bunch of ordinary client systems on ordinary consumer-grade Internet connections with incoming connections filtered and NAT in place, and yet they'd still be able to reach each other directly with TCP. They'd get all of the utility it provides without trying to reproduce it over UDP with weird hole-punching tricks or the craziness that is requesting holes in the firewall with UPnP.

Exactly what would then run over those connections would be up to the users. I'd personally love to see everyone have their own little "platform" for hosting messages, cat pictures, and baby videos which requires nothing more than their own machine at home. The third party "sync server" would only be used to bring up the peer connections, and the actual pictures, chat messages and so on would then run over those peer to peer links.

User data would never go anywhere near the sync servers, in other words. This is important.

I even have an idea of a model which might be used for inspiration. Flip back about 20 years and think about what BBS networks looked like. Now imagine a BBS network where every one of your friends and relatives runs a personal node, and they all talk to each other to do the whole "store and forward" thing. Add a dash of cat pictures and memes and now you're caught up to the present day.

My most recent inspiration for this general idea was an XKCD comic in which two characters try to exchange a large file. Burning it to a CD or DVD is impractical, and that means e-mailing it as an attachment is also a bad idea. Both hosts are behind NATting firewalls, so direct connections seem impossible. Neither user has a way to bounce it through a third host somewhere.

A bunch of services exist which do a "transient dropbox" type of thing where person A uploads to them, and then person B downloads from them. I don't like that. It sends the data to people who have no business seeing it. It may also be woefully inefficient depending on where the three hosts are located.

Sure, obstacles exist, but I still think it would be interesting. Of course, since it adds little in the way of new features, no normal person would want this. They can already do their instant messaging, slower messaging (e-mail), picture and video sharing through any number of services which are run by third parties. Sure, those services morph and change and die and sometimes do creepy or stupid stuff, but for the most part, the end users manage to get their content out.

The XKCD situation will still exist, but it won't bother people to the point of having to worry about this sort of thing.

I expect this entire space to remain in the realm of "technically possible but dead on arrival in the marketplace" for a very long time.


July 2, 2013: This post has an update.