Monday, April 22, 2013

Half-baked IP address extensions

About 10 years ago, I was thinking about the problem of running out of IPv4 addresses and the annoyance that IPv6 represents. There's something a little spooky about having network addresses be that long. 32 bit IP addresses seem to be the sort of thing you can remember. They're about the same complexity as a telephone number: +1 408 555 1212 is 10 digits, and a dotted-quad IPv4 address is in the same ballpark. I don't think anyone would try to remember a 48 bit MAC address or a 128 bit IPv6 address. Those values are from an entirely different arena: one where you have to rely on other things to abstract it away for you.

I wonder how much of IPv6's relatively slow uptake is due to the inaccessibility of its addressing scheme. One might say it's even too flexible. It looks like a "second system effect" to me.

Anyway, as part of these ramblings far in the past, I started thinking about what a scheme might look like on a parallel world. Imagine that IPv6 had never been invented and it wasn't a pressing issue, but people were starting to wonder about the future. Maybe on this world, router RAM was super expensive and so the explosion of BGP routes also forced people to start being creative.

What I imagined was a little like having a switchboard operator for IP addressing. Imagine calling into a big business which only has a single public line with an operator. You call them up, they pick up the phone, and you ask for extension 1234. They connect you through, and that's it. There could be an extension 1234 at every big business in the world and they wouldn't conflict, because the external numbers are unique.

My weird little idea was sort of like that. It just took the IPv4 space and doubled it. I dubbed it "IPv4++" for no particular reason. Here's how it would work.

I connect to an ISP. They give me a single IP address which is part of their (relatively small) netblock. Let's say it's A.A.A.100. That becomes my "IPv4++" segment. It would look like any other host to the rest of the net. Inside that segment, I have many hosts. How do I tell them apart? That's where it gets a little weird.

The next part involves cramming the "inner" address into the IP header somehow. Maybe it would use some weird new IP options. The point is, it should look like a normal packet to the rest of the world. Existing routers should pass it on just like any other traffic.

Inside my network, I'd give my hosts whatever addresses I wanted. For the sake of simplicity, let's say I use the network, and I have two hosts: "alpha" at, and "beta" at

Other hosts on the Internet which understand this hypothetical addressing scheme would be able to reach my "alpha" and "beta" machines. Their addresses would be A.A.A.100: and A.A.A.100:, respectively. As you can see, the "outer" address identifies the network, and the "inner" address identifies the host.

There's something else which can be done here, too. Let's say I get a second ISP, and they give me an IP address of B.B.B.200. I use that to feed the same network at the same time as my first connection. What happens now? Well, now all of my hosts are available either way.

That is, you can reach "alpha" as A.A.A.100:, or B.B.B.200: They both refer to the same machine, but those addresses represent completely different gateways. It's a little like encoding the route into the address.

Imagine a world in which the equivalent of A records can support this sort of addressing. You'd be able to do something like this:

www.example.com.    IN  CNAME  alpha.example.com.
alpha.example.com.  IN    A++  A.A.A.100:
                          A++  B.B.B.200:

Any number of things could be done with this information. A client could just pick one at random, much like what happens when multiple A records exist right now. A client could also try to make an educated guess about which address is "better" and purposely select one over the other. They'd have some choice as to how they wanted things to work. Maybe they'd notice "hey, they have connectivity through one of my ISPs" and they'd pick that one for a shorter/faster/cheaper route.

You might notice something else about this: I have just become multihomed by buying ordinary commodity Internet service from two providers. I didn't have to buy a giant router to speak BGP, acquire an ASN, get peering arrangements with the ISPs, acquire a netblock, or anything else of the sort. My single IP address from each ISP is enough to give me a path to the world.

Obviously, this only matters if the hosts initiating a connection actually understand this stuff. Ordinary hosts which speak only the original plain IPv4 would never be able to reach hosts like this. This isn't great, but it's not much different than the situation we have with NAT right now.

Instant multihoming is one interesting thing, but there's more: you don't need anyone's approval to do this if you really want to. If you start running it and someone else does too, then you can speak it between your two sets of systems. Nobody else needs to adjust their network to handle it: not your ISP, not their ISP, and not the providers in between. To all of them, it's just ordinary IPv4 traffic which possibly has a weird option in the headers.

There's another strange feature to all of this: you never need to renumber your actual hosts when changing providers. Sure, your externally-visible addresses will have their gateways change, but you can do that as a "make-before-break" situation, where you turn up provider B before you turn off provider A. The hosts themselves stay on their same internal addresses the whole time.

Right now, changing providers can be an interesting business. If you have your own routable IP block, then okay, I guess it's not too bad. Turn up a new link, get them to advertise you, get the old one to stop being advertised, then turn down the old link. How many companies have enough networking horsepower to do that sort of thing, though?

I saw this happen with my web hosting customers. They'd migrate from one data center to another, and in so doing would completely change IP addresses. The ones who really cared about this would lower the TTLs on their DNS records weeks in advance out of paranoia. Then they'd turn up the site at the new location and swing the A records around. The old location would stay up and would keep serving for a while, too, and then they'd have to somehow reconcile their logs and whatever later. Sometimes, they got clever and just proxied the stragglers from the old platform to the new. It was anything but simple.

More often than not, they'd just swing it around and then they'd write off the downtime. They took it as "one of those unavoidable things".

The big question for this half-baked idea is whether anyone would find it useful. Something like this would need a reason to exist or nobody would bother. I can think of a way how it might happen, but it's even more of a stretch than this idea is in the first place.

Imagine if the next big game console included support for this and did it as an open standard. Now, game players could connect to each other without worrying about punching holes for specific ports or running horrible things like UPnP. Individual consoles would be directly addressable, even to the point of allowing multiple consoles per ordinary Internet connection. The problems where running more than one at the same time fails due to NAT limitations would go out the window.

Obviously, the consumer router boxes of the world would have to add support for this, but I think if a bunch of gamers start yelling "shut up and take my money", the vendors in question will find a way to make a patch. Either that, or someone will step up with a new product. Given that some of these products already run DD-WRT right out of the box, how hard could it be?

So, to recap, here's the idea in a nutshell.

Take the existing IPv4 addressing scheme, and continue to use it to get packets between gateways. Then stuff 32 more bits into these packets to specify the "internal destination".

Hosts and gateways at either end have to change, but routers in between gateways can be blissfully ignorant.


Now for a few random on-topic thoughts while I'm writing about this...

Let's say you had clients which spoke this scheme, say, every smartphone running at least version X of your OS. Then you had a gateway which understood it. Then you had a whole bunch of old server machines which *didn't* understand it. It wouldn't matter, since the gateway would be able to do a sort of "directed NAT" based on the incoming traffic.

That is, you have a standard 2013 Linux box with IP, and new clients connect to X.X.X.X:, your gateway rewrites that to just, and sends it along. The Linux box responds like nothing funny is going on, and the gateway rewrites it on the way out.
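The gateway's side of that "directed NAT" is basically a static lookup table. A minimal sketch, assuming the inner addresses that new-style clients use map straight onto the legacy hosts' plain IPv4 addresses:

```python
# Hypothetical "directed NAT" table on the gateway: the inner address
# that new-style clients name, mapped to the legacy host's plain IPv4
# address. In the simplest case it's an identity mapping.
INNER_TO_LEGACY = {"": ""}
LEGACY_TO_INNER = {v: k for k, v in INNER_TO_LEGACY.items()}

def rewrite_inbound(inner_dst: str) -> str:
    """Strip the inner tag so the legacy box sees plain IPv4."""
    return INNER_TO_LEGACY[inner_dst]

def rewrite_outbound(legacy_src: str) -> str:
    """Re-attach the inner address so replies look like IPv4++ again."""
    return LEGACY_TO_INNER[legacy_src]
```

The Linux box never knows the difference; the gateway does the tagging and untagging on its behalf.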


I guess clients would also need to have their internal addresses conveyed in here, too. This means another 32 bits crammed in there somehow. That would be the only way to convey the complete address: source gateway + source host, destination gateway + destination host.


It occurs to me that IPv4 itself has options for source routing. Granted, those are not the same thing as what I'm talking about here, and I suspect that most of the net won't support any of those options for reasons of security or other paranoia, but they are still part of the spec. Dig around in RFC 791 (September 1981!) and you'll find them.

It would be funny if a variant on that encoding was used to extend the addressing far beyond that which we can realistically use today.

So, let me be clear: the extra destination address in my idea is only to be interpreted relative to the inside interface on a gateway.


Finally, I know that sometime after I thought of this, I discovered that someone else had the same basic idea years before. Unfortunately, I haven't been able to find a single citation for it. I think it was called "ABCDE", but you try looking for "ABCDE" and "IP" without getting "a.b.c.d" and all sorts of other cruft in your results.

I'm beginning to wish I had saved a copy of that post or document from so many years ago.

February 25, 2017: This post has an update.