Software, technology, sysadmin war stories, and more.
Thursday, July 4, 2013

IRC netsplits, spanning trees and distributed state

I last used IRC in a meaningful way at a prior job. There was a channel where I would spend a lot of time lurking with other people who worked on kernel and platforms stuff elsewhere in the company. I hadn't used IRC in quite a while when I joined that channel and expected things had evolved. They hadn't.

Oh sure, there are nick and channel services now, even though that's kind of pointless on an IRC network which is completely controlled by your employer and is only accessible from the corporate WAN. There are features which will hide some or all of your address to keep people from attacking you, but again, that's also pointless on a private network like that one. There's even something which will kick off a low-level PING of your client before letting you on, just in case you're one of those dreaded TCP sequence number spoofers. It's not like anyone's still running an OS which is vulnerable to that, but, hey, it can't hurt, right?

No, what I'm talking about is the fundamental model for linking servers. Everything I can find about linking IRC daemons, even right now in 2013, seems to suggest that any given entity, be it a server or a user, can "appear" down only one file descriptor at any given time. That means the entire network is a tree of links, each of which is a single point of failure for everyone on either side of it. When something goes wrong between two servers, it manifests as a "netsplit".

Whole swaths of users (appear to) sign off en masse when this happens, and then will similarly seem to rejoin after the net relinks, whether via the same two servers or through some other path.

This was the situation back in the '90s, and it's still happening today. From all appearances, it would seem that it has been deemed "too difficult to solve" and thus has been left alone. One problem mentioned was that of duplicate messages flying all over the place.

It seems like this problem has been solved multiple times in different venues. Fidonet echomail flowed out in a fashion where messages would go to a node and then "explode" out to all of the subscribers, and would travel out from there. They tagged those messages with "Path" and "Seen-By" lines to keep them from going to places where they had already been. I suspect they also had message ID histories to squash duplicates.
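That combination of path-marking and a message ID history is easy enough to sketch. Here's a toy flooding network in Python, loosely modeled on the Seen-By idea; the Node class and node names are my own invention for illustration, not actual Fidonet structures:

```python
# Toy duplicate-suppressed flooding, in the spirit of Fidonet's
# Seen-By lines plus a message ID history. Illustrative only.

class Node:
    def __init__(self, name):
        self.name = name
        self.peers = []        # directly linked nodes
        self.seen_ids = set()  # message IDs already handled here
        self.log = []          # messages delivered locally

    def receive(self, msg_id, body, seen_by):
        if msg_id in self.seen_ids:
            return  # duplicate arriving via another path: squash it
        self.seen_ids.add(msg_id)
        self.log.append(body)
        seen_by = seen_by | {self.name}
        # Mark all of our peers as "seen" up front, so they don't
        # immediately echo the message back at each other through us.
        outgoing = seen_by | {p.name for p in self.peers}
        for peer in self.peers:
            if peer.name not in seen_by:
                peer.receive(msg_id, body, outgoing)

def link(a, b):
    a.peers.append(b)
    b.peers.append(a)
```

Even with a loop in the topology (say, three nodes all linked to each other), every node delivers the message exactly once, because the ID history catches anything the Seen-By set misses.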

Or, from the world of Ethernet, how about spanning tree? This is where you have a whole bunch of potential links nailed up and ready to go, but the switches collectively come up with a path in which no loops are present. The others are reserved and don't receive traffic normally. If a link goes down, another one can be brought up. Downtime is minimal... when it works, that is.
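The underlying graph idea is simple, even if the real protocol is not. Here's a minimal sketch: build a loop-free subset of the links, block the rest, and when an active link dies, recompute so a standby link carries traffic. This is not 802.1D (no BPDUs, no port states, no timers), just the shape of the concept, with made-up node names:

```python
# Toy spanning-tree computation: forward only over a loop-free
# subset of links; on failure, recompute and a blocked link takes over.

from collections import deque

def spanning_tree(nodes, links):
    """BFS from the lowest-named node (a stand-in for root election)."""
    root = min(nodes)
    adj = {n: [] for n in nodes}
    for a, b in links:
        adj[a].append(b)
        adj[b].append(a)
    tree, seen, queue = set(), {root}, deque([root])
    while queue:
        n = queue.popleft()
        for m in sorted(adj[n]):
            if m not in seen:
                seen.add(m)
                tree.add(frozenset((n, m)))
                queue.append(m)
    return tree

nodes = {"A", "B", "C"}
links = [("A", "B"), ("B", "C"), ("A", "C")]   # a loop: one link gets blocked
active = spanning_tree(nodes, links)
blocked = [l for l in links if frozenset(l) not in active]

# An active link fails: recompute without it, and the blocked one comes up.
failed = next(iter(active))
survivors = [l for l in links if frozenset(l) != failed]
active2 = spanning_tree(nodes, survivors)
```

The point is that the redundant links already exist and are ready to carry traffic; only the forwarding decision changes. IRC's model, by contrast, doesn't even let the redundant links exist.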

So, okay, there's also the matter of the whole "channel state" thing which isn't particularly easy when you are talking about actions happening in parallel with some amount of lag between servers. Even though it seems like the days of multi-minute lags between servers are gone thanks to the relative availability of large pipes, it's still a problem which would need to be solved.

One solution would be to have a "master server" which maintained all channel state, and requests from users would have to travel "upstream" to it. This would solve the problem of parallel actions since there would be only one reference point: that single server. Of course, having that means you now have a massive single point of failure for your network. Lose the stateful server and all of your channels are toast.

That particular problem could take a page from the GT BBS network which worked a little like Fidonet but had its own quirks. In particular, the GT echomail scheme had "sponsors". Instead of having echo messages "explode" at every point after leaving the origin system, they instead first flowed upstream to the sponsor, much as an ordinary unicast message might. Then they were batched up on the sponsor, and would be released along with others in a daily "bag" (file).

To map this back onto IRC, each channel would have a sponsoring/master server. They wouldn't all need to be in the same place. This would remove the problem of having a single server which held all of the power, but it still could have issues. For one thing, what happens if that one server goes down? Some other server is going to need to pick up the slack.
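One simple way to spread sponsorship around, and handle that failure case, would be to hash each channel onto a ring of servers and walk forward to the next live one when the home server is down. This is my own sketch, not anything from GT or IRC; the server names are hypothetical:

```python
# Sketch: map each channel to a sponsoring server by hash, with
# failover to the next live server in a fixed ring. Names are made up.

import hashlib

SERVERS = ["hub1", "hub2", "hub3"]  # hypothetical server ring

def sponsor(channel, alive):
    """Pick the channel's sponsor: its hash slot, or the next live server."""
    h = int(hashlib.sha1(channel.encode()).hexdigest(), 16)
    start = h % len(SERVERS)
    for i in range(len(SERVERS)):
        candidate = SERVERS[(start + i) % len(SERVERS)]
        if candidate in alive:
            return candidate
    raise RuntimeError("no live servers")
```

Every server can compute the same answer independently, so no election traffic is needed just to find a channel's master. The hard part this glosses over is recovering the channel *state* the dead sponsor was holding, which is where consensus comes in.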

This starts calling back to matters of distributed state management and consensus. That in turn gets me thinking "Paxos". For those who aren't familiar with it, it's decidedly non-trivial, but what it gives you is easy enough to state: you have several systems which are in charge of tracking state, and they can come and go over time. As long as certain conditions are met (typically, a majority of them staying up), the state can continue to be read and updated with confidence.
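The core of single-decree Paxos fits in a page, even if making it production-ready doesn't. A bare-bones sketch of the quorum logic, with no networking or failure injection, just to show what "certain conditions" means:

```python
# Bare-bones single-decree Paxos: a proposer needs a majority of
# acceptors to promise, then to accept. State survives the loss of
# any minority of servers. Sketch only: no networking, no retries.

class Acceptor:
    def __init__(self):
        self.promised = 0      # highest proposal number promised
        self.accepted = None   # (number, value) last accepted, if any

    def prepare(self, n):
        if n > self.promised:
            self.promised = n
            return True, self.accepted
        return False, None

    def accept(self, n, value):
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return True
        return False

def propose(acceptors, n, value):
    """Phase 1: gather promises; phase 2: push the chosen value."""
    majority = len(acceptors) // 2 + 1
    promises = [a.prepare(n) for a in acceptors]
    granted = [acc for ok, acc in promises if ok]
    if len(granted) < majority:
        return None
    # If any acceptor already accepted a value, we must adopt the
    # highest-numbered one instead of our own. This is what keeps
    # a channel's state consistent across proposer changes.
    prior = [acc for acc in granted if acc is not None]
    if prior:
        value = max(prior)[1]
    votes = sum(a.accept(n, value) for a in acceptors)
    return value if votes >= majority else None
```

Once a value is chosen by a majority, any later proposer discovers it during the prepare phase and is forced to re-propose it, which is exactly the property you'd want for channel state that must survive a sponsor falling over.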

None of this would be trivial. It would be a mighty big amount of work, in fact. I suspect there might be a way to shoehorn some of this into the existing client-side IRC protocol at the expense of some surprise on the part of experienced users. Things like mode changes and kicking people off a channel wouldn't actually happen just because you issued them. They'd have to travel upstream and take effect on the master, and only then would your local server apply them.

I suspect this lag is probably what historically scared people off from any sort of "submit to the master" concept. The thing is, people probably aren't trying to run IRC networks over "switched 56" leased lines and other bandwidth-starved pipes anymore. It's a different world now, and it's probably not going to take multiple seconds to get a packet out to a host and back again.

In re-reading the Wikipedia page on Paxos, I noticed that the vanilla flavor does not handle malicious injections. I guess this means using it on an IRC network with a multitude of unassociated operators might be a bad thing. After all, server-level code hacks to allow IRC operators to overstep their powers are definitely nothing new. Perhaps the Byzantine variant would make more sense, along with extensive instrumentation to detect and publish synchronization anomalies.

There are solutions to problems which don't exist, and then there are assortments of technologies which might be useful for problems which technically exist, but for which nobody may care about fixing. I suspect the whole IRC netsplit thing is a case of the latter.

Perhaps the patent monster is lurking, and that's keeping people from playing around in this space. That would be rather unfortunate, if so.