Software, technology, sysadmin war stories, and more.

Thursday, June 16, 2011

Chafing at hand-hacked GT Netmail routing files

Back in the days when I wrote that DOS-based printing system for my school, I had a personal project running in parallel at home. At that time, I ran a bulletin board system which was part of a proprietary store-and-forward messaging system called the GT Power Network. It would only interoperate with like software, and while it superficially resembled Fido, it was a totally separate ecosystem.

First, about the problem I wanted to solve: the GT network had a bunch of nodes and a bunch of ad-hoc connections. Sysops made arrangements with each other and shot things around entirely based on hard-coded config files. There were two major classes of communications: netmail and echomail. They had vastly different data flow requirements, and each new echo required its own configuration.

It was possible to get a working config without a whole bunch of trouble, but it was far from optimal. I wanted to reduce latency and make it at least possible to improve things in a provable way, and thus the routing analysis (and later, routing robot) project was born.

It was actually a bunch of little programs written in Turbo Pascal. Each one took care of a different part of the process. The first program read through my message boards and extracted "Route:" lines from things which had already passed through the network. There was a widely-used third-party program called MSROUTE which added tags to outgoing messages. It resulted in things like this:

    Route: 1/99mo1 14/28mo3 38/1mo7
  .ORIGIN: 001/099 - Something or Other BBS - 713-123-4567

This showed which routes already existed in the wild, so to speak. That particular example would say that 001/099 talked directly to 014/028, which then talked directly to 038/001. The trailing characters are an abbreviated day of the week and hour, to get some idea of latency.
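In modern terms, that extraction step amounts to pattern-matching each hop out of the Route: line. Here's a minimal Python sketch (the originals were Turbo Pascal, and the real MSROUTE tag grammar may have had corners this ignores):

```python
import re

# One hop looks like "14/28mo3": net/node, two-letter day, hour.
# Hypothetical regex; treat it as an approximation of the format.
HOP = re.compile(r"(\d+)/(\d+)([a-z]{2})(\d+)")

def parse_route(line):
    """Split a 'Route:' line into (address, day, hour) hops."""
    hops = []
    for net, node, day, hour in HOP.findall(line):
        # Zero-pad to the NNN/NNN form the .ORIGIN lines use.
        hops.append(("%03d/%03d" % (int(net), int(node)), day, int(hour)))
    return hops

print(parse_route("Route: 1/99mo1 14/28mo3 38/1mo7"))
# [('001/099', 'mo', 1), ('014/028', 'mo', 3), ('038/001', 'mo', 7)]
```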

That was step 1. Step 2 took all of these and a seed file of manually-contributed info and created a simple list: one line per system, with all of its direct connections following it. Step 3 was where things started getting interesting: it took two nodes and looked for paths between them.
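Step 2 is essentially folding observed routes into an adjacency map: every pair of consecutive hops counts as one direct connection. A sketch, assuming links are symmetric (a real feed could well have been one-way):

```python
from collections import defaultdict

def build_links(routes):
    """Collapse a list of observed routes (each a list of node
    addresses, in hop order) into an adjacency map."""
    links = defaultdict(set)
    for route in routes:
        for a, b in zip(route, route[1:]):
            # Assumption: if a talks to b, b talks to a.
            links[a].add(b)
            links[b].add(a)
    return links

links = build_links([["001/099", "014/028", "038/001"]])
print(sorted(links["014/028"]))
# ['001/099', '038/001']
```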

The original version would just take one node, expand all of its connections, and look to see if the second node appeared. If that didn't work, it would then just expand all of those connections and then look again. It would keep doing this until it got a hit or it gave up entirely. This worked, but it was ridiculously slow.
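That expand-one-ring-at-a-time loop is a breadth-first search. A sketch of the same idea in Python, with parent pointers so the route can be handed back, not just confirmed:

```python
from collections import deque

def find_path(links, start, goal):
    """Breadth-first search: pop a node, push its unseen neighbors,
    repeat until the target turns up or the frontier runs dry."""
    seen = {start: None}          # node -> the node we reached it from
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []             # walk parent pointers back to start
            while node is not None:
                path.append(node)
                node = seen[node]
            return path[::-1]
        for nbr in links.get(node, ()):
            if nbr not in seen:
                seen[nbr] = node
                frontier.append(nbr)
    return None                   # no known route
```

The "ridiculously slow" part comes from the frontier growing with every ring: on a well-connected network, the number of candidate nodes balloons with each extra hop.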

My second approach to this did something similar, but it started from both ends at the same time. Assuming a known route actually existed, it would converge on it much faster.
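A sketch of that bidirectional version: grow a ring from each end in turn, and stop at the first node both sides have reached. This is my reconstruction of the idea, not the original code:

```python
def bidi_path(links, start, goal):
    """Bidirectional BFS: expand a frontier from each end in turn
    and stop at the first node reached from both sides."""
    if start == goal:
        return [start]
    from_start, from_goal = {start: None}, {goal: None}
    frontier, waiting = [start], [goal]
    tree, other_tree = from_start, from_goal
    while frontier:
        grown = []
        for node in frontier:
            for nbr in links.get(node, ()):
                if nbr in tree:
                    continue
                tree[nbr] = node
                if nbr in other_tree:        # the frontiers met here
                    return _stitch(nbr, from_start, from_goal)
                grown.append(nbr)
        # Let the other side take the next turn.
        frontier, waiting = waiting, grown
        tree, other_tree = other_tree, tree
    return None

def _stitch(meet, from_start, from_goal):
    """Join the two half-paths at the meeting node."""
    path, node = [], meet
    while node is not None:                  # meet back to start
        path.append(node)
        node = from_start[node]
    path.reverse()
    node = from_goal[meet]
    while node is not None:                  # meet on to goal
        path.append(node)
        node = from_goal[node]
    return path
```

The win: two searches of depth n/2 touch far fewer nodes than one search of depth n, since the frontier grows with every ring.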

At this point, I had the tools to ask questions about the network and get answers for myself, but what about others? Well, that's where part 4 comes in -- the robot. It was something of a convention for software which read mail and answered requests to be called "bot" in the GT world. I went with that name, and wrote a small wrapper to do the work.

If you mailed rtrobot at my system, it would take commands and would pipe them to my step 3 program. Whatever data it yielded would be returned to you in a new netmail message.
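The wrapper's job reduces to: take the message body, pipe it through the search program, and mail back whatever comes out. A hypothetical sketch (the real wrapper was Turbo Pascal talking to the BBS mail base, not a Unix pipe; `cat` stands in for the step 3 program purely so this runs):

```python
import subprocess

def answer_request(body, finder=("cat",)):
    """Pipe the netmail body into the path-finder and return its
    stdout as the reply body. 'finder' defaults to cat only so
    this sketch is runnable; the real robot invoked the step 3
    program and posted the result back as a new netmail message."""
    result = subprocess.run(list(finder), input=body,
                            capture_output=True, text=True)
    return result.stdout
```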

Did it work? Sure. Did it run automatically? Not really. Did people use it? Maybe. I think it fielded a half-dozen requests total. It was a step in the right direction, but it wasn't amazing, and it came around too late in the BBS era to be really useful.

People were abandoning such systems for dialup Internet access, and the traffic just kept dropping. Before long, there was no point to maintaining these things, and most of us shut down. For my project to have been really useful, it needed to exist several years earlier. It wouldn't have been perfect even then, but it would have been far better than all of the hand-hacking and doc reading we had to do.

Oh well. I think it probably would have just re-invented UUCP maps.