MyDoom.F and the beginning of the end
Back in 2002, I started work on a system which delayed incoming mail to ferret out wormy spew from open proxies and spam in general. I wrote about this previously from a technical perspective, but so far haven't described the user experience. This is a tale of triumph, then utter failure, and still more triumph.
As previously noted, this started slowly late in 2002, but by some point in 2003, the entire school district's mail was flowing through it. There had been a couple of oddities where certain sender systems couldn't handle a 4xx response properly and mail was returned. This did cause mail to be bounced, but that could have happened for any other kind of 4xx situation.
A bit of code to whitelist a few local sites known to be SMTP-clueless eliminated that problem, and life was good. Most users never knew it was there. Various administrator folks did know about it, including my boss, and occasionally they'd blame it for things it didn't actually do, but that's life.
This went on all throughout 2003 with no real issues. It stopped crazy amounts of spam and other abusive garbage which had no business arriving on my systems. Nobody complained about missing e-mail. I was pretty happy with the whole thing.
Unfortunately, 2003 ended, and 2004 brought a whole bunch of stupidity. Tuesday, February 24, 2004 was the beginning of the end for me.
I got a phone call at 9:47 AM. It was the boss, and he wanted to know why we kept getting tons of bounces. As my monitor warmed up, I could hear the incessant beeping of biff. We were being flooded by double-bounces and all kinds of other garbage mail.
I blocked two hosts with iptables. Two more popped up. I'd kick them off and more would keep coming. Finally, I ripped down my POP server and took down sendmail on my secondary and tertiary mail exchangers to stop this "back door e-mail" stuff. Something was spewing tons of mail from all of our client machines, and it was clever enough to look up the MXs for its own domain name and use them to attempt relays.
I started pulling the directives which would allow relaying through my remaining mail server. This slowed things down, so I started banning individual hosts until the log spew stopped. Finally, my logs weren't going haywire and I could get some idea of what had happened.
I got another call. This time it was one of the "network engineers" who wanted to know if I had a way to locate these broken boxes. I noticed they had been generating tons of connection attempts to TCP port 25, so I could probably do it by logging those attempts.
I got off the phone with him and wrote something really quick and stupid to getpeername() on stdin and had it create a file: /tmp/wormlog/IP-TIME. I put that in inetd on my firewall box, then did some iptables REDIRECT magic to send outgoing port 25 attempts from inside the district to that port. We had always blocked that port anyway, so anyone trying to do it was clearly up to no good. Now, anyone who tried it would leave a file behind.
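For the curious, the core of that hack can be sketched in a few lines of Python. This is a reconstruction, not the original code: the only details taken from the story are the /tmp/wormlog directory and the IP-TIME filename format. Under inetd, the accepted TCP connection is wired to stdin/stdout, which is why getpeername() on fd 0 works at all.

```python
#!/usr/bin/env python3
"""Tiny inetd-style handler: record who connected, then hang up.

Under inetd, stdin is the accepted TCP connection, so getpeername()
on fd 0 reveals which internal host just tried to reach port 25
(after the iptables REDIRECT sent it here). Each attempt leaves a
file named IP-TIME behind in the drop directory.
"""
import os
import socket
import time

LOGDIR = "/tmp/wormlog"

def hit_filename(ip: str, now: int) -> str:
    # One file per connection attempt, e.g. /tmp/wormlog/10.1.2.3-1077638820
    return os.path.join(LOGDIR, "%s-%d" % (ip, now))

def main() -> None:
    try:
        # Wrap fd 0 as a socket object; fails if stdin is not a socket
        # (i.e. when run from a terminal instead of inetd).
        sock = socket.socket(fileno=os.dup(0))
        ip = sock.getpeername()[0]
    except OSError:
        return  # stdin is not a socket; nothing to log
    os.makedirs(LOGDIR, exist_ok=True)
    open(hit_filename(ip, int(time.time())), "w").close()

if __name__ == "__main__":
    main()
```

The inetd entry and the iptables REDIRECT rule pointing outbound port 25 at this handler are left out here; those details would depend on the firewall box's configuration.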
Next, I wrote a really stupid script to look at that /tmp/wormlog and build an HTML table. I didn't want to run a webserver on my firewall (!), so that became yet another inetd entry on yet another port. I then put a third evil hack up on a "real" web server machine which would then hit this magic port on my firewall to get that table.
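A "really stupid script" in that vein might look like the following Python sketch. Again, this is a guess at the shape of it, not the original: it assumes only the IP-TIME filename convention described above, and keeps just the newest hit per host.

```python
#!/usr/bin/env python3
"""Turn the /tmp/wormlog drop directory into a crude HTML table.

Each file is named IP-TIME. We keep only the newest timestamp per
IP and emit a table of host / last-seen pairs, newest first. A
reconstruction of the script described in the text, not the original.
"""
import glob
import html
import os
import time

LOGDIR = "/tmp/wormlog"

def latest_hits(names):
    # names are bare filenames like "10.1.2.3-1077638820"
    latest = {}
    for name in names:
        ip, _, stamp = name.rpartition("-")
        if not ip or not stamp.isdigit():
            continue  # skip anything that doesn't match IP-TIME
        latest[ip] = max(latest.get(ip, 0), int(stamp))
    return latest

def build_table(latest):
    rows = ["<table border=1><tr><th>Host</th><th>Last seen</th></tr>"]
    for ip, stamp in sorted(latest.items(), key=lambda kv: -kv[1]):
        when = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(stamp))
        rows.append("<tr><td>%s</td><td>%s</td></tr>" % (html.escape(ip), when))
    rows.append("</table>")
    return "\n".join(rows)

if __name__ == "__main__":
    names = [os.path.basename(p) for p in glob.glob(os.path.join(LOGDIR, "*"))]
    print(build_table(latest_hits(names)))
```

Served from inetd, the script's stdout goes straight back down the connection, which is how you get a "web page" without running a webserver.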
So here, a few minutes later, we had a web page which would show you all of the hosts who were causing problems and when they last did it. There were also links to pre-existing tools which told you which school a given IP address was at, and who last checked mail from there. This would aid them in tracking down and disinfecting all of them.
I found the whole mess hilarious at first. They finally got their payback for running Windows everywhere without adequate limits on what mere users could execute. I wondered if they would learn from it.
Later, I noticed that this particular worm installed some sort of listener on port 1080, so I started nmapping our space and added those to the "worm log" web page. At least three of the hosts I found during my initial scans were the actual NT domain controllers at schools, so it was already shaping up to be an interesting day.
It turned out to be MyDoom.F. Some basic analysis of my mail server logs showed that it all started with ONE user. She had received a copy of it the afternoon before from a forged hotmail account. Then, early that morning, when she checked her mail, she must have run the attachment. It started mailing everyone it could, including other people in the district, who also ran the attachment, and ... yeah.
I had my Patient Zero. She was actually one of the technology people who were the liaisons from a given school to the district level technology department. Yep. It was someone who should have known better.
I decided it was time to write some more software. This time, I created a dumb little sendmail milter which looked at the source IP address and SMTP envelope-from to see what was being presented. Our clients were only authorized to send mail as one of our limited number of domain names, so generating mail from anything else was right out.
My logic was simple enough: it's easy for us to catch forgeries and keep them from going out. This makes us a better net neighbor, so we should do it. A few minutes later, I had something quick and dirty to do just that. Any host which tripped this restriction would leave a mark behind. That is, the block would "latch", so that any subsequent mails would also fail until someone manually disinfected it.
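The decision logic of that milter can be sketched like this. To be clear, the real thing was a sendmail milter written against the milter API; this Python fragment only shows the latching check itself, and the domain names in it are made up for illustration.

```python
#!/usr/bin/env python3
"""Sketch of the forgery check's core logic (not a real milter).

The rule from the text: a client may only send mail with an
envelope-from in one of our own domains. A host that trips the
check stays blocked ("latches") until someone clears it by hand
after disinfecting the machine. Domain names are placeholders.
"""

# Hypothetical stand-ins for the district's real domain list.
ALLOWED_DOMAINS = {"example-district.k12.us", "mail.example-district.k12.us"}

latched = set()  # client IPs that have tripped the check

def check_sender(client_ip: str, envelope_from: str) -> bool:
    """Return True to accept the mail, False to reject it."""
    if client_ip in latched:
        return False  # still blocked from an earlier forgery
    _, _, domain = envelope_from.rpartition("@")
    if domain.lower() in ALLOWED_DOMAINS:
        return True
    latched.add(client_ip)  # latch: all later mail fails too
    return False

def unlatch(client_ip: str) -> None:
    """The manual step after the machine has been cleaned up."""
    latched.discard(client_ip)
```

The latch is the interesting design choice: a single forged envelope marks the host as infected, so even its legitimate-looking mail is refused until a human intervenes.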
I was working remotely as a contractor at the time. Everyone else who was on site as an actual employee was in full-on crisis mode. They had to shut down the entire network to keep it from spreading any more. You see, unlike previous MyDoom variants which just spewed mail, this one apparently went out and started deleting files on the network. Since a nontrivial amount of our users spent the whole time logged in as their "admin" NT domain accounts, this made life very interesting.
I think most of them were up for something like 36 hours straight trying to clean this up. It involved effectively downing the entire network and all client machines and then rebuilding it piece by piece. I am so glad I wasn't anywhere near them and didn't get roped into that.
On February 26th, the first bomb landed on me. I got a mail from the boss in which he informed me that they had ordered an SMTP scanning gateway. It would be installed by the end of the next week.
Despite having been the only person responsible for e-mail handling in the organization ever (since I built the first system from scratch), he hadn't even so much as talked to me on the phone about it. He had just decided to go out and buy this box and stick it online.
I sent a reply indicating this was an evil thing to do.
Half an hour later, I had MIMEDefang and Sophos SWEEP running on my development box in concert with sendmail. We already had a Sophos site license for Windows, and it extended to Linux, so it was easy to do. I sent myself a copy of the "eicar" test file, and it bounced it as expected. Success!
About an hour after that, I had it running on all of our production mail exchangers, including the one used for users to mail each other internally. It added a fair delay of 15-20 seconds to things, but I figured that was fine for a completely untuned installation. It was now filtering badness out of the stream, and that was the important part.
This is about the time when the boss gave me another decree: give root to this friend of his who was back in town and had been poking around the past couple of days. Oh, and he was going to supply and then install the SMTP filter appliance, no doubt at a nice mark-up.
I complied and gave this guy root, but I also installed my bash tattler hack which told me what people with root were doing. This way I'd at least have some idea of what kind of clueless meddling was happening on my systems.
By the end of the day on February 27th, I had managed to tweak both MIMEDefang and Sophos (by using "Sophie") so that mail would again go through in under a second, just like before this filtering started.
I figured that would be it. I wrote some more stuff to poll Sophos regularly for updates, and also did some procmail magic to auto-trigger an update when they mailed out a bulletin. My systems were now about as current as you could get in terms of filtering stuff. I had done it as quickly as possible as soon as it was clear that my boss wanted this to happen.
That was not the end of it. On February 29th, yet another dispatch from the boss went out. In it, he said he was "seeing some hesitancy in executing the instructions I gave before I left for vacation". Yeah, he had gone on vacation right when this happened.
I saw the writing on the wall and decided I was done with that job. It was a contract gig which ran month-to-month, and we had already signed contracts through the end of June 2004. I decided to just run them out and then that would be it.
I just needed to get to July 1st intact... somehow.
The real fun was yet to begin, for they had not yet installed the appliance. Once that happened, life got really interesting. I'll go into that in the next part.
January 26, 2012: This post has an update.