Wednesday, August 1, 2012

Feature creation is fun? Not on this code base.

Consider this hypothesis: most developers probably find more pleasure in writing new features than in bug fixing, refactoring, or expanding test coverage.

I don't think it's much of a stretch to imagine that, or something close to it, applying to a lot of the people who write software. There tends to be a drive to keep moving forward, and it's possible that only checking off items on the list of major features is seen as forward progress.

In that view of the world, there are people with machetes who are running forward while clear-cutting the jungle as they go. Everyone else is somewhere behind this front line and is responsible for turning their "rough cuts" into something sensible which will last for a long time.

I'm actually not writing this to judge whether one or the other is better, believe it or not. That can wait for another time. This is more of a post about what happens when you're on a project which is so bad that it scares you away from wanting to create new features.

My last big corporate project before flying the coop last year was a complete disaster. It had been constructed by someone with a deep aversion to running lots of threads, and that philosophy spawned abominations of code everywhere. Trying to do anything in it meant first reading through all of it and drinking enough of the kool-aid to understand what they were doing with all of the asynchronous callbacks scattered through every function.
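
To give a flavor of what I mean, here's a made-up sketch in that style. None of these names or helpers come from the real code (and the real thing nested far deeper), but even a trivial startup path looked roughly like this:

  #include <functional>
  #include <iostream>

  // Hypothetical helpers, purely for illustration; the real ones would
  // defer their work onto a queue instead of finishing inline.
  void ReadSettings(std::function<void(bool)> done) { done(true); }
  void ConnectBackend(std::function<void(bool)> done) { done(true); }

  // Even a simple startup path becomes a pile of nested closures, and in
  // the real code nothing actually ran until a scheduler got around to it.
  void BringUpService(std::function<void(bool)> done) {
    ReadSettings([done](bool ok) {
      if (!ok) { done(false); return; }
      ConnectBackend([done](bool connected) { done(connected); });
    });
  }

  int main() {
    BringUpService([](bool ok) { std::cout << (ok ? "up" : "failed") << "\n"; });
    return 0;
  }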

Only once you had some idea of how it worked could you even attempt to add something new to the system. That's when you ran smack into the second problem: a shocking lack of tests. In addition to all of the complexity and cleverness foisted upon future maintenance programmers, they had also decided not to test large chunks of the code.

I can only guess as to why the testing was in such sorry shape. One possibility is that their code was quite difficult to test, thanks to the async craziness scattered throughout. The tests which did exist looked like this:

  void TestStartFoo() {
    Class instance;
    if (!instance.StartFoo(1234)) {
      TestFailed();
      return;
    }

    for (int i = 0; i < 10; ++i) {
      run_scheduler_once();

      if (instance.some_internal_variable == 5678) {
        TestSucceeded();
        return;
      }

      sleep(10);  // I wish I was making this up.
    }

    TestFailed();
  }

They did this because "StartFoo()" would actually start something in the background and would return right away. Trouble is, it didn't really start it by itself. It just created a closure and stuck it onto a queue, and then you had to wait for that thing to get run by a scheduler.

Of course, in a testing environment there is no scheduler, since you haven't gone through the whole startup process. So the unit test gets to call run_scheduler_once() to turn the crank manually. The test has no idea how many trips through the scheduler it takes for everything to happen, so it just runs the thing a bunch of times until it sees the magic value (5678) or the loop falls out the bottom.
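
If that's hard to picture, here's a tiny sketch of the pattern as I understood it. The mechanics here (the global deque, the exact signatures) are my own stand-ins rather than the real code, but the shape is the same: "starting" something just queues a closure, and run_scheduler_once() pops one task off and runs it.

  #include <deque>
  #include <functional>
  #include <utility>

  // A global queue of deferred work.  In production, a scheduler drains
  // it; in the unit test environment, nothing does.
  static std::deque<std::function<void()>> g_task_queue;

  class Class {
   public:
    // "Starting" Foo starts nothing: it pushes a closure onto the queue
    // and returns right away.  The work happens whenever that closure
    // finally gets run.
    bool StartFoo(int id) {
      g_task_queue.push_back([this, id] {
        (void)id;  // the real task would use this; here we just set the flag
        some_internal_variable = 5678;
      });
      return true;
    }

    int some_internal_variable = 0;
  };

  // What the tests call to turn the crank by hand: run one queued task.
  void run_scheduler_once() {
    if (g_task_queue.empty()) return;
    std::function<void()> task = std::move(g_task_queue.front());
    g_task_queue.pop_front();
    task();
  }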

If you look at it, that loop is ten passes through a ten-second sleep, so this one test takes up to 100 (!) seconds to run. Given all of these horrible contraptions, I guess now you can see why they didn't test much of it.

Okay, now let's get back to the point here. One day you are given a bug to add a new feature to support the latest braindead corporate decree about how you are going to handle your, uh, cows. You go to add it, and while you're reasonably sure that your new stuff works, you can't be sure that you didn't break anything else. There are no tests for the existing code, after all. You do your best to test your new code, but even that is limited by the rickety framework it's been riveted onto.

Finally, you just say "whatever" and ship it anyway, figuring if it's bad, it'll "come out in the wash" as a failure in production. Hopefully it won't take out too many, uh, cows, before anyone notices.

Repeat this a few times and pretty soon you have a service that's full of holes, or if you prefer, broken windows.

I got to this point with my last project. I was genuinely worried about taking a feature request, adding it, having to push it without full confidence that it worked, and then having it blow up on us. It bothered me so much that I started letting the other people on the team take the "fun stuff" of adding new things.

For my part, I started doing what would normally be considered the crap work: extending test coverage, removing dead code, and so forth. Every change I made in that realm was another step towards being able to confidently extend some other part of the code. My logic was that eventually I'd be able to start adding new things again without worrying about whether they would sink the ship.
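
To be concrete about the kind of change I mean, here's a sketch that continues the made-up scheduler example from earlier. It's not what I actually shipped, but it shows the general idea: once a test can drain the queue synchronously, the sleep-and-poll loop collapses into something fast and deterministic.

  // Run queued tasks until there's nothing left to do.
  void run_scheduler_until_idle() {
    while (!g_task_queue.empty()) {
      run_scheduler_once();
    }
  }

  // The earlier test, minus the sleeps and the guessed iteration count:
  // it finishes in microseconds and can't flake on timing.
  bool TestStartFoo() {
    Class instance;
    if (!instance.StartFoo(1234)) return false;
    run_scheduler_until_idle();
    return instance.some_internal_variable == 5678;
  }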

Of course, nobody really appreciates people who just fix things in the background and don't make a big splash with shiny new features. You have to be a creator of new stuff to get the fabulous rewards and be shown off on the big screen at the Friday afternoon beer bash (they called it TGIF, but I knew what it really was). If your job is to toil in the shadows and make things stable, you'll rarely be noticed or rewarded for anything.

In conclusion, if you accept the hypothesis that new features are the desired work of many programmers, then a project which scares one into purposely avoiding the creation of new features is pretty screwed up.

This was my life. Fortunately, I escaped.