or: Ramblings from the anti-Hedonic Treadmill [1] of Software Quality
Software Quality is Hard
In the grand scheme of things, I haven’t been writing software all that long. But the more I do, the more consistently and strongly one lesson gets reinforced: achieving sustained software quality is an extremely difficult challenge. By that I mean achieving an impression of sustained software quality is the trickiest bit, but we’ll get to that later.
Let’s start with some examples of things that are upsetting:
- Our init() function for command-line scripts can take over one second to initialize.
- Change something simple and you might have to run 200 slow tests.
- The error logs are always full of garbage.
I think these things are upsetting largely because it’s so easy for us to imagine a world in which they are better, and as engineers we feel very much in control of these outcomes. We write the code; the computer only does things that we’ve expressly told it to do. Initialization ought to be blazing fast, small changes should run a few quick tests, and the error logs should be empty. Engineers tend to be somewhat perfectionist by nature, so we find it frustrating when the ideal is not the real.
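To make that first complaint concrete, here’s a minimal, purely illustrative sketch (in Python, with made-up phase names and sleep durations standing in for real work, and nothing to do with our actual init()) of how one might time each startup phase of a command-line script to see where a one-second init() actually goes:

```python
"""Minimal sketch, not our real init(): time each startup phase of a
command-line script to see where a one-second init() actually goes."""
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # Print how long the wrapped block took.
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"{label}: {time.perf_counter() - start:.3f}s")

def init():
    # Hypothetical phases standing in for real startup work.
    with timed("load config"):
        time.sleep(0.2)      # e.g. reading and parsing config files
    with timed("import heavy deps"):
        time.sleep(0.5)      # e.g. slow module imports
    with timed("connect services"):
        time.sleep(0.3)      # e.g. network or database setup

if __name__ == "__main__":
    init()
```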
The simple gut reaction is that if these things are bad, it must be due to human or organizational incompetence, ineptitude of some sort. It must be hopeless to expect these things to get better, because if it were possible we’d have just avoided them in the first place. Or maybe we should just staff up some dedicated team of other people to go fix it.
In reality, I don’t think the gut explanation bears fruit, holds water, or even does a good job of explaining the root cause of our discontent.
The Problem Is Plasticity [2]
One thing I think we often fail to see is how rapidly our own perspectives on quality change. For nearly every software project I have ever worked on, I’ve ended up with the same old urge to completely rewrite the thing, because now I can see a much better way to do it. Does that mean the completed version is terrible? Maybe, but probably not, and if I actually did rewrite it, I’d most likely just want to rewrite it again, because the second version would already seem worse than a freshly imagined third version. It takes calculated decision-making to determine when it’s really sensible to revamp something instead of working on another pressing task, of which there tend to be many around here.
It is critical to understand that due to our malleable standards of evaluation, this process fundamentally has no end, ever. We are simply never going to realize a state of software nirvana where everything is supremely satisfying. That’s an important emotional realization. The fundamental reward of an improvement process is the experience of betterment, not some mythical destination.
Badness Emergent
Coming back to the earlier examples, I think most of what we perceive as badness or decay is just an emergent property of a complex system in which we cannot focus on all aspects at all times. We’re often upset about negative characteristics of systems that we were never explicitly working on, but that came about as we were making improvements in other areas. We may have a slow init() compared to the ideal, but it’s much faster than the legacy version. Running unit tests may be slower than we’d like, but we had no tests at all before. An error in a log is certainly frustrating compared to an empty log, but not compared to a system that fails silently or the ungodly Facebook error logs of yesteryear.
Thinking Differently
Software just doesn’t age well. As our system grows, there will always be plenty of things to be offended by (in fact there will seem to be more), but that doesn’t necessarily mean they’re all worse than their predecessors. We’ll always be forced to make prioritization trade-offs, since not everything can get equal, immediate attention.
When we started restructuring our libraries, an effort focused primarily on quality, I tried to be very upfront about my belief that we’d end up just as unhappy as we were previously (though likely about different particulars). But the big idea was to create a structured mechanism for making things iteratively better, a big part of which was a psychological hack involving naming, tools, and conventions: essentially resetting expectations in an empowering way rather than a pessimistic one. The trick is to see the process of course correction not as a slog in the negative, painful sense, but as improvement. Though tedious at times, I personally found a lot of that work very fulfilling, and I don’t think I am an outlier in that regard.
The conclusion of all this is that I think we need to divorce our impressions of the current state of affairs from our evaluation of whether investing time and energy in improvements is productive. Our impressions are likely to always be negative (seriously, forever), and we will almost always feel that things are getting worse (when in fact they probably are not, or any decline is heavily offset by simultaneous improvements).
We certainly do make bad decisions from time to time, and those are fixable. But where we really lose the whole game is when we compromise on investing in improvement because it feels too futile or difficult, or because we’d rather blame everyone else, and that is more a psychological challenge than a technical one.
—
[1] See: http://en.wikipedia.org/wiki/Hedonic_treadmill
[2] This kind: http://en.wikipedia.org/wiki/Neuroplasticity
[3] Also, read this book: http://en.wikipedia.org/wiki/Zen_and_the_Art_of_Motorcycle_Maintenance
[4] Image from http://www.flickr.com/photos/sashawolff/3171917389/