These are testing times: mavericks vs. ice people

One of my earliest engineering jobs, before I fled hardware in favor of the (relative) ease and lucre of software, was in chip design. I remember being shocked when I learned just how much of the processor in question was devoted to test circuitry. Why waste so much on testing, I thought, instead of just getting it right the first time? Oh, how young and incredibly stupid I was.

The practice of engineering soon teaches one that, after hydrogen, the universe is composed largely of condensed mockery of one’s previous assumptions. This is true even when, as with software, the capricious vagaries of physical reality have already largely been abstracted out. Murphy was indeed an optimist: it’s not just anything that can go wrong; it’s factors you couldn’t have imagined as relevant to your problem space triggering a series of cascading disasters that leave you regretting that your parents ever met.

So what do we do? We practice defense in depth. We follow the robustness principle. We “always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.” We practice agile (genuinely agile, not cargo-cult agile) development. And most of all, we write tests. Right? Right?

…Yeah, well, that’s the idea. For my day job at HappyFunCorp I do a lot of interviews, and almost every junior developer I talk to assures me that they’re very enthusiastic about testing. And yet, for my day job at HappyFunCorp I am often called in to help rescue clients who come to us with existing code bases — and you know what we rarely, if ever, see in them? That’s right: functioning, updated tests.

I don’t necessarily blame them. You can make a strong case that modern web development is awful, as is most modern tool and server development, and modern app development (especially Android) is pretty messy too … and bosses and clients are always pushing devs to go faster, so when corners have to be cut to get something done, the testing corner is the first to go.

Everyone of course will get in line to condemn that as the kind of false economy that might save you a week in the short term but will soon wind up costing you months. And everyone is right. But here’s the thing that turns so many devs away from testing: bad testing is almost as bad as no testing, and sometimes worse. Even when junior developers do write tests, they treat it like dental work, something painful to be dealt with as quickly as possible; so they grab a test harness that seems to fit whatever frameworks they’re using, write (or automatically generate) some unit tests, and move on.

Ah, unit tests.

Unit tests clutter up your project, increase its cognitive load, create dependencies that have to be updated whenever the code changes … and very rarely, if ever, find a bug that wouldn’t be unearthed by some well-written end-to-end integration tests. Sure, if you’re writing an autopilot, you want 100% code coverage. But pretending that tests don’t also have implicit costs, both one-time and ongoing, is sheer denial. Like so much of engineering, it’s a trade-off, a hunt for the sweet spot; and for most projects, optimal testing is decidedly not maximal testing.
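To make that trade-off concrete, here’s a minimal sketch in Python (all function names hypothetical, pytest-style tests): a mock-heavy unit test that mostly re-asserts its own mock, next to an integration-style test of the same code path that would actually catch a regression.

```python
from unittest import mock

def tax_rate(region: str) -> float:
    """Toy lookup standing in for a real dependency (a DB, a service)."""
    return {"US": 0.07, "EU": 0.20}[region]

def total_price(subtotal: float, region: str) -> float:
    """Final price including tax -- the code path under test."""
    return round(subtotal * (1 + tax_rate(region)), 2)

def test_total_price_unit():
    # Mock-heavy unit test: with tax_rate stubbed out, this mostly
    # re-asserts the mock, and keeps passing even if tax_rate() breaks.
    with mock.patch(__name__ + ".tax_rate", return_value=0.07):
        assert total_price(100.0, "US") == 107.0

def test_total_price_integration():
    # Exercises the real collaborators together; a bug in either
    # function, or in how they combine, actually fails this test.
    assert total_price(100.0, "US") == 107.0
    assert total_price(100.0, "EU") == 120.0
```

Run under pytest, both pass today; but only the second one fails when tax_rate quietly regresses, which is the whole point of having a test.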

And “end-to-end” is often, well, flexibly defined. Automated user-interface testing is notoriously difficult. In my experience it’s like a dog that does arithmetic: its mere existence may be impressive, but its real-world results are rarely all that useful. As a result it’s very hard to bolt end-to-end testing onto a site, service, or app that wasn’t designed for it from the ground up. (But at least load testing has gotten a lot easier over the years; loader.io, especially, is great.)
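For what it’s worth, here’s a hedged sketch of what bolted-on UI testing tends to look like, using Selenium against a hypothetical login page (the URL, field names, and page title are all made up). Every locator and assertion is a silent dependency on markup and copy that nobody designed to be stable:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical target: https://example.com/login with "email" and
# "password" fields. Each locator couples the test to current markup.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "email").send_keys("qa@example.com")
    driver.find_element(By.NAME, "password").send_keys("correct horse")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # Asserting on copy: any wording tweak breaks the suite.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```

Designing for testability from the start (stable test IDs, seams for seeding data) is what keeps a suite like this from rotting into a dog that does arithmetic badly.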

The senior developers I interview tend to say — cautiously, because they know it sounds like heresy twice over — “Well, testing is important, but you have to be pragmatic about it.” Yes indeed.

It’s true that development today feels like dining at a huge buffet of undercooked dishes: which flawed and half-baked framework would you like to use today? But in the end it’s the mindset more than the materials that leaves tests unwritten, or reduced to half-hearted unit tests that haven’t been updated to match the code in months.

So many development teams call themselves “agile” these days. So few actually are. (So many think that having a daily standup makes them agile. It is to weep.) As this Sauce Labs state of testing report (PDF) indicates, as an industry, we have a long way to go. (I don’t agree with its definition of agile — I don’t agree with any fixed definition of agile — but its overall trends seem correct.)

It’s easy, and correct, to castigate the maverick developers who cut corners to race against time, fail to design for testing, fail to write tests, and leave the next poor dev to come along with whole icebergs of technical debt. But at the same time, their urge to move fast and break things, to quote young Facebook’s famous motto — to iterate and get shit done — is an admirable one, even if, in especially pathological cases, it can lead to heavy PHP use. (I kid, I kid.)

The thing is that it’s also correct to castigate the ice people, the ones who believe in deep consideration and careful analysis and test-driven development, all of which are good in theory, but who are also, all too often, the ones who crank out reams of worthless unit tests so that they can claim 90% code coverage, who jettison actual agile mindsets in favor of becoming “Certified Scrum Masters,” and whose horror of venturing into the unknown leaves them paralyzed.

In a perfect world we’d have both the mavericks and the ice people, each respectfully pushing the other to do better. I’ve worked on a few teams that were balanced in this way; they were excellent. But all too often each side pays mere lip service to the other. I would have hoped that, as an industry, we’d have done better by now.