Understructuring your code: Roughly right instead of precisely wrong
“It is better to be roughly right than precisely wrong.”—John Maynard Keynes
Last November at RubyConf, I gave a talk called Testing Heresies. One of the many strange ideas in that talk was the argument that understructuring your code can actually be an agile practice. That is, rather than decomposing everything into small methods and small classes, you might choose to temporarily leave your code in a state of slight sprawl, as long as you’ve got decent test coverage.
The other day it occurred to me that I’m getting some of my inspiration on this one from, of all places, the area of finance.
Finance and economics are among my many dilettantish interests (much to the dismay of some of my friends). And anyone who’s been studying the fall of Wall Street titans such as Bear Stearns and Lehman Brothers will find that one of the things that killed those firms was the use of leverage—which comes directly from excessive confidence.
Here’s a simplified example of the danger of leverage: Say you and your two friends, Bill and Bob, each run your own tiny hedge funds, with $1 million to invest. You all decide to invest in this new type of security called Can’t Go Wrong (CGW): Those who are selling it say it will give you 7% return on your money and is super-safe, almost as good as a government bond. You and your friends don’t really understand how CGW can give such great returns with so little risk, but everybody else you know is loading up on these things like there’s no tomorrow, so how wrong could they all be?
You’re naturally skeptical of the crowd, but you don’t want to be left out either, so you put 10% of your money into these newfangled securities. At the end of a year, you expect to clear 0.7% from this:
|                      |        |
| -------------------- | ------ |
| Return from CGW      | $7,000 |
| Annual return on $1m | 0.7%   |
Your friend Bill says, “I guess you’ve always been the careful type, but I can’t see how this could go wrong, so I’m all-in on this one.” He decides to put 100% of his cash into CGW, and he expects to make 7% overall:
|                      |         |
| -------------------- | ------- |
| Return from CGW      | $70,000 |
| Annual return on $1m | 7%      |
But Bob, who’s never had a problem with confidence, says, “You two guys are chumps, this is free money, and I’m getting in on this while the getting’s good.” He decides to borrow $9 million and put the full $10 million into CGW, expecting that the leverage will amplify the return on his own $1 million to 70%:
|                          |             |
| ------------------------ | ----------- |
| Return from CGW          | $700,000    |
| Total (before repayment) | $10,700,000 |
| Total (after repayment)  | $1,700,000  |
| Annual return on $1m     | 70%         |
... you can probably guess what’s next. You and your friends place your bets at your respective sizes, but in the year ahead, disaster strikes. Instead of returning 7%, CGW actually loses 10% of its value. But since you put only 10% of your fund into CGW, you’ve lost only 1%:
|                      |          |
| -------------------- | -------- |
| Loss from CGW        | -$10,000 |
| Annual return on $1m | -1%      |
Since Bill put in 100%, he lost 10% of his overall fund:
|                      |           |
| -------------------- | --------- |
| Loss from CGW        | -$100,000 |
| Annual return on $1m | -10%      |
But Bob, well, he’s screwed.
|                          |             |
| ------------------------ | ----------- |
| Loss from CGW            | -$1,000,000 |
| Total (before repayment) | $9,000,000  |
| Total (after repayment)  | $0          |
| Annual return on $1m     | -100%       |
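The arithmetic above can be worked through in a few lines of Ruby. This is just the example’s numbers; the helper method and its name are invented for illustration:

```ruby
# Sketch of the leverage arithmetic: three funds, each with $1m of its
# own capital, differing in borrowing and in how much goes into CGW.
def fund_return(capital:, borrowed:, cgw_fraction:, cgw_return:)
  invested = (capital + borrowed) * cgw_fraction  # money actually placed in CGW
  gain     = invested * cgw_return                # what CGW gives back (or takes)
  net      = capital + gain                       # borrowed money is repaid in full, win or lose
  { gain: gain, net: net, return_pct: (gain / capital * 100).round(1) }
end

# The good year (+7%) versus the bad year (-10%):
[0.07, -0.10].each do |r|
  you  = fund_return(capital: 1_000_000, borrowed: 0,         cgw_fraction: 0.1, cgw_return: r)
  bill = fund_return(capital: 1_000_000, borrowed: 0,         cgw_fraction: 1.0, cgw_return: r)
  bob  = fund_return(capital: 1_000_000, borrowed: 9_000_000, cgw_fraction: 1.0, cgw_return: r)
  puts format("CGW at %+.0f%%: you %+.1f%%, Bill %+.1f%%, Bob %+.1f%%",
              r * 100, you[:return_pct], bill[:return_pct], bob[:return_pct])
end
```

Note the symmetry: the same 10x leverage that turns a 7% return into 70% on the way up turns a 10% loss into a 100% loss on the way down.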
He’s “blown up”: He got overconfident in the wrong way, and now he’s done. He has to write letters to all his investors telling them that he lost every penny. Every time the phone rings he’ll hesitate to answer it, since it could be one of his investors, angry and in the mood to chew him out for fucking up so severely. And since some of that money was probably his own, he’ll probably have to sell the house in the Hamptons, and that Porsche he doesn’t actually drive that often anyway. In a few years, he might be able to come back to the finance business, but for now he’s going to go through a tremendous ordeal.
This is basically what happened to Bear Stearns and to Lehman Brothers: They made bad bets, and they made them too big. This is why their employees, who thought they were working for Wall Street institutions as eternal as the sun or the sky, are now all hanging around with severance packages and unemployment insurance, wondering about their next steps.
What’s the point of this roundabout explanation, and what does it have to do with computer programming? Well, here’s the thing: It’s not having a wrong opinion that kills you. It’s the excessive confidence in that wrong opinion that can put you out of business forever.
Being wrong is far less damaging than being overconfident.
In “Testing Heresies”, I showed two ways you could write specs for a Rails app where a BlogPostController interacts with a number of models underneath it. The first was the standard RSpec style, where you isolate layers with mocks (that’s all that “should_receive” stuff) and then test them in isolation.
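The talk’s diagrams aren’t reproduced here, but the first style can be roughed out in plain Ruby with a hand-rolled recording stub. All class and method names below are invented for illustration; real RSpec would use `should_receive` on a mock instead:

```ruby
# Mock-isolation style: the spec pins down exactly which messages the
# layer below the controller should receive.
class BlogPostController
  def initialize(post_model)
    @post_model = post_model
  end

  def show(id)
    @post_model.find(id)
  end
end

# A hand-rolled stand-in for an RSpec mock: it records every call and
# returns a canned result.
class RecordingStub
  attr_reader :calls

  def initialize(canned_result)
    @calls = []
    @canned_result = canned_result
  end

  def find(id)
    @calls << [:find, id]
    @canned_result
  end
end

stub = RecordingStub.new(:fake_post)
controller = BlogPostController.new(stub)
result = controller.show(42)

# The assertions are about the interaction, not just the outcome:
raise "expected exactly find(42)" unless stub.calls == [[:find, 42]]
raise "wrong result" unless result == :fake_post
```

If the controller later stops calling `find` and fetches posts some other way, this spec breaks even though the visible behavior may be identical; that is the cost of codifying the interaction.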
The second is how I like to do it if the domain’s unclear to me: I put most of the tests at the most visible layer (the controller, in this case), writing far more tests at the top than below. But I don’t do nearly as much work to codify the interactions below, since I want to be free to slide things around if my understanding of the domain changes.
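The second style, sketched the same way (again with minimal invented classes), exercises the visible layer against the real objects underneath and asserts only on the end result:

```ruby
# High-level style: no mocks; the controller is tested through its
# visible behavior, with real model objects wired in underneath.
class Post
  attr_reader :title

  def initialize(title)
    @title = title
  end
end

class PostStore
  def initialize
    @posts = {}
  end

  def save(id, post)
    @posts[id] = post
  end

  def find(id)
    @posts[id]
  end
end

class BlogPostController
  def initialize(store)
    @store = store
  end

  def show(id)
    @store.find(id).title
  end
end

store = PostStore.new
store.save(1, Post.new("Roughly right"))
controller = BlogPostController.new(store)

# Only the outcome is asserted; how the layers talk to each other is
# left free to change as the domain becomes clearer.
raise "wrong title" unless controller.show(1) == "Roughly right"
```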
There are a lot of downsides to doing it this way. If you have a lot of different application states you want to test, they’re going to be more cumbersome to set up at a high level. And this makes it more likely that a low-level failure in, say, the User class, will bubble up to the high-level tests, which a lot of people find confusing.
But the reason I often avoid doing it with mocks is that those lower-level tests take up time. They take time to write, and to maintain, and to delete if they’re getting in the way. If I feel completely solid about how a model is supposed to act, I’m happy to wrap it in a dozen tests. But just as often I’m not sure where a piece of logic really belongs. So in those cases, I leave it to high-level tests, so I can focus on the what, not the how. Your customer doesn’t care what your models are named, or what their methods return. Your customer cares about the end result. That’s what matters; everything else is in service of that goal.
A lot of computer programmers are smart-asses: They think they’re right most of the time, and they like to prove it. But the thing is, people are wrong all the time—even smart people—and they survive it just fine. But they survive it even better if they know how to adjust their certainty. It’s not a weakness to say you’re not 100% certain of the solution, but that it’s probably good enough for the time being. Life is uncertain, but we have to code anyway. So sometimes I test what matters and I try not to worry too much about the rest.