Outcome Bias in Software Engineering
As an avid poker and Magic: The Gathering player, I spend a lot of energy training myself to remove outcome bias from my thinking. When I look back on a decision, I try to evaluate whether it was correct given what I knew at the time, regardless of how it turned out. This is harder than it sounds, because once you know the result, your brain wants to make the right choice feel obvious in retrospect.
As we come up on the much-anticipated Super Bowl XLIX rematch of Pats-Hawks, I can't help but think of what I believe is one of the best examples of outcome bias in modern sports. The Seahawks famously threw the ball on the 1-yard line when they had one of the best short-yardage runners in the league. It was intercepted, and they lost. People commonly ask, "why didn't they just hand it off to Marshawn?" Of course, they would never say this if the throw had scored. While I believe running the ball probably was the more "correct" move, the outcome biases us to think it was a much more obvious decision than it actually was.
Now for what actually bothers me: I almost never see this line of thinking applied in software engineering. We have a project with multiple architectural proposals, we deliberate and pick one, and eventually we ship. Applause all around. The winning architecture gets treated as the "obviously correct" choice, and the alternatives quietly disappear. But who is asking whether the other approach might have shipped faster? Whether it might have led to a better user experience? Whether it would have left less technical debt? Success has a way of killing these questions before anyone thinks to ask them.
It would be somewhat unfair of me to whine about this problem without proposing a solution. SRE culture already has the right instinct here: postmortems after outages are great at resisting outcome bias. They focus on systems, not blame. They ask "given what we knew, did our decisions make sense?"
But here's the gap: we only do postmortems when something breaks. Nobody runs one when the system works but is fragile and miserable to maintain. Or when a project ships but takes twice as long as it should have. Or when a design technically succeeds but quietly constrains everything that comes after it. Or even when everything works as well as we could have possibly hoped (maybe there was even more juice to squeeze).
To go back to the poker analogy, I don't get sad when I draw into a straight and a semi-bluff pays off, but I sure do think afterwards about whether the bet was the correct decision. The hands where you win despite bad play are the most dangerous, because they teach you the wrong lesson.
The solution: project retrospectives. Not for outages. Not for failures. For shipped projects that went fine. Sit down after a successful launch and ask: what alternatives did we consider, what assumptions were wrong, and where did we just get lucky? If we re-ran this decision with the same information, would we choose the same path?
You can't build good engineering judgment if you only examine your losses.