Most Decision Tables Are Flawed, But Some Are Useful
A nod to George E. P. Box’s line: “All models are wrong, but some are useful.”
Whether you’re considering your next job, choosing between two candidates, or evaluating options for a product launch, I’d bet you’ll end up comparing pros and cons. You’ll sketch a table—maybe mentally—to line up your options against a set of criteria.
“Alice brings strong technical skills, but Bob made a better impression with his eloquence.”
“Investing more in R&D might give us a long-term advantage, but spending the same amount on Marketing could drive growth now.”
“My start-up is still early, so debt would let me keep control. But equity would share the risk with investors and unlock their network.”
The more methodical among us even assign weights to each criterion and start computing:
Sₐ₁ × wₐ₁ + … + Sₐₙ × wₐₙ > Sᵦ₁ × wᵦ₁ + … + Sᵦₙ × wᵦₙ
Your AI-powered spreadsheet cheerfully delivers the verdict: Bob is a significantly stronger candidate than Alice. But something feels off. At the very least, Bob didn’t strike you as that much stronger. So you start doubting your maths. You change the weights, or remove criteria that suddenly seem superfluous. And now, somehow, both options look a lot more comparable.
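That sensitivity is easy to demonstrate. Here is a minimal sketch of the weighted-sum comparison above—the scores, criteria, and weights are invented for illustration—showing how two equally defensible weightings flip the verdict:

```python
# Hypothetical scores (1-10) for two candidates across three criteria.
scores = {
    "Alice": {"technical": 9, "communication": 5, "leadership": 6},
    "Bob":   {"technical": 6, "communication": 9, "leadership": 7},
}

def weighted_total(candidate, weights):
    """Sum of score x weight over all criteria."""
    return sum(scores[candidate][c] * w for c, w in weights.items())

# One plausible weighting...
w1 = {"technical": 0.3, "communication": 0.5, "leadership": 0.2}
# ...and a slightly different, equally defensible one.
w2 = {"technical": 0.5, "communication": 0.3, "leadership": 0.2}

for w in (w1, w2):
    a, b = weighted_total("Alice", w), weighted_total("Bob", w)
    print(f"Alice: {a:.1f}  Bob: {b:.1f}  ->  {'Bob' if b > a else 'Alice'} wins")
```

Under the first weighting Bob wins comfortably; shift 0.2 of weight from communication to technical skill and Alice comes out ahead. Nothing about the candidates changed—only the table's architecture did.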
We’re all exposed to this kind of structured decision-making at some point—when comparing cars, holiday packages, insurance policies, loan options, AI models, investments… Some organizations encourage it. Others try to systemize it. Decision tables are obviously impractical when the number of options grows too large, but there’s another, more concerning issue to discuss today.
The problem? For every bad decision, there’s always a decision table that justifies it.
The Deepwater Horizon disaster in 2010, unfortunately, emerged from a pattern of structured trade-off decisions that consistently under-weighted safety risks compared with cost, schedule, and production priorities1. Who knows how many M&A deals have failed because of flawed scoring sheets—I just wish these decisions were more transparent and documented2. This doesn’t mean decision tables should be banned. But there are a few things worth considering.

The main one: you are the decider; the table is not. In Unpractical Decisions, I dedicate a chapter to what decisions are—and aren’t. One of my favourite conclusions: if there’s no hesitation, you’re not making a decision.
A useful mental exercise is to adopt a third-person view. Imagine someone who takes the algorithm’s recommendation and follows it blindly: what distinguishes this person from a machine? Are they really deciding anything?
If you’re tempted to answer yes, then let me point something out: the real decision lies with whoever designed the table in the first place. That’s a second-order decision—deciding how to decide—and it’s crucial in decision architecture. You’re building a process using abstract concepts, dealing with high uncertainty, and creating something that may be reused hundreds of times for future decisions.
Decision tables can be wrong—but so can any decision. You could even argue that they often lead to better decisions, and I’d agree. But there’s a deeper issue: a decision matrix can give the illusion of reliability—the sense that the decision can be made with high confidence—even when that confidence is not justified. That’s what makes them unpractical.
And if you consider asking your AI friend to generate a better table, beware:
AI makes bad decision tables faster, not better.
AI can generate a decision table in seconds, but it cannot decide whether those dimensions matter. Feed it poorly framed criteria and it will return beautifully formatted nonsense. Your decision looks better, but you may just have made it worse.
How Decision Tables Can Be Useful
There are good ways to build decision matrices. But only if you first step up a level and confront the principles—the mindset—that shape the architecture itself. Otherwise the table is just decoration. The recommendations below help you avoid the traps described above.
Use tables to summarize, not to decide. The summary will be, by definition, incomplete—accept the simplified form; that’s the point. You want an honest simplification of the larger decision problem, not an exhaustive replica.
You are the decider; the table is not. It’s completely acceptable to choose against what the table suggests, as long as you can articulate why3. That ‘why’ often comes down to risk: maybe you value downside protection more than the table does, or you see an upside the scoring fails to capture.
A blueprint for thinking, not a shield for blame. A decision table should reveal your reasoning, not defend your choice. Once written, nobody can tell whether a table reflects genuine thinking or a post-hoc construction. But when you build it, treat imperfections as a sign that you’re still reasoning, not as a flaw. Used this way, a table becomes a starting point for clarity, not a tool for self-absolution.
With the right mindset, decision tables become companions to judgment and not replacements for it.
George E. P. Box taught us that all models are wrong4; and a decision table is just another model. Treat it as a heuristic. Be doubtful if the recommended choice feels trivial. Be concerned if the matrix explodes into dozens of intricate dimensions. Be wary when weighting tricks give the illusion of scientific rigor.
I write these articles to force progress on Unpractical Decisions, my upcoming book on the silent damage caused by miscalibration. If you want to support the effort, subscribe (free), ask questions, or point out what I got wrong, in the comments or by email.
The quality of a decision shouldn’t be measured only through its outcome, though.
Decisions need to be explainable. This will be the subject of another article.
George E. P. Box — “All models are wrong, but some are useful.” (see Box, G. E. P., Science and Statistics, Journal of the American Statistical Association, 1976).