Cannae: The Biggest “Oh-Sh*t!” Moment in History
Earlier this year, HarperOne published Blind Spot: Why We Fail to See the Solution Right in Front of Us, a book I co-authored with Gordon Rugg, a British scientist who works in the field of human error. For lack of a better term, Rugg is an expert on human expertise, particularly on what happens when experts screw up.
I have planned a couple of posts with Rugg that I think you’ll enjoy. The first is about an event that occurred during the Second Punic War. Deep in the heart of southern Italy, the great Carthaginian general Hannibal slaughtered about 60,000 Roman soldiers in a single day.
It’s considered one of the greatest tactical maneuvers in military history, and the worst defeat the Romans ever suffered. I asked Dr. Rugg to explain why he calls this event the biggest “oh-sh*t” moment in history, and why it’s such a major touchstone in our book.
Dr. Rugg?
Cannae and the candle
The Victorian physicist Michael Faraday famously used an ordinary candle to demonstrate some of the key principles of chemistry and physics. It was a brilliant choice of example because of its simplicity and purity. A single candle was all he needed to demonstrate those key principles, and to show how they are reflected in everyday life.
Cannae does for human error what the candle did for chemistry and physics. It illustrates a key point with elegant simplicity. Cannae is one of the few battles that unfolded exactly as one commander intended. With most battles, there are key points where a single chance event could have changed the outcome. That wasn’t the case at Cannae. What happened on that day in 216 BC was as inevitable and inescapable as the events in a Greek tragedy.
And, like a Greek tragedy, Cannae was inexorable, brutal and bloody in its outcome. It was a devastating demonstration of how one common human error can lead to tens of thousands of deaths. That’s how many Romans were killed in a single day at Cannae.
The Roman commander saw events unfolding just the way he expected, with his large army pushing the battle line of Hannibal’s much smaller army back into a deep arc. That was the shape of a battle line that was just about to break, which is what the Romans wanted; a broken battle line could then be mopped up piecemeal.
What the Roman commander hadn’t spotted was that the shape of the two armies was also the shape of one army surrounded on three sides by the other army.
The Romans realised their mistake when Hannibal’s cavalry slammed into the back of the Roman line, closing the trap. Some Romans managed to fight their way out while the trap was closing. After the trap had closed, the rest were slaughtered to the last man.
The Roman commander’s error is known as confirmation bias. You see the evidence that fits with what you want to see; you don’t see that the same evidence also fits equally well with a completely different explanation.
I care about confirmation bias because it’s one of the commonest mistakes in research. It’s at the heart of the widespread popular misconception that scientists set out to prove that their theories are right. Trying to prove that your theory is right is a really stupid idea. You can’t be right all the time; nobody is. Experts actually make more mistakes than novices, because experts are testing possible explanations all the time. That’s how the fictional medic Dr. Gregory House works: nobody remembers the half-dozen diagnoses that he tries and abandons before he finds the correct one that everybody does remember. That’s how good science operates. You test ideas, and see which ones best fit what you’re seeing. It’s about how well the different possibilities match the evidence, not about how good you are at guessing.
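To make that logic concrete, here is a minimal sketch in Python (my own illustration with invented numbers, not anything from Blind Spot). It shows the difference between asking “does the evidence fit my theory?” and asking “does the evidence fit my theory better than its rivals?”:

```python
# A minimal sketch of the logic described above -- an illustration with
# made-up numbers, not code from the book. The point: the same evidence
# can fit two rival explanations equally well, so checking it against
# only your favoured explanation tells you almost nothing.

evidence = "the enemy line is bending into a deep arc"

# P(evidence | hypothesis): how plausibly each explanation predicts
# what the Roman commander actually saw. Both values are hypothetical.
likelihoods = {
    "their line is about to break": 0.9,
    "we are being drawn into an envelopment": 0.9,
}

# The confirmation-biased check: score the favoured hypothesis alone.
favoured = "their line is about to break"
print(f"'{evidence}' fits '{favoured}': "
      f"{likelihoods[favoured]:.1f} -- looks convincing")

# The sounder check: compare how well *each* rival explanation fits.
rival = "we are being drawn into an envelopment"
ratio = likelihoods[favoured] / likelihoods[rival]
print(f"Likelihood ratio, favoured vs rival: {ratio:.1f}")
# A ratio near 1.0 means the evidence cannot tell the explanations
# apart, however strongly it seems to "confirm" the one you wanted.
```

Run it and the two hypotheses score identically: the deep arc “confirmed” the Roman theory only because nobody scored the alternative.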
A common pattern in bad research is that the researcher starts off with a pet theory, then does a study that turns up evidence that’s consistent with their pet theory, and decides that this is evidence of their theory being right. This one simple type of mistake has squandered huge amounts of time, effort and money, and has led to untold needless human tragedy. It doesn’t just affect the studies themselves. Some bad studies manage to become orthodoxy, leading to wasted opportunities, and making later researchers spend years unpicking the mess before they can start rebuilding on sound foundations.
At Cannae, Hannibal defeated the Romans by using their confirmation bias against them. The number of Romans killed in that one battle was far greater than the number of dead at Gettysburg, or any other battle in the American Civil War; it was greater than the number of American dead in the entire Vietnam War. Sixty thousand dead, in one day.
But that’s probably just a drop in the ocean compared to the number of unnecessary deaths caused by right answers being missed through confirmation bias, not just by researchers but by anyone who looks at evidence before making a decision and sees only the shape they want to see in it.
That’s why Cannae was such an important story in Blind Spot. That’s why we’re testing different ways of presenting and examining evidence, in the hope that some of those ways might prevent at least some of that death, loss and suffering. As the examples in Blind Spot show, it’s a realistic possibility. Researchers such as Gerd Gigerenzer have already saved lives by helping doctors avoid faulty judgments in emergency wards. There’s real hope, and that’s worth a lot.
Image: Wikipedia