This is a war story.
I know many people don’t like war stories, what with the boom! and the argh! and whatnot. But please bear with me. I’ll omit the nasty stuff. What you will be left with is a vivid illustration of the fundamental cognitive mistake — a mistake that is as common as it is dangerous. Understand the trap, take steps to guard against it, and your decision-making will significantly improve.
Now. On with the war. (Cracks knuckles. Leans over keyboard.)
Unless you are a serious military history buff, you probably don’t know that there was land combat in North America during the Second World War. Technically, at least. It happened in the Aleutian Islands, the long archipelago that sweeps west from Alaska almost to Siberia.
In early June 1942, the Imperial Japanese Navy seized the Aleutian islands of Kiska and Attu. In part, the move was a diversion designed to draw American forces away from the Japanese attack on Midway. But once airfields and harbour facilities were built, the islands would also give Japan a north-western stronghold. Seventy-five hundred Japanese soldiers landed, along with 500 civilian labourers.
Although forgotten today, the seizure of the islands shocked Americans and seemed to substantiate fears of an imminent invasion of the Pacific coast. Tens of thousands of troops were sent to bolster the tiny Alaska command.
American B-24s started bombing the islands in the fall of 1942. By the following spring, a naval blockade was keeping Japanese vessels from re-supplying the islands, and Rear Admiral Thomas Kinkaid prepared to retake them.
Most of the Japanese facilities had been built on Kiska, so Japanese troops were heavily clustered there. Kinkaid decided to knock out lightly defended Attu first, then finish the campaign with a second landing on Kiska, codenamed “Operation Cottage.”
The Americans expected the landing on Attu to be hard fought. To their surprise, it was unopposed. Relieved troops marched inland.
Attu and Kiska are volcanic islands. Although small, they rise quickly into rocky, forbidding hills and mountains whose slopes are riddled with boulders, outcrops, defiles, and caves. For a defender, it is ideal terrain.
As American troops pushed up the slopes, they soon realized that the Japanese had not abandoned the desolate island but had instead made a tactical retreat to the interior. Japanese snipers popped up, fired, disappeared. Mortar rounds detonated, seemingly from every direction. With thick fog blanketing the island, naval bombardment was ineffective, and the Japanese reputation for fighting to the bitter end was confirmed as soldiers chose suicidal attacks over surrender. The offensive became a slow, grinding, rock-to-rock struggle.
With their ammunition nearly exhausted, the Japanese drew their remaining forces together and marched silently through the night and fog toward an American ammunition depot guarded only by engineers. A desperate battle followed. The engineers held on, and when the fog lifted the next morning it revealed that the Japanese had been all but wiped out. Out of an initial force of 2,650, only 29 wounded Japanese soldiers were taken prisoner.
The American commanders were stunned by the ferocity of the defenders. American forces had outnumbered the Japanese six to one and held massive advantages, yet they still took 3,829 casualties, including 549 killed, in what had been expected to be a quick, minor operation.
Now they had to take Kiska, which was held by a much larger force dug into a warren of bunkers and tunnels.
The Americans bolstered their forces with the addition of 5,300 soldiers from the 13th Royal Canadian Infantry Brigade. The First Special Service Force — a joint American and Canadian commando unit — was also assigned. The total Allied force numbered 34,000.
All through July, American ships and planes pounded the island.
On August 15, during a brief break in the weather, the invasion began.
The landing was again unopposed. Soldiers trudged inland as artillery and naval guns boomed behind them. Veterans of the first landing had seen this movie before.
Thick fog rolled in and a cold rain fell. Visibility plunged. As night came on, sporadic gunfire could be heard in all directions. Tracer bullets lit up the fog. Rumours of snipers and ambushes raced from man to man.
Amid fog, fear, and confusion, soldiers encountered Japanese dugouts. Edging ahead, they discovered the dugouts were undefended. They probed tunnels — and again found them abandoned. There were deadly booby traps and mines. Bullets cracked overhead. The fog lit up with tracers. But whenever it seemed they had pinned the Japanese down, the enemy vanished.
Only slowly did the truth emerge from the fog: There were no Japanese troops on the island. As the Americans would learn after the war, Japanese vessels had slipped past the naval blockade and removed their entire force almost three weeks prior to the invasion.
Taking Kiska had killed 92 men and wounded 221. But Allied soldiers had only ever fired at each other.
Prior to the invasion, scraps of intelligence had hinted at the truth. Aerial surveillance revealed a decline in observable activity. Bomb damage went unrepaired. Coded radio traffic ceased. Some intelligence officers even suggested the Japanese had left the island, but with no evidence that the blockade had been evaded, Admiral Kinkaid rejected that possibility. The Japanese don’t abandon a fight, he insisted, pointing to recent events. And all the evidence the intelligence officers cited was consistent with Japanese forces retreating to the island’s interior to put up a defence even more ferocious than that on Attu. Kinkaid’s critical assumption — the Japanese don’t abandon fights — was never really challenged, even though the Japanese had, in fact, abandoned a fight at Guadalcanal not six months earlier. The invasion went ahead. (For details, see this 2015 paper, published in Joint Force Quarterly.)
The invasion began with the assumption that the Japanese were in place and the expectation that they would fight a certain way, so everyone from the admiral to the soldiers on the ground interpreted what they were seeing accordingly. Abandoned shoreline? Confirmation. The noise of gunfire? Confirmation. Tracer bullets? Confirmation. Only when the evidence that the island had been abandoned was strong enough to overcome even all this “confirmation” did they finally see that their assumptions and expectations were wrong. And by then, scores of men had died.
Neither the literal fog nor the “fog of war” was the cause of the tragedy. Its roots lay in human cognition.
From that Joint Force Quarterly paper:
In his book, The Psychology of Intelligence Analysis, Richard Heuer argues that all individuals assimilate and evaluate information through a personal mental model (or mindset) influenced by perceptual bias. Perceptual bias is not inherently bad. The assumptions we form through this bias allow us to process what would otherwise be an incomprehensible amount of information, but they can also set a lethal trap for unsuspecting mission planners, decision-makers, and intelligence analysts.
But we all know that sometimes people make assumptions and biased judgements. Correcting that seems simple enough. Think more carefully about your decisions. Double-check your conclusions.
So why is this trap so dangerous?
To see why, take a look at whatever you happen to be near. What do you see?
I’m sitting in my office. Looking out my window, I see my backyard. There is a little grass. A maple tree. A fence. Or so it seems to me.
In reality, my perception is a lie.
Thanks to neuroscience, I can say categorically that I do not see grass, a maple tree, and a fence. I see a mental image of grass, a maple tree, and a fence. That image is constructed by my brain. Or if you prefer a more artistic metaphor, what I see is my brain’s painting of reality.
Of course, my painting corresponds to reality in many important ways. That’s why, when I am in my backyard, I can avoid walking into the tree. Or I can climb it. But still, it is a fact that I do not see objective reality. I see my brain’s subjective interpretation of reality, not reality itself. The two may be similar, but they are not identical. Ever.
To see the difference, consider why I can see anything at all in my backyard: There is light. Light is radiation. It bounces off the objects in my backyard and makes its way to my eyes, becoming signals to my brain. My brain draws on that sensory input when it paints its portrait of my backyard.
But visible light is only one small portion of the electromagnetic radiation bouncing around in my backyard. We apply different names to different frequencies — gamma rays, X-rays, ultraviolet, visible light, infrared, microwave, and radio — but it’s all the same stuff. I can’t see most of it, so most of the electromagnetic radiation in my backyard is not in my painting of reality. And yet, it’s there in reality.
Or consider something I cannot see: In every eyeball, there is a tiny spot on the retina where the optic nerve passes through to create the wiring that sends visual signals to the brain. In that one tiny spot, there are no photoreceptors. As a result, we all have a small blind spot in our field of vision. Always. And yet, unless you know the blind spot is there, and you take special steps to reveal it, it never appears in your vision. Why? Because the brain reads what is around the blind spot and paints it in accordingly: If I look at the sky and the blind spot is surrounded by blue, the brain paints the blind spot blue. What I am seeing is not reality. I am seeing what my brain assumes reality to be.
Now, I have presented all this in terms of vision. But the brain does a lot more than paint static images. It also makes sense of what it’s seeing. And it anticipates what will happen next.
It does this using prior experience and existing beliefs.
If I see a flash of red in the branches of my maple tree, I’ll immediately conclude it’s the male cardinal, the only bird (in this region) that is all red. I will also feel that it is here, as so often in the past, to get the seed in the bird feeder that hangs off a branch of the tree. And I will expect to soon see the female cardinal, because I know cardinals spend their lives in pairs and seldom appear alone. All these perceptions and expectations are the product of what was described above as a “mental model” — my brain’s construct of what cardinals are and how they behave.
Our brains are stuffed with mental models like these. We couldn’t function without them. For the most part, they work pretty well. But sometimes, they fail.
Want to see a mental model fail?
By a nifty coincidence, there is a good illustration in the paper I linked to above. In fact, it is in the very sentence about mental models!
Here’s that sentence again: “In his book, The Psychology of Intelligence Analysis, Richard Heuer argues that all individuals assimilate and evaluate information through a personal mental model (or mindset) influenced by perceptual bias.”
The Psychology of Intelligence Analysis is a famous book in intelligence circles. Here’s the cover:
Look closely at that cover. Now look again at the sentence: “In his book, The Psychology of Intelligence Analysis, Richard Heuer argues that all individuals assimilate and evaluate information through a personal mental model (or mindset) influenced by perceptual bias.”
See the problem? The author’s first name is misspelled. It is “Richards,” not “Richard.”
His name is right there on the cover. It could not be clearer. So why was it misspelled in that paper?
“Richard” is a common first name. “Richards” is a common surname. But “Richards” is an extremely uncommon first name. In fact, I don’t think I’ve ever seen it used as a first name anywhere else. So when the writer of that paper looked at the book and saw the first name begin “R…I…C…H…A…”, his mental model expected to see “Richard.”
So that’s what he saw.
If you think I’m over-reading one tiny misspelling, bear in mind that the reason I spotted this mistake is that I’ve encountered it before. Many times.
Google “Richard Heuer” and you’ll see how common it is. For example, look at this Amazon listing, which has a picture of the cover, with the correct spelling, but repeated misspellings beside it:
Every single person who made this mistake looked at the correct spelling but saw something different. Because we don’t see merely with our eyes. We see with our brains and the mental models they construct and use. And when your mental model expects to see “Richard,” you’ll probably see “Richard.”
The great documentary filmmaker Errol Morris wrote a book about photographs, how we see, and how we fool ourselves. It’s called Believing Is Seeing. That’s a useful phrase to keep in your back pocket.
Psychologists and philosophers call the belief that we see reality directly and objectively “naive realism.” It is a profound illusion.
You now know (if you didn’t before) about the blind spot in your retina. So can you now see it in your everyday vision? Can you tell your brain to take a coffee break and stop painting it in? No. Even though you know it’s an illusion, you can’t stop seeing the illusion. The same is true of “naive realism.”
You can understand, intellectually, that you do not grasp reality directly and objectively. But it is impossible to go about your daily life without thinking and acting as if you do.
And that’s a good thing, for the most part. If your peripheral vision picks up something rushing toward you and you think, “my brain’s representation of reality suggests something is coming at me, but that perception may or may not align with obje…” you will be smacked in the head by the football coming at you. You need to treat perceptions of reality as if they are objective reality.
So did our ancient ancestors. Any caveman philosopher who correctly guessed the neuroscientific reality and thought to himself, “the image of a lion rushing toward me may indicate a lion is rushing toward me, or it may be a misinterpretation of sensory input combined with my prior expectation of lion attack …” would not survive long enough to reproduce. Only people who thought “LION!” passed on their genes.
So naive realism isn’t bad. And our mental models are pretty good. There wouldn’t be almost eight billion humans otherwise.
But that is why all this is so dangerous.
We do not experience the conclusions drawn by our mental models as our own personal perceptions of reality that may be more or less true. We experience them as true. They feel true. They are true. Period.
That flash of red in the tree? It’s a male cardinal.
The name of the psychologist? It’s Richard.
That large thing moving towards me? It’s a lion.
The Japanese? They are on the island. Waiting to kill us.
We do not experience these as plausible hypotheses. They are not best guesses. They are not even rebuttable presumptions. They are true.
When we feel that something is true, it is profoundly unnatural to think, “OK, it feels true, but is that feeling reasonable? What evidence is there to support it? Are there other possible explanations for that evidence? Is there other evidence I am not aware of?”
Scrutinizing your own thoughts this way — surfacing even the most basic assumptions, questioning everything — is the equivalent of the caveman philosopher thinking, “the image of a lion rushing toward me may indicate a lion is rushing toward me, or it may be…” Evolution did not wire us that way.
When you feel something is true, the natural response is “LION!”
And yet we are not doomed to live as puppets in the theatre of our minds. We can catch and correct the occasional, inevitable errors generated by our otherwise wonderful brains. There are essentially two ways.
The first is to use our species’ remarkably social nature.
Remember, my perceptions of reality are personal. They are a subjective interpretation. No other person shares my particular blend of experiences and perceptions, so any other person will necessarily have perceptions of reality that are different.
This is why the scrutiny of others is so valuable. If there are flaws in my perceptions that I am not aware of, someone else may not share those flaws and may spot them in my thinking. And I may see the flaws in that person’s thinking, if we have a good, honest, frank, careful discussion.
Of course, “different” is a matter of degree. I am a 50-something white male Canadian raised in the north, educated in universities, who has spent most of his life living in Canadian cities. Another person who shares all those characteristics will be different — but not as different as a 20-something white English construction worker, a 30-something black female architect from France, or an elderly Vietnamese rice farmer.
It’s unfortunate that “diversity is our strength” has become an eye-rolling nostrum, because it is true and important. There really is strength in diversity, at least if it is implemented wisely. When people with very different perspectives work together constructively, they can combine those perspectives to produce far more acute perceptions and judgements than they ever could alone. (Phil Tetlock and I wrote about the value of multi-perspectivism, and the evidence in support of it, in Superforecasting.)
Maybe you’ve heard the parable of the blind men and the elephant? It’s ancient and comes in many forms, but basically it imagines a group of blind men who encounter an elephant. One grasps the trunk and says, “It’s a snake.” Another feels a leg and says, “It’s a tree.” And so on. They’re all a little right but a lot wrong. But what happens if you combine all their perceptions? You discover that it’s an elephant.
That’s the hippie version of how to improve judgement.
The military way is to create a command structure in which commanders share their thinking and subordinates are encouraged to share their perspectives and question their commanders. This is very different than the salute-and-shut-up approach most non-military people think of when they imagine how militaries operate, but militaries across the Western world have all moved in this direction — broadly labelled “mission command” — over the past half century. This is particularly true of the special operations units that are among the smartest, most adaptable, and most successful. (Meanwhile, the Russian military continues to be a classic salute-and-shut-up organization. That’s a big reason why it is, and always has been, a slow, clumsy, stupid organization that can only win with crude force and overwhelming numbers.)
Formalized processes can also be implemented, including a designated “devil’s advocate” — a contrarian whose job is to find fault and argue the opposing case — and “red teams” whose task is to prove that a plan is a terrible idea.
But for any process of this kind to work, the indispensable first step is having people on top with a sense of intellectual humility. It’s easy to accept that “to err is human.” It’s much harder — particularly when you are powerful and surrounded by people who nod and grin when you speak — to accept that you are human and will err. But without that, no process can save you. (And in my experience, an unsettling number of powerful people are indeed without that.)
Which brings us back to the individual.
So much of our cognition is automatic. As with the brain painting in the blind spot, we do not direct it and cannot turn it off. But we also possess the amazing power of conscious thought. It is under our control. We can even turn it around, if we choose, and look at the workings of the mind.
No, we cannot peer inside the black box and look at particular processes. But we can study what science has learned about those processes. And with that knowledge, we can scrutinize our conclusions — the things that feel true, that we are not naturally inclined to question, much less doubt.
We can think to ourselves, “OK, it feels true. But is that feeling reasonable? What evidence is there to support it? Are there other possible explanations for that evidence? Is there other evidence I am not aware of?” This is, as I said, profoundly unnatural. But it is not impossible.
What’s needed to make it happen is a thorough understanding of why such self-scrutiny is necessary — which is to say, an understanding of basic psychology — and a determination to do the work necessary to see the world more accurately and make better decisions. It takes “metacognition.”
Metacognition is simply thinking about thinking. Like any other skill, it’s hard if you’ve never done it before, but the more you do it, the easier it gets. And the more you do it, the better you get at it (provided you have some way to get accurate feedback, as we discuss at length in Superforecasting).
I’ve spent the past 15 years formally studying how people think and decide, writing or co-writing four books along the way. In that time, I’ve had the privilege of meeting and/or studying many intelligent people who, far more importantly, had a demonstrated record of good judgement. The one and only thing I can say they all have in common is metacognition. They think about thinking. A lot. I cannot name anyone, in the present or the past, with a proven record of consistently good judgement — an occasional hit doesn’t cut it — who was not psychologically astute and did not think carefully about his or her own thoughts and how they may have gone awry.
What comes naturally to us often works well. But occasionally, it may produce friendly fire and other disasters. To avoid disasters, there is no substitute for the hard road of thinking about thinking.