Why do we so often listen to blowhards who are sure they know what’s coming and ignore people with real expertise and foresight?
My answer to that usually starts with landmark research completed a little more than a decade and a half ago by University of Pennsylvania psychologist Philip Tetlock (who is also my co-author on Superforecasting).
Here, I’ll first summarize that research. Then I want to walk you through an essay about the future of radio written by a New York Times writer almost a hundred years ago. Why such an ancient forecast? Because it shows vividly the difference between reasonable and unreasonable forecasts — and why we so often fall for the latter.
Back in the mid-1980s, Phil recruited hundreds of experts to make forecasts about global politics and economics. After collecting thousands of forecasts on a huge array of subjects, Phil waited for time to pass so the forecasts could be scored. By 2005, when he published the full results, Phil had completed the biggest examination of its kind and a few conclusions were clear:
The average expert was about as accurate as a dart-throwing chimp, to borrow the joke Phil himself made and came to regret because it’s all anyone remembered.
Accuracy tended to be highest in the shorter term and declined the further out people had to look. At a certain point, nobody had any real predictive ability, and past that point you were simply fooling yourself. This was confirmation of something that anyone familiar with forecasting should already know, and treat as fundamental, but it is astonishing how often people overlook this simple truth: It’s a heck of a lot easier to predict next week than next year or next decade, and at some point it becomes impossible. What that point is varies by domain. With weather forecasts, it’s roughly a week to ten days ahead. With simple demography, it may be decades ahead. Often, we don’t have nearly enough data and research to be confident where the horizon is. But there is a horizon. Always. And as a general rule of thumb, being cautious about it — assuming it’s closer than we think or want it to be — is a good idea.
The experts tended to cluster into two groups. One group was far from perfect, but it had modest-but-real predictive accuracy. The other group was a total mess — so bad it would have been beaten by that dart-throwing chimp. What made the difference between the two groups? Not political orientation, academic qualifications, level of optimism, or anything else you care to think of. What made the difference was their style of thinking.
One group of experts used soup-to-nuts thinking: They looked at problems through different lenses and dissonant views. They tried to glean what they could from each and synthesize it all into a single judgment. This process made their thinking more complex and uncertain — if you have some views pointing in one direction and others pointing in another, you’ll seldom come to a clear and confident conclusion — but they saw reality as complex and uncertain, so they were comfortable with conclusions that reflected that. There was an easy way to spot these experts: Because they often shifted from one view to another, and the views often contradicted each other, they commonly used phrases like “however” and “on the other hand.”
The other experts took the opposite tack, viewing problems through a single analytical lens. This produced a simpler, clearer picture of reality and more confident judgments. With all the information at hand pointing in the same direction, these experts could be spotted by their frequent use of language suggesting “here’s another reason why my conclusion is right” — like “furthermore” and “moreover.” What also distinguished them was certainty. They were much more likely to say something was “certain” or “impossible.”
Citing a scrap of ancient Greek poetry — “the fox knows many things but the hedgehog knows one big thing” — Phil called the first sort of expert “foxes” and the second “hedgehogs.”
Which type of expert had modest-but-real foresight? Foxes.
But which type of expert do people tend to listen to? Hedgehogs. Phil’s data showed an inverse correlation between fame and accuracy: On average, the more famous an expert was, the less accurate his or her forecasts were. That’s because hedgehogs may not deliver accurate forecasts, but they do provide a simple, clear, confident explanatory story — and the certainty we psychologically crave.
I wrote about this in Future Babble and Superforecasting but I was reminded of it recently while doing some research on the early days of radio.
It’s hard to imagine now, but in the years between 1910 and 1930, roughly, radio was as new and thrilling as artificial intelligence is today. How it would develop and how it would change the world were blazing hot topics.
I recently read an essay about the future of radio published in 1924 that typifies exactly how the hedgehog mind works, why others find it so compelling, and why it so badly fails to grasp reality.