For anyone in the risk business, some elements of the debate about AI are depressingly familiar.
AI hasn’t hurt people yet, at least not that we know of (although there has been reporting on what may be an early case). But there are obvious ways it could, even in its nascent form (like the industrialized production of sophisticated disinformation). And if we apply a little imagination? AI could deliver enormous potential benefits. (Think of the productivity boost if much of the administrivia handled by white-collar workers is taken over by AI.) But that same modest application of imagination also allows us to foresee AI creating enormous potential harms. (Think of how AI could boost the harmful effects of social media, including filter bubbles, echo chambers, and group polarization. Or think of how propaganda, fraud, and other forms of manipulation could be supercharged by bots that can look and sound exactly like anyone — even your neighbour, your son, or your mother — and carry on a conversation that feels entirely human.) And this is restricting the analysis to the short and medium term, thus excluding worries about AGI and freaky existential risks.
So will we sit down, think about this carefully, and try to craft a regulatory scheme that captures the benefits while minimizing the risks? Of course not. This is Homo sapiens we’re talking about.
For obvious evolutionary reasons, we are excellent at responding to experience. If something goes horribly wrong, we learn, we adapt, we act to reduce the risk of it going horribly wrong again. But something that could go wrong, but hasn’t yet? No matter how foreseeable — even obvious — the threat is, that is a mere abstraction. It doesn’t move us. We don’t act.
Until it actually goes horribly wrong. Then we act.
Cynics have dubbed this the “tombstone mentality.” History is littered with exhibits of its handiwork. So are cemeteries.
This is why, when it comes to regulating the hypothetical risks of AI, I would bet a Benjamin Franklin that we will not seriously regulate, or take any other major steps to minimize risks, until something bad happens.
Let’s just hope that when it does, the damage isn’t too severe. And that it’s reversible.
Of course, if we do nothing now and wait until AI goes haywire to act, the technology will likely be so dispersed and embedded in our social and economic systems that all sorts of regulatory alternatives we could choose today will be effectively impossible. But on the bright side, doing nothing now should juice the Nasdaq. And make some billionaires more billionairey.
I don’t want to be a fatalist about this, of course. We are not slaves of our biology. And some countries — hello, European Union — have stronger cultures of precaution. If anything is done, I expect it will be done there.
But in all probability? Forget it. In fact, I think I’ll raise my bet from a Benjamin Franklin to one Bitcoin.
Following is an article that elaborates on the tombstone mentality. I published it in April 2021 in a Canadian publication, so it starts with a pandemic-related story from Canada. But this stuff is universal, as the multinational examples from history demonstrate.
Near the end, the article sounds more hopeful than I do today. What has Canada done since it was published? Nothing. Which is a big reason why I sound so gloomy now.
(Published in The Hub, April 12, 2021)
Stage One: A threat is foreseen. When and how it will strike isn’t known but that it will, sooner or later, is clear.
Stage Two: We do little or nothing.
Stage Three: The threat strikes. Lives are lost. Our failure to prepare is bemoaned. We prepare for the next time the threat strikes.
Stage Four: Time passes. Memories fade, and the sense of threat with it.
Stage Five: Although the threat is as real and inevitable as before, preparedness lapses. Return to Stage One.
Few noticed, but we just witnessed a grand illustration of this cycle.
At the end of March, the federal government, the Ontario government, and the French biopharmaceutical company Sanofi announced they would jointly spend almost $1 billion to greatly expand Sanofi’s Toronto vaccine-production facility. The purpose was neatly summed up by Alan Bernstein, a member of Canada’s Covid-19 Task Force: “There will be future pandemics and so we need to be ready next time and we clearly weren’t this time.”
The Sanofi facility is on the campus of the former Connaught Laboratories. Once a Crown corporation, and before that a non-profit affiliated with the University of Toronto, Connaught Laboratories was a centre of excellence in biomedical research and a major vaccine manufacturer until the Mulroney government sold it to the French company that became Sanofi. That sale is the reason Canada lacked domestic vaccine production capacity when Covid-19 struck.
And how did Connaught come into existence? In the late 19th century, German scientists discovered an antitoxin that could save the lives of children stricken with diphtheria. American pharmaceutical companies produced the antitoxin but no Canadian facility could. Canadian parents watched their children slowly asphyxiate and could do nothing because the medicine was too hard to get and too expensive. Between 1880 and 1929, 36,000 children died in Ontario alone.
Connaught Laboratories was created to defend Canadians against diphtheria, rabies, smallpox, and future pandemics. The era in which it was created is evident in its name: The “Connaught” it honours is the Duke of Connaught, Canada’s governor-general from 1911 to 1916.
So that announcement in March was not merely Canada reaching Stage Three in dealing with the threat of pandemic. It was the second time Canada reached Stage Three and realized it needed domestic vaccine production.
The wait-until-someone-dies approach is so common it has several names. One is “tombstone mentality.” Another is “postcautionary principle.”
Examples abound.
In the 1990s, aviation safety experts warned that terrorists would hijack planes and crash them into buildings. They urged that cockpit doors be reinforced and kept locked in flight. The airlines balked and rallied political allies to defeat the proposal because reinforced doors are heavier and therefore burn a little more fuel. After 9/11, the airlines relented. President George W. Bush congratulated them for their quick action.
Scientists warned for years that the lack of a tsunami early warning system in the Indian Ocean was a tragedy in the making. They were ignored. A tsunami struck on Boxing Day 2004, killing almost a quarter of a million people. Now there’s an early warning system.
In the late 1940s and 1950s, as civil aviation grew rapidly in the United States, experts warned that radar, ground control, and federal regulation were needed to avoid disaster. No one budged. In 1956, two airliners slammed into each other in mid-air over the Grand Canyon, killing 128 people. Everything that had been called for was enacted, including the creation of the Federal Aviation Administration.
In 1855, the great scientist Michael Faraday warned that the putrid waters of the Thames, long used as London’s toilet, would soon inflict a terrible toll. He was ignored. Three years later, “the Great Stink” — an event that needs no more description than its name — and the fear of cholera it stoked finally spurred London to create a sewage system.
In all these cases, and so many others like them, there are many factors at work, from ideology to institutional self-interest. But the fundamental driver is psychology.
Intuition typically trumps calculation in our judgement and therefore is the principal source of what we do. And what we don’t do.
Intuition does not care for statistics. It does not do a cost-benefit analysis. It does not think carefully and logically about risk and risk management. Instead, it relies primarily on experience and feeling. A risk that is known from experience to be a threat, or that stirs strongly negative emotions, will feel like a threat. Internal alarms ring. A risk that does neither will feel like nothing. We shrug.
This creates a predictable dynamic: From complacency and inaction, we swing to panic and action, then slowly forget and return to complacency.
We purchase earthquake insurance only after the earthquake, then let the policies lapse as the years go by and the risk grows. We buy portable generators and emergency rations only after ice storms or hurricanes, then let them fall into disrepair and go out of date as time passes without disaster.
Leaders in governments and corporations possess the same psychology and are susceptible to the same miscalculations. But even if they recognize the mistakes, they must respond to external forces, such as popular perception, that are themselves shaped by the psychology.
Will they put resources into the threat that feels abstract and distant? A threat few care about, and one that will earn them no reward if they take it seriously? Or will they put those resources into the threat that feels urgent? The one on the front pages of newspapers? The one that puts them at the centre of the action, in line for praise and promotions? It’s not a hard call.
I have no doubt there will be extensive post-mortems of our pandemic preparedness and response. And I’m very confident that we will be vastly better equipped for such an event in future. At least for a while.
But as we recover from the pandemic, we need to see that the problem is bigger than this pandemic — and the goal should be bigger than preparing for the next.
Along with pandemic preparedness, I see four national conversations we should have. And one mechanism for having them.
We need to study and clarify how and why we so routinely fail to prepare for foreseeable disasters until a body count forces us to act. The psychology is permanent but it can be illuminated and misperceptions corrected. And institutional incentives can be altered if we know what needs to change.
We need to surface what we are ignoring. This time, it was a pandemic. Earlier, it was financial meltdown. Before that, terrorism. What other low-probability, high-consequence events are we not taking as seriously as we should? Are there cost-effective steps we could take to mitigate them?
We need to consider how preparedness efforts could be structured and implemented to protect them against future short-sightedness. An insurance policy that must be renewed every year is highly vulnerable to fading memories, while one that automatically renews is much safer. How else can we make human nature work for preparedness and not against it?
While foresight and preparedness can be improved, they will never be foolproof. When they fail, resilience is the last line of defence. Beyond identifying and preparing for individual risks, how can we make systems more resilient?
Each of these questions is enormous. And they should all be tackled in the context of a thorough review of the failure of pandemic preparedness and recommendations for improvement.
This is a mammoth job. It seems obvious that nothing less than a Royal Commission will do.
Is it worth such an effort? The lives and money lost to the pandemic would seem to easily justify it. Now add the lives and money that could be lost to future shocks if the tombstone mentality continues to dominate. And consider that so many of these questions speak directly to the fight against climate change, the challenge of the century.
Put all that on the scale and a Royal Commission seems a modest step.
I like this article. I do think we need to study and prepare for existential-level (low-likelihood, high-consequence) risks better than we do. One added consideration: the types of people who innovate and invent (say, entrepreneurial types) are fundamentally different in character from regulation types; there is very little overlap. So innovators are always ahead of the curve, and the regulation types always play catch-up. It was ever thus.
"We only learn from blood" , was my attempt at a dramatic turn of phrase, in the lecture on the Titanic that got me "internet famous" for a few days in 1997, when the NYT was giving people good links to "Titanic" material on the new Internet thingy with the movie out.
It got me flown cross-continent to, as I told my wife, "lecture the US Navy on marine architecture," at least in the sense that a little science conference sponsored by the Navy picked me to give the general-meeting presentation for all attendees.
My point came from how many earlier disasters had already shown the Titanic disaster to be quite possible, and, indeed, how earlier ships had been safer: safety factors had been chopped out one by one to save money, for sixty years... until a sufficiently LARGE disaster shook everybody up, and new regulations came overnight. But it took a lot of blood to make people notice. The "Atlantic" disaster, 500 dead, was just not big enough!
It's all still collected at my old Unix-user-group website, 25 years later:
https://www.cuug.ab.ca/branderr/titanic/
The original short essay (#1) is the quick read, but #3, the longer lecture with graphics, talks the most about the "wait until people die" topic that Dan raises today.
My amendment is that ENOUGH people have to die. (On gun regs, the US is still getting there.)
Dan touches on that, though, with the same topic I highlight in the longer lecture: the newspapers went wild. The "Titanic" stayed on the front pages with agony stories about individual survival for weeks. THEN there was a big commission of inquiry. And it really counted that the Titanic killed *celebrities*.