Let’s talk about AI. But before we get down to that, I want to take you back to 1924.
We are at a Standard Oil refinery in New Jersey. Employees at the refinery call one particular worksite here “the loony-gas building.” The reason: the building, where gasoline is refined, makes workers not right in the head.
As the New York Times reported on October 27, 1924:
Men who took up work in the plant were in for ‘undertaker jokes’ and serio-comic handshakings and farewell greetings when their comrades learned of their action. So far as could be learned, no special warnings were given employees working with ‘loony gas,’ nor apparently did they sign documents relieving the company of responsibility.
The Times reported on the “loony-gas building” because four men who worked there had recently gone mad — hallucinations, paranoia, rage — and been hospitalized. One had to be subdued by four men and put in a straitjacket. A fifth man was dead.
What made the “loony gas” dangerous was lead. The toxicity of the element had been well known since Roman times. Benjamin Franklin and Charles Dickens had written about it. And by 1924, mental breakdown as a symptom of lead poisoning was also notorious. “Insanity In Lead Workers” is the title of a paper published in the British Medical Journal in 1900.
But in the early 1920s, General Motors and Standard Oil started producing gasoline mixed with lead, which made engines run more smoothly and efficiently. At the time, America was being transformed by an explosion of cars and trucks on its roads and turnpikes. The additive seemed a godsend.
The incident at the refinery was a flashing red alarm that “loony gas” was something less than a godsend.
Even if you don’t know the rest of this story, you can probably guess how it unfolds. After the incident at the refinery, the press turned it into a major story and “loony gas” became a common label for leaded gasoline. Standard Oil lied and said the press invented the smear. In fact, it came from the company’s own employees.
Public health officials in the federal government — including scientists who had been alarmed by the use of lead as an additive from the beginning — called hearings to investigate. The hearings were to include discussion of the many alternatives to lead that had not been explored. The oil and automobile industries fought back and got the hearings shut down partway through. Under pressure, the officials later reversed their position — internal memos described the research behind the reversal as “half-baked” — and concluded there was no public health issue. Leaded gasoline was perfectly safe.
Except it wasn’t. In the 1960s and 1970s, as science increasingly demonstrated that lead was even more dangerous than previously believed, it became clear that leaded gasoline was contaminating the air we all breathe and, because lead bio-accumulates, levels of lead contamination in people’s bodies were worryingly high. Leaded gasoline was shown to cause heart disease, stroke, and cancer. It impaired development in children. It reduced IQ. It changed behaviour.
Unleaded gasoline was introduced in the US in the 1970s and lead was finally phased out entirely in 1996. But it took more than two decades for the rest of the world to follow.
The United Nations Environment Programme estimated that the elimination of leaded gasoline worldwide prevented more than 1.2 million premature deaths per year, boosted IQ among children, and reduced crime. As a result, it saved $2.45 trillion for the global economy.
These sorts of calculations are extremely difficult, so I’m not sure how much stock I would put in these numbers. But we can be sure of one thing: Leaded gasoline was a disaster for humanity. And it could easily have been avoided if industry and government officials had slowed down, talked, done proper research, and looked at all the available options. But they didn’t. Industry just went with the easiest and cheapest option. And a hell of a lot of people suffered and died as a result.
So what’s this got to do with AI?
This week, more than 1,000 scientists, academics, and industry leaders called for a six-month “pause” on AI development.
“These things are shaping our world,” said Gary Marcus, an entrepreneur and academic who has long complained of flaws in A.I. systems, in an interview. “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.”
Is this particular approach the best course of action? I have no idea. I’d have to know far more than I do about AI and the AI industry to have anything like an informed opinion.
But I do know that one widespread response to the call was both predictable and foolish.
We must flag it. And reject it firmly.
Brian Armstrong is the billionaire CEO of cryptocurrency platform Coinbase. He has 1.2 million followers on Twitter.
But despite the rarefied air Armstrong breathes — which has 98% less lead than it did in 1980, thanks to the “committees and bureaucracy” he scorns — he speaks for a lot of people who love and embrace technology and don’t want bedwetters and bureaucrats slowing the march of progress. They were loud and clear on Twitter this week, in response to the call for a pause.
In some ways, I respect their views. I even share them to a degree.
In my first book, Risk: The Science and Politics of Fear (The Science of Fear in the US) I wrote at length about how we should be grateful that we are the healthiest, wealthiest, and longest-lived humans who ever lived. I wrote about the psychology that promotes irrational or excessive fears, and those who use that psychology and those fears — activists, politicians, corporations, bureaucrats — to promote their own agendas. Safety turned into a narrow-minded, short-sighted ideology — “safetyism” — is indisputably a threat to progress and human well-being.
But there is an equal and opposite ideology. It is also dangerous.
In this ideology, technology is always for the best. Sure, there may be bumps now and then, but if people and corporations are left free to do what they wish with technology, the good will outweigh the bad. And we will roll along the straight and golden road of progress.
In this techno-libertarian faith, worries about technology are always misguided, if not downright foolish. Believers love digging up stories from the past of people whose worries about a technology appear ridiculous in hindsight.
Radio promotes juvenile crime!
Movies destroy eyesight!
Bicycles cause insanity!
Hilarious!
Sometimes they are indeed hilarious. I’m writing a book about the history of new technologies. I have a thick file of past fears and, I assure you, that file has made me laugh out loud many times.
But I also have files on things like leaded gasoline. And radium. And cigarettes. And many other technologies that we were once told were perfectly safe.
In every one of these files, you can find people in the past who expressed fears about these technologies. But their fears aren’t hilarious in hindsight and reading them today does not make anyone laugh.
The techno-libertarians don’t collect these fears from the past. They don’t share them on social media. They don’t read them and smile smugly, feeling the warm glow of confirmation bias.
They don’t see them at all.
This blind eye is crucial to the maintenance of their faith. If they acknowledged that people in the past have both over-reacted and under-reacted to the risks posed by new technologies, they would have to acknowledge that there is simply no reason to believe, a priori, that the good of a technology will always outweigh the bad. They would have to acknowledge that each technology is different. That each case must be judged according to its particular circumstances.
They would have to acknowledge that humanity can fall off either side of the tightrope.
But that would be the end of their faith. So they keep blinders on, share stories about the goofy fears of the past, and continue “marching forward with progress.”
Onward Christian soldiers, as an earlier generation would have put it.
Postscript: Ideological clarification. And some navel-gazing.
I wrote Risk in 2007.
The psychology I discussed in it can lean in either direction, causing us to exaggerate dangers (“stranger danger”) or badly underestimate them (cigarettes in the 1970s). The “risk entrepreneurs” I discussed likewise can push perceptions in either direction (on terrorism, the Bush administration pushed risk perception up; on leaded gasoline, GM and Standard Oil pushed it down).
But in that book, I mostly illustrated the material with exaggerated fears. There were two reasons. One, it was 2007, terrorism had been the obsession of governments for six years, and inflated fears of terrorism were doing major harm. There were many other risk misperceptions at the time, but that was the giant. And two, I felt there was a profound lack of appreciation that we are the healthiest, wealthiest, and longest-lived people who ever lived. This is the most fundamental fact of our existence. Acknowledging it matters, both because it should shape how we see and respond to the world, and because, well, a little gratitude to the universe is in order.
But the strongly positive tone of the book led many people to think I was a relentless optimist who believed everything was great and would only get better. I even became a go-to guy for reporters who wanted an “ignore the fear mongering!” comment. I was Dr. Pangloss, who believed, in the mocking words of Voltaire, “all is for the best in the best of all possible worlds.”
But that’s not what I believed. Not then. Not now. Not ever.
I have been reading history since I was five. I read William Shirer’s Rise and Fall of the Third Reich when I was nine (Mum and Dad were pleased, or unsettled, or both). I’ve read a lot of history, is what I’m saying — and no one who has read a lot of history can possibly believe that there are straight lines in this world.
Lines may run straight for a stretch but inevitably there are bumps, curves, and sharp corners. Even U-turns. My maxim: When things are wonderful, they may soon go to hell; when things have gone to hell, they may be approaching the gates of heaven. This maxim can be summed up in two years — 1914 and 1945. In the former, decades of the “march of progress” suddenly turned into a march into Belgium and an inconceivable nightmare. In the latter, humanity’s long spiral into the abyss astonished informed observers by turning into the post-war golden age.
So I am neither an optimist nor a pessimist. I am a possibilist.
I often mention the old Harry Truman joke about wanting to hear from a one-armed economist because he was sick of hearing “on the one hand…but on the other….” Optimists and pessimists are one-handed but possibilists have two hands. And they insist on using both. (This is the “integrative complexity” Phil Tetlock and I wrote about in Superforecasting. It’s more fun to call it two-handed thinking.)
This made for some awkward interviews. A CNN reporter once called me to get the expected “ignore the fear mongering!” comment but I provided a two-handed response: Yes, there was some mongering of fear, I said. However, there was also some legitimate concern. The first part was included in the story. The second was cut.
Not surprisingly, my insistence on complicating things eventually caused the calls from reporters to dry up. (The first rule of media: If you want to boost your profile in the news, take a single position, argue it relentlessly, and never acknowledge any contrary evidence, exceptions, caveats, uncertainties, or doubts.)
My two-handedness also set a torch to a number of career prospects.
Want to work for a DC think-tank? Your thinking must align with the think-tank’s and never deviate. Two-handedism is most unwelcome.
Want to get foundation funding for a project? See above.
As the “ignore the fear mongering!” guy, I had options. When I kept using both hands, they evaporated.
A little-known fact about some of the prominent people and organizations who peddle techno-libertarianism is that they owe their jobs or funding to money from the Koch foundations — the group of foundations set up by the fantastically rich, mostly right-wing Koch family. Some people attack recipients of this money as mercenaries. I think that’s unfair. For the most part, I believe, people are sincere. They have some success in advancing their beliefs. Those with the money see that their beliefs align, so the money is handed over. But that money isn’t inconsequential. Once someone has been given a job or funding, he or she has to keep arguing the party line or be cut off. So even those who want to be more thoughtful and integrative have a powerful incentive to avoid that. And thanks to the wildly creative human capacity for rationalization, sincere belief tends to march in lock-step behind self-interest.
At the risk of sounding self-pitying, writing non-fiction books is an extremely insecure way to make a living. (Even before ChatGPT!) A job in a think-tank can help enormously. So can funding from a foundation or a rich benefactor. A cultish following can also be mighty handy. But these options are almost invariably reserved for one-handed sorts who can be counted on to repeat what the people with money want repeated.
My own circumstances aside — I’m doing fine, really! — this is one of the many problems with democratic life in 2023.
Partisans, dogmatists, and zealots are rewarded. People who see and respect complexity are not.
And what we reward, we get more of.
It sounds like you are annoyed that people assume you are always in favor of going ahead and never think that things can go wrong, so you pick on libertarians for causing that perception, because you usually agree with them. You then pick a tiny number of cases where you think libertarians are wrong and label them as having “faith.” You do not pin the faith label on those who automatically jump up to regulate everything and get it wrong.
Also, you smear libertarians by pointing out that some are funded partly by corporate interests. As if this isn’t the case for all other causes. You correctly say that you think that, for the most part, these people are sincere. It’s probably true of many people who are non-libertarians and anti-libertarians. But you use this to selectively attack libertarians. If the funding does not give cause to dismiss libertarian views, why bring it up at all? Of course, there are many, many libertarians who have never benefited from corporate money. Many of us have campaigned against government funding that benefits us financially.
It is probably true that few libertarians write much about lead, although I’ve seen plenty of writing about cigarettes. That’s hardly surprising, since just about everyone else writes about lead and cigarettes and the need for regulation. However, you join the regulation train too easily. Do you believe that consumers, once well-informed on real dangers (unlike most of the “dangers” we hear about, as you well know), will ignore them and can only be saved by our wise, benevolent, and impartial politicians and bureaucrats? When you dig into the history of regulation, what you will usually find is that regulation follows awareness and consumer pressure for change (as well as economic developments that make the change workable and affordable — restrictions on child labor being a good example).
“Faith” is much better applied to those who see a problem and immediately turn to the coercive solution, despite all the failures throughout history, and despite the public choice issues that explain why regulation is systematically bad and gets worse over time. (Let’s also distinguish regulation from application of general law, which libertarians obviously support. If a company is emitting something definitely harmful and people are being hurt without their consent, you don’t need regulation to stop it.)
Your criticism is especially inappropriate in the AI risk/AI apocalypse panic. Lead in gasoline is clearly unhealthy and has no upside apart from a (temporary) mild lowering of costs. AI has enormous likely benefits. We are just beginning to see them. Just as AI is actually starting to be useful – increasing productivity, accelerating medical advances, and so on – some people want to stomp on it and kill it. What you call the libertarian response was indeed predictable. And correct. Stopping AI is a terrible idea that will cause people to die when AI could have accelerated cures. Just to name one area. And you are wrong that this is the universal libertarian response (sadly). Yudkowsky is a libertarian and rejects calls for moratoriums in every other area. He makes an exception for this one because he’s gone down an intellectual rabbit hole and become hysterical.
I do, however, agree with your general premise that techno-libertarianism isn’t necessarily good. New tech usually brings a set of unintended consequences, some of which can be pretty bad. Techno-optimism (the idea that new tech can be developed quickly, easily, and at low cost) is another problem, especially in the energy space.