In 2021, Ezra Klein interviewed Sam Altman, the CEO of OpenAI, the company that astonished the world when it released ChatGPT at the end of November. In December, Klein interviewed Gary Marcus, a psychologist and the leading critic of AI as it is developing today. (Marcus also hosted another remarkable forum bringing together leading thinkers with a wide range of perspectives.)
Laypeople like me should listen carefully to both interviews. If you have listened, or if you have taken a good look into the AI world by other means, I’m pretty sure I know how you feel about the hubbub.
You feel excited, hopeful, alarmed, maybe a touch frightened. Mostly, you feel bewildered. (“What is AI, anyway?”)
You worry about which industries AI will wipe out and who will get the immense wealth that wiping them out may generate. Even if you don't think your job will be lost — and there’s a decent chance you are very unsure about that — you get a little vertigo when you try to contemplate how AI could change your work and life.
How will it be regulated? Will it be regulated? What will billionaires and undemocratic governments do with it?
What will the world look like when my kids are my age?
It could be wonderful, you feel. It could be hell. Images of science-fiction utopias and dystopias jostle in your head.
Most of all, you feel uncertain. You’re confident this is big, but beyond that? It’s all up in the air.
I’m as unsettled by these feelings as everyone else. But I try to remind myself that — contrary to what hindsight bias urges us to believe — we are far from the first people to feel this way about a new technology.
In part, I do this because misery loves company. But more constructively, I remind myself that people were dazzled and daunted by new technologies in the past because, however uncertain they were, they had to make choices about those technologies. Their choices mattered. In their time. In the time of their children. Even today.
And we, too, will have to make choices that matter for generations to come. Technology may shape the future, but how it shapes the future isn’t inevitable. Because people shape technology.
As regular readers will know, my day job for the past while has required going deep into the history of technology. I’m particularly interested in technologies that are now commonplace, even banal, technologies so embedded in society that we scarcely give them a moment’s thought. Electricity. Radio and television. Refrigerators and washing machines. Sliced bread.
The sheer ordinariness of these technologies has implications for how we see them. “As we become accustomed to new things, they are woven into the fabric of our daily life,” wrote David E. Nye, the renowned historian of technology. “Gradually, every new technology seems to become ‘natural,’ and therefore somehow ‘inevitable’ because it is hard to imagine a world without it.”
That sense of inevitability invites certain conclusions.
Of course radio was invented when it was. Science and technology had advanced to the point where that was inevitable. Of course radio was put to use the way it was. That’s what the technology does.
People didn’t choose that outcome. The technology made it happen.
This is “technological determinism.” It’s the idea that technologies create other technologies and the inherent qualities of these technologies decide when and how they will be used. As Nye notes in Technology Matters: Questions to Live With, “a single scene in Stanley Kubrick’s film 2001 captures the essence of this idea. A primitive ancestor of modern man picks up a bone, uses it as a weapon, then throws it in the air, where it spins, rises, and metamorphoses into a space station. The implications of this scene were obvious: a direct line of inevitable technological development led from the first tools to the conquest of the stars.”
To a technological determinist, people don't decide what technologies to adopt and how we will use them. Technologies impose themselves on us and we adapt. Technologies change societies, not the other way around.
Many famous theorists were and are technological determinists. Spotting one isn’t hard. They go on about technologies “imposing on,” “causing,” “shaping,” “guiding” and “changing” people and society. They seldom or never mention people and society imposing on, causing, shaping, and changing technology. A complete list of determinists would be long. It would include Karl Marx, Jacques Ellul, Herbert Marcuse, Marshall McLuhan, Michel Foucault, and Alvin Toffler.
Conspicuously not on that list are the names of historians of technology.
That’s because when you leave the airy heights of theory and descend to the dusty world of archives — the records of what people actually thought, said, and did — you quickly realize that technological determinism is contradicted by masses of evidence.
Technologies shape people and societies, yes. But people and societies shape technologies, too. And in this game, people and societies go first.
Nye has many illustrations. My favourite involves guns in Japan.
The gun would appear to be the classic case of a weapon that no society could reject once it had been introduced. Yet the Japanese did just that. They adopted guns from Portuguese traders in 1543, learned how to make them, and gradually gave up the bow and the sword. As early as 1575 guns proved decisive in a major battle (Nagashino), but then the Japanese abandoned them, for what can only be considered cultural reasons. The guns they produced worked well, but they had little symbolic value to warriors, who preferred traditional weapons. The government restricted gun production, but this alone would not be enough to explain Japan’s reversion to swords and arrows. Other governments have attempted to restrict gun ownership and use, often with little success. But the Japanese samurai class rejected the new weapon, and the gun disappeared. It re-entered society only after 1853, when Commodore Perry sailed his warships into Japanese waters and forced the country to open itself to the West.
The power of people and societies to shape technologies — even reject them — is forgotten, at least in part, because after choices are made and a technology becomes embedded in society, systems are built on it. At that point, our ability to choose really does become constrained.
The classic illustration is the QWERTY keyboard I am typing on. When typewriters were a new technology, the keys could have been configured any way we chose. Mostly for reasons of accident and happenstance — not the best way to make choices, but the most common — we started with QWERTY. Better configurations have been invented but they failed to catch on because, once people trained to use QWERTY, and QWERTY became the standard, the cost of switching was too high.
But notice that it’s not really the technology that constrains our choices today. It’s the choices people made long ago.
Radio underscores the point. It was invented roughly 120 years ago and, starting in 1920, radio broadcasting exploded from a trivial hobby of nerds to a major new industry. In the first half of the 1920s, there was an enormous debate about what the technology was, how it should be used, who should pay for it, and how it should be regulated. A wide variety of models was put forward. The US military initially wanted an exclusive monopoly on all radio (that was quickly shot down and forgotten, fortunately). Some high-minded folks wanted government-run educational radio. A handful of major corporations wanted radio to be private, commercial, and dominated by national networks.
In the United States, this debate was effectively settled by federal legislation passed in 1927 and 1934.
Today, radio is deeply embedded in law, regulations, business, entertainment, and daily life. And not only radio. The broadcast models chosen in the 1920s were applied without serious debate to television when it came along. And they’re still there. All the horrendously complex systems that run and govern broadcasting in the United States today can be traced back quite directly to decisions made in the 1920s by the likes of Herbert Hoover.
Now imagine we want to bulldoze those systems.
Maybe we think Hoover screwed up and we want to start over with something radically different, like, say, making radio exclusively government-run and educational. (Not a recommendation. Thought experiment only.) Broadcasting is so deeply embedded that such a revolution would be fantastically difficult — so difficult that, barring an actual revolution, it is effectively impossible.
So our choices really are constrained. But it’s not so much the technology constraining us. It’s the decisions made by people generations ago.
The importance of people in shaping technologies helps explain why technological development is so hard to predict. After all, if the inherent qualities of technologies were all that determined when they would be invented and how they would be used, prediction should be relatively easy. But as the history of technological predictions shows, it’s light years from easy. And that’s thanks to those damned humans.
Before it happened, who could have predicted that Japanese samurai, unlike European knights, would reject fighting with guns and go back to bows and swords?
Or to take another illustration from radio: When the debate over broadcasting began in the United States, essentially all participants agreed there should not be any direct advertising on radio. They said so explicitly and repeatedly. If anything about radio would be predictable, then, it would be that American radio would not have any advertising. And yet, within less than a decade, radio was awash in advertising and advertisers were the real power behind the microphone. How that happened isn’t a technological story. It’s a story about people, money, and power.
This human messiness helps explain why even the people who invent and manufacture technologies routinely fail to foresee how — or if — society will embrace and use them. Edison famously thought his phonograph would principally be used as a dictation machine. The Bell Telephone Company marketed telephones exclusively to business because they failed to see that people would enjoy picking up the phone and gossiping with friends. When radio broadcasting exploded suddenly, it came as a complete surprise to almost all the major names in the field, from scientists and engineers to corporate executives.
The history of technology is a history of surprises.
And that is what makes the AI debates thrilling — and frightening — to me.
What the hell is this technology? How does it work? How can we use it? Is it just amusing bullshit or can we do important work with it? Who’s going to control this stuff? What if it’s as important as some people say? What will it change? What won’t it change? Do we want that? I just don’t know. It’s all so … unclear.
History is rife with periods when people wrestled with questions like these and felt as excited, confused, and fearful as we feel now. AI is merely the latest iteration of a very old story.
Like everybody else, I spent some time monkeying around with ChatGPT. Some of the results made me rub my eyes. It was surreal. “Have I gone crazy?” I thought. Then I remembered what a wireless telegraph operator wrote about the first time he heard voices via radio waves in 1910. “Have I gone crazy?” he thought.
Keeping this history in mind is crucial. So is understanding that the technology will not determine the future. We will. Our choices will.
And in the future, our children and grandchildren will be constrained by the choices we made, just as we are constrained by the choices of Herbert Hoover and so many who came before us.
Of course we can always choose not to choose, close our eyes, and let things develop however they will. But that just hands the power of choice to others. And in any event, choosing not to choose is itself a choice.
There’s no way around this responsibility. We will decide.
Nothing is inevitable.
Reminds me of an economics lesson this old history teacher used to teach: the birth of the miniskirt. Post-war Britain taxed adult clothing but not children’s. Enterprising young women took advantage of this loophole and purchased the cheaper but more revealing clothing, and in the process invented the miniskirt. History is a dynamic "climate" in which we all live.
Following your example with Japan and guns, it seems to me that technology is inevitable. Yes, Japanese soldiers didn't like guns and stuck to swords... for a while. Then Commodore Perry came (with guns) and imposed guns on them. Today, security forces in Japan don't use swords. They use guns.