In their classic book Thinking In Time, Ernest May and Richard Neustadt offered advice for executives confronted with an urgent new problem: When everyone is running around moaning “what should we do?!” set that question aside, they said. Instead, ask, “what’s the situation?” In answering that question, your attention will inevitably explore the history that led up to this moment. Which is where good decisions begin.
When it comes to artificial intelligence, it appears no one is following this advice.
The term “artificial intelligence” was coined more than 60 years ago. Practical work on artificial intelligence is at least that old. Much of the theory underlying artificial intelligence is even older: Alan Turing published his seminal “imitation game” paper in 1950. And yet, in the vast volume of excited and alarmed commentary about AI appearing in the news media and social media over the past six months, there’s been hardly a word about history. The average person could be forgiven for thinking AI started bubbling up only a few years ago.
So I contacted Michael Wooldridge, professor of computer science and head of the department of computer science at the University of Oxford.
In 2020, Wooldridge published A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going. If you are, like me, a layperson trying to make sense of a crazy field turning the world upside down and shaking it by its ankles, it’s a valuable read. If you agree with May and Neustadt that history is where all difficult and important discussions should start — as I fervently do — it’s downright essential.
In an interview this week, Wooldridge told me I’m right to think that the popular discourse about AI is devoid of history. And not only the popular discourse.
“There's a huge number of people working in artificial intelligence at the moment, internationally, and I think a great many of them actually have got only a rather sketchy understanding of what happened before 2012,” Wooldridge told me. “A lot of people really think [AI] started, roughly speaking, then.”
That year, 2012, was when Geoffrey Hinton unveiled startling new work that led to the current AI boom. It was a watershed, no doubt about it. But to treat it as the beginning of AI is like thinking that Microsoft began when Satya Nadella became CEO in 2014. There’s a little more to the story.
If we look back on the history of AI, one immediately useful insight absolutely leaps out: AI has not progressed in straight lines. It is a roller coaster.
Someone invents a new technique. The technique leads to rapid progress. Computer scientists are impressed and excited. They promise that progress will only accelerate. Soon, major milestones will be passed. Spectacular new feats will be possible within a decade or two. The future is nigh!
Then the technique runs into trouble. Progress slows or stops altogether. Money flees the field. Talent follows money. The wildly optimistic predictions of a few years earlier start to look ridiculous. AI passes into a “winter.”
Then a new technique is invented and the cycle cranks up again.
This whole boom-and-bust drama has played out several times over the decades. Knowing that this AI boom is far from the first AI boom sounds at least a little relevant, doesn’t it?
This is not to suggest nothing is different today, of course. The latest technique — machine learning — has already made progress vastly outstripping anything accomplished before and, for the first time, it has delivered products that are already used by millions of people worldwide. Existing systems like ChatGPT really are spectacular and, even in their current state, are likely to have many bone-jarring consequences.
But expectations have galloped far ahead of even those accomplishments. “The progress in the last few years has been pretty incredible. I don’t see any reason why that progress is going to slow down. I think it may even accelerate,” said the CEO of DeepMind, Demis Hassabis, on Tuesday. Hassabis even suggested we could have the holy grail — artificial general intelligence, or AGI — in a matter of years.
The belief that we can extrapolate the recent, stunning advances in a straight line into the future is commonplace in the field. In fact, the possibility of progress slowing, much less stalling — absent regulatory intervention — is almost never mentioned. It’s as if that’s impossible.
But given the history of AI, should we not consider it, if not probable, at least possible? Wooldridge thinks it is.
“I think what we've seen is OpenAI, in particular, what they bet on is just, let's scale it up, let's throw ten times more data and ten times more computing resources,” Wooldridge says. “So they did that with GPT-2. They then did it with GPT-3. Literally ten times more computer resources, ten times more data. In GPT-3, which was really the breakthrough system in 2020, it was unimaginable. I mean, jaws dropped at the scale of the data that they use to train the system. It's basically every piece of digital text in the world. And they had AI supercomputers running for months in order to digest and process all of that data to train the neural networks.”
“And we don't have the details, but it looks like they've done a similar thing with GPT-4. More data, more compute resources. Well, if you've already used every piece of digital data in the world, literally, and you know, they download the entirety of the World Wide Web [to train the system on], where are you going to go next?” Wooldridge asks. “Where are you going to get your data from? And when scaling up compute resources, that works for a couple of cycles. But when you're doing ten times each time, to train one of these things, within a couple of iterations it's the entirety of the world's computer resources that are going to be required in order to train one.” That sure sounds like a wall ahead. Or at least a very big barrier.
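To make that arithmetic concrete, here is a back-of-the-envelope sketch. The tenfold-per-generation figure is Wooldridge's rough characterization, and the baseline numbers in the code are my own illustrative assumptions, not OpenAI's actual figures; the point is only how quickly repeated tenfold increases outrun any fixed supply of text or compute.

```python
# Back-of-the-envelope sketch of the "ten times more each generation" arithmetic.
# All starting values are illustrative assumptions, not published figures.

training_tokens = 3e11   # assume roughly 300 billion tokens for a current-generation model
web_text_tokens = 1e13   # assume roughly 10 trillion tokens of usable text exist online
compute_flops = 3e23     # assume roughly 3e23 floating-point operations per training run

for generation in range(1, 5):
    training_tokens *= 10
    compute_flops *= 10
    status = ("exceeds the assumed pool of web text"
              if training_tokens > web_text_tokens else "still fits")
    print(f"+{generation} generations: {training_tokens:.0e} tokens ({status}), "
          f"{compute_flops:.0e} FLOPs")
```

Under these assumptions the training data outgrows the entire assumed pool of web text within two more tenfold jumps, which is the wall Wooldridge is pointing at.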
In mid-April, Sam Altman, the CEO of OpenAI, publicly agreed that the era of getting better by getting bigger is coming to an end. If the tested and proven method that delivered the results that are making the world gasp is indeed played out, or will soon be, continued progress must depend on developing new techniques. And there are no guarantees or schedules with that sort of work. “You've just got to try to do scientific advances,” Wooldridge says. “And those scientific advances will be much slower and much less predictable.”
If I were a Silicon Valley billionaire, or a more ordinary sort of investor, I’d definitely leave “AI slows down” on the table.
Another reason to delve into the history? To protect ourselves from misunderstanding what we are seeing.
Wooldridge cites the recent example of the Google engineer — a young one, not coincidentally — who warned that a Google project had achieved sentience. “That’s complete nonsense,” Wooldridge says. But it’s a mistake easily made, for a specific reason.
To understand it, take a look at the following video. When it ends, imagine how you would describe what you just saw to someone else.
This film is part of a 1944 experiment in which researchers asked people to watch the images and then simply state what they saw.
Almost nobody described the images as consisting of abstract shapes that moved this way and that. Instead, test subjects described the shapes as, in effect, little beings with motives and intentions. These beings perceived and understood their environment. They interacted with each other, with each treating the others as little beings with thoughts and intentions of their own. Far from describing this film as consisting of abstract shapes undergoing meaningless motions — which is literally all it is — they saw a little story about people.
Humans are hard-wired to be social, so it’s no surprise that we are compulsive anthropomorphizers. That obviously means our intuitions about whether a machine is “sentient” or “conscious” or “intelligent” should not be trusted. We didn’t evolve to make that sort of judgement. But we did evolve to trust our intuitions — so we generally do, even where it’s clearly a mistake to do so.
“I think people tend to read a lot more into those systems than is actually there,” Wooldridge says. And that makes perfect sense given what the systems are. “Enormous amounts of time and energy have been spent simply on getting them to sound plausible, to sound human and natural. That's literally the whole point of large language models, to be able to do that. And it then turns out that when we see something like that we attribute agency, we attribute mind and intention.” But that’s an illusion.
If you have a long conversation with ChatGPT, it looks and feels astonishingly like a conversation with a fellow human. But imagine you abruptly walk away from this conversation and go on holiday for a week, Wooldridge says. “It's not wondering where you are. It’s not getting irritated. It is literally not doing anything. It is not thinking about anything whatsoever. It's a computer program, which is just paused in the middle of a loop. As a colleague put it, when you look at the clouds, it’s amazing how many faces you can see in them. And that's really what people are doing with this technology.”
Would knowing the history of AI help people avoid falling for this illusion?
In the mid-1960s, a program called ELIZA — notice how we even anthropomorphize when naming objects — allowed users to engage in a conversation in which they played the patient and ELIZA the psychiatrist. It was a simple program whose limitations were easily revealed. And yet, many computer scientists were blown away by ELIZA, feeling that they were having meaningful conversations with an actual intelligence. But rather than question their intuition and think about the matter more deeply — what would a machine have to do to truly prove itself aware? — they got excited and ran with it. The reactions to ELIZA “entered the folklore of AI,” Wooldridge wrote. “I remember as a Ph.D. student in the 1980s hearing that lonely and socially ill-adjusted graduate students would converse all night with ELIZA, as a proxy for the human relationships they were incapable of forming; there was even a story — hopefully apocryphal — about ELIZA unwittingly encouraging a depressed student to commit suicide.”
“Training people not to be taken in, I think, is going to be quite a useful skill,” Wooldridge says. The story of ELIZA is a good start. Folklore or not, it is a salutary warning.
But the history of AI can also help on a more fundamental level.
From Alan Turing on, the pioneers of AI recognized that quick and glib claims about what constitutes agency, intelligence, and consciousness would not do. They thought long and hard about what it would mean for a machine to have these attributes and how we would know it did. It was heavy lifting. And people who know that history are likely to be suspicious of anyone who blithely announces that true artificial intelligence — like people but better! — has arrived. Or is just around the corner.
They will demand precise definitions and careful consideration. And evidence. Lots of evidence.
In a word, anyone who knows the history of AI will be skeptical. Skepticism is always a good idea. But when confronted with novel technologies, grand claims, panic, exploding valuations, and FOMO, it is essential.
Unfortunately, there is a remarkable absence of skepticism in all the buzz about AI. People routinely make claims about what is and what will be, and when, without even defining key terms. Worse, others let them. We don’t ask “what do you mean by that? How will we know it when we see it?” We just assume we all know and agree what terms like “intelligence” and “consciousness” and “AGI” really mean. And what machines that had those qualities could do.
Those are ideal conditions for breeding confusion, misunderstanding, hype, and hysteria.
Don’t want to get sucked in? Start reading the history.
There’s a final way in which history is, I think, invaluable in understanding where we are now with AI.
As Wooldridge makes clear in his book, AI researchers haven’t spent the past sixty years trying to replicate the human brain, because the brain is almost indescribably complex and we understand so little of it. Instead, from the beginning of AI, they broke down human thinking into components and worked on developing these separately.
We now have many different components.
As spectacular as it is, ChatGPT is just the latest addition. ChatGPT detects patterns and uses them to make predictions. That’s it. That’s all it does. I don’t mean that in a belittling way, please note. All an atom bomb does is explode. Similarly, the one thing ChatGPT does is a very big thing. But it is only one thing.
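If you want a concrete, if drastically oversimplified, picture of what “detects patterns and uses them to make predictions” means, here is a toy sketch. It is a simple word-pair counter, nothing like the neural network that actually powers ChatGPT, but it shares the same basic move: learn which continuations tend to follow which contexts, then predict the most likely next word.

```python
from collections import Counter, defaultdict

# Toy illustration of "learn patterns, then predict what comes next".
# A real large language model learns vastly richer patterns with a neural
# network trained on enormous text corpora; this counter only shares the idea.

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count which word tends to follow which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word):
    """Return the continuation seen most often after this word."""
    continuations = following.get(word)
    return continuations.most_common(1)[0][0] if continuations else None

print(predict("sat"))  # prints "on": the pattern the counter picked up
print(predict("the"))  # prints "cat" here (several continuations are tied)
```

That is the entire trick, scaled up almost beyond imagining: no beliefs, no goals, no inner life, just statistics about what tends to come next.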
Human intelligence is a whole lot more than one thing. It’s the collective ensemble of these many things, their smooth interoperation, that makes human thought what it is.
Looking at the history of AI from 30,000 feet, it would seem the greatest hope for progress toward the dream of artificial intelligence in the fullest sense of the phrase lies in synthesis. Put the parts together. Make one from many.
So how’s that coming along?
“It really is unexplored territory,” Wooldridge says. “There's a lot of interest about how it might work, but nobody, I think, has got any killer ideas about how to pull them together.”
Considering how little we know about how the components in our own skulls work together — we don’t even know what consciousness is, much less how it is created — that’s not terribly surprising. It is also a little reassuring for those of us, including me, intimidated by the pace of change and skeptical of giving immense new powers to a species still capable of spectacular follies and crimes.
We have not become gods capable of creating beings who can contemplate the universe alongside their makers.
We may one day. But we still have a long, long way to go.
Nice overview, and I agree that there is a common misunderstanding about what current chatbots do, which is not AGI. But even though chatbots are "only" mimickers, they're very good mimickers and potent tools, including in the hands of bad actors. When a computer in 1982 allowed a photography editor to move an Egyptian pyramid for the cover of National Geographic magazine, there was a general unease not unlike what current chatbots have caused. Was there good reason for the unease? I think there was, because it became harder to have confidence in the reality that photos were thought to represent. Today's chatbots are doing this on steroids, in both images and texts. Even if the current chatbot model of artificial intelligence is only a fancy mime, I worry that its cumulative effect on a functioning society could be the metaphorical last straw ... as I look again at those camels in front of the pyramids in that National Geographic cover photo.
“Humans are hard-wired to be social so it’s no surprise that we are compulsive anthropomorphizers.” This phrase summed up the trap we have fallen into. Let’s not anoint AI as a demigod and forget it is a computer loop playing in the background. This article is a great starting point for future discussion.