17 Comments

Nice overview, and I agree that there is a common misunderstanding about what current chatbots do, which is not AGI. But even though chatbots are "only" mimickers, they're very good mimickers and potent tools, including in the hands of bad actors. When a computer allowed a photography editor to move an Egyptian pyramid for the cover of National Geographic in 1982, there was a general unease not unlike what current chatbots have caused. Was there good reason for the unease? I think there was, because it became harder to have confidence in the reality that photos were thought to represent. Today's chatbots are doing this on steroids, in both images and text. Even if the current chatbot model of artificial intelligence is only a fancy mime, I worry that its cumulative effect on a functioning society could be the metaphorical last straw ... as I look again at those camels in front of the pyramids in that National Geographic cover photo.


“Humans are hard-wired to be social so it’s no surprise that we are compulsive anthropomorphizers.” This phrase summed up the trap we have fallen into. Let’s not anoint AI as a demigod and forget it is a computer loop playing in the background. This article is a great starting point for future discussion.


People are reacting as if we're in a self-created first-contact scenario. 👽 👾

“We have no need of other worlds. We need mirrors. We don't know what to do with other worlds. A single world, our own, suffices us; but we can't accept it for what it is.”

― Stanisław Lem, Solaris


Counterpoint: I anticipate AGI not because I don't understand the history, but because I do.

The history of AI is full of extraordinary efforts toward solving small, contained problems: Play chess or backgammon very well. Identify handwritten words from a picture. Decide whether the tone of an e-mail is generally positive or negative. Distinguish whether a photograph contains (for instance) a dog.

I remember well how difficult each of these problems was. Whole subfields of research have been devoted to gathering, labeling, and processing data using novel and specialized statistical techniques. Sometimes, they've worked well! Deep Blue was an astounding achievement 25 years ago. The post office's ability to automatically read addresses has been a boon to mail sorting.

This is the level of achievement you mean when you write, "Someone invents a new technique. The technique leads to rapid progress. Computer scientists are impressed and excited. They promise that progress will only accelerate." I don't doubt that over-excited people have made such promises from time to time, though it's hardly been the norm.

LLMs now solve entire, vast classes of these problems, without incorporating any of the original researchers' insights. The idea of "sentiment analysis" as a field is suddenly laughable; why would you syntactically decompose sentences or assign valence numbers to words, when you can just ask ChatGPT?
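To make the contrast concrete, here is a minimal sketch of that lexicon-style approach; the valence table and example sentences are invented for illustration, not taken from any real system:

```python
import re

# A toy lexicon-based sentiment scorer in the spirit of pre-LLM
# "sentiment analysis": give each word a hand-assigned valence number,
# sum them, and call the total the tone of the text. The lexicon below
# is a tiny invented sample; real systems used thousands of scored
# words plus syntactic rules for negation and intensifiers.
VALENCE = {
    "great": 2.0, "good": 1.0, "fine": 0.5,
    "bad": -1.0, "terrible": -2.0,
}

def sentiment(text: str) -> float:
    """Sum the valences of known words; a positive total reads as positive tone."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(VALENCE.get(w, 0.0) for w in words)

print(sentiment("The service was great"))        # 2.0 -> positive
print(sentiment("Not good, actually terrible"))  # 1.0 - 2.0 = -1.0
# Note the second score: "not good" contributed +1.0, because a bare
# lexicon ignores negation. That weakness is exactly why researchers
# layered syntactic decomposition on top.
```

ChatGPT handles the negation, the sarcasm, and the context without any of this machinery, which is the point.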

And the Turing test was an unscalable peak in the distance, until now. ELIZA and its ilk are easy for computational linguists to break consistently with unusual syntax alone, but the greater challenge has been humanlike knowledge and reasoning, two general capabilities so far outside the scope of engineering possibility that hardly anyone has even attempted them, let alone succeeded. That's what the Turing test measures, at root.
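To see how shallow those early systems were, here is a minimal Python sketch of ELIZA-style pattern matching; the rules are invented for illustration, not taken from Weizenbaum's actual script:

```python
import re

# A toy ELIZA-style responder: an ordered list of (pattern, template)
# rules that reflect the user's words back. Invented for illustration;
# Weizenbaum's real script also swapped pronouns and ranked keywords.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.I), "Your {0} seems important to you."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no rule fires

print(respond("I am worried about the future"))
# -> "Why do you say you are worried about the future?"

print(respond("Worried, am I? About everything, I am."))
# -> "Please go on." (same sentiment, but the inverted syntax matches
#    no rule; there is no knowledge or reasoning underneath)
```

Pile up thousands of such rules and you still get ELIZA, not understanding; that gap is what the Turing test was probing.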

There's more to humanlike intelligence than humanlike knowledge and reasoning, which is why ChatGPT is not as smart as you or me. But maybe there's not that much more -- no other similar peak looms. Turing's test for thinking machines is old, but I think it's still more right than wrong.


Thank you. This is a welcome antidote to some of the preposterous hype surrounding AI today. No, we are nowhere close to AGI. We are nowhere close to "the singularity". As someone who wrote a primitive chatbot in BASIC in high school decades ago, I see ChatGPT as a massive advance. But I still think its similarity to ELIZA is much greater than its similarity to genuine intelligence.


I don't think it's true that "no one" is thinking about or aware of the history of AI. Certainly not those of us who have been following it for more than three decades. I agree completely, though, that people read too much into the outputs, and also that people tend to project recent rapid progress while ignoring the typical S-curve of technological development.


Thank you for keeping Neustadt and May’s work alive! I found a great irony in your quotation about AGI as “the holy grail”. I realized that this is one of those analogies they warned us about, and in that spirit I reflected that I was not as clear as I should be about the origin of the term and how its initial meaning evolved. I knew it was supposed to be the cup Christ drank from at the Last Supper, but when did that story first emerge?

According to Encyclopedia Britannica (https://www.britannica.com/topic/Grail) (yes, I know!), the Grail first appeared in a romantic tale of the late 12th century and thereafter (among other things) got woven into the Arthurian quest legends. It took on many different forms (in one case, it was a stone that fell from Heaven) and many different explanations. It was supposed to be invested with a variety of miraculous powers, depending on the version.

So, in summary, it is a mystical object that is impossible to ‘discover’ at the end of any quest, because it is the creation of myth-makers and troubadours (I say that with great respect).

AGI as the Holy Grail? That sounds about right!


If you think about AI and the future it is shaping, this post is a must-read!


Great piece! You make a number of very good points here. AI really isn't "intelligent"... It is simply mimicking what humans do, not duplicating it. It's not so much that brain function is inscrutable as that the processes are non-ergodic (they can't be adequately modeled mathematically). You are also correct that anthropomorphism and fundamental misunderstandings of biological psychology are driving much of the hype over AI. Consider this essay (which is in fundamental agreement with you):

"ChatGPT, Lobster Gizzards, and Intelligence"

https://everythingisbiology.substack.com/p/chatgpt-lobster-gizzards-and-intelligence

Thanks for this really great read. Sincerely, Frederick
