17 Comments

Nice overview, and I agree that there is a common misunderstanding about what current chatbots do, which is not AGI. But even though chatbots are "only" mimickers, they're very good mimickers and potent tools, including in the hands of bad actors. When a computer in 1982 allowed a photography editor to move an Egyptian pyramid for the cover of National Geographic magazine, there was a general unease not unlike what current chatbots have caused. Was there good reason for the unease? I think there was, because it became harder to have confidence in the reality that photos were thought to represent. Today's chatbots are doing this on steroids, in both images and texts. Even if the current chatbot model of artificial intelligence is only a fancy mime, I worry that its cumulative effect on a functioning society could be the metaphorical last straw ... as I look again at those camels in front of the pyramids in that National Geographic cover photo.


If AI creates more skepticism among the general public regarding the accuracy of what they see and read in the news, on the web, and elsewhere, that could be a good thing. Perhaps it could cure us of the Gell-Mann Amnesia Effect once and for all.


More skepticism — in the scientific sense — is a good thing for people and society. But greater mass confusion about facts, and about what is true, would not be a good thing. I think chatbots and their visual equivalents are likely to create more mass confusion, unfortunately.


I guess my view is that the average quality of the information out there is already incredibly low. It's better that people recognize this than that they continue to credulously accept what they see and read. Even smart people are already more confused than they realize. Joyce Carol Oates being suckered by the poorly photoshopped Infinity Gauntlet on King Charles is just one recent (and hilarious) example.

Maybe waking up to their confusion will inspire people to learn and practice better epistemological principles than "Seeing is believing" and "If I read it in the news, it must be true." We'll see.


Many years ago I went on a 5-week mountain wilderness survival course in the Wind River Range of Wyoming. At one point during that course, I and a group of about 20 other people, including 3 instructors, had to cross the narrow but deep torrent of a glacier-fed stream.

The only way to get across — without going unknown miles off the trail through damp and dense forest — was on a natural bridge made by a large fallen tree. Unfortunately, that fallen tree was wet, without bark, and very slippery.

I was reluctant to cross, knowing that if I fell into the stream, not only would it be very cold — itself potentially fatal for me — but that I could very easily drown with my 80-pound pack. (Like everyone else I had unbuckled my waist strap, but the pack was still attached to my shoulders.)

Because of my concern I was the last to attempt a crossing. Halfway across, one foot slipped on the wet wood, and I did a little “dance” with flailing arms and one leg in the air.

Of course I lived to tell the tale, because I eventually found enough balance and stability to finish the crossing.

I think human society is at a similar uncertain point now.

I’m not opposed to A.I. research. But I think the erosion of facts and truth — likely to be amplified greatly by current commercial A.I. technology — is contributing to a very unstable and unpredictable path for humanity.

“We’ll see.” I’m 73 years old now, so I may not witness a successful crossing of the current information landscape, if it happens. I certainly hope to do so.


Good story. With a useful message, in my read at least.

It wasn't rational to risk your life just to get across a stream. Too high a price, too little reward.

Same applies to AI, imho. Humanity already faces enough unstable and unpredictable forces of huge scale. The last thing we need is yet another one. The fact that we're going to race ahead anyway is just more evidence that we aren't ready.

Consider how we decide how much power to give to a child. We attempt to match the power to their level of ability. Sensible!

But when it comes to us adults, we throw that common sense out the window and holler for more, and more, and more, without limit, as if we were gods, creatures of unlimited ability. Completely nuts. With a huge price tag attached.


Thank you, Phil. My story was less an indictment of humanity and more about the unknown chances for our survival ... or the chances of survival for *any* newly developing technological civilization.

In the back of my mind I was thinking about the "Fermi Paradox," which suggests that the Milky Way galaxy should show detectable signs of intelligent extraterrestrial life but doesn't seem to. Is there some kind of "Great Filter" that eliminates most — or all — young technological civilizations like ours?

For a while the most likely potential filter seemed to be the development of nuclear weapons. (Which haven’t gone away.) Then bioweapons or a bio-accident became possibilities. (Not to mention a natural pandemic.) Now the focus seems to be on artificial intelligence.

I have long thought of humanity as a self-evolving child, with no parents to guide us, and that somehow we’ll just have to muddle through. Nature doesn't care about the survival of individual species. Species either survive or they don’t, depending on how well they’re adapted to their environment.

Don’t get me wrong — I think humanity has great potential, and I hope it survives. (I’m naturally prejudiced.)

But it may be possible that young “organic” civilizations like ours survive only by intentionally creating offspring with more efficient information processors — “brains” — that aren’t limited by relic biological evolutionary code.

(Think “kluge.”)

Or maybe our much-vaunted intelligence is the result of a runaway selection process akin to the ones that resulted in the fragile evolutionary extremes of peacocks, hummingbirds, and cheetahs…

I don’t know. I’m only speculating. I think we are on a slippery bridge to something else, but — unlike on my wilderness trek — I don’t think going back is an option.


“Humans are hard-wired to be social so it’s no surprise that we are compulsive anthropomorphizers.” This phrase summed up the trap we have fallen into. Let’s not anoint AI as a demigod and forget it is a computer loop playing in the background. This article is a great starting point for future discussion.


People are reacting as if we're in a self-created first-contact scenario. 👽 👾

“We have no need of other worlds. We need mirrors. We don't know what to do with other worlds. A single world, our own, suffices us; but we can't accept it for what it is.”

― Stanisław Lem, Solaris


Counterpoint: I anticipate AGI not because I don't understand the history, but because I do.

The history of AI is full of extraordinary efforts toward solving small, contained problems: Play chess or backgammon very well. Identify handwritten words from a picture. Decide whether the tone of an e-mail is generally positive or negative. Distinguish whether a photograph contains (for instance) a dog.

I well remember how difficult each of these problems has been. Whole subfields of research have been devoted to gathering, labeling, and processing data using novel and specialized statistical techniques. Sometimes, they've worked well! Deep Blue was an astounding achievement, 25 years ago. The post office's ability to automatically read addresses has been a boon to mail sorting.

This is the level of achievement you mean, when you write, "Someone invents a new technique. The technique leads to rapid progress. Computer scientists are impressed and excited. They promise that progress will only accelerate." I don't doubt that over-excited people have made such promises from time to time, though it's hardly been the norm.

LLMs now solve entire, vast classes of these problems, without incorporating any of the original researchers' insights. The idea of "sentiment analysis" as a field is suddenly laughable; why would you syntactically decompose sentences or assign valence numbers to words, when you can just ask ChatGPT?
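To make that contrast concrete, here is a minimal sketch of the two approaches (the valence table, its weights, and the prompt are invented purely for illustration, and no particular chat-model client is assumed):

```python
# Toy contrast: "classic" lexicon-based sentiment scoring vs. the LLM-era approach.
# The word list and weights are invented for illustration, not drawn from any real lexicon.

VALENCE = {"great": 2, "good": 1, "fine": 0, "bad": -1, "awful": -2}

def lexicon_sentiment(text: str) -> int:
    """Old-style approach: tokenize, look up a per-word valence, sum the scores."""
    tokens = text.lower().replace(".", " ").replace(",", " ").split()
    return sum(VALENCE.get(tok, 0) for tok in tokens)

# LLM-era approach: no tokenizer, no valence table, you just ask.
# (Hypothetical prompt only; the actual API call depends on whichever
# chat model and client library you use, so it is omitted here.)
LLM_PROMPT = (
    "Is the tone of the following e-mail generally positive or negative? "
    "Answer with one word.\n\n{email}"
)

if __name__ == "__main__":
    # Prints -1: "good" (+1) plus "awful" (-2).
    print(lexicon_sentiment("The service was good, but the delay was awful."))
```

The point is the asymmetry: the first half took whole subfields years of lexicon-building and feature engineering to do well; the second half is a sentence in plain English.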

And the Turing test was an unscalable peak in the distance, until now. ELIZA and its ilk are easy for computational linguists to break consistently with unusual syntax alone, but the greater challenge has been humanlike knowledge and reasoning, two general capabilities so far outside the scope of engineering possibility that hardly anyone has even attempted them, let alone succeeded. That's what the Turing test measures, at root.

There's more to humanlike intelligence than humanlike knowledge and reasoning, which is why ChatGPT is not as smart as you or me. But maybe there's not that much more -- no other similar peak looms. Turing's test for thinking machines is old, but I think it's still more right than wrong.


Thank you. This is a welcome antidote to some of the preposterous hype surrounding AI today. No, we are nowhere close to AGI. We are nowhere close to "the singularity". As someone who wrote a primitive chatbot in BASIC in high school decades ago, I see ChatGPT as a massive advance. But I still think its similarity to ELIZA is much greater than its similarity to genuine intelligence.


I don't think it's true that "no one" is thinking about or aware of the history of AI. Certainly not all of us who have been following it for more than three decades. I agree completely, though, that people read too much into the outputs and also that people tend to project recent rapid progress while ignoring the typical s-curve of technological development.


Thank you for keeping Neustadt and May’s work alive! I found a great irony in your quotation describing AGI as “the holy grail”. I realized that this is one of those analogies they warned us about, and in that spirit I reflected that I was not as clear as I should be about the origin of that term and how its initial meaning evolved. I knew that it was supposed to be the cup Christ drank from at the Last Supper, but when did that story first emerge?

According to Encyclopedia Britannica (https://www.britannica.com/topic/Grail) (yes, I know!), evidence of it first appeared in a romantic tale of the late 12th century and thereafter (among other things) got woven into the Arthurian quest legends. It took on many different forms (in one case, it was a stone that fell from Heaven) and many different explanations. It was supposed to be invested with a variety of different miraculous powers, depending on the version.

So, in summary, it is a mystical object that is impossible to ‘discover’ at the end of any quest, because it is the creation of myth-makers and troubadours (I say that with great respect).

AGI as the Holy Grail? That sounds about right!


If you think about AI and its future implications, this post is a must-read!


Great piece! You make a number of very good points here. AI really isn't "intelligent"... It is simply mimicking what humans do, not duplicating it. It's not so much that brain function is inscrutable as that the processes are non-ergodic (they can't be adequately modeled mathematically). You are also correct that anthropomorphism and fundamental misunderstandings of biological psychology are driving much of the hype over AI. Consider this essay (which is in fundamental agreement with you):

"ChatGPT, Lobster Gizzards, and Intelligence"

https://everythingisbiology.substack.com/p/chatgpt-lobster-gizzards-and-intelligence

Thanks for this really great read. Sincerely. Frederick


Thank you for introducing me to a most comforting phrase: my brain is not possessed of an inferior intelligence (or worse), it is simply non-ergodic! A perfect defence!


you're still on the top of the evolutionary mountain...
