"But if we think AI does what it does the way we do it, and it’s better at it than we are, what makes us special?"

This really is the key thing, and it's not uncommon for new tech to be interpreted as "making people obsolete"... which it somehow never does (economists will tell you that the higher productivity of new tech creates jobs with higher wages than the ones it displaces, which is why unemployment today is 3.5% and not 35%, despite persistently confident predictions over the past 150 years that mass obsolescence is coming Real Soon Now).

AI produces intelligent behaviour without consciousness. There is every reason to believe that that's extremely limited, because nature hasn't been able to produce human--or even dog--style intelligence without consciousness (and embodiment, which LLMs also lack). As I wrote last summer:

"There's been a lot of breathlessness in the press lately about machine intelligence, and how the behaviour of predictive text programs like GPT-3 implies that they are conscious.

This is very much like saying an automobile's ability to move across the ground implies it has legs, or at least feet: in nature, terrestrial locomotion is almost universally a product of something like a foot pressing against the ground, usually at the end of a leg. Even slugs and snails are of the class Gastropoda, which literally means 'stomach-footed'.

If you want to get terrestrial locomotion in nature, you probably need feet.

So do cars have feet?

Nope. Cars have wheels. The fact that they move across the ground implies nothing.

Artificial devices--machines--routinely emulate the behaviour of biological organisms even though they lack the features that are essential for the biological production of that behaviour.

It follows from this that intelligent behaviour is not evidence of consciousness."



Nobody sucks worse at AI than SF writers. They're writers. They either make the AI just another character (Asimov's 3-laws guys, or "Mr. Data") or a God, an imponderably wise source of all wisdom.

They really suck at spotting the capacity Edward Tufte noted when he pointed out that the human and computer are each brilliant at processing data, but in very different ways, and great synergy comes from getting them to work together in a tight loop. (That's what you see with everybody's face in their phones, getting into a tight question/answer loop.)

Replacing human intelligence is an insanely hard goal. But *partnering* with human intelligence, filling in the gaps where it sucks (crunching numbers, visualizing large amounts of data) - there, we have prospered hugely. I wouldn't be researching what "AI can do" but what "People can do with tools".

When I wrote code at work that was just big SQL statements with a lot of OR and AND clauses - classifying a sewer backflow as an actual backup (mains water going backwards to the house) IF the responder's notes contained terms like "running backward" or "depth increasing" or sixteen other popular phrases - the office staff called them the "AI programs".

I demurred, but realized that for them, it wasn't a joke. That's because the programs did something, anything, that had previously required a human reader, and now could be even partially automated. If a human intelligence can be freed up, the thing that freed it is, in practical working terms, an AI.
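The phrase-matching those big SQL statements did can be sketched in a few lines. This is a hypothetical reconstruction, not the utility's actual code: the function name and the phrase list here are illustrative stand-ins for the "sixteen other popular phrases" in the real OR clauses.

```python
# A minimal sketch of the phrase-matching classifier described above.
# The phrases below are illustrative; the real statements OR'd together
# more than a dozen terms pulled from years of responders' notes.
BACKUP_PHRASES = [
    "running backward",
    "depth increasing",
    "coming up the drain",  # hypothetical example phrase
]

def is_actual_backup(responder_notes: str) -> bool:
    """Return True if the free-text notes contain any known backup
    phrase - the big OR clause, matched case-insensitively."""
    notes = responder_notes.lower()
    return any(phrase in notes for phrase in BACKUP_PHRASES)

print(is_actual_backup("Water depth increasing in basement"))  # True
print(is_actual_backup("Routine inspection, no issues"))       # False
```

Trivial as it looks, this is the whole trick: a fixed vocabulary standing in for a human reader skimming the notes.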

It took dozens of those long statements to sharply raise productivity in that one little office. Over decades I did that in many offices. Others did dozens of offices, whole departments. Collectively, the utility might get by on half the staff at the end of many years of incremental development - you'd look back after 20 years and admit that the place had been, yes, "transformed".

Marketeers want to sell you a product, dust their hands, and proclaim you've been transformed. The real transformation will take years of integrating the AI capabilities into the still fundamentally-human analysis and decision process - and the integration must be deep, and under tight-loop human control for best results: every AI will be too stupid and clueless to do good work without tight human interaction.

The current sales pitch is the 21st-century equivalent of a 1969 IBM salesman slapping the top of the 1-ton mainframe and telling you that this iron genius will soon be "Running your whole office".


This is the best analysis I’ve read yet. Sharing and saving. Thank you


I recommend "If Nietzsche Were a Narwhal" for a nice naturalistic take on whether human intelligence is the wondrous peak of nature's accomplishments.

It’s amazing that so many don’t question trusting, say, an investment advisor, aka salesperson, but are skeptical about trusting an AI.

Trusting a single AI seems as smart as trusting one of the pundits Dan regularly describes.

I guess that’s the issue. People, including governments and corporations, regularly do.

But why would it be suddenly worse if they trust an AI?


Item 12, “They can make serious mistakes because they fail to take the human factor into account,” stands out from the page, and as I JUST finished listening to Dan Carlin’s “The Destroyer of Worlds,” about the history of nuclear weapons, I have a renewed sense of unease.

BTW Dan, really enjoying How Big Things Get Done - I’m recommending it to all senior management in our company


Great piece. I would recommend a book that focuses on exactly this theme of the perceived threat that AI, artificial life, and machines in general pose to the central place humans see themselves holding: The Fourth Discontinuity by Bruce Mazlish, published way back in 1993.

Going further back, to 1863, Samuel Butler, under the pseudonym Cellarius, wrote "Darwin Among the Machines." Mazlish does a great job looking at the history of the challenge to human centrality. In that piece (not well known like Erewhon) Butler urged a halt to the process of machines evolving. Good thing he didn't get his way then! I hope the AI panickers don't get their way today.


By watching it together last night, I introduced my wife to the classic 1956 movie, “Invasion of the Body Snatchers.” (She loved it.) It occurred to me that the movie could have a new layer of interpretation or application beyond any statements about Communist infiltration, or immigrants “taking away our jobs,” or the depersonalization of modern society. It could now also be a statement about the displacement and impersonation of humans by chatbots and their subsequent iterations. “You’re Next!” is a famous quote from the movie.

I found an informative retrospective review of the movie in — of all places — “The Irish Echo,” which was published in 2006 on the movie’s 50th anniversary. It was republished in 2010 on the death, at age 96, of the principal actor, Kevin McCarthy. (Link to article below.)

(Don’t confuse mentions of actor Kevin McCarthy — and his sister, Mary McCarthy — with the 1950s U.S. Senator, Joseph McCarthy, given that all three are mentioned in the article.)


There’s also an extensive Wikipedia entry about the movie:


The movie is currently available for streaming on several platforms. My wife and I watched it on Amazon Prime. The image and sound were surprisingly crisp and clear, and the quality is plausibly better than when it was originally shown in theaters, since digital reproduction has removed the jitter of celluloid film moving through a projector. The acting was also better than what my childhood memory of the movie suggested it would be. (This version of the movie includes a prologue and epilogue, which I like, though purists don’t.)

Sometime soon I plan to watch the 1978 adaptation of the movie, which I’d also previously seen, but this time to see if the chatbot association might still apply.


Automatic Decision-Making and "Rules as Code" should come to pass, and will only be accelerated if tax collectors and other government clerks strike in the near future. What I don't see is Artificial Imagination or Artificial Judgment that can be relied upon yet.
