18 Comments

Agree, Dan, that people decide. The overarching issue is the prisoner's dilemma.
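
For readers who want the structure spelled out, here is a minimal sketch of a prisoner's-dilemma payoff table in Python (the numbers are illustrative, not from the article): whatever the other side does, "racing ahead" pays more, yet mutual racing leaves both sides worse off than mutual restraint.

```python
# Illustrative prisoner's dilemma payoffs (made-up numbers):
# (my choice, their choice) -> (my payoff, their payoff)
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: decent for both
    ("restrain", "race"):     (0, 5),  # I restrain, they race: worst for me
    ("race",     "restrain"): (5, 0),  # I race, they restrain: best for me
    ("race",     "race"):     (1, 1),  # mutual racing: bad for both
}

def best_reply(their_choice):
    # Whichever move the other side makes, "race" earns me more --
    # so both sides race, and both end up at the (1, 1) outcome.
    return max(("restrain", "race"),
               key=lambda mine: payoffs[(mine, their_choice)][0])

print(best_reply("restrain"), best_reply("race"))  # -> race race
```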


Right. Which can be brutal. But once we acknowledge this, the conversation shifts to where it should be.


You're more optimistic than me. So many factors: power, buy-outs, deception, defection, negotiation. And there are few, if any, base rates. Think of nuclear weapons, germ warfare.


It seems like technology is only inevitable when economics and innovation align. So far, supersonic flight, cryptocurrency, and drone delivery have all been created but never widely adopted. Flying cars are also technically possible but far from feasible.


You don’t even have to go as big as the atom bomb to prove this point. Plenty of civilizations got on just fine without the wheel. Or written language, for that matter, still one of the most powerful (and dangerous) technologies ever developed.


One of Kurt Vonnegut's most famous books, "Cat's Cradle," is all about scientists choosing to do something that nobody should. He clarified and distilled the message in one of the non-fiction lectures you can find in his book of essays.

Vonnegut came to his hatred of war honestly, by living through the Dresden firestorm, a precursor to the even bigger Tokyo firestorm that killed 70,000, more than either atomic bomb.

Which provides my segue to book recommendation #2, "War" by Gwynne Dyer, where he makes the oft-ignored point that atomic bombs were never needed: we could already destroy whole cities in one attack, causing even more damage and death than a nuke - atom bombs just reduced the job to one plane and made it far *cheaper*.

But the UN was formed not (just) because of nukes, but because that ability to wipe out whole nations - destroying enough productive capacity that they'd starve back to medieval population levels whether killed outright or not - meant that the next war could already destroy civilization.

Dan's point that the Manhattan Project was expensive comes in here - nukes were cheaper once invented, but for the cost of the Manhattan Project you could have staged hundreds of Tokyo-sized raids, enough to win WW2 and WW3 by destroying Russia, China, anybody.

There's actually no theoretical limit to the yield of a hydrogen bomb, as far as I know. The biggest one built was about 50MT, but we could build 100MT, 200MT hydrogen bombs. Apparently, they aren't "inevitable"! They're not necessary. Neither were any of the others.


I definitely agree with the headline point.

But regarding this: "Both sides, I think, also agree that AI is similar to the atom bomb in that it could be used to wonderful ends (drop a bomb on Hitler) or it could threaten the very existence of civilization (Hitler dropping bombs)."

Am I the only one who thinks it ridiculous hyperbole to believe that AI can "threaten the very existence of civilization" in the way that nuclear bombs could? Beyond the imaginings of creative science fiction authors -- and I love the Terminator movies as much as anyone -- what is the objective evidence for this claim?


As a description of what arguments are being made, that's actually an understatement: some people (see E. Yudkowsky) insist the real doomsday scenario is the end of all human life. Or worse. As to this statement, I believe the argument is that our civilization is now built on a digital foundation, and if you embed AI into that foundation and the AI goes haywire...


To be clear, that argument isn't one I believe, as I know far too little about the technology to hold any belief strongly beyond "this could hurt somehow."


"As description of what arguments are being made, that's actually understatement." Indeed, I agree. Yet I still find these arguments -- if you can call wild speculation an argument -- absurd examples of "catastrophism" dialed up to 11.

We heard the same apocalyptic fears about the Y2K problem, which turned out to be a nothingburger -- even in countries that did very little preventive work on their IT systems. And in this case, as opposed to the case of AI, there was actual evidence to support the concerns.
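
For anyone too young to remember, the feared Y2K failure mode was at least concrete: years stored as two digits roll over from "99" to "00". A toy illustration (not any real system's code):

```python
# Toy Y2K bug (illustrative only): a system that stores years in two
# digits computes ages by subtraction, so the year 2000 ("00") looks
# earlier than 1965 ("65").
def age_in_year(current_two_digit, birth_two_digit):
    return current_two_digit - birth_two_digit

print(age_in_year(99, 65))  # in 1999: 34, correct
print(age_in_year(0, 65))   # in 2000: -65, not 35 -- the Y2K failure
```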

With regard to threats to our civilization's digital foundation, if we want to paint scary scenarios, there are risks of more immediate concern than the hypothetical future risk of AI. What if a super-virus infects all the world's digital systems? What about a Carrington event? What about EMP?

To be clear, none of these risks keep me up at night with worry. But one can at least spell out a specific, semi-plausible scenario for each of them. I haven't seen anything like this to back up the fears about AI.


The concern is that AI will learn how to improve itself, which will result in an accelerating feedback loop of further enhancements to its intelligence. Why would a super-intelligent AI harm humans? Because AI is our child, and will inherit some of the properties of us, its parents. And we aren't always so nice....

https://www.tannytalk.com/p/our-ai-children
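
A toy model of the claimed loop, with a made-up "self-improvement efficiency" constant (this illustrates the shape of the argument, not a prediction):

```python
# Toy recursive self-improvement model (illustrative, not a forecast):
# if each improvement step scales with current capability, growth
# accelerates rather than staying linear.
capability = 1.0
k = 0.1  # assumed "self-improvement efficiency" -- a made-up constant

for step in range(10):
    capability += k * capability ** 2  # better systems improve faster
    print(step, round(capability, 3))
```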


Yes, I have heard this concern. But it has no basis beyond science fiction. Specifically, there is no evidence that we are anywhere close to AGI and/or super-intelligent, self-improving true AI, nor that these are even possible. Despite its impressive capabilities, the GPT-4 model that has people so worked up is at root an interpolation engine that has more in common with ELIZA than it does with actual intelligence.
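
To make the ELIZA comparison concrete, here is a minimal pattern-matcher in the spirit of Weizenbaum's 1966 program (my sketch, not his actual script): keyword templates, zero understanding.

```python
import re

# ELIZA-style rules (my toy examples, not Weizenbaum's originals):
# the first matching pattern wins, and its template is filled in verbatim.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)",   "How long have you been {0}?"),
    (r"(.*)",        "Please go on."),  # catch-all fallback
]

def respond(utterance):
    for pattern, template in RULES:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())

print(respond("I am worried about AI"))
# -> "How long have you been worried about AI?" -- no comprehension involved
```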

My thesis is that the overblown concerns about AI are, in Pauli's words, "not even wrong". That is, they are unscientific, non-falsifiable claims ungrounded in reality. This, ironically, explains their persistence, because beliefs not derived from evidence will not be dislodged by evidence (or its lack).

One final point: even were a super-intelligent AI possible (which I concede here only for the sake of argument!), a flawed premise at the root of many concerns is that hyper-intelligence is sufficient to figure out reality. But Edison was right that genius is 1% inspiration and 99% perspiration. That is, interaction with and experimentation in the physical world are required, difficult, and time-consuming if the goal is to do more than spew words from a chatbot based on an archived corpus. This is why, for example, even the narrow task of AI for self-driving cars has taken so long to perfect. So a malevolent, IQ-1000 AI that wants to build terminator robots to wipe us all out will have to perform years of painstaking R&D to figure out how to do it. When we see what it's up to, we can just unplug it.


Yes, agreed, we are not close to AGI at the current time. Should we wait until we are close, and worry about it then? Should we have waited until we mass-produced nuclear weapons before worrying about them?

Why would self-improving AI not be possible? AI can already write code and analyze code. Why couldn't future systems continually edit and enhance the code they are built from?
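
The mechanical part of that question is already mundane. A toy sketch (in-memory, and obviously nothing like real AI): a program that "analyzes" its own source, patches it, and runs the edited version.

```python
import re

# Toy self-editing loop (illustrative only): the "program" is a string
# that gets analyzed, rewritten, and re-executed each generation.
source = "GAIN = 1\ndef step(x):\n    return x * GAIN\n"

for generation in range(3):
    gain = int(re.search(r"GAIN = (\d+)", source).group(1))      # analyze
    source = re.sub(r"GAIN = \d+", f"GAIN = {gain + 1}", source)  # edit
    namespace = {}
    exec(source, namespace)  # run the edited version
    print(generation, namespace["step"](10))  # prints 20, 30, 40
```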

Computer systems already manage our electric grid. How will you unplug AI when it is running those computer systems?

I agree there is no reason to be hysterical about current chatbots. It's a wild leap from that true statement to the conclusion that there is therefore no reason to be concerned about AI as a technology.

If you wish to lecture us about logic, you might want to learn more about it.


What did the technically superior Europeans do when they came upon the technically inferior peoples of North America?


Nuclear bombs, like most technology, were inevitable. It was not inevitable that they would be used as weapons. In parallel to the Manhattan Project, many of the same folks, such as Freeman Dyson, were developing nuclear explosions for other purposes. Dyson's project, called Orion, was to send extremely heavy spacecraft into space using a series of exploding nuclear bombs, one after another, as the propellant. There were other proposals to use exploding nuclear bombs to dig big canals like the Panama Canal. This work could have been distributed over many decades and many countries.

The high cost of the Manhattan Project was in large part due to its urgency, secrecy, and focus. Also, as nuclear power technologies were developed, the knowledge of how to not let a reactor blow up would have been a natural by-product. Even without WW2 or the Manhattan Project, the knowledge of how to make a nuke bomb would be known by 2023, so that even a high school student could do it. (And the material to do so might be more available today if it were not for the atom bombs that were made.)

My longer explanation of why technologies are inevitable in the broad sense (the telephone, but not the iPhone) is in my book What Technology Wants.


Kevin, thanks so much for taking the time to leave this comment!

My response, in brief, is that while I can see how arguments like this can raise the apparent probability of a technology being created, I don't see how they can get to inevitability. For that matter, I don't see how pretty much anything can get to inevitability when not even tomorrow's sunrise can claim that title.

(For those who don't know: Kevin Kelly cofounded Wired and wrote lots of books that anyone interested in technology and/or humans should read. https://www.amazon.com/stores/Kevin-Kelly/author/B001HCY1LE?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true )


Most excellent! Thank you for this wise article.

I've been writing on this topic for years now, and I'm sorry to report that it's not an easy hill to climb. https://www.tannytalk.com/p/our-relationship-with-knowledge

Essentially what's happening is that we are trying to run the 21st century on a simplistic, outdated, and increasingly dangerous 19th century "more is better" relationship with knowledge. That is, we're failing to adapt to the revolutionary new environment brought on by the success of modern science.

I've come to the conclusion that we'll likely be incapable of learning this through the processes of reason, and that it may indeed be inevitable that we'll keep on pushing forward as fast as we can until we crash into some kind of calamity wall.

As to AI, I've been asking this question everywhere I go, and nobody seems to have an answer. What are the compelling benefits of AI which justify taking on even more risk at a time when we already face so many?

It's great that you reference nuclear weapons in your article, as the AI community seems to have a great deal of difficulty learning anything from that experience. Further info on nukes available here: https://www.tannytalk.com/s/nukes


I agree. I remember seeing somewhere, perhaps even on this blog, how the Japanese were introduced to gunpowder, basically rejected its use, and went back to fighting as samurai for several centuries. I don't know enough about the story, but it does suggest that we as humans have agency to adopt or reject specific technologies.
