16 Comments

Reminds me of an economics lesson this old history teacher used to teach: the birth of the miniskirt. Post-war Britain taxed adult clothing but not children's. Enterprising young women took advantage of this loophole and purchased the cheaper but more revealing clothing, and in the process invented the miniskirt. History is a dynamic "climate" in which we all live.


Following your example with Japan and guns, it seems to me that technology is inevitable. Yes, Japanese soldiers didn't like guns and stuck to swords... for a while. Then Commodore Perry came (with guns) and imposed guns on them. Today, security forces in Japan don't use swords. They use guns.


You might replace "we" in "we will choose" with "someone." As in the case of the samurai and guns, at some point your options are set by someone else.

Either make "we" everyone, or watch out.


While the details of future developments are impossible to predict, the bottom line is easy to see.

The Peter Principle will govern our future. We'll keep inventing ever more powerful technologies until we invent one or more existential-scale technologies that we lose control of.

This is not futuristic speculation, but rather a description of what is already happening. Thousands of massive hydrogen bombs can bring down our civilization in just minutes, and we have no idea what to do about it. And so we've decided to ignore nuclear weapons and keep on inventing ever more powerful technologies as fast as we possibly can.

Don't look at the technologies. Look at the humans. And once you realize that we're insane, the rest is easy.


Dan, I am looking forward to your new book.

On AI, I recently read Erik Larson’s “The Myth of Artificial Intelligence”; it is definitely worth a read and should help with some of the confused feelings.

My observations are that there is a lot of anthropomorphic behaviour around ML. My wife calls our dog intelligent, and I have to correct her: the dog is just responding to training or simple intonation, not actual words. The same goes for these machine systems. As Larson puts it of Turing: “Talking about computers thinking is like talking about submarines swimming. To refer to ‘swimming’ is already to anthropomorphize. Dolphins swim, but submarines don’t. Turing thought the use of the word thinking was like this, too. If a computer played chess, who could say whether it was thinking or just calculating?” (Larson, p. 102).

The same is the case for learning: machine learning is a very simple form of learning (reinforcement). Simple biological systems have far more advanced learning capacity.
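To make that concrete, here is a minimal sketch of the kind of reward-driven updating the word "reinforcement" refers to; the environment and numbers are invented for illustration. It is stimulus-response bookkeeping, much like the dog above, with no understanding involved.

    import random

    # Toy reward-driven learning: a two-action bandit. The agent keeps a
    # running estimate of each action's value and nudges it toward the
    # rewards it actually receives.
    q = [0.0, 0.0]             # value estimates for actions 0 and 1
    alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

    def reward(action):
        # Hypothetical environment: action 1 pays off 80% of the time.
        return 1.0 if (action == 1 and random.random() < 0.8) else 0.0

    for _ in range(1000):
        # Mostly pick the best-looking action, occasionally explore.
        a = random.randrange(2) if random.random() < epsilon else q.index(max(q))
        q[a] += alpha * (reward(a) - q[a])

    print(q)  # q[1] drifts toward ~0.8; q[0] stays near 0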

The computational processing required for these relatively simple abilities is very expensive energy-wise. For example, training GPT-3 reportedly consumed roughly the energy of driving a car to the moon and back (Bender et al., 2021, pp. 610–623). So I wonder what the energy demand might be to train systems for anything nearing simple intelligence?
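As a back-of-envelope check on that comparison (the widely cited version of it was framed in CO2-equivalent terms rather than raw energy), the arithmetic works out as below. All constants are assumed round numbers for illustration, not measured values.

    # Back-of-envelope check of the "to the moon and back" comparison.
    MOON_ROUND_TRIP_KM = 2 * 384_400  # average Earth-Moon distance, there and back
    CAR_KG_CO2_PER_KM = 0.11          # assumed average passenger car
    GPT3_TRAINING_KG_CO2 = 85_000     # widely cited rough estimate for GPT-3

    car_trip_kg = MOON_ROUND_TRIP_KM * CAR_KG_CO2_PER_KM
    print(f"Car to the moon and back: ~{car_trip_kg / 1000:.0f} t CO2e")
    print(f"GPT-3 training estimate:  ~{GPT3_TRAINING_KG_CO2 / 1000:.0f} t CO2e")
    # Both land around 85 tonnes, which is where the comparison comes from.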

Having said that, while these things are not, and have no way of being, truly intelligent, they do pose concerns and are liable to be used in ways we cannot imagine. For example, plagues of dumb automated collectives of machines have the potential to cause mayhem (e.g. “Modelling the Threat from AI: Putting Agency on the Agenda”, Hossaini, 2019). There are also lots of useful applications of current and more advanced ML (AI), even if much of the conjecture is fanciful.


How bad or good would it be if writing went the way of computing? That is, where once a "computer" was a person whose job was various forms of mathematical computation, now it is a machine. We still have people doing mathematics, but they don't do the chore part in general. We have people finding when the machines are wrong (e.g. https://en.m.wikipedia.org/wiki/Pentium_FDIV_bug).
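For the curious, that bug had a famous one-line test (Thomas Nicely's example), sketched here in Python; on a flawed Pentium the residual came out near 256 instead of 0.

    # Classic check for the Pentium FDIV bug.
    x, y = 4195835.0, 3145727.0
    print(x - (x / y) * y)  # 0.0 on correct hardware; ~256 on a flawed Pentium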

The Dan Gardner of 2042 might still produce excellent writing but would be deciding what to have the AIs write about and checking the correctness and style, perhaps using a range of AI tools that he is expert in directing.


All power gets used, by someone, and technology is power.

As your example of Japan shows, once a technology is present in the world the people who use it will often have an advantage over the people who don't.

Technology can be restricted. Gun laws in modern Japan and most other civilized nations are an example of that. But the fact of the gun's existence necessitates a response, and regulating many technologies has proven to be an incredibly difficult undertaking, both politically and practically. There's widespread support for gun control in Canada and a culture of gun safety among owners, but we still have a high murder rate from illegal guns.

Different models of deployment and--yes--historical contingencies do show there's some flexibility in how any society responds to new technology, but major technologies present societies with major problems and opportunities, and responding to those will affect social structure, either by requiring a significant investment in regulation and control, or letting it all run wild, or something in between.

Society does not lead in this relationship: without the fact of the technology, there would be no need to respond. And technological power can only be restricted or regulated, not eliminated, once the technology exists. Even relatively modest restrictions come at significant cost: the FCC in the US, the CRTC and the CBC in Canada, and so on, are not cheap.

So with all due respect, you're wrong: AI is inevitable and our response to it is probably constrained by its capabilities. We have some wiggle room, but we do not have the freedom to ignore it if we don't want the cybernetic equivalent of Commodore Perry's gunboats to eventually show up on our borders.

The good thing about LLMs is that the only jobs endangered by them are things like generating political and corporate bumpf that a non-conscious intelligence can easily emulate because it's writing that's already mindless.


Are ‘we’ able to make choices as to how new technologies are used and integrated into our society, or is the choice always made by those in positions of power and influence? I assume that there was no inevitable reason why our cities would end up being designed to favour transportation by automobile, but it was in the interest of key economic actors.

I met an entrepreneur in Cameroon while on posting who deliberately operated his factories at lower than peak efficiency so as to be able to give jobs to people in his community. The production line was capable of producing everything in an almost fully automated fashion. He chose to apply a value beyond pure efficiency and profit-seeking. When we consider how we will use AI in the future, how do we ensure that all angles are taken into account? How do we avoid the radio and advertising outcome?


Further, with respect to ChatGPT, one needs to be very careful in acquiring/using it: https://gizmodo.com/iphone-app-store-chat-gpt-3-ai-chatbot-fake-799-free-1849969157


Your usual (and optimistic?) examination of technological advances historically. But I would submit that AI is something completely different. Why? Machine learning. What we now know, and have known for several years, is that machines (computers) are "learning" and "teaching" themselves independently of human intervention. We also know that they are doing so in a language (or languages?) that was not created by human programmers and is not understood.

To my mind, this is cause for concern. That feeling of unease is exacerbated by the fact that in their rush to advance AI, none of the developers (to my knowledge) have incorporated something akin to Asimov's First Law of Robotics - in essence, "Thou shalt cause no harm to humans." I know this is true for Google's DeepMind AI efforts, because I asked that very question of the person in charge of that effort about 7 years ago. At that time, DeepMind was at the forefront of AI efforts - I don't know whether that's still the case.

The other thing that I, as a lawyer, think about is: "How can you enact laws that govern/constrain the behaviour of AI? And even if you could, how could they be enforced?"

Pull the plug? Not likely.

Yes, it may be possible to enact laws intended to constrain unwanted behaviours of humans "using" AI, and enforce them against those humans, but that is a different thing from the AI itself. For example, I and my colleagues who teach university courses are already trying to figure out whether, and if so how, we can detect students using ChatGPT to "ghost" written assignments. In theory, if you think you have detected it, you can punish the student for a breach of academic integrity - but how do you prove it? If the student denies having done it, I can't think of any way that you could prove it, even on the balance of probabilities.
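For what it's worth, the detection tools being tried are statistical, not forensic. Here is a minimal sketch of one common idea: scoring how predictable a text is to an open language model, where lower perplexity suggests more machine-like prose. The model choice and any threshold are assumptions, and nothing like this yields proof on the balance of probabilities.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # Score how "predictable" a text is to a small open model.
    # Machine-generated prose tends to score lower perplexity than human
    # prose, but the overlap is large, so this is a hint, not evidence.
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text):
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(enc.input_ids, labels=enc.input_ids)
        return float(torch.exp(out.loss))

    print(perplexity("The essay under suspicion goes here."))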
