8 Comments

Very astute.

Since you're basically saying that there's a whole universe of plausible opinions out there, here's my attempt:

Right now, the GPT AIs are basically only "thinking" in the context of answering the questions we ask of them. So right now, the risk of any sort of AI catastrophe is low. I can't prove this, but I must admit I'm skeptical that the AIs can "take over" without an autonomous, central, goal-seeking intelligence.

When we start asking our AIs how to produce smarter versions of themselves, that's the first warning bell, because once the AIs start designing future AIs, given the statistical nature of our AI strategies, we may never really know what kind of secret capabilities they embed in their progeny.

The second (and possibly last) warning bell is when we give the AIs the robotic tools required to build smarter AIs autonomously. I'm torn on this one. On the one hand, just having robot hands isn't enough: you need power, storage, various exotic minerals, physical space, etc., at least based on reality as we perceive it. On the other hand, once an AI has intelligence, appendages, and the will to power, who knows what it might be capable of?


Thank you for this. It brings to mind the aphorism that economists have accurately predicted 12 out of the last 3 recessions.

Accurately predicting the weather up to a week in advance has become routine. The best military strategists can foresee medium-term plausible scenarios and prepare for them. But no one is able to accurately predict the long-term implications of any disruptive new technology.

When it comes to such predictions, my rules of thumb are (1) discount the hyperbolic pundits as they have almost always been wrong in the past, and (2) maintain humility about one's own (in)ability to predict the future. When my wife asks me what I think will happen about this or that issue, I am fond of saying, "I am not a prognosticator." It's OK to say, "This is interesting. And I have no idea how it's going to turn out."


ICYMI: in a 1968 forecasting exercise, a single (now forgotten) MIT prof got the closest.

The 1968 Book That Tried to Predict the World of 2018

https://www.newyorker.com/culture/culture-desk/the-1968-book-that-tried-to-predict-the-world-of-2018


Reading a Churchill biography led me to read the British Hansard for March 14, 1938. Austria had just been taken over by the Nazis. I learned that Austria was a dictatorship and hadn't had an election in 10 years, but nonetheless British MPs saw right through Hitler's justifications and pretences. Even more interesting, they saw that Czechoslovakia would be next and that the Nazis intended to roll right to the Channel. There was good imagination in the House that day, but of course good people could and did disagree on the best course of action.


Let's add water usage; I couldn't find any reference to that. The point is the infrastructure implications and "limited resource consumption" of AI usage.


Please include energy consumption in your scenario planning and forecast musing. I read an estimate that GPT's monthly usage of 23M kWh is roughly the same amount of energy that 175,000 Danes use in a month on average.
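The comparison is easy to sanity-check. A quick sketch, assuming a Danish residential electricity figure of roughly 1,600 kWh per person per year (that per-capita number is my assumption, not from the comment above):

```python
# Sanity check of the "175,000 Danes" comparison.
# ASSUMPTION (not from the comment): average Danish residential
# electricity use is roughly 1,600 kWh per person per year,
# i.e. about 133 kWh per person per month.

gpt_monthly_kwh = 23_000_000          # 23M kWh, as quoted
kwh_per_dane_per_month = 1_600 / 12   # assumed figure, ~133 kWh

danes_equivalent = gpt_monthly_kwh / kwh_per_dane_per_month
print(round(danes_equivalent))        # 172500 -- close to the quoted 175,000
```

Under that assumed per-capita figure the arithmetic lands near the quoted number, so the claim is at least internally consistent.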
