There's a proposition in metaphysics (epistemology?) that, "If it's conceivable, it's necessary." As far as I'm aware, the logic supporting that is unassailable - though it has been over 50 years since I studied philosophy, so it may be that the proposition has since been disproved.
In any event, the issue remains, in my view, that of machine learning and the possible outcomes that can arise from it. Artificial intelligences are learning from one another, and are doing so in a language unknown (and untranslatable) to humans. They are a fundamental component of the internet of things, which is far more than "intelligent" refrigerators and thermostats.
What is "good" or "bad" is a human construct - they are the values of a society at any particular time. Does it even make sense to consider whether AI is capable of incorporating human values (which are dynamic and not universal)?
Within that context, the next issue that causes me considerable unease is the idea that humans can somehow regulate, presumably through legislation, what AI can and cannot do. The interconnectedness of AI is already global, but there are literally hundreds of legislative jurisdictions, and while the UN can adopt Conventions, it cannot enforce anything. And in any event, even if some sort of AI Convention could be adopted, how could it possibly be enforced?
Hence my apprehension. If the worst comes to pass, will I still be around to experience it? Probably not - but my kids and their kids probably will. I don't like that thought.
As an afterthought, not that I've ever used it, but I wonder what kind of essay ChatGPT would come up with on this subject.
Andreessen and the Pessimists' Archive might overstate the "widespread moral panic" angle for the reasons you state. However, the more relevant point is not how widespread Luddism/techno-fear was for previous emerging technologies, but whether it was correct. Has it ever been?
If AI fear is more widespread now than, say, radio fear was in the past, that doesn't prove anything except perhaps that in the West of today, people seem to be more fearful about *everything*, despite living longer, more prosperous lives than ever before.
I agree we need to judge specifics, not popular moods, in making risk assessments. But excessive fear is not the only mistake human beings can make. We have often erred on the other side, too. One example among many: https://dgardner.substack.com/p/the-techno-libertarian-faith
Lead in gasoline is indeed a good example of improper dismissal of legitimate fears of a particular technology. But note in that case that the fears had an objective basis in observed symptoms of lead poisoning. By contrast, the basis for most AI fears appears to be mere speculation based on science fiction stories or hypothesized net job destruction.
I suppose that in deciding whether there was a moral panic about the arrival of radio by looking at press articles, we also have to consider whether those articles truly reflect societal views. What would our numbers look like if we pulled 100 people off the street at random in 1922 and asked for their views about radio? I suspect that a significant portion would have no views at all. That said, the proportions of positive and negative articles in your made-up example are illustrative of the absence of a moral panic.
Exactly right. There’s much more to the context that has to be considered, which PA never does. For example, many of the negative stories report some claim of harm but there’s a tone of bemusement or even ridicule — the newspaper is reporting a negative claim, yes, but it’s rolling its eyes at the claim. I didn’t want to get into it (the post is already too long!) but simply reading old newspaper stories straight and literally can be badly misleading.
Speaking of motivated cognition…
https://www.bostonreview.net/articles/the-fake-news-about-fake-news/