It sounds like you are annoyed that people assume you are always in favor of going ahead and never think that things can go wrong, and so you pick on libertarians for causing that perception, because you usually agree with them. You then pick a tiny number of cases where you think libertarians are wrong and label them as having “faith”. You do not pin the faith label on those who automatically jump up to regulate everything and get it wrong.
Also, you smear libertarians by pointing out that some are funded partly by corporate interests. As if this isn’t the case for all other causes. You correctly say that you think that, for the most part, these people are sincere. It’s probably true of many people who are non-libertarians and anti-libertarians. But you use this to selectively attack libertarians. If the funding does not give cause to dismiss libertarian views, why bring it up at all? Of course, there are many, many libertarians who have never benefited from corporate money. Many of us have campaigned against government funding that benefits us financially.
It is probably true that few libertarians write much about lead, although I’ve seen plenty of writing about cigarettes. That’s hardly surprising, since just about everyone else writes about lead and cigarettes and the need for regulation. However, you join the regulation train too easily. Do you believe that consumers, once well informed about real dangers (unlike most of the “dangers” we hear about, as you well know), will ignore them and can only be saved by our wise, benevolent, and impartial politicians and bureaucrats? When you dig into the history of regulation, what you will usually find is that regulation follows awareness and consumer pressure for change (as well as economic developments that make the change workable and affordable). Restrictions on child labor are a good example.
“Faith” is much better applied to those who see a problem and immediately turn to the coercive solution, despite all the failures throughout history, and despite the public choice issues that explain why regulation is systematically bad and gets worse over time. (Let’s also distinguish regulation from application of general law, which libertarians obviously support. If a company is emitting something definitely harmful and people are being hurt without their consent, you don’t need regulation to stop it.)
Your criticism is especially inappropriate in the AI risk/AI apocalypse panic. Lead in gasoline is clearly unhealthy and has no upside apart from a (temporary) mild lowering of costs. AI has enormous likely benefits. We are just beginning to see them. Just as AI is actually starting to be useful – increasing productivity, accelerating medical advances, and so on – some people want to stomp on it and kill it. What you call the libertarian response was indeed predictable. And correct. Stopping AI is a terrible idea that will cause people to die when AI could have accelerated cures. Just to name one area. And you are wrong that this is the universal libertarian response (sadly). Yudkowsky is a libertarian and rejects calls for moratoriums in every other area. He makes an exception for this one because he’s gone down an intellectual rabbit hole and become hysterical.
Max, with respect, I think you have misread what I wrote.
First, regarding funding, I brought that up in a post-script -- that is, a section I clearly indicated was separate from the main piece and has a very different point. I also made it clear I was referring only to some individuals, not libertarians in general. And you say "why bring it up at all?" I answered that. As I wrote: "But that money isn’t inconsequential. Once someone has been given a job or funding, he or she has to keep arguing the party line or be cut off. So even those who want to be more thoughtful and integrative have a powerful incentive to avoid that. And thanks to the wildly creative human capacity for rationalization, sincere belief tends to march in lock-step behind self-interest."
On your point about my presumed motivation: No. Not true. My motivation was that I read a great many people making an argument which I consider clearly wrong and potentially dangerous. Agree or disagree with my view, that's all there is to my motivation.
You also express offence that I didn't criticize at length people who inflate fears and too readily seek regulation and restriction. Why would I? I noted their existence. I noted I disagree with them, too. But why would I elaborate when the piece isn't about them?
Lastly, you say I am "wrong that that is the universal libertarian response." When did I make such a sweeping claim? All I wrote that comes even close to that is saying that Armstrong "speaks for a lot of people." Because he does.
Dan, thanks for your response. You did write "the techno-libertarians" and not "some techno-libertarians", but thanks for stating that you don't mean all of the group. My point on "why bring it up" is that the very same point about possible motivation applies to all groups and points of view.
If I mistook your motivation, I apologize. Inferring motives is always risky and I should not have done so. I was perturbed because I really like and respect your writing. Coming from someone else, it wouldn't have meant anything.
Oh, wait. It's April 1. Really, you love libertarians. :-)
I do, however, agree with your general premise that Techno-Libertarianism isn't necessarily good. New tech usually brings a set of unintended consequences, some of which can be pretty bad. Techno-Optimism (the idea that new tech can be developed quickly, easily, and at low cost) is another problem, especially in the energy space.
I am not sure your story about the history of tetraethyl lead is entirely correct. Your rendition is the typical one from left-leaning pundits (big, bad, unethical corporations), and I think there is some truth to it, but it's more complex than that. The issue was the desperate need for a cheap, high-volume anti-knock agent for better and better ICE engines, worldwide. TEL was developed, fit the bill, and was produced safely for 50 years (once its acute toxicity was recognized). Why do I know this? My father worked on TEL plants for DuPont in the 1960s. Its build-up in the environment DID become a thing; its actual severity I am not sure about. I am perfectly fine with its phase-out; better to err on the side of caution. That said, don't underestimate the controversies about its replacements (stuff like MMT); gasoline STILL uses anti-knock agents.
While it's true that science didn't fully appreciate the dangers of lead in the early 1920s, it was well known to be a dangerous neurotoxin (which the incident I mentioned made painfully clear to everyone). The producers simply assumed that it was safe when diluted. I don't doubt their sincerity (the famous hand-washing being a case in point!). But in a situation where economic self-interest leans in a particular direction, assumptions that lean in the same direction are inherently dubious. In any event, I think it's perfectly reasonable -- and neither a "right" nor a "left" position -- to suggest that the absolute minimum that should have been done was more research by disinterested parties before whole populations were exposed.
I would agree with the statement about disinterested/third-party research before populations are exposed. I also agree that it was just assumed that, at the low levels of TEL used and with dilution effects, it wouldn't be an issue. And you know, it probably wasn't for a time. But as the use of petroleum fuels grew (enormously), and then over time, it started to add up.
There were other anti-knock agents discovered shortly after tetraethyl lead but they weren't used because they were slightly more expensive. Gardner's use of this is a good example of government being needed to fill in a safety void created by private enterprise.
I think regulation of the free market sometimes really matters, and this is a good example. There are many others. About there being better options for anti-knock compounds in the same time frame as TEL, I don't really know. But I do know that the oil business is (and was) pretty international, and other countries (the UK, Germany, France, others) would presumably have used something better if they could have found it.
Found the source of my claim that there were alternatives:
> Midgley’s innovation was quickly patented, and General Motors and Standard Oil jointly established the Ethyl Corporation to produce and market it. They claimed that ethanol – often used in automobile fuels before the war – was an unsatisfactory alternative. However, the fact that ethanol could not be patented, nor its production controlled by a single company, also made it commercially unattractive.
https://www.chemistryworld.com/features/thomas-midgley-and-the-toxic-legacy-of-leaded-fuel/4014684.article
Jon, thanks for this. For my own selfish purposes, it's incredibly helpful when people elaborate on a subject, post links, etc. Dan
One correction - the reference I provided was to the EIA, not the EPA, but I believe it is still correct. The EPA website implies TEL is still legal and used in some applications, like piston-engine aircraft - and there are moves in the EPA to ban it outright. If you do a random search on the internet, you will see a litany of articles from left-of-center sources (Grist, NPR, Salon, the Guardian, Vox, etc.) condemning leaded gasoline, yadda, yadda (it IS toxic, but enough already), but a really objective take on what actually happened is hard to find.
Sure. Ethanol can be used, and a lot of North American gasoline contains it (for 'enviro' reasons, although it is really a subsidy for corn farmers - any LCA analysis shows there is really no net emissions benefit). I believe the main anti-knock agent used in Canada is MMT (even when there is also ethanol in the fuel), but I am not sure. I am familiar with hydrocarbon fuels, but it's a very complex subject. If you look at the EPA website, they say that leaded fuels are still approved for some applications. I honestly think that TEL has some unique positive properties that are hard to replicate, especially if you want a high octane rating. https://www.eia.gov/energyexplained/gasoline/gasoline-and-the-environment-leaded-gasoline.php
What baffles me is that anybody thought we needed more content (text, image, video) when there's no time for all the content I'm bombarded with, already. The AI I need is *filtration*.
An AI-equipped journalist should be unaware that her column criticizing guns resulted in thousands of curses and threats, save for a tabular report on how many were misogynist, racist, or threatening, and how many were forwarded to police as actionable. Yet emails from total strangers with a story tip about gun control and murders would get through, because the filter wasn't just using keywords; it was smart enough to distinguish insults and threats from a tip.
Heck, I could use a columnist filter: The WaPo main page would be half as long, replaced with filter notes like "David Brooks using trickle-down argument debunked by Krugman in 1998" or "George Will bringing back Iraq arguments from 2003 on Syria".
I need AI to get rid of content, not to generate it.
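The distinction the comment above draws -- a keyword filter versus one smart enough to tell a threat from a tip -- can be made concrete with a toy sketch. Everything here is hypothetical (the blocklist, the function name, the sample messages); it just shows how a naive keyword filter fails in both directions, suppressing the useful tip while letting the threat through:

```python
# Toy illustration: why a plain keyword blocklist is not the "smart"
# filter described above. (All names and messages are made up.)

BLOCKLIST = {"gun", "kill", "shoot"}

def keyword_filter(message: str) -> bool:
    """Return True if the message should be suppressed (naive version)."""
    words = set(message.lower().split())
    return bool(words & BLOCKLIST)

tip = "story tip: local gun buyback program data shows a drop in murders"
abusive = "you should be shot for writing this"

print(keyword_filter(tip))      # True  - the legitimate tip is wrongly suppressed
print(keyword_filter(abusive))  # False - "shot" isn't an exact blocklist word
```

A filter worth having would need to classify intent rather than match tokens, which is exactly the kind of judgment the commenter is asking AI to supply.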
Your comment made me chuckle because I thought the same thing: if we do AI, then I need AI to get rid of information overload.
The humorous part of this is that in a way the FAANG algorithms _already_ filter in the form of feeding us more of whatever we pushed the "Like" button on. But the results are so-called Echo Chambers, and I've already experienced disturbing challenges with such algorithmic filtering (hence I'm on Substack now.)
Reminds me of Polanyi's description of the classic economic liberals of the mid 1800s. Dismantling social institutions and imposing markets was progress. The reverse was "intervention" and bad.
The disincentives for accurately describing complex issues in the media and think tanks is jarring to learn about.
The best analogy I've heard about AI's "worst case scenario" is that it's a "demonic portal" [0]. We summon the portal to fulfill our desires, but it spills out demons and takes over.
[0] https://www.nytimes.com/2023/03/21/opinion/ezra-klein-podcast-kelsey-piper.html
That sounds like Season 1 of the TV show Picard.
Reminds me of my early youth. I grew up 100 yards from a chemical factory. In the '50s, my father told me stories about the products. One example: trichloroethylene, a fantastic degreaser used in watch factories (I used it to prepare my animal skull collection). It was also used as an animal anesthetic - but the women working in the watch factories became addicted to inhaling it. Many more examples like that.
Really, really good, thoughtful article. I have been largely ignoring this, but I agree with your view. Stuart Russell said on CNN's Smerconish this morning that AI may be creating its own goals that are not what it was asked to do, but no one knows. AI operations are a complete unknown. So I guess we had better pull back and examine this before we unleash an unknown technology on the world. Let's take a deep breath and take stock. Thank you. Doug
Perhaps atomic research would have been a better analogy than lead. By "atomic", I mean how folks back in the 1940s had a similar debate over nuclear energy vs. nuclear weaponization. They were talking about the dawn of an era of electricity everywhere, and clean, new atomic power plants were all the rage in the 1950s. What could go wrong? Well, some nuclear missiles parked near Florida in the 1960s, that's what.
That screenshot sounds like naive idealists, not optimists. Policy seemed to have optimistically secured the world from Cold War annihilation only a few decades ago, but those policies are unraveling. So even if governments come up with an AI policy, hopefully enough folks are realistic and realize AI will be used for both peace and war.
Also, I feel compelled to add a life experience from working in Big Tech for decades. Technologists are smart, committed people who are often poor judges of ethics. And I don't mean abstract social ethics, but the applied ethics of technology itself. Sure, some technologists are required to take an ethics class in college, like I was, but those classes were about professional details like digital copyrights and how to avoid prison time. Techies are really poor judges of ethics for society, like whether to go for AI.
The rise of tech has shown me 2 types of professionals:
1) the ethical agnostic
2) the ethical narcissist
Type 1 takes a view similar to the saying "knives don't kill; people kill" -- that technology is like a knife that can be used to make a wholesome dinner or to kill someone, and either way the maker of the technology -- themselves -- is innocent. This techie sounds great until you realize they actively create all sorts of things that society takes issue with, like "Customer Experience & Enhancement" (CEE) tech that also spies on people's every move and enables the Police State.
Type 2 takes their own ethics as what's best for all, like Yoel Roth at Twitter, so we hope for the best under their tyranny. At least we know where they stand. (I'm Type 2, LOL, but at least I have no power.)
Recent leaks and testimony from the social media giants show these technocrats have trouble self-policing and figuring out what's good or bad. Yet we are to trust these same people -- my fellows -- with AI?! My life experience tells me someone else needs to do the policing.