23 Comments

It sounds like you are annoyed that people assume you are always in favor of going ahead and never consider that things can go wrong, and so you pick on libertarians for causing that perception, because you usually agree with them. You then pick a tiny number of cases where you think libertarians are wrong and label them as having “faith”. You do not pin the faith label on those who automatically jump up to regulate everything and get it wrong.

Also, you smear libertarians by pointing out that some are funded partly by corporate interests, as if this isn’t the case for every other cause. You correctly say that you think that, for the most part, these people are sincere. That is probably true of many non-libertarians and anti-libertarians too. But you use this to selectively attack libertarians. If the funding does not give cause to dismiss libertarian views, why bring it up at all? Of course, there are many, many libertarians who have never benefited from corporate money. Many of us have campaigned against government funding that benefits us financially.

It is probably true that few libertarians write much about lead, although I’ve seen plenty of writing about cigarettes. That’s hardly surprising, since just about everyone else writes about lead and cigarettes and the need for regulation. However, you join the regulation train too easily. Do you believe that consumers, once well informed about real dangers (unlike most of the “dangers” we hear about, as you well know), will ignore them and can only be saved by our wise, benevolent, and impartial politicians and bureaucrats? When you dig into the history of regulation, what you usually find is that regulation follows awareness and consumer pressure for change (as well as economic developments that make the change workable and affordable). Restrictions on child labor are a good example.

“Faith” is much better applied to those who see a problem and immediately turn to the coercive solution, despite all the failures throughout history, and despite the public choice issues that explain why regulation is systematically bad and gets worse over time. (Let’s also distinguish regulation from application of general law, which libertarians obviously support. If a company is emitting something definitely harmful and people are being hurt without their consent, you don’t need regulation to stop it.)

Your criticism is especially inappropriate in the case of the AI risk/AI apocalypse panic. Lead in gasoline is clearly unhealthy and has no upside apart from a temporary, mild lowering of costs. AI has enormous likely benefits, and we are just beginning to see them. Just as AI is actually starting to be useful – increasing productivity, accelerating medical advances, and so on – some people want to stomp on it and kill it. What you call the libertarian response was indeed predictable. And correct. Stopping AI is a terrible idea that will cause people to die when AI could have accelerated cures, to name just one area. And you are wrong that this is the universal libertarian response (sadly). Yudkowsky is a libertarian and rejects calls for moratoriums in every other area. He makes an exception for this one because he has gone down an intellectual rabbit hole and become hysterical.


I do, however, agree with your general premise that Techno-Libertarianism isn't necessarily good. New tech usually brings a set of unintended consequences, some of which can be pretty bad. Techno-Optimism (the idea that new tech can be developed quickly, easily, and at low cost) is another problem, especially in the energy space.


I am not sure your story about the history of tetraethyl lead is entirely correct. Your rendition is the typical one from left-leaning pundits (big bad unethical corporations), and I think there is some truth to it, but it's more complex than that. The issue was the desperate need, worldwide, for a cheap, high-volume anti-knock agent for better and better internal combustion engines. TEL was developed, fit the bill, and was produced safely for 50 years (once its acute toxicity was recognized). Why do I know this? My father worked on TEL plants for DuPont in the 1960s. Its buildup in the environment DID become a thing, though I am not sure about its actual severity. I am perfectly fine with its phase-out; better to err on the side of caution. That said, don't underestimate the controversies about its replacements (stuff like MMT); gasoline STILL uses anti-knock agents.


What baffles me is that anybody thought we needed more content (text, image, video) when there's already no time for all the content I'm bombarded with. The AI I need is *filtration*.

An AI-equipped journalist should be unaware that her column criticizing guns resulted in thousands of curses and threats, save for a tabular report on how many were misogynist, racist, or threatening, and how many were forwarded to police as actionable. Yet an email from a total stranger with a story tip about gun control and murders would get through, because the filter would not just use keywords; it would be smart enough to distinguish insults and threats from a tip.

Heck, I could use a columnist filter: the WaPo main page would be half as long, replaced with filter notes like "David Brooks using trickle-down argument debunked by Krugman in 1998" or "George Will bringing back Iraq arguments from 2003 on Syria".

I need AI to get rid of content, not to generate it.
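
Concretely, the triage I'm imagining looks something like the toy Python sketch below. Everything in it is hypothetical: the categories, the keyword lists, and the `classify` function stand in for a model smart enough to read context rather than match words.

```python
# Toy triage filter: classify incoming messages, suppress abuse but
# tally it for a report, forward threats, and deliver genuine tips.
# Keyword matching here is only a placeholder for a real classifier.

from collections import Counter
from dataclasses import dataclass

THREAT_WORDS = {"kill", "hurt", "find you"}          # hypothetical
INSULT_WORDS = {"idiot", "stupid", "hack"}           # hypothetical
TIP_WORDS = {"story", "tip", "documents", "source"}  # hypothetical

@dataclass
class Message:
    sender: str
    body: str

def classify(msg: Message) -> str:
    """Label a message; a real filter would use context, not keywords."""
    text = msg.body.lower()
    if any(w in text for w in THREAT_WORDS):
        return "threat"   # forwarded to police, never shown
    if any(w in text for w in INSULT_WORDS):
        return "insult"   # counted in the report, never shown
    if any(w in text for w in TIP_WORDS):
        return "tip"      # delivered to the journalist
    return "other"

def triage(inbox: list[Message]) -> tuple[list[Message], Counter]:
    """Return (messages worth reading, tabular report of everything else)."""
    delivered, report = [], Counter()
    for msg in inbox:
        label = classify(msg)
        report[label] += 1
        if label in ("tip", "other"):
            delivered.append(msg)
    return delivered, report

if __name__ == "__main__":
    inbox = [
        Message("stranger@example.com", "I have a story tip about gun sales."),
        Message("troll@example.com", "You idiot, stop writing."),
    ]
    delivered, report = triage(inbox)
    print(f"delivered: {len(delivered)}, report: {dict(report)}")
```

The point is the report: the journalist sees the counts, never the abuse itself.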


Reminds me of Polanyi's description of the classic economic liberals of the mid 1800s. Dismantling social institutions and imposing markets was progress. The reverse was "intervention" and bad.


The disincentives for accurately describing complex issues in the media and in think tanks are jarring to learn about.


The best analogy I've heard about AI's "worst case scenario" is that it's a "demonic portal" [0]. We summon the portal to fulfill our desires, but it spills out demons and takes over.

[0] https://www.nytimes.com/2023/03/21/opinion/ezra-klein-podcast-kelsey-piper.html


Reminds me of my early youth. I grew up 100 yards from a chemical factory. In the 1950s my father told me stories about its products. One example: trichloroethylene, a fantastic degreaser used in watch factories (I used it to prepare my animal skull collection) and also as an animal anesthetic. But the women working in the watch factories became addicted to inhaling it. There are many more examples like this.


Really, really good, thoughtful article. I have been largely ignoring this issue, but I agree with your view on it. Stuart Russell said on CNN's Smerconish this morning that AI may be creating its own goals that are not what it was asked to do, but no one knows. AI operations are a complete unknown. So, I guess we had better pull back and examine this before we unleash an unknown technology on the world. Let's take a big breath and take stock. Thank you. Doug


Perhaps atomic research would have been a better analogy than lead. By "atomic", I mean how folks back in the 1940s had a similar debate over nuclear energy vs nuclear weaponization. They were talking about the dawn of the era of electricity everywhere, and clean, new atomic power plants were the rage in the 1950s. What could go wrong? Well, some nuclear missiles parked near Florida in the 1960s, that's what.

That screenshot sounds like naive idealists, not optimists. Policy seemed to have optimistically secured the world from Cold War annihilation only a few decades ago, but those policies are unraveling. So even if governments come up with an AI policy, hopefully enough folks are realistic and realize AI will be used for both peace and war.

Also, I feel compelled to add a life experience from working in Big Tech for decades. Technologists are smart, committed people who are often poor judges of ethics. And I don't mean abstract, social ethics but the applied ethics of technology itself. Sure, some technologists are required to take an ethics class in college, like I was, but those classes were about professional details like digital copyrights and how to avoid prison time. Techies are really poor judges of ethics for society, such as whether to pursue AI at all.

The rise of tech has shown me two types of professionals:

1) the ethical agnostic

2) the ethical narcissist

Type 1 takes a view similar to the saying "knives don't kill; people kill" -- that technology is like a knife that can be used to make a wholesome dinner or to kill someone, and either way the maker of the technology -- themselves -- is innocent. This techie sounds great until you realize they actively create all sorts of things that society takes issue with, like "Customer Experience & Enhancement" (CEE) tech that also spies on people's every move and enables the police state.

Type 2 takes their own ethics as what's best for all, like Yoel Roth at Twitter, so we hope for the best under their tyranny. At least we know where they stand. (I'm Type 2, LOL, but at least I have no power.)

Recent leaks and testimony from the social media giants show these technocrats have trouble policing themselves and figuring out what's good or bad. Yet we are to trust these same people -- my fellows -- with AI?! My life experience tells me someone else needs to do the policing.
