The fact that contemplation of radio produces such an uncanny prediction of the current situation suggests that there really is something about instantaneous broadcast media that represents the biggest break with literary culture.
Actually, that last bit sounds like LLMs: no contact with the world, only with what people have said about it.
This reminds me of what I understand to be the ancient Greek approach to science. I’m no expert, but it’s my understanding that Greek philosophers felt that hands-on experimentation was vulgar and beneath them (with some exceptions, like Eratosthenes’s measurement of Earth’s circumference). So they did their science through thinking and dialogue. To be sure, some of that thinking was accurate, like atomism. But imagine where we might be today if the Greeks had deigned to conduct experiments.
For a slightly more modern take on communication leading to physical isolation, see Clifford D. Simak’s novel City.
Fantastic tale, thank you for bringing it forward. I think I’ve got some students who might benefit from it.
A story would have broken out had he included a rebel who'd gotten his hands on a machine to carry him to the earth to see for himself. Reads more like a satire of high school: everyone wants to learn the answer to pass the test, not the truth, which they're not really interested in anyway, just the acceptable. Coles Notes does Shakespeare; all you'll ever need to know.
I'm not sure I agree. Mostly because the complaint seems vague. Is it silos? A failure to engage with the "real world"? How does engagement here preclude engagement elsewhere? And the same discourse that enables thinking inevitably limits it as well. The objective sets the value. I come for refuge, for Auden's "ironic points of light, wherever the just exchange their messages". I consider irony the last defense of the defenseless. But irony can be self-defeating if it becomes an excuse for disengagement. And isolating. Communities of people who share our views, our values are necessary for sustaining the hope and resolve needed for action, no matter that irony predisposes us to the cynical view that change is impossible.
Good.
Let’s hear the voice of Jacques Ellul.
Ellul’s diagnosis of modernity—centered on the autonomy of “technique,” the idolatries that follow from it, and the necessity of prophetic, communal response—provides a penetrating lens for reading artificial intelligence.
Though Ellul did not live to see contemporary machine learning and large‑scale data infrastructures, his insights map uncannily well onto today’s AI landscape. In particular, AI tends to foster what can be called a neo‑Manichean ethic: a flattening of moral complexity into algorithmically convenient binaries, enforced and naturalized by systems that both seduce and deform moral imagination.
At the heart of Ellul’s critique is the concept of technique: an autonomous logic that prizes efficiency, predictability, and optimization above other goods. Once technique governs, the ends that cannot be made measurable are sidelined or reinterpreted to fit quantification.
Contemporary AI is a paradigmatic expression of this dynamic. Machine learning systems optimize for accuracy, throughput, or engagement; automatic decision‑making privileges calculable objectives; predictive governance treats social life as patterns to be anticipated and controlled. In such regimes, what counts as right is often reduced to what an algorithm judges successful.
Moral nuance—vocation, dignity, the irreducible singularity of persons—becomes costly, awkward, or invisible.
This instrumental logic encourages a neo‑Manichean flattening. Where ancient Manicheism sharply divided the world into opposing cosmic principles, modern AI moralities often reduce ethics to a binary calculus: optimized versus unoptimized, efficient versus inefficient, true data signal versus noise.
Complex human goods that resist binary judgment—mercy, discretion, vocation—are marginalized because they do not fit neatly into loss functions or performance metrics. The result is a public morality that privileges technical success as moral adequacy and treats what is not optimized as inferior or expendable.
Ellul’s warnings about idolatry and the displacement of responsibility are especially relevant. Systems can become secular idols—appearing neutral and inevitable, yet commanding allegiance and shaping social structure. When decisions are delegated to opaque models, individuals and institutions find convenient alibis: “the algorithm decided,” “the model recommends,” “the data shows.” This diffusion of responsibility mirrors the bureaucratic rationality Ellul described, where moral agency is attenuated and accountability obscured. Ellul insisted that technological systems must not absolve human actors of ethical judgment; applied to AI, that means insisting on clear lines of responsibility, human oversight, and institutional checks where life‑altering decisions are at stake.
A further danger Ellul anticipated is the reduction of persons to data. Modern institutions, he argued, tend to treat human beings instrumentally—means for system ends. AI operationalizes this reduction: profiling, scoring, and predictive categorization translate persons into variables to be optimized, segmented, or excluded. Such processes hollow out relational depth and the covenantal responsibilities that Ellul saw as central to Christian social thought. Where covenantal ethics demand attention to neighbor and fidelity to concrete obligations, algorithmic regimes encourage abstract correlations and impersonal interventions.
The political consequences of this techno‑ethical orientation are substantive. AI‑driven recommendation systems and targeted messaging amplify polarizing content because extremes often maximize engagement; the resulting feedback loops harden public discourse into simpler, antagonistic frames. Ellul’s analyses of propaganda and mass persuasion show how technical systems can accelerate social fragmentation and produce a politics of binary moralism. Moreover, his view of technique as self‑perpetuating can foster fatalism: either one capitulates to technological inevitability or withdraws into a moral purity that is incapable of reforming institutions. The neo‑Manichean framing thus narrows the moral imagination to choices between capitulation and withdrawal.
Ellul’s remedy is neither technophobia nor uncritical embrace. His corrective centers on presence, discernment, and prophetic resistance. “Presence” for Ellul means embodied, faithful communities that refuse to let institutions define their identity or moral horizons. Translating this into an AI context implies cultivating institutions and practices that preserve relational accountability: keeping humans in the loop for high‑stakes decisions; creating communal forums for deliberation about deployments that affect livelihoods and liberties; and resisting the offloading of final moral judgment to machines. Discernment, in Ellul’s dialectical mode, resists neat syntheses and holds tensions—innovation and dignity, efficiency and justice—together, refusing to reduce one to the other.
From these principles follow concrete prescriptions consistent with Ellulian thought. Societies should reject narratives that present AI as a salvific cure for moral and political problems without ethical and democratic frameworks. Human judgment must remain decisive in contexts affecting life, liberty, and fundamental rights, backed by accountability mechanisms that prevent diffusion of responsibility. Participatory impact assessments should be required for high‑risk systems, with meaningful voice given to those most affected. Concentrations of data and modeling infrastructure should be regulated lest their owners become the techno‑priests of a secular cult. Finally, moral formation—whether in corporations, universities, or churches—must be cultivated so that technical virtuosity is tempered by habits of humility, care, and public responsibility.
Ellul’s method and tone do raise practical challenges. He excels as diagnostician and prophet, less as a policy engineer; translating his critique into workable regulatory architectures demands additional political and technical work. His penetrating account of technique can also tilt toward pessimism; applying his insights constructively requires pairing prophetic critique with pragmatic institutional designs capable of constraining and redirecting technical power rather than succumbing to paralysis.
Seen through Jacques Ellul’s lens, AI encourages a neo‑Manichean ethic that simplifies moral life into binaries optimized by systems indifferent to the qualitative goods of personhood and community. Ellul does not call for blanket rejection of technology but for an ethic that judges technological capability by whether it sustains the relational, covenantal, and moral life essential to human flourishing. The question he prompts us to ask of AI is not only what it can do, but whether it preserves the kind of human presence, responsibility, and dignity that resist becoming mere inputs in an ever‑expanding apparatus of technique.
Been giving out copies of “The Machine Stops” for some years now, particularly during and after the pandemic, which brought physical constraint and technological dependence into an unholy alliance.
I’m not a luddite, hating the sounds of progress. But too often its music sounds like a death march. Or the last plunge of a rocket.
I can do little but continue my floundering in this whirlpool (hot tub?) which in my youth was so enticing. Now I try to convince myself that some ‘Great Chain of Being’ will magically throw me a life belt. Alas, artificial intelligence gets in the way & pushes me off the safety ladder, while removing yet more rungs.
I really wish I could revisit my 20-year-old undergrad self — but set in 2025, not pre-Internet — reading this excerpt late in the evening after a big bong hit. Whoa! Mind. Blown.
Really enjoyed this. I am rereading Elizabeth Eisenstein's famous work "The Printing Press as an Agent of Change" for similar insights. McLuhan's "Gutenberg Galaxy" is similarly prescient.
I agree with you almost completely, and your Forster framing really helped sharpen how I think about this. The only place I’d add a wrinkle is that these whirlpools don’t seem neutral in practice: some get protected and amplified, others get punished or shut down. That asymmetry feels like where real damage starts. Curious how you think about that part: is it just an emergent property, or is something actively selecting which whirlpools survive?
Thanks for unearthing that short story, "The Machine Stops" by E. M. Forster, for us. I had never thought of him as one of the progenitors of science fiction.
I found that story gobsmacking, in part because of just that. I'm sure some literary scholar has explored the influences on his thinking. I'd love to know more of the specific context and what drove him.
Along those lines, you might like the formerly prescient Men, Machines, and Modern Times by Elting E. Morison. It's out of print, but I reread my copy every few years. Might it apply to generative AI, but in a seemingly paradoxical way? Its basic thesis is that society is always slow to make beneficial use of new technologies. I'm seeing an apparently opposite effect beginning with the Internet: economically and socially harmful rushes to build out new things faster than society can mitigate the harms. Might our switch from technological hindrance to overexuberance point toward the cause of the Fermi Paradox? https://www.seti.org/research/seti-101/fermi-paradox/
Hah! Love Morison! He's going to feature prominently in my next book.
OK, then, let us know when we can preorder your next book. Anyone who can simultaneously consider both Elting E. Morison and E. M. Forster is a genuine predictioneer in my mind (the term being a reference to Bruce Bueno de Mesquita).