Posts Tagged ‘morality’
“In the long run, we are all dead”*…
I’ve spent several decades thinking (and helping others think) about the future: e.g., doing scenario planning via GBN and Heminge & Condell, working with The Institute for the Future, and thinking with the folks at the Long Now Foundation; I deeply believe in the importance of long-term thinking. It’s a critical orientation– both a perspective and a set of tools/techniques– that can help us offset our natural tendency to act in and for the short run, and help us be better, more responsible ancestors.
But two recent articles warn that “the long term” can be turned into a justification for all sorts of grief. The first, from Phil Torres (@xriskology), argues that “so-called rationalists” have created a disturbing secular religion– longtermism– that looks like it addresses humanity’s deepest problems, but actually justifies pursuing the social preferences of elites…
Longtermism should not be confused with “long-term thinking.” It goes way beyond the observation that our society is dangerously myopic, and that we should care about future generations no less than present ones. At the heart of this worldview, as delineated by [Oxford philosopher Nick] Bostrom, is the idea that what matters most is for “Earth-originating intelligent life” to fulfill its potential in the cosmos. What exactly is “our potential”? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior “posthuman” species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as “rich and happy lives” inside high-resolution computer simulations.
This is what “our potential” consists of, and it constitutes the ultimate aim toward which humanity as a whole, and each of us as individuals, are morally obligated to strive. An existential risk, then, is any event that would destroy this “vast and glorious” potential, as Toby Ord, a philosopher at the Future of Humanity Institute, writes in his 2020 book The Precipice, which draws heavily from earlier work in outlining the longtermist paradigm. (Note that Noam Chomsky just published a book also titled The Precipice.)
The point is that when one takes the cosmic view, it becomes clear that our civilization could persist for an incredibly long time and there could come to be an unfathomably large number of people in the future. Longtermists thus reason that the far future could contain way more value than exists today, or has existed so far in human history, which stretches back some 300,000 years. So, imagine a situation in which you could either lift 1 billion present people out of extreme poverty or benefit 0.00000000001 percent of the 10²³ biological humans who Bostrom calculates could exist if we were to colonize our cosmic neighborhood, the Virgo Supercluster. Which option should you pick? For longtermists, the answer is obvious: you should pick the latter. Why? Well, just crunch the numbers: 0.00000000001 percent of 10²³ people is 10 billion people, which is ten times greater than 1 billion people. This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today.
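To make that number-crunching concrete, here is a minimal, purely illustrative sketch (in Python) of the comparison described above; the figures come from the quoted passage, and the variable names are my own invention:

```python
# Illustrative only: the expected-value comparison described in the
# quoted passage, using the figures given there.

present_people_helped = 1_000_000_000      # 1 billion people lifted out of extreme poverty
future_population = 10**23                 # Bostrom's estimate for a colonized Virgo Supercluster
fraction_benefited = 0.00000000001 / 100   # "0.00000000001 percent", expressed as a fraction

future_people_helped = future_population * fraction_benefited  # = 10**10

print(f"Future people helped:  {future_people_helped:.0e}")   # 1e+10 (10 billion)
print(f"Present people helped: {present_people_helped:.0e}")  # 1e+09 (1 billion)
print(f"Ratio: {future_people_helped / present_people_helped:.0f}x")  # 10x
```

On this style of reasoning, the sheer size of the hypothetical future population does all the work: any nonzero fraction of 10²³ swamps any benefit to people alive today.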
[For more on posthumanism, see here and here]
“The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’”
The second, from Paul Graham Raven (@PaulGrahamRaven), builds on Torres’ case…
Phil Torres… does a pretty good job of setting out the issues with what might be the ultimate in moral philosophies, namely a moral philosophy whose adherents have convinced themselves that it is not at all a moral philosophy, but rather the end-game of the enlightenment-modernist quest for a fully rational and quantifiable way of legitimating the actions that you and your incredibly wealthy donors were already doing, and would like to continue doing indefinitely, regardless of the consequences to other lesser persons in the present and immediate future, thankyouverymuch.
I have one bone of contention, though the fault is not that of Torres but rather the Longtermists themselves: the labelling of their teleology as “posthuman”. This is exactly wrong, as their position is in fact the absolute core of transhumanism; my guess would be that the successful toxification of that latter term (within academia, as well as without) has led them to instead identify with the somewhat more accepted and established label of posthumanism, so as to avoid critique and/or use a totally different epistemology as a way of drawing fire…
[For more on transhumanism, see here and here]
“Longtermism is merely a more acceptable mask for transhumanism”
Both pieces are worth reading in full…
And for more on a posthuman (if not in every case posthumanist) future: “The best books about the post-human Earth.”
* John Maynard Keynes
###
As we take the long view, we might send far-sighted birthday greetings to John Flamsteed; he was born on this date in 1646. An astronomer, he compiled a 3,000-star catalogue, Catalogus Britannicus, and a star atlas called Atlas Coelestis, and made the first recorded observations of Uranus (though he mistakenly catalogued it as a star). Flamsteed led the group of scientists who convinced King Charles II to build the Greenwich Observatory, and personally laid its foundation stone. And he served as the first Astronomer Royal.
“It is forbidden to kill; therefore all murderers are punished unless they kill in large numbers and to the sound of trumpets”*…

Francis Bacon, Study after Velazquez’s Portrait of Pope Innocent X, 1953
Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done…
In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed…
Technological proliferation is not a list of principles. It is a deep, multivalent historical process with many radically different stakeholders over many different time-scales. People who invent technology never get to set the rules for what is done with it. A “non-evil” Google, built by two Stanford dropouts, is just not the same entity as modern Alphabet’s global multinational network, with its extensive planetary holdings in clouds, transmission cables, operating systems, and device manufacturing.
It’s not that Google and Alphabet become evil just because they’re big and rich. Frankly, they’re not even all that “evil.” They’re just inherently involved in huge, tangled, complex, consequential schemes, with much more variegated populations than had originally been imagined. It’s like the ethical difference between being two parish priests and becoming Pope.
Of course the actual Pope will confront Artificial Intelligence. His response will not be “is it socially beneficial to the user-base?” but rather, “does it serve God?” So unless you’re willing to morally out-rank the Pope, you need to understand that religious leaders will use Artificial Intelligence in precisely the way that televangelists have used television.
So I don’t mind the moralizing about AI. I even enjoy it as a metaphysical game, but I do have one caveat about this activity, something that genuinely bothers me. The practitioners of AI are not up-front about the genuine allure of their enterprise, which is all about the old-school Steve-Jobsian charisma of denting the universe while becoming insanely great. Nobody does AI for our moral betterment; everybody does it to feel transcendent.
AI activists are not everyday brogrammers churning out grocery-code. These are visionary zealots driven by powerful urges they seem unwilling to confront. If you want to impress me with your moral authority, gaze first within your own soul.
Excerpted from the marvelous Bruce Sterling’s essay “Artificial Morality,” a contribution to the Provocations series, a project of the Los Angeles Review of Books in conjunction with UCI’s “The Future of the Future: The Ethics and Implications of AI” conference.
* Voltaire
###
As we agonize over algorithms, we might recall that it was on this date in 1872 that Luther Crowell patented a machine for the manufacture of accordion-sided, flat-bottomed paper bags (#123,811). That said, Margaret E. Knight might more accurately be considered the “mother of the modern shopping bag”; she had perfected square bottoms two years earlier.
“Educating the mind without educating the heart is no education at all”*…
You are on an asteroid careening through the cosmos. Aboard the asteroid with you are nine hundred highly-skilled physicians, who have been working on developing a revolutionary medication that will cure every disease in the known universe. The asteroid’s current trajectory is taking it straight toward the Planet of Orphans, where all intergalactic civilizations have dumped their unwanted offspring, of which there are now 100 trillion, all living, breathing, and mewling. If you detonate the asteroid, all of the doctors will die, along with the hope for curing every disease in the universe. If you do not detonate the asteroid, the doctors will have time to develop the cure and send it hurtling toward the Healing Planet before you crash into and destroy the Planet of Orphans. Thus you face the crucial question: how useful is this hypothetical for illuminating moral truths?
The “Trolley Problem” is a staple of undergraduate moral philosophy. It is a gruesome hypothetical supposedly designed to test our moral intuitions and introduce the differences between Kantian and consequentialist reasoning. For the lucky few who have thus far managed to avoid exposure to the Trolley Problem, here it is: a runaway trolley is hurtling down the track. In the trolley’s path are five workers, who will inevitably be smushed to a gory paste if it continues along its present course. But you, you have the power to change things: you happen to be standing by a switch. If you give the switch a yank, the trolley will veer onto a different track. On this track, there is only one worker. Do you pull the switch and doom the unsuspecting proletarian, or do you refrain from acting and allow five others to die?…
How a staple of moral education “turns us into horrible people, and discourages us from examining the structural factors that determine our choices”: “The Trolley Problem Will Tell You Nothing Useful About Morality.”
[TotH to the ever-illuminating 3 Quarks Daily]
* Aristotle
###
As we carefully consider the questions that deserve our response, we might spare a thought for German Idealist philosopher Georg Wilhelm Friedrich Hegel; he died on this date in 1831. While his ideas have been divisive, they have been hugely influential (e.g., here). Karl Barth described Hegel as a “Protestant Aquinas,” while Maurice Merleau-Ponty wrote that “all the great philosophical ideas of the past century—the philosophies of Marx and Nietzsche, phenomenology, German existentialism, and psychoanalysis—had their beginnings in Hegel.”