(Roughly) Daily

Posts Tagged ‘morality’

“In the long run, we are all dead”*…

I’ve spent several decades thinking (and helping others think) about the future: e.g., doing scenario planning via GBN and Heminge & Condell, working with The Institute for the Future, thinking with the folks at the Long Now Foundation; I deeply believe in the importance of long-term thinking. It’s a critical orientation– both a perspective and a set of tools/techniques– that can help us offset our natural tendency to act in and for the short-run and help us be better, more responsible ancestors.

But two recent articles warn that “the long term” can be turned into a justification for all sorts of grief. The first, from Phil Torres (@xriskology), argues that “so-called rationalists” have created a disturbing secular religion– longtermism– that looks like it addresses humanity’s deepest problems, but actually justifies pursuing the social preferences of elites…

Longtermism should not be confused with “long-term thinking.” It goes way beyond the observation that our society is dangerously myopic, and that we should care about future generations no less than present ones. At the heart of this worldview, as delineated by [Oxford philosopher Nick] Bostrom, is the idea that what matters most is for “Earth-originating intelligent life” to fulfill its potential in the cosmos. What exactly is “our potential”? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior “posthuman” species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as “rich and happy lives” inside high-resolution computer simulations.

This is what “our potential” consists of, and it constitutes the ultimate aim toward which humanity as a whole, and each of us as individuals, are morally obligated to strive. An existential risk, then, is any event that would destroy this “vast and glorious” potential, as Toby Ord, a philosopher at the Future of Humanity Institute, writes in his 2020 book The Precipice, which draws heavily from earlier work in outlining the longtermist paradigm. (Note that Noam Chomsky just published a book also titled The Precipice.)

The point is that when one takes the cosmic view, it becomes clear that our civilization could persist for an incredibly long time and there could come to be an unfathomably large number of people in the future. Longtermists thus reason that the far future could contain way more value than exists today, or has existed so far in human history, which stretches back some 300,000 years. So, imagine a situation in which you could either lift 1 billion present people out of extreme poverty or benefit 0.00000000001 percent of the 10^23 biological humans who Bostrom calculates could exist if we were to colonize our cosmic neighborhood, the Virgo Supercluster. Which option should you pick? For longtermists, the answer is obvious: you should pick the latter. Why? Well, just crunch the numbers: 0.00000000001 percent of 10^23 people is 10 billion people, which is ten times greater than 1 billion people. This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today.
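The expected-value arithmetic Torres describes is easy to check for oneself; a quick sketch in Python (using only the figures quoted in the passage above):

```python
# Sanity-check the longtermist arithmetic quoted above (illustrative only).
far_future_people = 10**23        # Bostrom's figure for the Virgo Supercluster
# 0.00000000001 percent = 1e-11 / 100 = one part in 10**13
beneficiaries = far_future_people // 10**13
present_poor = 10**9              # 1 billion people in extreme poverty today

print(beneficiaries)                  # 10000000000, i.e. 10 billion
print(beneficiaries // present_poor)  # 10 -- ten times as many people
```

The numbers do come out as Torres reports; his point, of course, is that the conclusion follows only if one accepts the premise that merely possible far-future people count the same as actual present ones.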

[For more on posthumanism, see here and here]

The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’

The second, from Paul Graham Raven (@PaulGrahamRaven) builds on Torres’ case…

Phil Torres… does a pretty good job of setting out the issues with what might be the ultimate in moral philosophies, namely a moral philosophy whose adherents have convinced themselves that it is not at all a moral philosophy, but rather the end-game of the enlightenment-modernist quest for a fully rational and quantifiable way of legitimating the actions that you and your incredibly wealthy donors were already doing, and would like to continue doing indefinitely, regardless of the consequences to other lesser persons in the present and immediate future, thankyouverymuch.

I have one bone of contention, though the fault is not that of Torres but rather the Longtermists themselves: the labelling of their teleology as “posthuman”. This is exactly wrong, as their position is in fact the absolute core of transhumanism; my guess would be that the successful toxification of that latter term (within academia, as well as without) has led them to instead identify with the somewhat more accepted and established label of posthumanism, so as to avoid critique and/or use a totally different epistemology as a way of drawing fire…

[For more on transhumanism, see here and here]

Longtermism is merely a more acceptable mask for transhumanism

Both pieces are worth reading in full…

And for more on a posthuman (if not in every case posthumanist) future: “The best books about the post-human Earth.”

* John Maynard Keynes

###

As we take the long view, we might send far-sighted birthday greetings to John Flamsteed; he was born on this date in 1646. An astronomer, he compiled a 3,000-star catalogue, Catalogus Britannicus, and a star atlas called Atlas Coelestis, and made the first recorded observations of Uranus (though he mistakenly catalogued it as a star). Flamsteed led the group of scientists who convinced King Charles II to build the Greenwich Observatory, and personally laid its foundation stone. And he served as the first Astronomer Royal.

source

“It is forbidden to kill; therefore all murderers are punished unless they kill in large numbers and to the sound of trumpets”*…

 

Pope AI

Francis Bacon, Study after Velázquez’s Portrait of Pope Innocent X, 1953

 

Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done…

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed…

Technological proliferation is not a list of principles. It is a deep, multivalent historical process with many radically different stakeholders over many different time-scales. People who invent technology never get to set the rules for what is done with it. A “non-evil” Google, built by two Stanford dropouts, is just not the same entity as modern Alphabet’s global multinational network, with its extensive planetary holdings in clouds, transmission cables, operating systems, and device manufacturing.

It’s not that Google and Alphabet become evil just because they’re big and rich. Frankly, they’re not even all that “evil.” They’re just inherently involved in huge, tangled, complex, consequential schemes, with much more variegated populations than had originally been imagined. It’s like the ethical difference between being two parish priests and becoming Pope.

Of course the actual Pope will confront Artificial Intelligence. His response will not be “is it socially beneficial to the user-base?” but rather, “does it serve God?” So unless you’re willing to morally out-rank the Pope, you need to understand that religious leaders will use Artificial Intelligence in precisely the way that televangelists have used television.

So I don’t mind the moralizing about AI. I even enjoy it as metaphysical game, but I do have one caveat about this activity, something that genuinely bothers me. The practitioners of AI are not up-front about the genuine allure of their enterprise, which is all about the old-school Steve-Jobsian charisma of denting the universe while becoming insanely great. Nobody does AI for our moral betterment; everybody does it to feel transcendent.

AI activists are not everyday brogrammers churning out grocery-code. These are visionary zealots driven by powerful urges they seem unwilling to confront. If you want to impress me with your moral authority, gaze first within your own soul.

Excerpted from the marvelous Bruce Sterling’s essay “Artificial Morality,” a contribution to the Provocations series, a project of the Los Angeles Review of Books in conjunction with UCI’s “The Future of the Future: The Ethics and Implications of AI” conference.

* Voltaire

###

As we agonize over algorithms, we might recall that it was on this date in 1872 that Luther Crowell patented a machine for the manufacture of accordion-sided, flat-bottomed paper bags (#123,811).  That said, Margaret E. Knight might more accurately be considered the “mother of the modern shopping bag”; she had perfected square bottoms two years earlier.

source

 

Written by (Roughly) Daily

February 20, 2020 at 1:01 am

“Never let your sense of morals get in the way of doing what’s right”*…

 

People disagree about morality. They disagree about what morality prohibits, permits and requires. And they disagree about why morality prohibits, permits and requires these things. Moreover, at least some of the disagreement on these matters is reasonable. It is not readily attributable to woolly thinking or ignorance or inattention to relevant considerations. Sensible and sincere people armed with similar life experiences and acquainted with roughly the same facts come to strikingly different conclusions about the content and justification of morality.

For examples of disagreement about content, think of the standards ‘vote in democratic elections’, ‘do not smack your children’, and ‘do not eat meat’. Some reasonable people recognise a moral duty to vote, or a moral prohibition on smacking or meat-eating; others do not. To see the depth of disagreement about justification, consider the variety of reasons advanced for the widely accepted moral standard ‘do not lie’. Should we refrain from lying because God commands it, because it promotes the greatest happiness of the greatest number, because in deceiving others we treat them as mere means to our ends, or because the virtue of honesty is a necessary condition of our own flourishing? Each of these reasons is persuasive to some and quite unpersuasive to others.

Reasonable disagreement about morality presents educators with a problem. It is hard to see how we can bring it about that children subscribe to moral standards, and believe them to be justified, except by giving them some form of moral education. But it is also hard to see how moral educators can legitimately cultivate these attitudes in the face of reasonable disagreement about the content and justification of morality. It looks as though any attempt to persuade children of the authority of a particular moral code will be tantamount to indoctrination…

Michael Hand asks– and suggests an answer to– a desperately important question: “If we disagree about morality, how can we teach it?”

* Isaac Asimov

###

As we struggle to teach our children well, we might send fabulous birthday greetings to Publius Ovidius Naso; he was born on this date in 43 BCE.  Known in the English-speaking world as Ovid, he was a Roman poet who lived during the reign of Augustus.  He was a contemporary of the older Virgil and Horace, with whom he is often ranked as one of the three canonical poets of Latin literature.  Ovid is today best known for the Metamorphoses, a 15-book continuous mythological narrative written in the meter of epic, and for works in elegiac couplets such as Ars Amatoria (“The Art of Love”) and Fasti.  His poetry was much imitated during Late Antiquity and the Middle Ages, and greatly influenced Western art and literature; he was, for instance, a favorite– and favorite source– of Shakespeare.  And the Metamorphoses remains one of the most important sources of classical mythology.

Ovid enjoyed enormous popularity in his time; but, in one of the great mysteries of literary history, was sent by Augustus into exile in a remote province on the Black Sea, where he remained until his death.  Ovid himself attributes his exile to carmen et error, “a poem and a mistake”; but his discretion in discussing the causes has resulted in much speculation among scholars.

source

 

Written by (Roughly) Daily

March 20, 2018 at 1:01 am

“Educating the mind without educating the heart is no education at all”*…

 

You are on an asteroid careening through the cosmos. Aboard the asteroid with you are nine hundred highly-skilled physicians, who have been working on developing a revolutionary medication that will cure every disease in the known universe. The asteroid’s current trajectory is taking it straight toward the Planet of Orphans, where all intergalactic civilizations have dumped their unwanted offspring, of which there are now 100 trillion, all living, breathing, and mewling. If you detonate the asteroid, all of the doctors will die, along with the hope for curing every disease in the universe. If you do not detonate the asteroid, the doctors will have time to develop the cure and send it hurtling toward the Healing Planet before you crash into and destroy the Planet of Orphans. Thus you face the crucial question: how useful is this hypothetical for illuminating moral truths?

The “Trolley Problem” is a staple of undergraduate moral philosophy. It is a gruesome hypothetical supposedly designed to test our moral intuitions and introduce the differences between Kantian and consequentialist reasoning. For the lucky few who have thus far managed to avoid exposure to the Trolley Problem, here it is: a runaway trolley is hurtling down the track. In the trolley’s path are five workers, who will inevitably be smushed to a gory paste if it continues along its present course. But you, you have the power to change things: you happen to be standing by a switch. If you give the switch a yank, the trolley will veer onto a different track. On this track, there is only one worker. Do you pull the switch and doom the unsuspecting proletarian, or do you refrain from acting and allow five others to die?…

How a staple of moral education “turns us into horrible people, and discourages us from examining the structural factors that determine our choices”: “The Trolley Problem Will Tell You Nothing Useful About Morality.”

[TotH to the ever-illuminating 3 Quarks Daily]

* Aristotle

###

As we carefully consider the questions that deserve our response, we might spare a thought for German Idealist philosopher Georg Wilhelm Friedrich Hegel; he died on this date in 1831.  While his ideas have been divisive, they have been hugely influential (e.g., here).  Karl Barth described Hegel as a “Protestant Aquinas,” while Maurice Merleau-Ponty wrote that “all the great philosophical ideas of the past century—the philosophies of Marx and Nietzsche, phenomenology, German existentialism, and psychoanalysis—had their beginnings in Hegel.”

source

 

Written by (Roughly) Daily

November 14, 2017 at 1:01 am

“The rules of morality are not the conclusion of our reason”*…

 

The Pew Research Center’s 2013 Global Attitudes survey asked 40,117 respondents in 40 countries what they thought about eight topics often discussed as moral issues: extramarital affairs, gambling, homosexuality, abortion, premarital sex, alcohol consumption, divorce, and the use of contraceptives.  For each issue, respondents were asked whether the behavior is morally acceptable, morally unacceptable, or not a moral issue.

Explore the results (and see larger versions of charts like the one above) here.

* David Hume

###

As we struggle to take the high ground, we might recall that it was on this date in 1881 that Clara Barton and Adolphus Solomons founded the American National Red Cross, to provide humanitarian aid to victims of wars and natural disasters.  Barton, who’d famously done medical relief work during the American Civil War, had later gone to Europe to provide aid during the Franco-Prussian War, and had encountered the European Red Cross.  On returning, and with the help of Solomons, she started the American branch of the organization.  The group was quickly called into action, first in response to the Great Fire of 1881 in the Thumb region of Michigan, which occurred in September of that year and left over 5,000 people homeless.  The second major call on the Red Cross was the Johnstown Flood at the end of May, 1889: some 2,209 people died and thousands more were injured in or near Johnstown, Pennsylvania in one of the worst disasters in U.S. history.

The Red Cross set up in a community hard hit by tornadoes, Florida, 2007

source

 

Written by (Roughly) Daily

May 21, 2014 at 1:01 am
