(Roughly) Daily

Posts Tagged ‘transhumanism’

“To be what you want to be: isn’t this the essence of being human?”*…

… Ah, but what does one– should one– want to be? Jules Evans, with a history of the transhumanist-rationalist-extropian movement that has ensorcelled many of technology’s leaders, one that celebrates an elite few and promises an end to death and taxes…

Once upon a time there was an obscure mailing list. It only had about 100 people on it, yet in this digital village was arguably the greatest concentration of brain power since fifth-century Athens. There was Hans Moravec, pioneer in robotics; Eric Drexler, pioneer of nanotechnology; Eliezer Yudkowsky, father of the Rationalist movement; Max More, father of modern transhumanism; Nick Bostrom, founder of Long-Termism and the study of Existential Risks; Hal Finney, Nick Szabo and Wei Dai, the inventors of cryptocurrency; and Julian Assange, founder of Wikileaks. Together they developed a transhumanist worldview — self-transformation, genetic modification, nootropic drugs, AI, crypto-libertarianism and space exploration. It’s a worldview that has become the ruling philosophy of the obscenely rich of California.

It all started in Bristol, England. There, a young man called Max O’Connor grew up, and went to study philosophy at Oxford. But Max wanted more, more excitement, more life, more everything. He changed his name to Max More, and moved to California, where the future is invented. His dreams took root in the soil prepared by Californian transhumanists of the 1970s. Many of them were members of an organization called L5, dedicated to the colonization of space by a genetic elite — its members included Timothy Leary, Marvin Minsky, Isaac Asimov and Freeman Dyson, and its magazine was named Ad Astra — which was what Elon Musk named his school for SpaceX kids in 2014.

Max was also inspired by Robert Ettinger, an American engineer who argued that humans would soon become immortal superbeings, and we should freeze ourselves when we die so we can be resurrected in the perfect future. While doing a PhD at the University of Southern California, Max got a job at the Alcor Foundation for cryonic preservation, and in 1989 he started a magazine with his fellow philosophy grad, Tom Morrow, called Extropy: Journal of Transhumanist Thought. ‘Do you want to be an ubermensch?’ the first issue asked.

‘Ubermensch’ (overman or superman) is the German word used by Friedrich Nietzsche to describe the individual (male or female) who has overcome all obstacles to the perfection of him or herself…

A history and an explanation: “How did transhumanism become the religion of the super-rich?” from @JulesEvans11.

* David Zindell

###

As we ponder the (presumptuous) preference for perfection, we might recall that it was (tradition holds) on this date in 1517– All Hallows (All Saints) Eve– that Martin Luther, a priest and scholar in Wittenberg, Germany, upset by what he saw as the excesses and corruption of the Roman Catholic Church (especially the papal practice of taking payments– “indulgences”– for the forgiveness of sins), posted his 95 Theses on the door of Castle Church.  Thus began the Protestant Reformation.

Martin Luther (source)

Today, of course, All Hallows (All Saints) Eve is celebrated as Halloween, which is (if it is, as many scholars believe, directly descended from the ancient Celtic harvest festival Samhain) the longest-running holiday with a set date… and (usually, anyway) the second-biggest (after Christmas) commercial holiday in the United States.


“The cyborg would not recognize the Garden of Eden; it is not made of mud and cannot dream of returning to dust.”*…

Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss. It’s a problem we face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity — a place where extrapolation breaks down and new models must be applied — and the world will pass beyond our understanding.

Vernor Vinge, True Names and Other Dangers

The once-vibrant transhumanist movement doesn’t capture as much attention as it used to; but as George Dvorsky explains, its ideas are far from dead. Indeed, they helped seed the Futurist movements that are so prominent today (and here and here)…

[On the heels of 9/11] transhumanism made a lot of sense to me, as it seemed to represent the logical next step in our evolution, albeit an evolution guided by humans and not Darwinian selection. As a cultural and intellectual movement, transhumanism seeks to improve the human condition by developing, promoting, and disseminating technologies that significantly augment our cognitive, physical, and psychological capabilities. When I first stumbled upon the movement, the technological enablers of transhumanism were starting to come into focus: genomics, cybernetics, artificial intelligence, and nanotechnology. These tools carried the potential to radically transform our species, leading to humans with augmented intelligence and memory, unlimited lifespans, and entirely new physical and cognitive capabilities. And as a nascent Buddhist, it meant a lot to me that transhumanism held the potential to alleviate a considerable amount of suffering through the elimination of disease, infirmity, mental disorders, and the ravages of aging.

The idea that humans would transition to a posthuman state seemed both inevitable and desirable, but, having an apparently functional brain, I immediately recognized the potential for tremendous harm.

The term “transhumanism” popped into existence during the 20th century, but the idea has been around for a lot longer than that.

The quest for immortality has always been a part of our history, and it probably always will be. The Mesopotamian Epic of Gilgamesh is the earliest written example, while the Fountain of Youth—the literal Fountain of Youth—was the obsession of Spanish explorer Juan Ponce de León.

Notions that humans could somehow be modified or enhanced appeared during the European Enlightenment of the 18th century, with French philosopher Denis Diderot arguing that humans might someday redesign themselves into a multitude of types “whose future and final organic structure it’s impossible to predict,” as he wrote in D’Alembert’s Dream.

The Russian cosmists of the late 19th and early 20th centuries foreshadowed modern transhumanism, as they ruminated on space travel, physical rejuvenation, immortality, and the possibility of bringing the dead back to life, the latter being a portent of cryonics—a staple of modern transhumanist thinking. From the 1920s through to the 1950s, thinkers such as British biologist J. B. S. Haldane, Irish scientist J. D. Bernal, and British biologist Julian Huxley (who popularized the term “transhumanism” in a 1957 essay) were openly advocating for such things as artificial wombs, human clones, cybernetic implants, biological enhancements, and space exploration.

It wasn’t until the 1990s, however, that a cohesive transhumanist movement emerged, a development largely brought about by—you guessed it—the internet…

[There follows a brisk and helpful history of transhumanist thought, then an account of the recent past, and present…]

Some of the transhumanist groups that emerged in the 1990s and 2000s still exist or have evolved into new forms, and while a strong pro-transhumanist subculture remains, the larger public seems detached and largely uninterested. But that’s not to say that these groups, or the transhumanist movement in general, didn’t have an impact…

“I think the movements had mainly an impact as intellectual salons where blue-sky discussions made people find important issues they later dug into professionally,” said Sandberg. He pointed to Oxford University philosopher and transhumanist Nick Bostrom, who “discovered the importance of existential risk for thinking about the long-term future,” which resulted in an entirely new research direction. The Center for the Study of Existential Risk at the University of Cambridge and the Future of Humanity Institute at Oxford are the direct results of Bostrom’s work. Sandberg also cited artificial intelligence theorist Eliezer Yudkowsky, who “refined thinking about AI that led to the AI safety community forming,” and also the transhumanist “cryptoanarchists” who “did the groundwork for the cryptocurrency world,” he added. Indeed, Vitalik Buterin, a co-founder of Ethereum, subscribes to transhumanist thinking, and his father, Dmitry, used to attend our meetings at the Toronto Transhumanist Association…

Intellectual history: “What Ever Happened to the Transhumanists?,” from @dvorsky.

See also: “The Heaven of the Transhumanists” from @GenofMod (source of the image above).

* Donna Haraway

###

As we muse on mortality, we might send carefully-calculated birthday greetings to Marvin Minsky; he was born on this date in 1927.  A mathematician and cognitive scientist by training, he was founding director of MIT’s Artificial Intelligence Project (the MIT AI Lab).  Minsky authored several widely-used texts, and made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics.  He held several patents, including those for the first neural-network simulator (SNARC, 1951), the first head-mounted graphical display, the first confocal scanning microscope, and the LOGO “turtle” device (with his friend and frequent collaborator Seymour Papert).  His other inventions included mechanical hands and the “Muse” synthesizer.


“In the long run, we are all dead”*…

I’ve spent several decades thinking (and helping others think) about the future: e.g., doing scenario planning via GBN and Heminge & Condell, working with The Institute for the Future, and thinking with the folks at the Long Now Foundation. I deeply believe in the importance of long-term thinking. It’s a critical orientation– both a perspective and a set of tools/techniques– that can help us offset our natural tendency to act in and for the short run and help us be better, more responsible ancestors.

But two recent articles warn that “the long term” can be turned into a justification for all sorts of grief. The first, from Phil Torres (@xriskology), argues that “so-called rationalists” have created a disturbing secular religion– longtermism– that looks like it addresses humanity’s deepest problems, but actually justifies pursuing the social preferences of elites…

Longtermism should not be confused with “long-term thinking.” It goes way beyond the observation that our society is dangerously myopic, and that we should care about future generations no less than present ones. At the heart of this worldview, as delineated by [Oxford philosopher Nick] Bostrom, is the idea that what matters most is for “Earth-originating intelligent life” to fulfill its potential in the cosmos. What exactly is “our potential”? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior “posthuman” species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as “rich and happy lives” inside high-resolution computer simulations.

This is what “our potential” consists of, and it constitutes the ultimate aim toward which humanity as a whole, and each of us as individuals, are morally obligated to strive. An existential risk, then, is any event that would destroy this “vast and glorious” potential, as Toby Ord, a philosopher at the Future of Humanity Institute, writes in his 2020 book The Precipice, which draws heavily from earlier work in outlining the longtermist paradigm. (Note that Noam Chomsky just published a book also titled The Precipice.)

The point is that when one takes the cosmic view, it becomes clear that our civilization could persist for an incredibly long time and there could come to be an unfathomably large number of people in the future. Longtermists thus reason that the far future could contain way more value than exists today, or has existed so far in human history, which stretches back some 300,000 years. So, imagine a situation in which you could either lift 1 billion present people out of extreme poverty or benefit 0.00000000001 percent of the 10²³ biological humans who Bostrom calculates could exist if we were to colonize our cosmic neighborhood, the Virgo Supercluster. Which option should you pick? For longtermists, the answer is obvious: you should pick the latter. Why? Well, just crunch the numbers: 0.00000000001 percent of 10²³ people is 10 billion people, which is ten times greater than 1 billion people. This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today.
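(To make the arithmetic in that example explicit– a back-of-the-envelope restatement, not part of Torres’ text: 0.00000000001 percent is 10⁻¹¹ percent, i.e. a fraction of 10⁻¹³ of the total, and 10⁻¹³ × 10²³ = 10¹⁰ = 10 billion people– ten times the 1 billion who could be lifted out of poverty today. That is the whole of the longtermist calculus in this example.)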

[For more on posthumanism, see here and here]

The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’

The second, from Paul Graham Raven (@PaulGrahamRaven), builds on Torres’ case…

Phil Torres… does a pretty good job of setting out the issues with what might be the ultimate in moral philosophies, namely a moral philosophy whose adherents have convinced themselves that it is not at all a moral philosophy, but rather the end-game of the enlightenment-modernist quest for a fully rational and quantifiable way of legitimating the actions that you and your incredibly wealthy donors were already doing, and would like to continue doing indefinitely, regardless of the consequences to other lesser persons in the present and immediate future, thankyouverymuch.

I have one bone of contention, though the fault is not that of Torres but rather the Longtermists themselves: the labelling of their teleology as “posthuman”. This is exactly wrong, as their position is in fact the absolute core of transhumanism; my guess would be that the successful toxification of that latter term (within academia, as well as without) has led them to instead identify with the somewhat more accepted and established label of posthumanism, so as to avoid critique and/or use a totally different epistemology as a way of drawing fire…

[For more on transhumanism, see here and here]

Longtermism is merely a more acceptable mask for transhumanism

Both pieces are worth reading in full…

And for more on a posthuman (if not in every case posthumanist) future: “The best books about the post-human Earth.”

* John Maynard Keynes

###

As we take the long view, we might send far-sighted birthday greetings to John Flamsteed; he was born on this date in 1646. An astronomer, he compiled a 3,000-star catalogue, Catalogus Britannicus, and a star atlas called Atlas Coelestis, and made the first recorded observations of Uranus (though he mistakenly catalogued it as a star). Flamsteed led the group of scientists who convinced King Charles II to build the Greenwich Observatory, and personally laid its foundation stone. And he served as the first Astronomer Royal.

