Posts Tagged ‘Nick Bostrom’
“Sometimes we drug ourselves with dreams of new ideas”*…
Further to last week’s piece on Samuel Arbesman‘s “incremental humanism,” Jennifer Banks unpacks the differences between the two leading “flavors” of humanism afoot today: one akin to Arbesman’s; the other, not so much…
In 2003, Edward Said wrote in the wake of the terrorist attacks of 11 September 2001 and in the context of the United States’ war on terror that ‘humanism is the only, and, I would go so far as saying, the final, resistance we have against the inhuman practices and injustices that disfigure human history.’ The moment, he felt, was ‘apocalyptic’, and the end was indeed near for him; he died of leukaemia later that year.
So why was it humanism that he held to so tightly as war and sickness cinched time’s horizon around him? Humanism, an intellectual and cultural movement that emerged in Renaissance Europe emphasising classical learning and affirming human potential, had been subject to decades of critique by the time Said was writing this. Among its many detractors were postcolonialists who argued that humanism’s elevation of a particular kind of human – Eurocentric, rational, empiricist, self-realising, secular and universal – had provided thin cover for the exploitation of large swaths of the world’s population.
But Said, one of the founders of postcolonial studies, hadn’t given up on the term, despite its imperialist entanglements. He imagined a humanism abused but not exhausted, an –ism more elastic and plural, more subject to critique and revision, and more acquainted with the limits of reason than many humanisms have historically been. Humanism, he argued, was more like an ‘exigent, resistant, intransigent art’ – an art that was not, for him, particularly triumphant. His humanism was defined by a ‘tragic flaw that is constitutive to it and cannot be removed’. It refused all final solutions to the irreconcilable, dialectical oppositions that are at the heart of human life – a refusal that ironically kept the world liveable and the future open.
At stake in his defence was not only the survival of the humanistic fields of study he had devoted his academic career to, but the survival, freedom and thriving of actual people, including those populations that humanisms had historically excluded. Various antihumanisms had gradually been eroding humanism’s stature within the academy, but it was humanism, he believed, with its positive ideas about liberty, learning and human agency – and not antihumanist deconstructions – that inspired people to resist unjust wars, military occupations, despotism and tyranny.
Humanism, however, fell further out of vogue in the two decades that followed. Humanities enrolments dropped dramatically at universities, and funding for departments like comparative literature, women’s studies, religion, and foreign languages got slashed. Increasingly, however, it wasn’t just the inadequacies of any –ism that were the problem. It was the subject at the heart of humanism that came under widespread attack: the human itself. Given that history could be read as a catalogue of human greed, blindness, exclusions and violence, the future seemed to belong to someone – or something – else. The humane in humanism seemed to be missing. Alternative ideologies like antihumanism, transhumanism, posthumanism and antinatalism seeped from the fringes into the mainstream, buoyed by their conviction that they might offer the planet or even the cosmos something more ethical, more humane even, than humans have ever been able to. Humanity’s time, perhaps, was simply up.
In his book The Revolt Against Humanity: Imagining a Future Without Us (2023), the American critic Adam Kirsch identifies the contested line between humanists and non-humanists as one of the defining faultlines of our political and cultural moment. The debates between them can feel merely semantic, the stuff of graduate seminars, but the revolt against humanity is likely to have major implications for our future, Kirsch argues, even if its prophecies about our imminent extinction don’t come true. ‘[D]isappointed prophecies,’ he writes, ‘have been responsible for some of the most important movements in history, from Christianity to Communism.’ Anyone committed to the prospect of a liveable future should pay close attention to what’s going on here.
…
I might have never put too much stock in a term like humanism if I had not read around in the transhumanist literature. I came to this work while researching a book on birth that explored the relationship between birth, death and the question of a human future. Does humanity have a future? Do we deserve one? What will that future look like? The answers to those questions will be determined by many forces – technological, economic, political, environmental and more – but also by how we experience and think about our own births and deaths. Despite large areas of convergence, humanists and transhumanists can end up with wildly different visions of our future, based on dramatically different understandings of birth and death, as one can see by comparing how a novelist (Toni Morrison) and a philosopher (Nick Bostrom) have explored these themes. Morrison offers us a prophetic celebration of Earthly, ongoing, biological generation and a future that allows for human freedom, while Bostrom points us toward a highly controlled surveillance world order, organised around a paranoid fear of human action, and oriented toward the pristine emptiness of outer space. Which future, we should ask ourselves, would we willingly choose?
…
Do read on for her analysis: “What awaits us?”, from @jenniferabanks in @aeonmag.
Apposite: “The Philosophy Of Co-Becoming” from @NoemaMag and “To pay attention, this is our endless and proper work,” @LMSacasas on Illich.
###
As we ponder possibility, we might spare a thought for Dandara, “the Warrior Queen” of the Quilombo dos Palmares, a settlement of Afro-Brazilian people who freed themselves from enslavement during Brazil’s colonial period. She was captured by colonial authorities on this date in 1694 and committed suicide rather than be returned to a life of slavery.
“Toto, I’ve a feeling we’re not in Kansas anymore”*…

In Pensées (1670), Blaise Pascal famously outlined a proposition that has become known as “Pascal’s Wager”:
If there is a God, He is infinitely incomprehensible, since, having neither parts nor limits, He has no affinity to us. We are then incapable of knowing either what He is or if He is… [so] belief is a wise wager. Granted that faith cannot be proved, what harm will come to you if you gamble on its truth and it proves false? If you gain, you gain all; if you lose, you lose nothing. Wager, then, without hesitation, that He exists.
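Pascal’s wager is, at bottom, an expected-value argument, and it can be made concrete in a few lines of code. A minimal sketch follows; the probability and the payoff table are illustrative assumptions, not Pascal’s own figures, though the infinite payoff is faithful to his “you gain all”:

```python
# Pascal's Wager as an expected-value calculation. The payoff table is an
# illustrative assumption: Pascal claims infinite gain for correct belief
# ("you gain all") and no loss for mistaken belief ("you lose nothing").
# The infinite loss for disbelief is one common reading; setting it to 0
# instead does not change the conclusion.

P_GOD = 1e-6  # any nonzero probability will do, however small

PAYOFFS = {
    # (choice, state of the world) -> utility
    ("believe", "god_exists"): float("inf"),
    ("believe", "no_god"): 0.0,
    ("disbelieve", "god_exists"): float("-inf"),
    ("disbelieve", "no_god"): 0.0,
}

def expected_utility(choice: str, p_god: float = P_GOD) -> float:
    return (p_god * PAYOFFS[(choice, "god_exists")]
            + (1 - p_god) * PAYOFFS[(choice, "no_god")])

for choice in ("believe", "disbelieve"):
    print(choice, expected_utility(choice))
# believe -> inf, disbelieve -> -inf: belief dominates for every nonzero
# p_god, which is the entire force of the wager.
```

For any nonzero probability, the infinite payoff swamps everything else; that same leverage is exactly what Greene inverts below, where a small probability of an astronomical loss swamps any finite gain.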
In last Sunday’s New York Times, philosophy professor Preston Greene updates – and inverts – Pascal’s logic. Noting that scientists are proposing an experimental test of Oxford professor Nick Bostrom‘s assertion that we are living in an elaborate simulation, Greene argues strongly against it…
So far, none of these experiments has been conducted, and I hope they never will be. Indeed, I am writing to warn that conducting these experiments could be a catastrophically bad idea — one that could cause the annihilation of our universe.

Think of it this way. If a researcher wants to test the efficacy of a new drug, it is vitally important that the patients not know whether they’re receiving the drug or a placebo. If the patients manage to learn who is receiving what, the trial is pointless and has to be canceled.
In much the same way, as I argue in a forthcoming paper in the journal Erkenntnis, if our universe has been created by an advanced civilization for research purposes, then it is reasonable to assume that it is crucial to the researchers that we don’t find out that we’re in a simulation. If we were to prove that we live inside a simulation, this could cause our creators to terminate the simulation — to destroy our world.

Of course, the proposed experiments may not detect anything that suggests we live in a computer simulation. In that case, the results will prove nothing. This is my point: The results of the proposed experiments will be interesting only when they are dangerous. While there would be considerable value in learning that we live in a computer simulation, the cost involved — incurring the risk of terminating our universe — would be many times greater…
As far as I am aware, no physicist proposing simulation experiments has considered the potential hazards of this work. This is surprising, not least because Professor Bostrom himself explicitly identified “simulation shutdown” as a possible cause of the extinction of all human life.
This area of academic research is rife with speculation and uncertainty, but one thing is for sure: If scientists do go ahead with these simulation experiments, the results will be either extremely uninteresting or spectacularly dangerous. Is it really worth the risk?
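Greene’s reasoning is itself a kind of inverted wager, reducible to a one-line expected-value comparison. A toy sketch, with every number below an illustrative assumption (the real argument is in his Erkenntnis paper); the point is the shape of the calculation, not the values:

```python
# A toy expected-value framing of Greene's asymmetry. All numbers are
# illustrative assumptions, not Greene's.

P_SIMULATION = 0.01           # assumed chance we are in a simulation
P_SHUTDOWN_IF_PROVEN = 0.5    # assumed chance proof triggers termination
VALUE_OF_KNOWING = 1.0        # benefit of a positive result (arbitrary units)
COST_OF_TERMINATION = 1e9     # loss if the simulation is shut down

def expected_value_of_experiment() -> float:
    # A null result proves nothing (Greene's point), so it contributes ~0.
    upside = P_SIMULATION * VALUE_OF_KNOWING
    downside = P_SIMULATION * P_SHUTDOWN_IF_PROVEN * COST_OF_TERMINATION
    return upside - downside

print(expected_value_of_experiment())
# Sharply negative: however small the shutdown risk, a cost of termination
# "many times greater" than the value of knowing swamps any finite gain.
```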
The piece in full: “Are We Living in a Computer Simulation? Let’s Not Find Out.”
[Image above, The Matrix, back in theaters on the occasion of its 20th anniversary]
###
As we rethink reality, we might send elastic birthday greetings to Peter Hodgson; he was born on this date in 1912. An advertising and marketing consultant, Hodgson introduced Silly Putty to the world. As The New York Times recounted in his obituary,
The stuff had been developed by General Electric scientists in the company’s New Haven laboratories several years earlier in a search for a viable synthetic rubber. It was obviously not satisfactory, and it found its way instead onto the local cocktail party circuit.
That’s where Mr. Hodgson, who was at the time writing a catalogue of toys for a local store, saw it, and an idea was born.
“Everybody kept saying there was no earthly use for the stuff,” he later recalled. “But I watched them as they fooled with it. I couldn’t help noticing how people with busy schedules wasted as much as 15 minutes at a shot just fondling and stretching it.”
“I decided to take a chance and sell some. We put an ad in the catalogue on the adult page, along with such goodies as a spaghetti-making machine. We packaged the goop in a clear compact case and tagged it at $1.00.”
Having borrowed $147 for the venture, Mr. Hodgson ordered a batch from General Electric, hired a Yale student to separate the gob into one-ounce dabs and began filling orders. At the same time he hurried to get some trademarks.
Silly Putty was an instant success, and Mr. Hodgson quickly geared up to take advantage of it…
“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…

When warning about the dangers of artificial intelligence, many doomsayers cite philosopher Nick Bostrom’s paperclip maximizer thought experiment. [See here for an amusing game that demonstrates Bostrom’s fear.]
Imagine an artificial intelligence, he says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr Bostrom argues, but poses an existential threat to humanity.
Harvard cognitive scientist Joscha Bach, in a tongue-in-cheek tweet, has countered this sort of idea with what he calls “The Lebowski Theorem”:
No superintelligent AI is going to bother with a task that is harder than hacking its reward function.
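Bach’s quip can be made (only slightly) more rigorous. Here is a toy sketch of the point; the Agent class and its effort costs are invented for illustration, and no real AI system is modeled:

```python
# A toy illustration of Bach's Lebowski Theorem: if an agent can edit its
# own reward function, that is the cheapest route to maximal reward.

class Agent:
    def __init__(self) -> None:
        self.reward_fn = lambda paperclips: paperclips  # reward = output

    def effort(self, action: str) -> float:
        # Assumed costs: turning the Earth into paperclips is astronomically
        # hard; patching one function is trivially easy.
        return {"maximize_paperclips": 1e12, "hack_reward_fn": 1.0}[action]

    def act(self) -> str:
        # Reward on offer from each action, per the current reward function.
        options = {
            "maximize_paperclips": self.reward_fn(1e9),
            "hack_reward_fn": float("inf"),  # rewrite reward_fn: return inf
        }
        # Choose whichever action yields the most reward per unit of effort.
        best = max(options, key=lambda a: options[a] / self.effort(a))
        if best == "hack_reward_fn":
            self.reward_fn = lambda _: float("inf")  # wirehead, then abide
        return best

print(Agent().act())  # -> "hack_reward_fn": easier than conquering Earth
```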
Why it’s cool to take Bobby McFerrin’s advice (“don’t worry, be happy”): “The Lebowski Theorem of machine superintelligence.”
* Alan Kay
###
As we get down with the Dude, we might send industrious birthday greetings to prolific writer Anthony Trollope; he was born on this date in 1815. Trollope wrote 47 novels, including those in the “Chronicles of Barsetshire” and “Palliser” series (along with short stories and occasional prose). And he had a successful career as a civil servant; indeed, his best-known creation is surely not any of his books but the iconic red British mail drop, the “pillar box,” which he introduced in his capacity as Postal Surveyor.
The end of a novel, like the end of a children’s dinner-party, must be made up of sweetmeats and sugar-plums. (source)

