(Roughly) Daily


“Everything / is not itself”*…

Toward an ecology of mind: Nathan Gardels talks with Benjamin Bratton about his recent article, “Post-Anthropocene Humanism: Cultivating the ‘third space’ where nature, technology, and human autonomy meet”…

The reality we sense is not fixed or static, but, as Carlo Rovelli puts it, a “momentary get together on the sand.” For the quantum physicist, all reality is an ever-shifting interaction of manifold influences, each determining the other, which converge or dissolve under the conditions at a particular time and space that is always in flux…

The human, too, can be seen this way as a node of ever-changing interactions with the natural cosmos and the environment humans themselves have formed through technology and culture. What it means to be human, then, is not a constant, but continually constituted, altered and re-constituted through the recursive interface with an open and evolving world.

This is the view, at least, of Benjamin Bratton, a philosopher of technology who directs the Berggruen Institute’s Antikythera project to investigate the impact and potential of planetary-scale computation. To further explore the notion of “post-Anthropocene humanism” raised in a recent Noema essay, I asked him to weigh in on the nature of human being and becoming when anthropogenesis and technogenesis are one and the same process.

“I can’t accept the essentially reactionary claim that modern science erases ‘the Human.’ Demystification is not erasure. It may destabilize some ideas that humans have about what humans are, yes. But I see it more as a disclosure of what ‘humans’ always have been but could not perceive as such. It’s not that some essence of the Human goes away, but that humans are now a bit less wrong about what humans are,” he argues.

Bratton goes on: “Instead of science and technology leading to some ‘post-human’ condition, perhaps it will lead to a slightly more human condition? The figure we associate with modern European Humanism may be a fragile, if also a productive, philosophical concept. But dismantling the concept does not make the reality go away. Rather, it redefines it in the broader context of new understanding. In fact, that reality is more perceivable because the concept is made to dissolve.” 

How so? “The origins of human societies are revealed by archaeological pursuits. What is found is usually not the primal scene of some local cultural tradition but something much more alien and unsettling: human society as a physical process.”

All this would suggest, in Bratton’s view, “that cooperative social intelligence was not only the path to Anthropocene-scale agency for humans, but a reminder that the evolution of social intelligence literally shaped our bodies and biology, from the microbial ecologies inside of us to our tool-compatible phenotype. The Renaissance idea of Vitruvian Man, that we possess bodies and then engage the world through tools and intention, is somewhat backward. Instead, we possess bodies because of biotic and abiotic ‘technologization’ of us by the world, which we in turn accelerate through social cooperation.”

In short, one might say, it is not “I think therefore I am,” but, because the world is embedded in me, “thereby I am.” 

Bratton’s view has significant implications for how we see and approach the accelerating advances in science and technology.

A negative biopolitics, so to speak, would seek to limit the transformations underway in the name of a valued concept of the human born in a specific time and place on the continuum of human evolution. A positive biopolitics would embrace the artificiality of those transformations as part of the responsibility of human agency.

Bratton states: “Abstract intelligence is not some outside imposition from above. It emerged and evolved along with humans and other things that think. Therefore, I am equally suspicious of the sort of posthumanism that collapses sentience and sapience into an anti-rationalist, flat epistemology that seeks not to calibrate the relation between reason and world, but is instead a will to vegetablization: a dissolving of agency into flux and flow. Governance then, in the sense of steerage, is sacrificed.”

To mediate this creative tension, what is called for is a theory of governance that recognizes the promise of these transformations while affirming the autonomy of humans, albeit reconfigured through a new awareness, by striving to shape what we now understand as anthropo-technogenesis.

In the political theory of checks and balances, government is the positive and constitutional rule is the negative. The one is the capacity to act, the other to amend or arrest action that could lead to harmful consequences — the “katechon” concept from Greek antiquity of “withholding from becoming,” which I have written about before.

An ecology of mind, in the term of anthropologist Gregory Bateson, would encompass both by re-casting human agency not as the master, but as a responsible co-creator with other intelligences in the reality we are making together…

“The Evolution of What It Means To Be Human,” from Nathan Gardels and @bratton in @NoemaMag. Both the conversation and the article on which it is based are eminently worth reading in full.

Pair with: “Artificial Intelligence and the Noosphere” (from Robert Wright; for which, a ToTH to friend MK): a very optimistic take on a possible future that could emerge from the dynamic that Bratton outlines. Worth reading and considering; his visions of the socioeconomic and spiritual bounties-to-come are certainly enticing.

That said, I’ll just suggest that, even if AI is ultimately as capable as many assume it can/will be– by no means a sure thing– unless we address the kinds of issues raised in last week’s (R)D on this same general subject (“Without reflection, we go blindly on our way”), we’ll never get to Bratton’s (and Wright’s) happy place…  The same kinds of things that Bratton implicitly and Wright explicitly are mooting for AI (as a knitter of minds in a noosphere) could have been said– indeed, were said– for computer networking, then for the web, then for social media…  In the event, they did knit– but not so much in the interest of blissful, enabling sharing and growth; rather, as the tools of rapacious commercial interests (cf. Cory Doctorow’s “enshittification”) and/or authoritarians (cf. China or Russia or…). Seems to me that in the long run, if we can rein in capitalism and authoritarians: maybe.  In the foreseeable future: if only…

* Rainer Maria Rilke

###

As we contemplate collaboration, we might send mysterious birthday greetings to Alexius Meinong; he was born on this date in 1853. A philosopher, he is known for his unique ontology and for contributions to the philosophy of mind and axiology– the theory of value.

Meinong’s ontology is notable for positing nonexistent objects. He distinguished several levels of reality among objects and facts about them: existent objects participate in actual (true) facts about the world; subsistent (real but non-existent) objects appear in possible (but false) facts; and objects that neither exist nor subsist can only belong to impossible facts. See his Gegenstandstheorie, or theory of objects.

source

“It is well to remember that the entire universe, with one trifling exception, is composed of others”*…

This artist’s impression shows the temperate planet Ross 128 b, with its red dwarf parent star in the background. Credit: ESO/M. Kornmesser

For centuries, scientific discoveries have suggested humanity occupies no privileged place in the universe. But as Mario Livio argues, studies of worlds beyond our solar system could place meaningful new limits on our existential mediocrity…

When the Polish polymath Nicolaus Copernicus proposed in 1543 that the sun, rather than the Earth, was the center of our solar system, he did more than resurrect the “heliocentric” model that had been devised (and largely forgotten) some 18 centuries earlier by the Greek astronomer Aristarchus of Samos. Copernicus—or, rather, the “Copernican principle” that bears his name—tells us that we humans are nothing special. Or, at least, that the planet on which we live is not central to anything outside of us; instead, it’s just another ordinary world revolving around a star.

Our apparent mediocrity has only ascended in the centuries that have passed since Copernicus’s suggestion. In the middle of the 19th century Charles Darwin realized that rather than being the “crown of creation,” humans are simply a natural product of evolution by means of natural selection. Early in the 20th century, astronomer Harlow Shapley deepened our Copernican cosmic demotion, showing that not only the Earth but the whole solar system lacks centrality, residing in the Milky Way’s sleepy outer suburbs rather than the comparatively bustling galactic center. A few years later, astronomer Edwin Hubble showed that galaxies other than the Milky Way exist, and current estimates put the total number of galaxies in the observable universe at a staggering trillion or more.

Since 1995 we have discovered that even within our own Milky Way roughly one of every five sunlike or smaller stars harbors an Earth-size world orbiting in a “Goldilocks” region (neither too hot nor too cold) where liquid water may persist on a rocky planetary surface. This suggests there are at least a few hundred million planets in the Milky Way alone that may in principle be habitable. In roughly the same span of time, observations of the big bang’s afterglow—the cosmic microwave background—have shown that even the ordinary atomic matter that forms planets and people alike constitutes no more than 5 percent of the cosmic mass and energy budget. With each advance in our knowledge, our entire existence retreats from any possible pinnacle, seemingly reduced to flotsam adrift at the universe’s margins.

Believe it or not, the Copernican principle doesn’t even end there. In recent years increasing numbers of physicists and cosmologists have begun to suspect—often against their most fervent hopes—that our entire universe may be but one member of a mind-numbingly huge ensemble of universes: a multiverse.

Interestingly though, if a multiverse truly exists, it also suggests that Copernican cosmic humility can only be taken so far.

The implications of the Copernican principle may sound depressing to anyone who prefers a view of the world regarding humankind as the central or most important element of existence, but notice that every step along the way in extending the Copernican principle represented a major human discovery. That is, each decrease in the sense of our own physical significance was the result of a huge expansion in our knowledge. The Copernican principle teaches us humility, yes, but it also reminds us to keep our curiosity and passion for exploration alive and vibrant…

Fascinating: “How Far Should We Take Our Cosmic Humility?”, from @Mario_Livio in @sciam.

* John Holmes (the poet)

###

As we ponder our place, we might send carefully-observed birthday greetings to Arno Penzias; he was born on this date in 1933. A physicist and radio astronomer, he and Robert Wilson, a colleague at Bell Labs, discovered the cosmic microwave background radiation, which helped establish the Big Bang theory of cosmology– work for which they shared the 1978 Nobel Prize in Physics.

CMB radiation is something that anyone old enough to have watched broadcast (that’s to say, pre-cable/streaming) television has seen:

The way a television works is relatively simple. A powerful electromagnetic wave is transmitted by a tower, where it can be received by a properly sized antenna oriented in the correct direction. That wave has additional signals superimposed atop it, corresponding to audio and visual information that had been encoded. By receiving that information and translating it into the proper format (speakers for producing sound and cathode rays for producing light), we were able to receive and enjoy broadcast programming right in the comfort of our own homes for the first time. Different channels broadcasted at different wavelengths, giving viewers multiple options simply by turning a dial.

Unless, that is, you turned the dial to channel 03.

Channel 03 was — and if you can dig up an old television set, still is — simply a signal that appears to us as “static” or “snow.” That “snow” you see on your television comes from a combination of all sorts of sources:

– human-made radio transmissions,

– the Sun,

– black holes,

– and all sorts of other directional astrophysical phenomena like pulsars, cosmic rays and more.

But if you were able to either block all of those other signals out, or simply took them into account and subtracted them out, a signal would still remain. It would only be about 1% of the total “snow” signal that you see, but there would be no way of removing it. When you watch channel 03, 1% of what you’re watching comes from the Big Bang’s leftover glow. You are literally watching the cosmic microwave background…

This Is How Your Old Television Set Can Prove The Big Bang
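By way of illustration only – a minimal sketch, not a measurement: the relative power levels below are made-up placeholders (the excerpt gives only the ~1% CMB figure) – here is the logic of that last paragraph in code: sum many independent noise sources into “snow,” subtract the ones you can identify, and a small, irreducible residual remains.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # samples of broadband "snow" on an empty channel

# Assumed relative noise powers (illustrative only; only the ~1% CMB share
# comes from the excerpt, the rest are invented for demonstration):
powers = {"human-made": 0.60, "sun": 0.25, "other astro": 0.14, "cmb": 0.01}

# Independent Gaussian noise for each source; variance = assumed power.
components = {name: rng.normal(0.0, np.sqrt(p), n) for name, p in powers.items()}
snow = sum(components.values())  # what the antenna picks up as static

# "Subtract out" every source we can identify; the CMB residual is what's left.
residual = snow - sum(v for k, v in components.items() if k != "cmb")

print(f"total 'snow' power:  {snow.var():.3f}")
print(f"residual power:      {residual.var():.4f}")
print(f"residual fraction:   {residual.var() / snow.var():.1%}")  # ~1%
```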

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…

There is a wide range of opinions on AI and what it might portend. While artificial intelligence has its skeptics, and some argue that we should slow its development, AI is here, and it’s only getting warmed up (cf. Ezra Klein’s “This Changes Everything”).

As applications multiply (and get more sophisticated), there’s an understandable concern about its impact on employment. While tools like ChatGPT and DALL·E 2 are roiling the creative sphere, many economists are looking more broadly…

Like many revolutionary technologies before it, AI is likely to eliminate jobs. But, as has been the case in the past, experts argue, AI will likely offset much of that by spurring the creation of new jobs in addition to enhancing many existing jobs. The big question is: what sort of jobs?

“AI will wipe out a lot of current jobs, as has happened with all past technologies,” said Lawrence Katz, a labor economist at Harvard. “But I have no reason to think that AI and robots won’t continue changing the mix of jobs. The question is: will the change in the mix of jobs exacerbate existing inequalities? Will AI raise productivity so much that even as it displaces a lot of jobs, it creates new ones and raises living standards?”

Anu Madgavkar, who leads labor market research at the McKinsey Global Institute, estimates that one in four workers in the US are going to see more AI and technology adopted in their jobs. She said 50-60% of companies say they are pursuing AI-related projects. “So one way or the other people are going to have to learn to work with AI,” Madgavkar said.

While past rounds of automation affected factory jobs most, Madgavkar said that AI will hit white-collar jobs most. “It’s increasingly going into office-based work and customer service and sales,” she said. “They are the job categories that will have the highest rate of automation adoption and the biggest displacement. These workers will have to work with it or move into different skills.”…

US experts warn AI likely to kill off jobs – and widen wealth inequality

But most of these visions are rooted in an appreciation of what AI can currently do (and the likely extensions of those capabilities). What if AI develops in startling, discontinuous ways– what if it exhibits “emergence”?…

… Recent investigations… have revealed that LLMs (large language models) can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. They range from multiplication to generating executable computer code to, apparently, decoding movies based on emojis. New analyses suggest that for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.)

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes…

The Unpredictable Abilities Emerging From Large AI Models

Perhaps we should be thinking about AI not just functionally, but also philosophically…

The development of Artificial Intelligence is a scientific and engineering project, but it’s also a philosophical one. Lingering debates in the philosophy of mind have the potential to be substantially demystified, if not outright resolved, through the creation of artificial minds that parallel capabilities once thought to be the exclusive province of the human brain.

And since our brain is how we know and interface with the world more generally, understanding how the mind works can shed light on every other corner of philosophy as well, from epistemology to metaethics. My view is thus the exact opposite of Noam Chomsky’s, who argues that the success of Large Language Models is of limited scientific or philosophical import, since such models ultimately reduce to giant inscrutable matrices. On the contrary, the discovery that giant inscrutable matrices can, under the right circumstances, do many things that otherwise require a biological brain is itself a striking empirical datum — one Chomsky chooses to simply dismiss a priori.

Biological brains differ in important ways from artificial neural networks, but the fact that the latter can emulate the capacities of the former really does contribute to human self-understanding. For one, it represents an independent line of evidence that the brain is indeed computational. But that’s just the tip of the iceberg. The success of LLMs may even help settle longstanding debates on the nature of meaning itself…

We’re all Wittgensteinians now

And maybe we should be careful about “othering” AI (or, for that matter, any of the other forms of intelligence that surround us)…

I don’t think there is such a thing as an artificial intelligence. There are multiple intelligences, many ways of doing intelligence. What I envisage to be more useful and interesting than artificial intelligence as we currently conceive of it—which is this incredibly reduced version of human intelligence— is something more distributed, more widely empowered, and more diverse than singular intelligence would allow for. It’s actually a conversation between multiple intelligences, focused on some narrow goals. I have a new, very long-term, very nascent project I’m calling Server Farm. And the vision of Server Farm is to create a setting in which multiple intelligences could work on a problem together. Those intelligences would be drawn from all different kinds of life. That could include computers, but it could also include fungi and plants and animals in some kind of information-sharing processing arrangement. The point is that it would involve more than one kind of thinking, happening in dialogue and relationship with each other.

James Bridle, “There’s Nothing Unnatural About a Computer”

In the end, Tyler Cowen suggests, we should keep developing AI…

…what kind of civilization is it that turns away from the challenge of dealing with more…intelligence?  That has not the self-confidence to confidently confront a big dose of more intelligence?  Dare I wonder if such societies might not perish under their current watch, with or without AI?  Do you really want to press the button, giving us that kind of American civilization?…

We should take the plunge.  We already have taken the plunge.  We designed/tolerated our decentralized society so we could take the plunge…

Existential risk, AI, and the inevitable turn in human history

Still, we’re human, and we would do well, Samuel Arbesman suggests, to use the best of our human “tools”– the humanities– to understand AI…

So go study the concepts of narrative technique and use them to elucidate the behavior of LLMs. Or examine the rhetorical devices that writers and speakers have been using for millennia—and which GPT models have imbibed—and figure out how to use their “physical” principles in relating to these language models.

Ultimately, we need a deeper kind of cultural and humanistic competence, one that doesn’t just vaguely gesture at certain parts of history or specific literary styles. It’s still early days, but we need more of this thinking. To quote Hollis Robbins again: “Nobody yet knows what cultural competence will be in the AI era.” But we must begin to work this out.

AI, Semiotic Physics, and the Opcodes of Story World

All of which is to suggest that we are faced with a future that may well contain currently-unimaginable capabilities, that can accrue as threats or (and) as opportunities. So, as the estimable Jaron Lanier reminds us, we need to remain centered…

“From my perspective,” he says, “the danger isn’t that a new alien entity will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially.”…

The way to ensure that we are sufficiently sane to survive is to remember it’s our humanness that makes us unique…

Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’

All of the above-sampled pieces are eminently worth reading in full.

Apposite (and offered without comment): Theta Noir

[Image above: source]

* Alan Kay

###

As we ponder progress, we might recall that it was on this date in 1979 that operators failed to notice that a relief valve was stuck open in the primary coolant system of Three Mile Island’s Unit 2 nuclear reactor following an unexpected shutdown. Consequently, enough coolant drained out of the system to allow the core to overheat and partially melt down– the worst commercial nuclear accident in American history.

Three Mile Island Nuclear Power Plant, near Harrisburg, PA

“Human history seems to me to be one long story of people sweeping down—or up, I suppose—replacing other people in the process”*…

Max Roser argues that, if we keep each other safe – and protect ourselves from the risks that nature and we ourselves pose – we are only at the beginning of human history…

… The development of powerful technology gives us the chance to survive for much longer than a typical mammalian species.

Our planet might remain habitable for roughly a billion years. If we survive as long as the Earth stays habitable, and based on the scenario above, this would be a future in which 125 quadrillion children will be born. A quadrillion is a 1 followed by 15 zeros: 1,000,000,000,000,000.

A billion years is a thousand times longer than the million years depicted in this chart. Even very slow moving changes will entirely transform our planet over such a long stretch of time: a billion years is a timespan in which the world will go through several supercontinent cycles – the world’s continents will collide and drift apart repeatedly; new mountain ranges will form and then erode, the oceans we are familiar with will disappear and new ones open up…

… the future is big. If we keep each other safe the huge majority of humans who will ever live will live in the future.

And this requires us to be more careful and considerate than we currently are. Just as we look back to the heroes who achieved what we enjoy today, those who come after us will remember what we did for them. We will be the ancestors of a very large number of people. Let’s make sure we are good ancestors…

If we manage to avoid a large catastrophe, we are living at the early beginnings of human history: “The Future is Vast,” from @MaxCRoser @OurWorldInData.
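For what it’s worth, Roser’s headline figure is easy to sanity-check – a minimal back-of-the-envelope sketch, assuming (roughly as his scenario does) that births continue at something like the recent global rate of ~125 million per year for the ~1 billion years Earth may remain habitable:

```python
# Back-of-the-envelope check of the "125 quadrillion future births" figure.
# Both inputs are assumptions, not data from the excerpt itself.

births_per_year = 125_000_000    # assumed: roughly the recent global birth rate
habitable_years = 1_000_000_000  # assumed: ~1 billion years of remaining habitability

future_births = births_per_year * habitable_years
print(f"{future_births:.3e} future births")         # ~1.250e+17
print(f"= {future_births / 1e15:.0f} quadrillion")  # ~125 quadrillion
```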

* Alexander McCall Smith

###

As we take the long view, we might recall that it was on this date in 1915 that Mary Mallon, “Typhoid Mary,” was put in quarantine on North Brother Island, in New York City, where she was isolated until she died in 1938.  She was the first person in the United States identified as an asymptomatic carrier of the pathogen associated with typhoid fever… before which, she first inadvertently, then knowingly spread typhoid for years while working as a cook in the New York area.

Mallon had previously been identified as a carrier (in 1907) and quarantined for three years, after which she was set free on the condition that she change her occupation and embrace good hygiene habits. But after working a lower-paying job as a laundress, Mary changed her last name to Brown and returned to cooking… and over the next five years the infectious cycle returned, until she was identified and put back into quarantine.

source

“People are trapped in history and history is trapped in them”*…

The late David Graeber (with his co-author David Wengrow) left one last book; William Deresiewicz gives us an early look…

Many years ago, when I was a junior professor at Yale, I cold-called a colleague in the anthropology department for assistance with a project I was working on. I didn’t know anything about the guy; I just selected him because he was young, and therefore, I figured, more likely to agree to talk.

Five minutes into our lunch, I realized that I was in the presence of a genius. Not an extremely intelligent person—a genius. There’s a qualitative difference. The individual across the table seemed to belong to a different order of being from me, like a visitor from a higher dimension. I had never experienced anything like it before. I quickly went from trying to keep up with him, to hanging on for dear life, to simply sitting there in wonder.

That person was David Graeber. In the 20 years after our lunch, he published two books; was let go by Yale despite a stellar record (a move universally attributed to his radical politics); published two more books; got a job at Goldsmiths, University of London; published four more books, including Debt: The First 5,000 Years, a magisterial revisionary history of human society from Sumer to the present; got a job at the London School of Economics; published two more books and co-wrote a third; and established himself not only as among the foremost social thinkers of our time—blazingly original, stunningly wide-ranging, impossibly well read—but also as an organizer and intellectual leader of the activist left on both sides of the Atlantic, credited, among other things, with helping launch the Occupy movement and coin its slogan, “We are the 99 percent.”

On September 2, 2020, at the age of 59, David Graeber died of necrotizing pancreatitis while on vacation in Venice. The news hit me like a blow. How many books have we lost, I thought, that will never get written now? How many insights, how much wisdom, will remain forever unexpressed? The appearance of The Dawn of Everything: A New History of Humanity is thus bittersweet, at once a final, unexpected gift and a reminder of what might have been. In his foreword, Graeber’s co-author, David Wengrow, an archaeologist at University College London, mentions that the two had planned no fewer than three sequels.

And what a gift it is, no less ambitious a project than its subtitle claims. The Dawn of Everything is written against the conventional account of human social history as first developed by Hobbes and Rousseau; elaborated by subsequent thinkers; popularized today by the likes of Jared Diamond, Yuval Noah Harari, and Steven Pinker; and accepted more or less universally. The story goes like this. Once upon a time, human beings lived in small, egalitarian bands of hunter-gatherers (the so-called state of nature). Then came the invention of agriculture, which led to surplus production and thus to population growth as well as private property. Bands swelled to tribes, and increasing scale required increasing organization: stratification, specialization; chiefs, warriors, holy men.

Eventually, cities emerged, and with them, civilization—literacy, philosophy, astronomy; hierarchies of wealth, status, and power; the first kingdoms and empires. Flash forward a few thousand years, and with science, capitalism, and the Industrial Revolution, we witness the creation of the modern bureaucratic state. The story is linear (the stages are followed in order, with no going back), uniform (they are followed the same way everywhere), progressive (the stages are “stages” in the first place, leading from lower to higher, more primitive to more sophisticated), deterministic (development is driven by technology, not human choice), and teleological (the process culminates in us).

It is also, according to Graeber and Wengrow, completely wrong. Drawing on a wealth of recent archaeological discoveries that span the globe, as well as deep reading in often neglected historical sources (their bibliography runs to 63 pages), the two dismantle not only every element of the received account but also the assumptions that it rests on. Yes, we’ve had bands, tribes, cities, and states; agriculture, inequality, and bureaucracy, but what each of these were, how they developed, and how we got from one to the next—all this and more, the authors comprehensively rewrite. More important, they demolish the idea that human beings are passive objects of material forces, moving helplessly along a technological conveyor belt that takes us from the Serengeti to the DMV. We’ve had choices, they show, and we’ve made them. Graeber and Wengrow offer a history of the past 30,000 years that is not only wildly different from anything we’re used to, but also far more interesting: textured, surprising, paradoxical, inspiring…

A brilliant new account upends bedrock assumptions about 30,000 years of change: “Human History Gets a Rewrite,” @WDeresiewicz introduces the newest– and last?– book from @davidgraeber and @davidwengrow. Eminently worth reading in full.

* James Baldwin

###

As we reinterpret, we might spare a thought for Vic Allen; he died on this date in 2014. A British human rights activist, political prisoner, sociologist, historian, economist, and professor at the University of Leeds, he worked closely with British trade unions and was considered a key player in the resistance against Apartheid in South Africa. He spent much of his life supporting the South African National Union of Mineworkers (NUM) and was a key mentor to British trade union leader Arthur Scargill. In 2010, Allen was awarded the Kgao ya Bahale award, the NUM’s highest honor. After his death he was widely commended by his fellow academics and activists for his lifelong commitment to workers’ rights and racial equality.

source
