(Roughly) Daily


“The historian of science may be tempted to exclaim that when paradigms change, the world itself changes with them”*…

What we now call AI has gone through a series of paradigm shifts, and there appears to be no end in sight. Ashlee Vance shares an anecdote that suggests that AI might itself be an agent (perhaps the agent) of a broader paradigm shift (or shifts)…

AI madness is upon many of us, and it can take different forms. In August 2024, for example, I stumbled upon a post from a 20-year-old who had built a nuclear fusor [see here] in his home with a bunch of mail-ordered parts. More to the point, he’d done this while under the tutelage of Anthropic’s Claude AI service…

… The guy who built the fusor in question, Hudhayfa Nazoordeen, better known as HudZah on the internet, was a math student on his summer break from the University of Waterloo. I reached out and asked to see his experiment in person partly because it seemed weird and interesting and partly because it seemed to say something about AI technology and how some people are going to be in for a very uncomfortable time in short order.

A couple days after the fusor posts hit X, I showed up at Nazoordeen’s front door, a typical Victorian in San Francisco’s Lower Haight neighborhood. Nazoordeen, a tall, skinny dude with lots of energy and the gesticulations to match, had been crashing there for the summer with a bunch of his university friends as they tried to soak in the start-up and AI lifestyle. Decades ago, these same kids might have yearned to catch Jerry Garcia and The Dead playing their first gigs or to happen upon an Acid Test. This Waterloo set, though, had a different agenda. They were turned on and LLMed up.

Like many of the Victorian-style homes in the city, this one had a long hallway that stretched from the front door to the kitchen with bedrooms jutting off on both sides. The wooden flooring had been blackened in the center from years of foot traffic, but that was not the first thing anyone would notice. Instead, they’d see the mass of electrical cables that were 10-, 25- and sometimes 50-feet long and coming out of each room and leading to somewhere else in the house.

One of the cables powered a series of mind-reading experiments. Someone in the house, Nazoordeen said, had built his own electroencephalogram (EEG) device for measuring brain activity and had been testing it out on houseguests for weeks. Most of the cables, though, were there to feed GPU clusters, the computing systems filled with graphics chips (often designed by Nvidia) that have powered the recent AI boom. You’d follow a cable from one room to another and end up in front of a black box on the floor. All across San Francisco, I imagined, twenty-somethings were gathered around similar GPU altars to try out their ideas…

Vance tells HudZah’s story, recounts the building of his fusor, explains Claude’s (sometimes reluctant) role, and raises the all-too-legitimate safety questions the experiment poses… though in fairness, one might note that the web is rife with instructions for building a fusor, e.g., here, here, and here, some of which encouraged HudZah.

But in the end, the takeaway for Vance was not the product, but the process…

I must admit, though, that the thing that scared me most about HudZah was that he seemed to be living in a different technological universe than I was. If the previous generation were digital natives, HudZah was an AI native.

HudZah enjoys reading the old-fashioned way, but he now finds that he gets more out of the experience by reading alongside an AI. He puts PDFs of books into Claude or ChatGPT and then queries the books as he moves through the text. He uses Granola to listen in on meetings so that he can query an AI after the chats as well. His friend built Globe Explorer, which can instantly break down, say, the history of rockets, as if you had a professional researcher at your disposal. And, of course, HudZah has all manner of AI tools for coding and interacting with his computer via voice.

It’s not that I don’t use these things. I do. It’s more that I was watching HudZah navigate his laptop with an AI fluency that felt alarming to me. He was using his computer in a much, much different way than I’d seen someone use their computer before, and it made me feel old and alarmed by the number of new tools at our disposal and how HudZah intuitively knew how to tame them.

It also excited me. Just spending a couple of hours with HudZah left me convinced that we’re on the verge of someone, somewhere creating a new type of computer with AI built into its core. I believe that laptops and PCs will give way to a more novel device rather soon.

I’m not sure that people know what’s coming for them. You’re either with the AIs now and really learning how to use them or you’re getting left behind in a profound way. Obviously, these situations follow every major technology transition, but I’m a very tech-forward person, and there were things HudZah could accomplish on his machine that gave off alien vibes to me. So, er, like, good luck if you’re not paying attention to this stuff.

After doing his AI and fusor show for me, HudZah gave me a tour of the house. Most of his roommates had already bailed out and returned to Canada. He was left to clean up the mess, which included piles of beer cans and bottles of booze in the backyard from a last hurrah.

The AI housemates had also left some gold panning equipment in a bathtub. At some point during the summer, they had decided to grab “a shit ton of sand from a nearby creek” and work it over in their communal bathroom for fun.

I’m honestly not sure what the takeaway there was exactly other than that something profound happened to the Bay Area brain in 1849, and it’s still doing its thing…

Goodbye, Digital Natives; hello, AI Natives: “A Young Man Used AI to Build A Nuclear Fusor and Now I Must Weep,” from @ashleevance. Eminently worth reading in full.

And for a look at one attempt to understand what may be the emerging new paradigm(s) of which AI may be a motive part, see Benjamin Bratton‘s explanation of the work he and his colleagues are doing at a new institute at UCSD: “Antikythera.” See his recent Long Now Foundation talk on this same subject here.

On the other hand: “The Future Is Too Easy” (gift article) by David Roth in the always-illuminating Defector.

(Image above: source)

* Thomas Kuhn

###

As we ponder progress, we might spare a thought for Johannes Gutenberg; he died on this date in 1468. A craftsman and inventor, he developed the movable-type printing press. (Though movable type was already in use in East Asia, Gutenberg’s printing press enabled a much faster rate of printing.)

The printing press spread across the world and led to an information revolution and the unprecedented mass dissemination of literature throughout Europe. It was a profound enabler of the arts and the sciences of the Renaissance, of the Reformation (and Counter-Reformation), and of humanist movements… which is to say that it contributed to a series of paradigm shifts.

source

“To understand anything, you just need to understand the little bits”*…

Oscar Schwartz begs to differ. Here, excerpts from his provocative critique of TED Talks…

Bill Gates wheels a hefty metal barrel out onto a stage. He carefully places it down and then faces the audience, which sits silent in a darkened theater. “When I was a kid, the disaster we worried about most was a nuclear war,” he begins. Gates is speaking at TED’s flagship conference, held in Vancouver in 2015. He wears a salmon pink sweater, and his hair is combed down over his forehead, Caesar-style. “That’s why we had a barrel like this down in our basement, filled with cans of food and water,” he says. “When the nuclear attack came, we were supposed to go downstairs, hunker down, and eat out of that barrel.”

Now that he is an adult, Gates continues, it is no longer nuclear apocalypse that scares him, but pestilence. A year ago, Ebola killed over ten thousand people in West Africa. If the virus had been airborne or spread to a large city center, things would have been far worse. It might’ve snowballed into a pandemic and killed tens of millions of people. Gates tells the TED attendees that humanity is not ready for this scenario — that a pandemic would trigger a global catastrophe at an unimaginable scale. We have no basement to retreat to and no metal barrel filled with supplies to rely on. 

But, Gates adds, the future might turn out okay. He has an idea. Back when he was a kid, the U.S. military had sufficient funding to mobilize for war at any minute. Gates says that we must prepare for a pandemic with the same fearful intensity. We need to build a medical reserve corps. We need to play germ games like generals play war games. We need to make alliances with other virus-fighting nations. We need to build an arsenal of biomedical weapons to attack any non-human entity that might attack our bodies. “If we start now, we can be ready for the next epidemic,” Gates concludes, to a round of applause. 

Of course, Gates’s popular and well-shared TED talk — viewed millions of times — didn’t alter the course of history. Neither did any of the other “ideas worth spreading” (the organization’s tagline) presented at the TED conference that year — including Monica Lewinsky’s massively viral speech about how to stop online bullying through compassion and empathy, or a Google engineer’s talk about how driverless cars would make roads smarter and safer in the near future. In fact, seven years after TED 2015, it feels like we are living in a reality that is the exact opposite of the future envisioned that year. A president took office in part because of his talent for online bullying. Driverless cars are nowhere near as widespread as predicted, and those that do share our roads keep crashing. Covid has killed five million people and counting. 

At the start of the pandemic, I noticed people sharing Gates’s 2015 talk. The general sentiment was one of remorse and lamentation: the tech-prophet had predicted the future for us! If only we had heeded his warning! I wasn’t so sure. It seems to me that Gates’s prediction and proposed solution are at least part of what landed us here. I don’t mean to suggest that Gates’s TED talk is somehow directly responsible for the lack of global preparedness for Covid. But it embodies a certain story about “the future” that TED talks have been telling for the past two decades — one that has contributed to our unending present crisis.

The story goes like this: there are problems in the world that make the future a scary prospect. Fortunately, though, there are solutions to each of these problems, and the solutions have been formulated by extremely smart, tech-adjacent people. For their ideas to become realities, they merely need to be articulated and spread as widely as possible. And the best way to spread ideas is through stories — hence Gates’s opening anecdote about the barrel. In other words, in the TED episteme, the function of a story isn’t to transform via metaphor or indirection, but to actually manifest a new world. Stories about the future create the future. Or as Chris Anderson, TED’s longtime curator, puts it, “We live in an era where the best way to make a dent on the world… may be simply to stand up and say something.” And yet, TED’s archive is a graveyard of ideas. It is a seemingly endless index of stories about the future — the future of science, the future of the environment, the future of work, the future of love and sex, the future of what it means to be human — that never materialized. By this measure alone, TED, and its attendant ways of thinking, should have been abandoned…

… TED talks began to take on a distinct rhetorical style, later laid out in Anderson’s book TED Talks: The Official TED Guide to Public Speaking. In it, Anderson insists anyone is capable of giving a TED-esque talk. You just need an interesting topic and then you need to attach that topic to an inspirational story. Robots are interesting. Using them to eat trash in Nairobi is inspiring. Put the two together, and you have a TED talk.

I like to call this fusion “the inspiresting.” Stylistically, the inspiresting is earnest and contrived. It is smart but not quite intellectual, personal but not sincere, jokey but not funny. It is an aesthetic of populist elitism. Politically, the inspiresting performs a certain kind of progressivism, as it is concerned with making the world a better place, however vaguely…

Perhaps the most incisive critique came, ironically, at a 2013 TEDx conference. In “What’s Wrong with TED Talks?” media theorist Benjamin Bratton told a story about a friend of his, an astrophysicist, who gave a complex presentation on his research before a donor, hoping to secure funding. When he was finished, the donor decided to pass on the project. “I’m just not inspired,” he told the astrophysicist. “You should be more like Malcolm Gladwell.” Bratton was outraged. He felt that the rhetorical style TED helped popularize was “middlebrow megachurch infotainment,” and had begun to directly influence the type of intellectual work that could be undertaken. If the research wasn’t entertaining or moving, it was seen as somehow less valuable. TED’s influence on intellectual culture was “taking something with value and substance and coring it out so that it can be swallowed without chewing,” Bratton said. “This is not the solution to our most frightening problems — rather, this is one of our most frightening problems.” (Online, his talk proved to be one of many ideas worth spreading. “This is by far the most interesting and challenging thing I’ve heard on TED,” one commenter posted. “Very glad to come across it!”)…

Some thoughts on the “inspiresting”: “What Was the TED Talk?​” from @scarschwartz in @thedrift_mag.

* Chris Anderson, proprietor and curator of TED

###

As we unchain our curiosity, we might send ruthlessly curious (and immensely entertaining) birthday greetings to Martin Gardner; he was born on this date in 1914. Though not an academic, nor ever a formal student of math or science, he wrote widely and prolifically on both subjects in such popular books as The Ambidextrous Universe and The Relativity Explosion and as the “Mathematical Games” columnist for Scientific American. Indeed, his elegant– and understandable– puzzles delighted professional and amateur readers alike, and helped inspire a generation of young mathematicians.

Gardner’s interests were wide; in addition to the math and science that were his power alley, he studied and wrote on topics that included magic, philosophy, religion, and literature (cf., especially, his work on Lewis Carroll– including the delightful Annotated Alice– and on G.K. Chesterton). And he was a fierce debunker of pseudoscience: a founding member of CSICOP, and contributor of a regular column (“Notes of a Fringe Watcher,” from 1983 to 2002) in Skeptical Inquirer, that organization’s magazine.

Gardner died in 2010, having never given a TED Talk.

source

“Everything / is not itself”*…

Toward an ecology of mind: Nathan Gardels talks with Benjamin Bratton about his recent article, “Post-Anthropocene Humanism: Cultivating the ‘third space’ where nature, technology, and human autonomy meet“…

The reality we sense is not fixed or static, but, as Carlo Rovelli puts it, a “momentary get together on the sand.” For the quantum physicist, all reality is an ever-shifting interaction of manifold influences, each determining the other, which converge or dissolve under the conditions at a particular time and space that is always in flux…

The human, too, can be seen this way as a node of ever-changing interactions with the natural cosmos and the environment humans themselves have formed through technology and culture. What it means to be human, then, is not a constant, but continually constituted, altered and re-constituted through the recursive interface with an open and evolving world.

This is the view, at least, of Benjamin Bratton, a philosopher of technology who directs the Berggruen Institute’s Antikythera project to investigate the impact and potential of planetary-scale computation. To further explore the notion of “post-Anthropocene humanism” raised in a recent Noema essay, I asked him to weigh in on the nature of human being and becoming when anthropogenesis and technogenesis are one and the same process.

“I can’t accept the essentially reactionary claim that modern science erases ‘the Human.’ Demystification is not erasure. It may destabilize some ideas that humans have about what humans are, yes. But I see it more as a disclosure of what ‘humans’ always have been but could not perceive as such. It’s not that some essence of the Human goes away, but that humans are now a bit less wrong about what humans are,” he argues.

Bratton goes on: “Instead of science and technology leading to some ‘post-human’ condition, perhaps it will lead to a slightly more human condition? The figure we associate with modern European Humanism may be a fragile, if also a productive, philosophical concept. But dismantling the concept does not make the reality go away. Rather, it redefines it in the broader context of new understanding. In fact, that reality is more perceivable because the concept is made to dissolve.” 

How so? “The origins of human societies are revealed by archaeological pursuits. What is found is usually not the primal scene of some local cultural tradition but something much more alien and unsettling: human society as a physical process.”

All this would suggest, in Bratton’s view, “that cooperative social intelligence was not only the path to Anthropocene-scale agency for humans, but a reminder that the evolution of social intelligence literally shaped our bodies and biology, from the microbial ecologies inside of us to our tool-compatible phenotype. The Renaissance idea of Vitruvian Man, that we possess bodies and then engage the world through tools and intention, is somewhat backward. Instead, we possess bodies because of biotic and abiotic ‘technologization’ of us by the world, which we in turn accelerate through social cooperation.”

In short, one might say, it is not “I think therefore I am,” but, because the world is embedded in me, “thereby I am.” 

Bratton’s view has significant implications for how we see and approach the accelerating advances in science and technology.

A negative biopolitics, so to speak, would seek to limit the transformations underway in the name of a valued concept of the human born in a specific time and place on the continuum of human evolution. A positive biopolitics would embrace the artificiality of those transformations as part of the responsibility of human agency.

Bratton states: “Abstract intelligence is not some outside imposition from above. It emerged and evolved along with humans and other things that think. Therefore, I am equally suspicious of the sort of posthumanism that collapses sentience and sapience into an anti-rationalist, flat epistemology that seeks not to calibrate the relation between reason and world, but is instead a will to vegetablization: a dissolving of agency into flux and flow. Governance then, in the sense of steerage, is sacrificed.”

To mediate this creative tension, what is called for is a theory of governance that recognizes the promise while affirming the autonomy of humans, albeit reconfigured through a new awareness, by striving to shape what we now understand as anthropo-technogenesis.

In the political theory of checks and balances, government is the positive and constitutional rule is the negative. The one is the capacity to act, the other to amend or arrest action that could lead to harmful consequences — the “katechon” concept from Greek antiquity of “withholding from becoming,” which I have written about before.

An ecology of mind, to borrow anthropologist Gregory Bateson’s term, would encompass both by re-casting human agency not as the master, but as a responsible co-creator with other intelligences in the reality we are making together…

“The Evolution of What It Means To Be Human,” from Nathan Gardels and @bratton in @NoemaMag. Both the conversation and the article on which it is based are eminently worth reading in full.

Pair with: “Artificial Intelligence and the Noosphere” (from Robert Wright; for which, a ToTH to friend MK): a very optimistic take on a possible future that could emerge from the dynamic that Bratton outlines. Worth reading and considering; his visions of the socioeconomic and spiritual bounties-to-come are certainly enticing.

That said, I’ll just suggest that, even if AI is ultimately as capable as many assume it can/will be– by no means a sure thing– unless we address the kinds of issues raised in last week’s (R)D on this same general subject (“Without reflection, we go blindly on our way”), we’ll never get to Bratton’s (and Wright’s) happy place… The same kinds of things that Bratton implicitly and Wright explicitly are mooting for AI (as a knitter of minds in a noosphere) could have been said— were said— for computer networking, then for the web, then for social media… In the event, they knit— but not so much in the interest of blissful, enabling sharing and growth; rather, as the tools of rapacious commercial interests (cf. Cory Doctorow’s “enshittification”) and/or authoritarians (cf. China or Russia or…). It seems to me that in the long run, if we can rein in capitalism and authoritarians: maybe. In the foreseeable future: if only…

* Rainer Maria Rilke

###

As we contemplate collaboration, we might send mysterious birthday greetings to Alexius Meinong; he was born on this date in 1853. A philosopher, he is known for his unique ontology and for contributions to the philosophy of mind and axiology– the theory of value.

Meinong’s ontology is notable for its belief in nonexistent objects. He distinguished several levels of reality among objects and facts about them: existent objects participate in actual (true) facts about the world; subsistent (real but non-existent) objects appear in possible (but false) facts; and objects that neither exist nor subsist can only belong to impossible facts. See his Gegenstandstheorie, or the Theory of Abstract Objects.

source