Posts Tagged ‘AI’
“The historian of science may be tempted to exclaim that when paradigms change, the world itself changes with them”*…
What we now call AI has gone through a series of paradigm shifts, and there appears to be no end in sight. Ashlee Vance shares an anecdote that suggests that AI might itself be an agent (perhaps the agent) of a broader paradigm shift (or shifts)…
AI madness is upon many of us, and it can take different forms. In August 2024, for example, I stumbled upon a post from a 20-year-old who had built a nuclear fusor [see here] in his home with a bunch of mail-ordered parts. More to the point, he’d done this while under the tutelage of Anthropic’s Claude AI service…
… The guy who built the fusor in question, Hudhayfa Nazoordeen, better known as HudZah on the internet, was a math student on his summer break from the University of Waterloo. I reached out and asked to see his experiment in person partly because it seemed weird and interesting and partly because it seemed to say something about AI technology and how some people are going to be in for a very uncomfortable time in short order.
A couple days after the fusor posts hit X, I showed up at Nazoordeen’s front door, a typical Victorian in San Francisco’s Lower Haight neighborhood. Nazoordeen, a tall, skinny dude with lots of energy and the gesticulations to match, had been crashing there for the summer with a bunch of his university friends as they tried to soak in the start-up and AI lifestyle. Decades ago, these same kids might have yearned to catch Jerry Garcia and The Dead playing their first gigs or to happen upon an Acid Test. This Waterloo set, though, had a different agenda. They were turned on and LLMed up.
Like many of the Victorian-style homes in the city, this one had a long hallway that stretched from the front door to the kitchen with bedrooms jutting off on both sides. The wooden flooring had been blackened in the center from years of foot traffic, but that was not the first thing anyone would notice. Instead, they’d see the mass of electrical cables that were 10-, 25- and sometimes 50-feet long and coming out of each room and leading to somewhere else in the house.
One of the cables powered a series of mind-reading experiments. Someone in the house, Nazoordeen said, had built his own electroencephalogram (EEG) device for measuring brain activity and had been testing it out on houseguests for weeks. Most of the cables, though, were there to feed GPU clusters, the computing systems filled with graphics chips (often designed by Nvidia) that have powered the recent AI boom. You’d follow a cable from one room to another and end up in front of a black box on the floor. All across San Francisco, I imagined, twenty-somethings were gathered around similar GPU altars to try out their ideas…
Vance tells HudZah’s story, recounts the building of his fusor, explains Claude’s (sometimes reluctant) role, and raises the all-too-legitimate safety questions the experiment poses… though in fairness, one might note that the web is rife with instructions for building a fusor, e.g., here, here, and here, some of which encouraged HudZah.
But in the end, the takeaway for Vance was not the product, but the process…
I must admit, though, that the thing that scared me most about HudZah was that he seemed to be living in a different technological universe than I was. If the previous generation were digital natives, HudZah was an AI native.
HudZah enjoys reading the old-fashioned way, but he now finds that he gets more out of the experience by reading alongside an AI. He puts PDFs of books into Claude or ChatGPT and then queries the books as he moves through the text. He uses Granola to listen in on meetings so that he can query an AI after the chats as well. His friend built Globe Explorer, which can instantly break down, say, the history of rockets, as if you had a professional researcher at your disposal. And, of course, HudZah has all manner of AI tools for coding and interacting with his computer via voice.
It’s not that I don’t use these things. I do. It’s more that I was watching HudZah navigate his laptop with an AI fluency that felt alarming to me. He was using his computer in a much, much different way than I’d seen someone use their computer before, and it made me feel old and alarmed by the number of new tools at our disposal and how HudZah intuitively knew how to tame them.
It also excited me. Just spending a couple of hours with HudZah left me convinced that we’re on the verge of someone, somewhere creating a new type of computer with AI built into its core. I believe that laptops and PCs will give way to a more novel device rather soon.
I’m not sure that people know what’s coming for them. You’re either with the AIs now and really learning how to use them or you’re getting left behind in a profound way. Obviously, these situations follow every major technology transition, but I’m a very tech-forward person, and there were things HudZah could accomplish on his machine that gave off alien vibes to me. So, er, like, good luck if you’re not paying attention to this stuff.
After doing his AI and fusor show for me, HudZah gave me a tour of the house. Most of his roommates had already bailed out and returned to Canada. He was left to clean up the mess, which included piles of beer cans and bottles of booze in the backyard from a last hurrah.
The AI housemates had also left some gold panning equipment in a bathtub. At some point during the summer, they had decided to grab “a shit ton of sand from a nearby creek” and work it over in their communal bathroom for fun.
I’m honestly not sure what the takeaway there was exactly other than that something profound happened to the Bay Area brain in 1849, and it’s still doing its thing…
Goodbye, Digital Natives; hello, AI Natives: “A Young Man Used AI to Build A Nuclear Fusor and Now I Must Weep,” from @ashleevance. Eminently worth reading in full.
And for a look at one attempt to understand the emerging new paradigm(s) of which AI may be a motive part, see Benjamin Bratton’s explanation of the work he and his colleagues are doing at a new institute at UCSD: “Antikythera.” See his recent Long Now Foundation talk on this same subject here.
On the other hand: “The Future Is Too Easy” (gift article) by David Roth in the always-illuminating Defector.
(Image above: source)
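* Thomas Kuhn, The Structure of Scientific Revolutions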
###
As we ponder progress, we might spare a thought for Johannes Gutenberg; he died on this date in 1468. A craftsman and inventor, he developed the movable-type printing press. (Though movable type was already in use in East Asia, Gutenberg’s press enabled a much faster rate of printing.)
The printing press spread across the world and led to an information revolution and the unprecedented mass circulation of texts throughout Europe. It was a profound enabler of the arts and the sciences of the Renaissance, of the Reformation (and Counter-Reformation), and of humanist movements… which is to say that it contributed to a series of paradigm shifts.
“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…
Dan Davies took a ride in a silver machine…
A while ago, I was lucky enough to attend a presentation on a Google DeepMind project called “The Habermas Machine”. It’s a really intriguing use of the LLM technology – basically, you take a lot of people who disagree with each other and ask them what they think about an issue. Then you feed their answers into a model, which tries to produce a statement of minimal agreement that all of them might sign up to. They score the extent to which they do agree with it (which trains the model), and explain what it is that they don’t like about the statement. This second round allows the model to come up with another, better version, which also clarifies to the participants what the other side’s reasons are for disagreeing with them.
It’s called “The Habermas Machine” because it’s meant to, loosely speaking, do a similar job to Jürgen Habermas’ “Ideal Speech Environment.” In tests, there seems to be decent evidence that not only is the machine better than a human moderator at coming up with consensus statements, but that the machine-moderated process leads to more convergence of opinions among the actual participants. (I think I might have predicted this; the model obviously has a “flat” affect, and unlike a human being, isn’t always leaking clues from its intonation and body language about what it really thinks of the participants. That might suggest that as LLMs get better at simulating human responses, they might be worse for this purpose!)
There’s really a lot to say and think about this. But it’s Friday [as he wrote this] and I’m a facetious person, so instead I’m going to share the notes I’ve been making ever since seeing the presentation on which other philosophers and social theorists might also benefit from having machines made out of them.
The Giddens Machine – in accordance with the principle of double hermeneutics, it’s the Habermas Machine, but only for reaching agreement on interpretations of Habermas.
The Goffman Machine – after your side lost on the Habermas Machine, it comes along and generates a set of reasons why you shouldn’t feel so bad about that and should come back for another go.
The Bourdieu Machine – you type your views into it, and then it repeats them with slight and subtle adjustments to make you sound more middle class.
The Fourcade/Healy Machine – it gives you a score, then makes you do the work of finding out how to change your views so as to increase your score. Finding equilibrium for the machine is your job now.
The Gambetta Machine – instead of finding a consensus, it selects the most awful version of each conflicting view, and then everyone switches to that in order to show how committed they are.
The Austin Machine – instead of telling the machine “I agree with this statement”, you have to tick a box saying “I hereby agree with this statement”.
The Grice Machine – like the Habermas one, but via conversational implicature it aims to create consensus among all the views that you haven’t expressed rather than the ones you have.
The Derrida Machine – everyone keeps asserting the same statements, but the AI brings them into agreement by changing the meaning of the words themselves.
The Crenshaw Machine – in each round the machine finds a new issue to divide up the group in a different way. Equilibrium is reached when everyone realises they’re on their own and need to get along with each other anyway…
A wry exploration of the possibilities of AI: “Fully automated social theory,” from @dsquareddigest.bsky.social
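Setting the jokes aside for a moment: the actual process Davies describes reduces to a simple draft-and-redraft loop; collect opinions, have the model propose a consensus statement, gather scores and objections, and try again. Here is a minimal sketch in Python of that loop; the `ask_llm` stub, the scoring scale, and the stopping threshold are illustrative placeholders, not DeepMind’s implementation.

```python
# A minimal sketch of the Habermas Machine-style loop described above.
# Not DeepMind's code: ask_llm, the 1-5 scoring scale, and the stopping
# threshold are placeholders to make the shape of the process concrete.

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real API client here."""
    return "Draft consensus statement based on:\n" + prompt

def draft_statement(opinions: dict[str, str], critiques: dict[str, str] | None = None) -> str:
    """Ask the model for one statement all participants might endorse."""
    prompt = "Participants' views:\n"
    for person, view in opinions.items():
        prompt += f"- {person}: {view}\n"
    if critiques:
        prompt += "Objections to the previous draft:\n"
        for person, objection in critiques.items():
            prompt += f"- {person}: {objection}\n"
    prompt += "Write a single statement all participants could agree to."
    return ask_llm(prompt)

def mediate(opinions: dict[str, str], get_feedback, max_rounds: int = 3) -> str:
    """Iterate draft -> scores/objections -> redraft until agreement or max_rounds."""
    statement = draft_statement(opinions)
    for _ in range(max_rounds - 1):
        scores, critiques = get_feedback(statement)   # each participant rates 1-5 and objects
        if all(score >= 4 for score in scores.values()):
            break                                     # broad agreement reached
        statement = draft_statement(opinions, critiques)
    return statement

# Toy usage with canned feedback, just to show the control flow.
if __name__ == "__main__":
    views = {"A": "The city needs more bike lanes.",
             "B": "Streets are already too congested to lose car lanes."}
    def feedback(statement: str):
        return {"A": 3, "B": 3}, {"A": "Too vague.", "B": "Ignores congestion."}
    print(mediate(views, feedback))
```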
(Image above: source)
* Alan Kay
###
As we delegate discourse, we might recall that it was on this date in 1981 that the first production model of the DeLorean sports car rolled off the assembly line at the Dunmurry factory, located a few miles from Belfast City Centre in Northern Ireland.
“I like to think (it has to be) of a cybernetic ecology where we are free of our labors and joined back to nature, returned to our mammal brothers and sisters”*…
A.I. pioneer Dario Amodei with a positive scenario for artificial intelligence…
I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.
In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one…
How AI could transform the world for the better: “Machines of Loving Grace,” from @DarioAmodei. Eminently worth reading in full…
A (similarly positive, but slightly more focused) piece from a team at DeepMind: “AI for Science.”
Apposite (if not opposite): “Shoggoths amongst us,” from Henry Farrell, and an earlier (R)D, “We ceased to be the lunatic fringe. We’re now the lunatic core.”
See also: “AI Isn’t Your God—But It Might Be Your Intern.”
* Richard Brautigan, “All Watched Over By Machines Of Loving Grace” (the source of Amodei’s title)
###
As we ponder the perplexities of progress, we might send carefully-calculated birthday greetings to Grace Brewster Murray Hopper; she was born on this date in 1906. A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.
Hopper also (inadvertently) contributed one of the most ubiquitous metaphors in computer science: she found and documented the first computer “bug” (in 1947).
She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.

“One cannot conceive anything so strange and so implausible that it has not already been said by one philosopher or another”*…
Wisdom for the exquisite Existential Comics (“A philosophy comic about the inevitable anguish of living a brief life in an absurd world. Also jokes.”)…
Frege was an early philosopher of language, who formulated a theory of semantics that largely had to do with how we form truth propositions about the world. His theories were enormously influential for people like Russell, Carnap, and even Wittgenstein early in his career. They all recognized that the languages we use are ambiguous, so making exact determinations was always difficult. Most of them were logicians and mathematicians, and wanted to render ordinary language as exact and precise as mathematical language, so we could go about doing empirical science with perfect clarity. Russell, Carnap, and others even vowed to create an exact scientific language (narrator: “they didn’t create an exact scientific language”).
Later on, Wittgenstein and other philosophers such as J.L. Austin came to believe that a fundamental mistake was made about the nature of language itself. Language, they thought, doesn’t pick out truth propositions about the world at all. Speech acts were fundamentally no different than other actions, and were merely used in social situations to bring about certain effects. For example, in asking for a sandwich to be passed across the table, we do not pick out a certain set of facts about the world, we only utter the words with the expectations that it will cause certain behavior in others. Learning what is and isn’t a sandwich is more like learning the rules of a game than making declarations about what exists in the world, so for Wittgenstein, what is or isn’t a sandwich depends only on the success or failure of the word “sandwich” in a social context, regardless of what actual physical properties a sandwich has in common with, say, a hotdog.
“Is a Hotdog a Sandwich? A Definitive Study,” from @existentialcomics.com.
* René Descartes
###
As we add mayonnaise, we might send thoughtful birthday greetings to Norbert Wiener; he was born on this date in 1894. A computer scientist, mathematician, and philosopher, Wiener is considered the originator of cybernetics, the science of communication as it relates to living things and machines– a field with implications for a wide variety of disciplines, including engineering, systems control, computer science, biology, neuroscience, and philosophy. (Wiener credited Leibniz as the “patron saint of cybernetics.”)
His work heavily influenced computer pioneer John von Neumann, information theorist Claude Shannon, anthropologists Margaret Mead and Gregory Bateson, and many others. Wiener was one of the first to theorize that all intelligent behavior was the result of feedback mechanisms and could possibly be simulated by machines– an important early step towards the development of modern artificial intelligence.
“I fear the day when the technology overlaps with our humanity. The world will only have a generation of idiots.”*…
Alva Noë on the importance of humans hanging on to their humanity– for all the promise and dangers of AI, computers plainly can’t think. To think is to resist – something no machine does:
Computers don’t actually do anything. They don’t write, or play; they don’t even compute. Which doesn’t mean we can’t play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.
What could ever lead us to take seriously the thought that these devices of our own invention might actually understand, and think, and feel, or that, if not now, then later, they might one day come to open their artificial eyes thus finally to behold a shiny world of their very own? One source might simply be the sense that, now unleashed, AI is beyond our control. Fast, microscopic, distributed and astronomically complex, it is hard to understand this tech, and it is tempting to imagine that it has power over us.
But this is nothing new. The story of technology – from prehistory to now – has always been that of the ways we are entrained by the tools and systems that we ourselves have made. Think of the pathways we make by walking. To every tool there is a corresponding habit, that is, an automatised way of acting and being. From the humble pencil to the printing press to the internet, our human agency is enacted in part by the creation of social and technological landscapes that in turn transform what we can do, and so seem, or threaten, to govern and control us.
Yet it is one thing to appreciate the ways we make and remake ourselves through the cultural transformation of our worlds via tool use and technology, and another to mystify dumb matter put to work by us. If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.
But there is another origin of our impulse to concede mind to devices of our own invention, and this is what I focus on here: the tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life. They rely unchecked on one-sided, indeed, milquetoast conceptions of human activity, skill and cognitive accomplishment. The surreptitious substitution (to use a phrase of Edmund Husserl’s) of this thin gruel version of the mind at work – a substitution that I hope to convince you traces back to Alan Turing and the very origins of AI – is the decisive move in the conjuring trick.
What scientists seem to have forgotten is that the human animal is a creature of disturbance. Or as the mid-20th-century philosopher of biology Hans Jonas wrote: ‘Irritability is the germ, and as it were the atom, of having a world…’ With us there is always, so to speak, a pebble in the shoe. And this is what moves us, turns us, orients us to reorient ourselves, to do things differently, so that we might carry on. It is irritation and disorientation that is the source of our concern. In the absence of disturbance, there is nothing: no language, no games, no goals, no tasks, no world, no care, and so, yes, no consciousness…
[Starting with Turing, Noë considers the relative roles of humans and technology across a number of spheres, including music…]
… The piano was invented, to be sure, but not by you or me. We encounter it. It pre-exists us and solicits our submission. To learn to play is to be altered, made to adapt one’s posture, hands, fingers, legs and feet to the piano’s mechanical requirements. Under the regime of the piano keyboard, it is demanded that we ourselves become player pianos, that is to say, extensions of the machine itself.
But we can’t. And we won’t. To learn to play, to take on the machine, for us, is to struggle. It is hard to master the instrument’s demands.
And this fact – the difficulty we encounter in the face of the keyboard’s insistence – is productive. We make art out of it. It stops us being player pianos, but it is exactly what is required if we are to become piano players.
For it is the player’s fraught relation to the machine, and to the history and tradition that the machine imposes, that supplies the raw material of musical invention. Music and play happen in that entanglement. To master the piano, as only a person can, is not just to conform to the machine’s demands. It is, rather, to push back, to say no, to rage against the machine. And so, for example, we slap and bang and shout out. In this way, the piano becomes not merely a vehicle of habit and control – a mechanism – but rather an opportunity for action and expression.
And, as with the piano, so with the whole of human cultural life. We live in the entanglement between government and resistance. We fight back…
… The telling fact: computers are used to play our games; they are engineered to make moves in the spaces opened up by our concerns. They don’t have concerns of their own, and they make no new games. They invent no new language.
The British philosopher R G Collingwood noticed that the painter doesn’t invent painting, and the musician doesn’t invent the musical culture in which they find themselves. And for Collingwood this served to show that no person is fully autonomous, a God-like fount of creativity; we are always to some degree recyclers and samplers and, at our best, participants in something larger than ourselves.
But this should not be taken to show that we become what we are (painters, musicians, speakers) by doing what, for example, LLMs do – i.e., merely by getting trained up on large data sets. Humans aren’t trained up. We have experience. We learn. And for us, learning a language, for example, isn’t learning to generate ‘the next token’. It’s learning to work, play, eat, love, flirt, dance, fight, pray, manipulate, negotiate, pretend, invent and think. And crucially, we don’t merely incorporate what we learn and carry on; we always resist. Our values are always problematic. We are not merely word-generators. We are makers of meaning.
We can’t help doing this; no computer can do this…
Eminently worth reading in full: “Rage against the machine,” from @alvanoe in @aeonmag.
For more, see Noë’s The Entanglement: How Art and Philosophy Make Us What We Are.
* Albert Einstein
###
As we resolve to wrestle, we might recall that it was on this date in 1969 that UCLA professor Leonard Kleinrock (aided by his student assistant Charley Kline) created the first networked computer-to-computer connection (with SRI programmer Bill Duvall in Menlo Park), over which they sent the first such message… or at least part of it. Duvall’s machine crashed partway through the transmission, meaning the only letters received from the attempted “login” were “lo.” The next month two more nodes were added (UCSB and the University of Utah), and the network was dubbed ARPANET.
Still, “lo”– perhaps an appropriate way to announce what would grow up to be the internet.