“The historian of science may be tempted to exclaim that when paradigms change, the world itself changes with them”*…
What we now call AI has gone through a series of paradigm shifts, and there appears to be no end in sight. Ashlee Vance shares an anecdote that suggests that AI might itself be an agent (perhaps the agent) of a broader paradigm shift (or shifts)…
AI madness is upon many of us, and it can take different forms. In August 2024, for example, I stumbled upon a post from a 20-year-old who had built a nuclear fusor [see here] in his home with a bunch of mail-ordered parts. More to the point, he’d done this while under the tutelage of Anthropic’s Claude AI service…
… The guy who built the fusor in question, Hudhayfa Nazoordeen, better known as HudZah on the internet, was a math student on his summer break from the University of Waterloo. I reached out and asked to see his experiment in person partly because it seemed weird and interesting and partly because it seemed to say something about AI technology and how some people are going to be in for a very uncomfortable time in short order.
A couple days after the fusor posts hit X, I showed up at Nazoordeen’s front door, a typical Victorian in San Francisco’s Lower Haight neighborhood. Nazoordeen, a tall, skinny dude with lots of energy and the gesticulations to match, had been crashing there for the summer with a bunch of his university friends as they tried to soak in the start-up and AI lifestyle. Decades ago, these same kids might have yearned to catch Jerry Garcia and The Dead playing their first gigs or to happen upon an Acid Test. This Waterloo set, though, had a different agenda. They were turned on and LLMed up.
Like many of the Victorian-style homes in the city, this one had a long hallway that stretched from the front door to the kitchen with bedrooms jutting off on both sides. The wooden flooring had been blackened in the center from years of foot traffic, but that was not the first thing anyone would notice. Instead, they’d see the mass of electrical cables, 10, 25, and sometimes 50 feet long, coming out of each room and leading to somewhere else in the house.
One of the cables powered a series of mind-reading experiments. Someone in the house, Nazoordeen said, had built his own electroencephalogram (EEG) device for measuring brain activity and had been testing it out on houseguests for weeks. Most of the cables, though, were there to feed GPU clusters, the computing systems filled with graphics chips (often designed by Nvidia) that have powered the recent AI boom. You’d follow a cable from one room to another and end up in front of a black box on the floor. All across San Francisco, I imagined, twenty-somethings were gathered around similar GPU altars to try out their ideas…
Vance tells HudZah’s story, recounts the building of his fusor, explains Claude’s (sometimes reluctant) role, and airs the all-too-legitimate safety questions the experiment raises… though in fairness, one might note that the web is rife with instructions for building a fusor, e.g., here, here, and here, some of which encouraged HudZah.
But in the end, the takeaway for Vance was not the product, but the process…
I must admit, though, that the thing that scared me most about HudZah was that he seemed to be living in a different technological universe than I was. If the previous generation were digital natives, HudZah was an AI native.
HudZah enjoys reading the old-fashioned way, but he now finds that he gets more out of the experience by reading alongside an AI. He puts PDFs of books into Claude or ChatGPT and then queries the books as he moves through the text. He uses Granola to listen in on meetings so that he can query an AI after the chats as well. His friend built Globe Explorer, which can instantly break down, say, the history of rockets, as if you had a professional researcher at your disposal. And, of course, HudZah has all manner of AI tools for coding and interacting with his computer via voice.
It’s not that I don’t use these things. I do. It’s more that I was watching HudZah navigate his laptop with an AI fluency that felt alarming to me. He was using his computer in a much, much different way than I’d seen someone use their computer before, and it made me feel old and alarmed by the number of new tools at our disposal and how HudZah intuitively knew how to tame them.
It also excited me. Just spending a couple of hours with HudZah left me convinced that we’re on the verge of someone, somewhere creating a new type of computer with AI built into its core. I believe that laptops and PCs will give way to a more novel device rather soon.
I’m not sure that people know what’s coming for them. You’re either with the AIs now and really learning how to use them or you’re getting left behind in a profound way. Obviously, these situations follow every major technology transition, but I’m a very tech-forward person, and there were things HudZah could accomplish on his machine that gave off alien vibes to me. So, er, like, good luck if you’re not paying attention to this stuff.
After doing his AI and fusor show for me, HudZah gave me a tour of the house. Most of his roommates had already bailed out and returned to Canada. He was left to clean up the mess, which included piles of beer cans and bottles of booze in the backyard from a last hurrah.
The AI housemates had also left some gold panning equipment in a bathtub. At some point during the summer, they had decided to grab “a shit ton of sand from a nearby creek” and work it over in their communal bathroom for fun.
I’m honestly not sure what the takeaway there was exactly other than that something profound happened to the Bay Area brain in 1849, and it’s still doing its thing…
Goodbye, Digital Natives; hello, AI Natives: “A Young Man Used AI to Build A Nuclear Fusor and Now I Must Weep,” from @ashleevance. Eminently worth reading in full.
And for a look at one attempt to understand what may be the emerging new paradigm(s) of which AI may be a motive part, see Benjamin Bratton’s explanation of the work he and his colleagues are doing at a new institute at UCSD: “Antikythera.” See his recent Long Now Foundation talk on this same subject here.
On the other hand: “The Future Is Too Easy” (gift article) by David Roth in the always-illuminating Defector.
(Image above: source)
###
As we ponder progress, we might spare a thought for Johannes Gutenberg; he died on this date in 1468. A craftsman and inventor, he developed the movable-type printing press. (Though movable type was already in use in East Asia, Gutenberg’s printing press enabled a much faster rate of printing.)
The printing press spread across the world and led to an information revolution and the unprecedented mass circulation of literature throughout Europe. It was a profound enabler of the arts and sciences of the Renaissance, of the Reformation (and Counter-Reformation), and of humanist movements… which is to say that it contributed to a series of paradigm shifts.
“I fear the day when the technology overlaps with our humanity. The world will only have a generation of idiots.”*…
Alva Noë on the importance of humans hanging on to their humanity– for all the promise and dangers of AI, computers plainly can’t think. To think is to resist – something no machine does:
Computers don’t actually do anything. They don’t write, or play; they don’t even compute. Which doesn’t mean we can’t play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.
What could ever lead us to take seriously the thought that these devices of our own invention might actually understand, and think, and feel, or that, if not now, then later, they might one day come to open their artificial eyes thus finally to behold a shiny world of their very own? One source might simply be the sense that, now unleashed, AI is beyond our control. Fast, microscopic, distributed and astronomically complex, it is hard to understand this tech, and it is tempting to imagine that it has power over us.
But this is nothing new. The story of technology – from prehistory to now – has always been that of the ways we are entrained by the tools and systems that we ourselves have made. Think of the pathways we make by walking. To every tool there is a corresponding habit, that is, an automatised way of acting and being. From the humble pencil to the printing press to the internet, our human agency is enacted in part by the creation of social and technological landscapes that in turn transform what we can do, and so seem, or threaten, to govern and control us.
Yet it is one thing to appreciate the ways we make and remake ourselves through the cultural transformation of our worlds via tool use and technology, and another to mystify dumb matter put to work by us. If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.
But there is another origin of our impulse to concede mind to devices of our own invention, and this is what I focus on here: the tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life. They rely unchecked on one-sided, indeed, milquetoast conceptions of human activity, skill and cognitive accomplishment. The surreptitious substitution (to use a phrase of Edmund Husserl’s) of this thin gruel version of the mind at work – a substitution that I hope to convince you traces back to Alan Turing and the very origins of AI – is the decisive move in the conjuring trick.
What scientists seem to have forgotten is that the human animal is a creature of disturbance. Or as the mid-20th-century philosopher of biology Hans Jonas wrote: ‘Irritability is the germ, and as it were the atom, of having a world…’ With us there is always, so to speak, a pebble in the shoe. And this is what moves us, turns us, orients us to reorient ourselves, to do things differently, so that we might carry on. It is irritation and disorientation that is the source of our concern. In the absence of disturbance, there is nothing: no language, no games, no goals, no tasks, no world, no care, and so, yes, no consciousness…
[Starting with Turing, Noë considers the relative roles of humans and technology across a number of spheres, including music…]
… The piano was invented, to be sure, but not by you or me. We encounter it. It pre-exists us and solicits our submission. To learn to play is to be altered, made to adapt one’s posture, hands, fingers, legs and feet to the piano’s mechanical requirements. Under the regime of the piano keyboard, it is demanded that we ourselves become player pianos, that is to say, extensions of the machine itself.
But we can’t. And we won’t. To learn to play, to take on the machine, for us, is to struggle. It is hard to master the instrument’s demands.
And this fact – the difficulty we encounter in the face of the keyboard’s insistence – is productive. We make art out of it. It stops us being player pianos, but it is exactly what is required if we are to become piano players.
For it is the player’s fraught relation to the machine, and to the history and tradition that the machine imposes, that supplies the raw material of musical invention. Music and play happen in that entanglement. To master the piano, as only a person can, is not just to conform to the machine’s demands. It is, rather, to push back, to say no, to rage against the machine. And so, for example, we slap and bang and shout out. In this way, the piano becomes not merely a vehicle of habit and control – a mechanism – but rather an opportunity for action and expression.
And, as with the piano, so with the whole of human cultural life. We live in the entanglement between government and resistance. We fight back…
… The telling fact: computers are used to play our games; they are engineered to make moves in the spaces opened up by our concerns. They don’t have concerns of their own, and they make no new games. They invent no new language.
The British philosopher R G Collingwood noticed that the painter doesn’t invent painting, and the musician doesn’t invent the musical culture in which they find themselves. And for Collingwood this served to show that no person is fully autonomous, a God-like fount of creativity; we are always to some degree recyclers and samplers and, at our best, participants in something larger than ourselves.
But this should not be taken to show that we become what we are (painters, musicians, speakers) by doing what, for example, LLMs do – i.e., merely by getting trained up on large data sets. Humans aren’t trained up. We have experience. We learn. And for us, learning a language, for example, isn’t learning to generate ‘the next token’. It’s learning to work, play, eat, love, flirt, dance, fight, pray, manipulate, negotiate, pretend, invent and think. And crucially, we don’t merely incorporate what we learn and carry on; we always resist. Our values are always problematic. We are not merely word-generators. We are makers of meaning.
We can’t help doing this; no computer can do this…
Eminently worth reading in full: “Rage against the machine,” from @alvanoe in @aeonmag.
For more, see Noë’s The Entanglement: How Art and Philosophy Make Us What We Are.
* Albert Einstein
###
As we resolve to wrestle, we might recall that it was on this date in 1969 that UCLA professor Leonard Kleinrock (aided by his student assistant Charley Kline) created the first networked computer-to-computer connection (with SRI programmer Bill Duvall in Palo Alto), via which they sent the first networked computer-to-computer communication… or at least part of it. Duvall’s machine crashed partway through the transmission, meaning the only letters received from the attempted “login” were “lo.” The next month two more nodes were added (UCSB and the University of Utah), and the network was dubbed ARPANET.
Still, “lo”– perhaps an appropriate way to announce what would grow up to be the internet.

“No part of mathematics is ever, in the long run, ‘useless’.”*…
The number 1 can be written as a sum of distinct unit fractions, such as 1/2 + 1/3 + 1/12 + 1/18 + 1/36…
Number theorists are always looking for hidden structure. And when confronted by a numerical pattern that seems unavoidable, they test its mettle, trying hard — and often failing — to devise situations in which a given pattern cannot appear.
One of the latest results to demonstrate the resilience of such patterns, by Thomas Bloom of the University of Oxford, answers a question with roots that extend all the way back to ancient Egypt.
“It might be the oldest problem ever,” said Carl Pomerance of Dartmouth College.
The question involves fractions that feature a 1 in their numerator, like 1/2, 1/7 or 1/122. These “unit fractions” were especially important to the ancient Egyptians because they were the only types of fractions their number system contained: With the exception of a single symbol for 2/3, they could only express more complicated fractions (like 3/4) as sums of unit fractions (1/2 + 1/4).
The modern-day interest in such sums got a boost in the 1970s, when Paul Erdős and Ronald Graham asked how hard it might be to engineer sets of whole numbers that don’t contain a subset whose reciprocals add to 1. For instance, the set {2, 3, 6, 9, 13} fails this test: It contains the subset {2, 3, 6}, whose reciprocals are the unit fractions 1/2, 1/3 and 1/6 — which sum to 1.
More exactly, Erdős and Graham conjectured that any set that samples some sufficiently large, positive proportion of the whole numbers — it could be 20% or 1% or 0.001% — must contain a subset whose reciprocals add to 1. If the initial set satisfies that simple condition of sampling enough whole numbers (known as having “positive density”), then even if its members were deliberately chosen to make it difficult to find that subset, the subset would nonetheless have to exist.
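For small sets, the property in question can be checked directly. Below is a minimal brute-force sketch (the function name and exhaustive approach are mine, not from the article; exact rational arithmetic via `Fraction` avoids floating-point error, but the search is exponential, so it is only practical for small sets):

```python
from fractions import Fraction
from itertools import combinations

def reciprocal_subsets(numbers, target=Fraction(1)):
    """Return every subset of `numbers` whose reciprocals sum exactly to `target`."""
    hits = []
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(Fraction(1, n) for n in subset) == target:
                hits.append(subset)
    return hits

# The article's example: {2, 3, 6, 9, 13} "fails the test" because
# {2, 3, 6} gives 1/2 + 1/3 + 1/6 = 1.
print(reciprocal_subsets([2, 3, 6, 9, 13]))  # [(2, 3, 6)]
```

Bloom's theorem says something far stronger than any such finite check: for sets of positive density, a subset like `(2, 3, 6)` must always exist, no matter how adversarially the set is chosen.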
“I just thought this was an impossible question that no one in their right mind could possibly ever do,” said Andrew Granville of the University of Montreal. “I didn’t see any obvious tool that could attack it.”…
Bloom, building on work by Ernie Croot, found that tool. The ubiquity of ways to represent whole numbers as sums of unit fractions: “Math’s ‘Oldest Problem Ever’ Gets a New Answer,” by Jordana Cepelewicz (@jordanacep) in @QuantaMagazine.
* “No part of mathematics is ever, in the long run, ‘useless.’ Most of number theory has very few ‘practical’ applications. That does not reduce its importance, and if anything it enhances its fascination. No one can predict when what seems to be a most obscure theorem may suddenly be called upon to play some vital and hitherto unsuspected role.” – C. Stanley Ogilvy, Excursions in Number Theory
###
As we recombine, we might send carefully-calculated birthday greetings to Ulugh Beg (or, officially, Mīrzā Muhammad Tāraghay bin Shāhrukh); he was born on this date in 1394. A Timurid sultan with a hearty interest in science and the arts, he is better remembered as an astronomer and mathematician.
The most important observational astronomer of the 15th century, he built the great Ulugh Beg Observatory in Samarkand between 1424 and 1429– considered by scholars to have been one of the finest observatories in the Islamic world at the time and the largest in Central Asia. In his observations he discovered a number of errors in the computations of the 2nd-century Alexandrian astronomer Ptolemy, whose figures were still being used. His star map of 994 stars was the first new one since Hipparchus. Among his contributions to mathematics were trigonometric tables of sine and tangent values correct to at least eight decimal places.
“Culture is the name for what people are interested in”*…
… but “culture” (that’s to say, “high culture”) has also been a form of authority, a kind of superego for society. These days, Adam Kirsch argues, not so much…
From the 1920s to the 1950s, from jazz and blues to rock and roll, tweaking the canon was part of the appeal of pop music—and a favorite device of lyricists. Ella Fitzgerald had a signature hit with Sam Coslow’s “(If You Can’t Sing It) You’ll Have to Swing It (Mr. Paganini).” Betty Comden and Adolph Green wrote the lyrics to “It’s a Simple Little System,” from the musical Bells Are Ringing, in which a bookie uses composers’ names as code to refer to racetracks: “Beethoven is Belmont Park/ Tchaikovsky is Churchill Downs.” Chuck Berry hit the same targets in “Roll Over Beethoven”: “My heart’s beating rhythm/ And my soul keeps singing the blues/ Roll over Beethoven/ Tell Tchaikovsky the news.”
In recent decades, however, this type of indirect homage to the authority of classical music has completely disappeared from popular music. The last example may be “Rock Me, Amadeus,” a German pop hit from 1985 that was inspired less by Mozart himself than by the 1984 movie Amadeus, in which the composer is portrayed as, in the song’s words, “ein Punker” and “ein Rockidol.” Today’s pop lyricists don’t poke fun at Beethoven and Tchaikovsky because young listeners no longer recognize those names as possessing any cultural authority or prestige, if they recognize them at all. It would make as much sense to write a pop song called “Roll Over Palestrina” or “Rock Me, Hildegard von Bingen,” since all composers are equally unfamiliar to a mass audience.
Like the disappearance of a certain species of frog or insect, this is a small change that signals a profound transformation of the climate—in this case, the cultural climate…
And while that change has its costs, Kirsch explains, it also has its benefits: “Culture as counterculture.”
###
As we contemplate canons, we might recall that on this date in 2008 the #1 song in the U.S. was “Whatever You Like” by T.I. Jared W. Dillon of Sputnikmusic called the song a “more sophisticated take” on Lil Wayne‘s “Lollipop.”
“Mozart died too late rather than too soon”*…
Glenn Gould was a gloriously talented and profoundly iconoclastic pianist, unafraid to challenge the conventions of the canon.
His April 1962 performance of Brahms’ first piano concerto, with the New York Philharmonic and Leonard Bernstein conducting, gave rise to an extraordinary situation in which Mr. Bernstein disagreed with Gould’s interpretation so vehemently that he felt it necessary to warn the audience beforehand. The performance was subsequently broadcast on the radio with Bernstein’s comments included. A draft copy of those comments can be found in the Leonard Bernstein Collection at the Library of Congress and is available to read online…
But perhaps his most egregiously unpopular opinion was his conviction that Mozart– especially late Mozart– was a “bad composer.”
Gould made that case in How Mozart Became a Bad Composer, originally broadcast in 1968 on the weekly public television series Public Broadcast Laboratory. The Library of Congress National Audio-Visual Conservation Center recently digitized the episode that includes the 37-minute segment from a two-inch tape found in the Library’s collection. It is now available on the website of the American Archive of Public Broadcasting, a collaborative effort by the Library of Congress and WGBH in Boston, Massachusetts.
On the reception of the program, Peter Goddard in The Great Gould (2017) wrote, “Recognizing the outrage-driven ratings possibilities here, the Public Broadcasting [sic] Laboratory series by National Educational Television, the precursor to PBS in the United States, broadcast Gould’s thirty-seven-minute-long How Mozart Became a Bad Composer on April 28, 1968. After that, the show disappeared from sight worldwide, and a version of the script was only uncovered years later by New York-based documentarian Lucille Carra.” Kevin Bazzana in Wondrous Strange: The Life and Art of Glenn Gould (2004) notes, “The program outraged viewers in both the United States and Canada, including formerly sympathetic fans and critics.” The program is now widely available to the public for the first time since its broadcast, though ardent Glenn Gould fans may remember his interview in Piano Quarterly, reprinted in The Glenn Gould Reader (1984) as “Mozart and Related Matters: Glenn Gould in Conversation with Bruno Monsaingeon,” in which he expresses many of the same reservations about Mozart’s music that are heard in the television segment…
Cait Miller (of the Music Division of the Library of Congress) puts it in a personal context:
My parents are or were both musicians – my father was a composer – and so my appreciation for classical music was probably equal parts nature and nurture. So, when I entered graduate school as a musicologist and met a fellow student named Masa Yoshioka, who became one of my best friends during my doctoral study, it was more than a little shocking when, during one of our many extended conversations about music, he revealed to me that he did not think that Mozart was a particularly interesting composer. As a musicologist who had come from a previous incarnation as a classical singer, this was tantamount to heresy. However, due to my regard for Masa and his well-thought-out opinions, I did not discount it out of hand. Instead, I took it as a challenge to listen to the music of Mozart and, in fact, the music of all composers, with fresh ears every time I encountered it and to let no preconceptions that I had learned as a child allow me to speak as a child when I heard new works by a composer whom I had been conditioned to revere. It is with this spirit in mind that I hope you will view Glenn Gould’s television segment…
Your correspondent would agree. In any event, enjoy:
“The Unpopular Opinions of Glenn Gould, or ‘How Mozart Became a Bad Composer.’”
[image at top: source]
* Glenn Gould (who also once suggested that “Beethoven always sounds to me like the upsetting of a bag of nails, with here and there also a dropped hammer”)
###
As we tickle the ivories, we might recall that it was on this date in 1976 that another group of musical iconoclasts, The Sex Pistols, released their single ‘Anarchy In The UK‘. Originally issued in a plain black sleeve, the single was the only Sex Pistols recording released by EMI, and reached the No.38 spot on the UK Singles Chart before EMI dropped the group on 6 January 1977. (The band ran through five labels; their only album, Never Mind the Bollocks, Here’s the Sex Pistols (1977; #1 on the UK charts) was released by Virgin.)








