“Look before you ere you leap; / For as you sow, y’ are like to reap”*…
Further, in a fashion, to Saturday’s post: Robert Wright on the recent AI summit in Paris…
[Last] week at the Paris AI summit, Vice President JD Vance stood before heads of state and tech titans and said, “When conferences like this convene to discuss a cutting edge technology, oftentimes, I think, our response is to be too self-conscious, too risk-averse. But never have I encountered a breakthrough in tech that so clearly calls us to do precisely the opposite.”
Precisely the opposite of “too risk-averse” would seem to be “not risk-averse enough.” Or maybe, as both ChatGPT and Anthropic’s Claude said when asked for the opposite of “too risk-averse”: “too risk-seeking” or “reckless.” In any event, most people in the AI safety community would agree that such terms capture the Trump administration’s approach to AI regulation. And that includes people who generally share Trump’s and Vance’s laissez-faire intuitions. AI researcher Rob Miles posted a video of Vance’s speech on X and commented, “It’s so depressing that the one time when the government takes the right approach to an emerging technology, it’s for basically the only technology where that’s actually a terrible idea.”
The news for AI safety advocates gets worse: The summit’s overall vibe wasn’t all that different from Vance’s. The host, French President Emmanuel Macron, after announcing a big AI infrastructure investment, said that France is “back in the AI race” and that “Europe and France must accelerate their investments.” European Commission President Ursula von der Leyen vowed to “accelerate innovation” and “cut red tape” that now hobbles innovators. China and the US may be the world’s AI leaders, she granted, but “the AI race is far from being over.” All of this sat well with the corporate sector. As Axios reported, “A range of tech leaders, including Google CEO Sundar Pichai and Mistral CEO Arthur Mensch, used their speeches to push the acceleration mantra.”
Seems like only yesterday Sundar Pichai was emphasizing the need for international regulation, saying that AI, for all its benefits, holds great dangers. But, actually, that was back in 2023, when people like OpenAI’s Sam Altman were also saying such things. That was the year world leaders convened in Britain’s Bletchley Park to discuss ways to collectively address AI risks, including catastrophic ones. The idea was to hold annual global summits on the international governance of AI. In theory, the Paris summit was the third of these (after the 2024 summit in Seoul). But you should always read the fine print: Whereas the official name of the first summit was “AI Safety Summit,” this year’s version was “AI Action Summit.” The headline over the Axios story was: “Don’t miss out” replaces “doom is nigh” at Paris’ AI summit.
The statement that came out of the summit did call for AI “safety” (along with “sustainable development, innovation,” and many other virtuous things). But there was no elaboration. Nothing, for example, about preventing people from using AIs to help make bioweapons—the kind of problem you’d think would call for international regulation, since pandemics don’t recognize national borders (and the kind of problem that some knowledgeable observers worry has been posed by OpenAI’s recently released Deep Research model).
MIT physicist Max Tegmark tweeted on Monday that a leaked draft of the summit statement seemed “optimized to antagonize both the US government (with focus on diversity, gender and disinformation) and the UK government (completely ignoring the scientific and political consensus around risks from smarter-than-human AI systems that was agreed at the Bletchley Park Summit).” And indeed, Britain and the US refused to sign the statement. The other 60 attending nations, including China, signed it.
Journalist Shakeel Hashim wrote about the world’s journey from Bletchley Park to Paris: “What was supposed to be a crucial forum for international cooperation has ended as a cautionary tale about how easily serious governance efforts can be derailed by national self-interest.” But, he said, the Paris Summit may have value “as a wake-up call. It has shown, definitively, that the current approach to AI governance is broken. The question now is whether we have time to fix it.”…
The ropes are down; the brakes are off: “AI Accelerationism Goes Global,” from @robertwrighter.bsky.social.
Apposite: the always-illuminating (and amusing) Matt Levine on Elon Musk’s bid to purchase OpenAI (gift link to Bloomberg).
* Samuel Butler, Hudibras
###
As we prioritize prudence, we might spare a thought for Giordano Bruno; he died on this date in 1600. A philosopher, poet, alchemist, astrologer, cosmological theorist, and esotericist (occultist), his theories anticipated modern science. The most notable of these were his theories of the infinite universe and the multiplicity of worlds, in which he rejected the traditional geocentric (or Earth-centred) astronomy and intuitively went beyond the Copernican heliocentric (sun-centred) theory, which still maintained a finite universe with a sphere of fixed stars. Although Bruno was one of the most important philosophers of the Italian Renaissance, his various passionate utterances led to intense opposition. In 1592, after a trial by the Roman Inquisition, he was kept imprisoned for eight years and interrogated periodically. When, in the end, he refused to recant, he was burned at the stake in Rome for heresy.
“Speed is carrying us along, but we have yet to master it”*…

A call to contemplate the potential negative effects of internet technology along with its promise…
Conversations about technology tend to be dominated by an optimistic faith in technological progress, and headlines about new technologies tend to be peppered with deterministic language assuring readers of all the wonderful things these nascent technologies “will” do once they arrive. There is endless encouragement to think about all of the exciting benefits to be enjoyed if everything goes right, but significantly less attention is usually paid to the ways things might go spectacularly wrong.
In the estimation of philosopher Paul Virilio, the refusal to seriously contemplate the chance of failure can have calamitous effects. As he evocatively put it in 1997’s Open Sky, “Unless we are deliberately forgetting the invention of the shipwreck in the invention of the ship or the rail accident in the advent of the train, we need to examine the hidden face of new technologies, before that face reveals itself in spite of us.” Virilio’s formulation is a reminder that along with new technologies come new types of dangerous technological failures. It may seem obvious today that there had never been a car crash before the car was invented, but what future wrecks are being overlooked today amidst the excited chatter about AI, the metaverse, and all things crypto?
Virilio’s attention to accidents is a provocation to look at technology differently. To foreground the dangers instead of the benefits, and to see ourselves as the potential victims instead of as the smiling beneficiaries. As he put it in Pure War, first published in 1983, “Every technology produces, provokes, programs a specific accident.” Thus, the challenge becomes looking for the “accident” behind the technophilic light show — and what’s more, to find it before the wreckage starts to pile up.
Undoubtedly, this is not the most enjoyable way to look at technology. It is far more fun to envision yourself enjoying the perfect meal prepared for you by your AI butler than to imagine yourself caught up in a Kafkaesque nightmare after the AI system denies your loan application. Nevertheless, if Virilio was right to observe that “the invention of the highway was the invention of 300 cars colliding in five minutes,” it would be wise to start thinking seriously about the crashes that await us as we accelerate down the information superhighway…
The work of Paul Virilio urges us to ask: What future disasters inhere in today’s technologies? “Inventing the Shipwreck” from Zachary Loeb (@libshipwreck) in @_reallifemag. Eminently worth reading in full.
For a look at those who don’t just brush aside Virilio’s caution, but actively embrace speed and the chaos that it can cause:
Accelerationism holds that the modern, Western democratic state is so mired in corruption and ineptitude that true patriots should instigate a violent insurrection to hasten its destruction and allow a new, white-dominated order to emerge. Indeed, some of the foremost exponents of accelerationism today were at the U.S. Capitol on January 6. They included: the Oath Keepers, whose grab-bag ideology of paranoid anti-federalism envisions a restoration of “self-government” and “natural rights”; QAnon adherents, who remain convinced that the 2020 presidential election was stolen and that former President Donald Trump was thwarted from saving the world from a Satan-worshipping pedophilia ring run by Democrats, Jews, and other agents of the deep state; and, of course, Trump’s own die-hard “Stop the Steal” minions, who, against all reason and legal proof, seek to restore the former president to office.
The objective of accelerationism is to foment divisiveness and polarization that will induce the collapse of the existing order and spark a second civil war…
Read the full piece: “A Year After January 6, Is Accelerationism the New Terrorist Threat?”
* Paul Virilio
###
As we practice prudence, we might recall that it was on this date in 1854 that Anthony Fass, a Philadelphia piano maker, was awarded the first U.S. patent (#11062) for an accordion. (An older patent existed in Europe, issued in Vienna in 1829 to Cyrill Demian.)
“Music helps set a romantic mood. Imagine her surprise when you say, ‘We don’t need a stereo – I have an accordion’.” – Martin Mull
“A gentleman is someone who can play the accordion, but doesn’t.” – Tom Waits


