(Roughly) Daily

“Look before you ere you leap; / For as you sow, y’ are like to reap”*…

Following on, in a fashion, from Saturday’s post, Robert Wright on the recent AI Summit in Paris…

[Last] week at the Paris AI summit, Vice President JD Vance stood before heads of state and tech titans and said, “When conferences like this convene to discuss a cutting edge technology, oftentimes, I think, our response is to be too self-conscious, too risk-averse. But never have I encountered a breakthrough in tech that so clearly calls us to do precisely the opposite.”

Precisely the opposite of “too risk-averse” would seem to be “not risk-averse enough.” Or maybe, as both ChatGPT and Anthropic’s Claude said when asked for the opposite of “too risk-averse”: “too risk-seeking” or “reckless.” In any event, most people in the AI safety community would agree that such terms capture the Trump administration’s approach to AI regulation. And that includes people who generally share Trump’s and Vance’s laissez-faire intuitions. AI researcher Rob Miles posted a video of Vance’s speech on X and commented, “It’s so depressing that the one time when the government takes the right approach to an emerging technology, it’s for basically the only technology where that’s actually a terrible idea.”

The news for AI safety advocates gets worse: The summit’s overall vibe wasn’t all that different from Vance’s. The host, French President Emmanuel Macron, after announcing a big AI infrastructure investment, said that France is “back in the AI race” and that “Europe and France must accelerate their investments.” European Commission President Ursula von der Leyen vowed to “accelerate innovation” and “cut red tape” that now hobbles innovators. China and the US may be the world’s AI leaders, she granted, but “the AI race is far from being over.” All of this sat well with the corporate sector. As Axios reported, “A range of tech leaders, including Google CEO Sundar Pichai and Mistral CEO Arthur Mensch, used their speeches to push the acceleration mantra.”

Seems like only yesterday Sundar Pichai was emphasizing the need for international regulation, saying that AI, for all its benefits, holds great dangers. But, actually, that was back in 2023, when people like OpenAI’s Sam Altman were also saying such things. That was the year world leaders convened in Britain’s Bletchley Park to discuss ways to collectively address AI risks, including catastrophic ones. The idea was to hold annual global summits on the international governance of AI. In theory, the Paris summit was the third of these (after the 2024 summit in Seoul). But you should always read the fine print: Whereas the official name of the first summit was “AI Safety Summit,” this year’s version was “AI Action Summit.” The headline over the Axios story was: “Don’t miss out” replaces “doom is nigh” at Paris’ AI summit.

The statement that came out of the summit did call for AI “safety” (along with “sustainable development, innovation,” and many other virtuous things). But there was no elaboration. Nothing, for example, about preventing people from using AIs to help make bioweapons—the kind of problem you’d think would call for international regulation, since pandemics don’t recognize national borders (and the kind of problem that some knowledgeable observers worry has been posed by OpenAI’s recently released Deep Research model).

MIT physicist Max Tegmark tweeted on Monday that a leaked draft of the summit statement seemed “optimized to antagonize both the US government (with focus on diversity, gender and disinformation) and the UK government (completely ignoring the scientific and political consensus around risks from smarter-than-human AI systems that was agreed at the Bletchley Park Summit).” And indeed, Britain and the US refused to sign the statement. The other 60 attending nations, including China, signed it.

Journalist Shakeel Hashim wrote about the world’s journey from Bletchley Park to Paris: “What was supposed to be a crucial forum for international cooperation has ended as a cautionary tale about how easily serious governance efforts can be derailed by national self-interest.” But, he said, the Paris Summit may have value “as a wake-up call. It has shown, definitively, that the current approach to AI governance is broken. The question now is whether we have time to fix it.”…

The ropes are down; the brakes are off: “AI Accelerationism Goes Global,” from @robertwrighter.bsky.social.

Apposite: the always-illuminating (and amusing) Matt Levine on Elon Musk’s bid to purchase OpenAI (gift link to Bloomberg).

* Samuel Butler, Hudibras

###

As we prioritize prudence, we might spare a thought for Giordano Bruno; he died on this date in 1600. A philosopher, poet, alchemist, astrologer, cosmological theorist, and esotericist (occultist), he developed theories that anticipated modern science. The most notable of these were his theories of the infinite universe and the multiplicity of worlds, in which he rejected the traditional geocentric (or Earth-centered) astronomy and intuitively went beyond the Copernican heliocentric (sun-centered) theory, which still maintained a finite universe with a sphere of fixed stars. Although Bruno was one of the most important philosophers of the Italian Renaissance, his various passionate utterances led to intense opposition. In 1592, after a trial by the Roman Inquisition, he was imprisoned for eight years and interrogated periodically. When, in the end, he refused to recant, he was burned at the stake in Rome for heresy.

source
