“Look before you ere you leap; / For as you sow, y’ are like to reap”*…
Further, in a fashion, to Saturday's post, Robert Wright on the recent AI summit in Paris…
[Last] week at the Paris AI summit, Vice President JD Vance stood before heads of state and tech titans and said, “When conferences like this convene to discuss a cutting edge technology, oftentimes, I think, our response is to be too self-conscious, too risk-averse. But never have I encountered a breakthrough in tech that so clearly calls us to do precisely the opposite.”
Precisely the opposite of “too risk-averse” would seem to be “not risk-averse enough.” Or maybe, as both ChatGPT and Anthropic’s Claude said when asked for the opposite of “too risk-averse”: “too risk-seeking” or “reckless.” In any event, most people in the AI safety community would agree that such terms capture the Trump administration’s approach to AI regulation. And that includes people who generally share Trump’s and Vance’s laissez faire intuitions. AI researcher Rob Miles posted a video of Vance’s speech on X and commented, “It’s so depressing that the one time when the government takes the right approach to an emerging technology, it’s for basically the only technology where that’s actually a terrible idea.”
The news for AI safety advocates gets worse: The summit’s overall vibe wasn’t all that different from Vance’s. The host, French President Emmanuel Macron, after announcing a big AI infrastructure investment, said that France is “back in the AI race” and that “Europe and France must accelerate their investments.” European Commission President Ursula von der Leyen vowed to “accelerate innovation” and “cut red tape” that now hobbles innovators. China and the US may be the world’s AI leaders, she granted, but “the AI race is far from being over.” All of this sat well with the corporate sector. As Axios reported, “A range of tech leaders, including Google CEO Sundar Pichai and Mistral CEO Arthur Mensch, used their speeches to push the acceleration mantra.”
Seems like only yesterday Sundar Pichai was emphasizing the need for international regulation, saying that AI, for all its benefits, holds great dangers. But, actually, that was back in 2023, when people like OpenAI’s Sam Altman were also saying such things. That was the year world leaders convened in Britain’s Bletchley Park to discuss ways to collectively address AI risks, including catastrophic ones. The idea was to hold annual global summits on the international governance of AI. In theory, the Paris summit was the third of these (after the 2024 summit in Seoul). But you should always read the fine print: Whereas the official name of the first summit was “AI Safety Summit,” this year’s version was “AI Action Summit.” The headline over the Axios story was: “Don’t miss out” replaces “doom is nigh” at Paris’ AI summit.
The statement that came out of the summit did call for AI “safety” (along with “sustainable development, innovation,” and many other virtuous things). But there was no elaboration. Nothing, for example, about preventing people from using AIs to help make bioweapons—the kind of problem you’d think would call for international regulation, since pandemics don’t recognize national borders (and the kind of problem that some knowledgeable observers worry has been posed by OpenAI’s recently released Deep Research model).
MIT physicist Max Tegmark tweeted on Monday that a leaked draft of the summit statement seemed “optimized to antagonize both the US government (with focus on diversity, gender and disinformation) and the UK government (completely ignoring the scientific and political consensus around risks from smarter-than-human AI systems that was agreed at the Bletchley Park Summit).” And indeed, Britain and the US refused to sign the statement. The other 60 attending nations, including China, signed it.
Journalist Shakeel Hashim wrote about the world’s journey from Bletchley Park to Paris: “What was supposed to be a crucial forum for international cooperation has ended as a cautionary tale about how easily serious governance efforts can be derailed by national self-interest.” But, he said, the Paris Summit may have value “as a wake-up call. It has shown, definitively, that the current approach to AI governance is broken. The question now is whether we have time to fix it.”…
The ropes are down; the brakes are off: “AI Accelerationism Goes Global,” from @robertwrighter.bsky.social.
Apposite: the always-illuminating (and amusing) Matt Levine on Elon Musk’s bid to purchase OpenAI (gift link to Bloomberg).
* Samuel Butler, Hudibras
###
As we prioritize prudence, we might spare a thought for Giordano Bruno; he died on this date in 1600. A philosopher, poet, alchemist, astrologer, cosmological theorist, and esotericist (occultist), his theories anticipated modern science. The most notable of these were his theories of the infinite universe and the multiplicity of worlds, in which he rejected the traditional geocentric (or Earth-centred) astronomy and intuitively went beyond the Copernican heliocentric (sun-centred) theory, which still maintained a finite universe with a sphere of fixed stars. Although Bruno was one of the most important philosophers of the Italian Renaissance, his various passionate utterances led to intense opposition. In 1592, after a trial by the Roman Inquisition, he was kept imprisoned for eight years and interrogated periodically. When, in the end, he refused to recant, he was burned at the stake in Rome for heresy.
“There is only one world, the natural world, exhibiting patterns we call the ‘laws of nature’”*…

The quote above (in full, below) is the reigning substantive understanding of scientific naturalism that is commonplace today. Indeed, the modern era is often seen as the triumph of science over supernaturalism. But, as Peter Harrison explains, what really happened is far more interesting…
By any measure, the scientific revolution of the 17th century was a significant milestone in the emergence of our modern secular age. This remarkable historical moment is often understood as science finally liberating itself from the strictures of medieval religion, striking out on a new path that eschewed theological explanations and focused its attentions solely on a disenchanted, natural world. But this version of events is, at best, half true.
Medieval science, broadly speaking, had followed Aristotle in seeking explanations in terms of the inherent causal properties of natural things. God was certainly involved, at least to the extent that he had originally invested things with their natural properties and was said to ‘concur’ with their usual operations. Yet the natural world had its own agency. Beginning in the 17th century, the French philosopher and scientist René Descartes and his fellow intellectual revolutionaries dispensed with the idea of internal powers and virtues. They divested natural objects of inherent causal powers and attributed all motion and change in the universe directly to natural laws.
But, for all their transformative influence, key agents in the scientific revolution such as Descartes, Johannes Kepler, Robert Boyle and Isaac Newton are not our modern and secular forebears. They did not share our contemporary understandings of the natural or our idea of ‘laws of nature’ that we imagine underpins that naturalism…
[Harrison traces the history of the often contentious, but ultimately momentous rise of naturalism, then considers the historical accounts of that ascension– and what they gloss over or miss altogether. He then turns to why that matters…]
… the contrived histories of naturalism that purport to show its victory over supernaturalism were fabricated in the 19th century and are simply not consistent with the historical evidence. They are also tainted by a cultural condescension that, in the past at least, descended into outright racism. Few, if any, would today endorse the chauvinism that attends these older, triumphalist accounts of the history of naturalism. Yet, it is worth reflecting upon the extent to which elements of cultural condescension necessarily colour scholarly endeavours that are premised on the imagined ‘neutral’ grounds of naturalism. Careful consideration of the contingent historical circumstances that gave rise to present analytic categories that enjoy significant standing and authority would suggest that there is nothing especially neutral or objective about them. Any clear-eyed crosscultural comparison – one that refrains from assessing worldviews in terms of how they measure up to the standard of the modern West – will reinforce this. We might go so far as to adopt a form of ‘reverse anthropology’, where we think how our own conceptions of the world might look if we adopted the frameworks of others. This might entail dispensing with the idea of the supernatural, and attempting to think outside the box of our recently inherited natural/supernatural distinction.
History [that is, the “actual” history that Harrison recounts] suggests that our regnant modern naturalism is deeply indebted to monotheism, and that its adherents may need to abandon the comforting idea that their naturalistic commitments are licensed by the success of science. As for the idea of the supernatural, ironically this turns out to be far more important for the identity of those who wish to deny its reality than it had ever been for traditional religious believers…
Fascinating and provocative: “The birth of naturalism,” from @uqpharri in @aeonmag.
* “There is only one world, the natural world, exhibiting patterns we call the ‘laws of nature’, and which is discoverable by the methods of science and empirical investigation. There is no separate realm of the supernatural, spiritual, or divine; nor is there any cosmic teleology or transcendent purpose inherent in the nature of the universe or in human life.” – Sean Carroll, The Big Picture
###
As we rethink reality, we might recall that it was on this date in 1588 that Tycho Brahe first outlined his “Tychonic system” concept of the structure of the solar system. The Tychonic system was a hybrid, sharing both the basic idea of the geocentric system of Ptolemy and the heliocentric idea of Nicolaus Copernicus. Published in his De mundi aetherei recentioribus phaenomenis, Tycho’s proposal, retaining Aristotelian physics, kept the Sun and Moon revolving about Earth in the center of the universe and, at a great distance, the shell of the fixed stars centered on the Earth. But like Copernicus, he agreed that Mercury, Venus, Mars, Jupiter, and Saturn revolved about the Sun. Thus he could explain the motions of the heavens without “crystal spheres” carrying the planets through complex Ptolemaic epicycles.

On this same date, in 1633, Galileo Galilei arrived in Rome to face trial before the Inquisition. His crime was professing the belief that the earth revolves around the sun– based on observations that he’d made further to Copernicus and Tycho.

“Zero is powerful because it is infinity’s twin. They are equal and opposite, yin and yang.”*…

… and like infinity, zero can be a cognitive challenge. Yasemin Saplakoglu explains…
Around 2,500 years ago, Babylonian traders in Mesopotamia impressed two slanted wedges into clay tablets. The shapes represented a placeholder digit, squeezed between others, to distinguish numbers such as 50, 505 and 5,005. An elementary version of the concept of zero was born.
Hundreds of years later, in seventh-century India, zero took on a new identity. No longer a placeholder, the digit acquired a value and found its place on the number line, before 1. Its invention went on to spark historic advances in science and technology. From zero sprang the laws of the universe, number theory and modern mathematics.
“Zero is, by many mathematicians, definitely considered one of the greatest — or maybe the greatest — achievement of mankind,” said the neuroscientist Andreas Nieder, who studies animal and human intelligence at the University of Tübingen in Germany. “It took an eternity until mathematicians finally invented zero as a number.”
Perhaps that’s no surprise given that the concept can be difficult for the brain to grasp. It takes children longer to understand and use zero than other numbers, and it takes adults longer to read it than other small numbers. That’s because to understand zero, our mind must create something out of nothing. It must recognize absence as a mathematical object.
“It’s like an extra level of abstraction away from the world around you,” said Benjy Barnett, who is completing graduate work on consciousness at University College London. Nonzero numbers map onto countable objects in the environment: three chairs, each with four legs, at one table. With zero, he said, “we have to go one step further and say, ‘OK, there wasn’t anything there. Therefore, there must be zero of them.’”
In recent years, research has started to uncover how the human brain represents numbers, but no one had examined how it handles zero. Now two independent studies, led by Nieder and Barnett, respectively, have shown that the brain codes for zero much as it does for other numbers, on a mental number line. But, one of the studies found, zero also holds a special status in the brain…
Read on to find out the ways in which new studies are uncovering how the mind creates something out of nothing: “How the Human Brain Contends With the Strangeness of Zero,” from @QuantaMagazine.
Pair with Percival Everett’s provocative (and gloriously entertaining) Dr. No.
* Charles Seife, Zero: The Biography of a Dangerous Idea
Scheduling note: your correspondent is sailing again into uncommonly busy waters. So, with apologies for the hiatus, (R)D will resume on Friday the 25th…
###
As we noodle on noodling on nothing, we might send carefully-calculated birthday greetings to Erasmus Reinhold; he was born on this date in 1511. A professor of Higher Mathematics (at the University of Wittenberg, where he was ultimately Rector), Reinhold worked at a time when “mathematics” included applied mathematics, especially astronomy– to which he made many contributions and of which he was considered the most influential pedagogue of his generation.
Reinhold’s Prutenicae Tabulae (1551, 1562, 1571, and 1585) or Prussian Tables were astronomical tables that helped to disseminate calculation methods of Copernicus throughout the Empire. That said, Reinhold (like other astronomers before Kepler and Galileo) translated Copernicus’ mathematical methods back into a geocentric system, rejecting heliocentric cosmology on physical and theological grounds. Both Reinhold’s Prutenic Tables and Copernicus’ studies were the foundation for the Calendar Reform by Pope Gregory XIII in 1582… and both made copious use of zeros.

“Mathematics has not a foot to stand on which is not purely metaphysical”*…

Lest we forget…
A forgotten episode in French-occupied Naples in the years around 1800—just after the French Revolution—illustrates why it makes sense to see mathematics and politics as entangled. The protagonists of this story were gravely concerned about how mainstream mathematical methods were transforming their world—somewhat akin to our current-day concerns about how digital algorithms are transforming ours. But a key difference was their straightforward moral and political reading of those mathematical methods. By contrast, in our own era we seem to think that mathematics offers entirely neutral tools for ordering and reordering the world—we have, in other words, forgotten something that was obvious to them.
In this essay, I’ll use the case of revolutionary Naples to argue that the rise of a new and allegedly neutral mathematics—characterized by rigor and voluntary restriction—was a mathematical response to pressing political problems. Specifically, it was a response to the question of how to stabilize social order after the turbulence of the French Revolution. Mathematics, I argue, provided the logical infrastructure for the return to order. This episode, then, shows how and why mathematical concepts and methods are anything but timeless or neutral; they define what “reason” is, and what it is not, and thus the concrete possibilities of political action. The technical and political are two sides of the same coin—and changes in notions like mathematical rigor, provability, and necessity simultaneously constitute changes in our political imagination…
Massimo Mazzotti with an adaptation from his new book, Reactionary Mathematics: A Genealogy of Purity: “Foundational Anxieties, Modern Mathematics, and the Political Imagination,” @maxmazzotti in @LAReviewofBooks.
* Thomas De Quincey
###
As we count on it, we might send carefully-calculated birthday greetings to Regiomontanus (or Johannes Müller von Königsberg, as he was christened); he was born on this date in 1436. A mathematician, astrologer, and astronomer of the German Renaissance, he and his work were instrumental in the development of Copernican heliocentrism during his lifetime and in the decades following his death.
“It is well to remember that the entire universe, with one trifling exception, is composed of others”*…

For centuries, scientific discoveries have suggested humanity occupies no privileged place in the universe. But as Mario Livio argues, studies of worlds beyond our solar system could place meaningful new limits on our existential mediocrity…
When the Polish polymath Nicolaus Copernicus proposed in 1543 that the sun, rather than the Earth, was the center of our solar system, he did more than resurrect the “heliocentric” model that had been devised (and largely forgotten) some 18 centuries earlier by the Greek astronomer Aristarchus of Samos. Copernicus—or, rather, the “Copernican principle” that bears his name—tells us that we humans are nothing special. Or, at least, that the planet on which we live is not central to anything outside of us; instead, it’s just another ordinary world revolving around a star.
Our apparent mediocrity has only ascended in the centuries that have passed since Copernicus’s suggestion. In the middle of the 19th century Charles Darwin realized that rather than being the “crown of creation,” humans are simply a natural product of evolution by means of natural selection. Early in the 20th century, astronomer Harlow Shapley deepened our Copernican cosmic demotion, showing that not only the Earth but the whole solar system lacks centrality, residing in the Milky Way’s sleepy outer suburbs rather than the comparatively bustling galactic center. A few years later, astronomer Edwin Hubble showed that galaxies other than the Milky Way exist, and current estimates put the total number of galaxies in the observable universe at a staggering trillion or more.
Since 1995 we have discovered that even within our own Milky Way roughly one of every five sunlike or smaller stars harbors an Earth-size world orbiting in a “Goldilocks” region (neither too hot nor too cold) where liquid water may persist on a rocky planetary surface. This suggests there are at least a few hundred million planets in the Milky Way alone that may in principle be habitable. In roughly the same span of time, observations of the big bang’s afterglow—the cosmic microwave background—have shown that even the ordinary atomic matter that forms planets and people alike constitutes no more than 5 percent of the cosmic mass and energy budget. With each advance in our knowledge, our entire existence retreats from any possible pinnacle, seemingly reduced to flotsam adrift at the universe’s margins.
Believe it or not, the Copernican principle doesn’t even end there. In recent years increasing numbers of physicists and cosmologists have begun to suspect—often against their most fervent hopes—that our entire universe may be but one member of a mind-numbingly huge ensemble of universes: a multiverse.
Interestingly though, if a multiverse truly exists, it also suggests that Copernican cosmic humility can only be taken so far.
…
The implications of the Copernican principle may sound depressing to anyone who prefers a view of the world regarding humankind as the central or most important element of existence, but notice that every step along the way in extending the Copernican principle represented a major human discovery. That is, each decrease in the sense of our own physical significance was the result of a huge expansion in our knowledge. The Copernican principle teaches us humility, yes, but it also reminds us to keep our curiosity and passion for exploration alive and vibrant…
Fascinating: “How Far Should We Take Our Cosmic Humility?“, from @Mario_Livio in @sciam.
* John Holmes (the poet)
###
As we ponder our place, we might send carefully-observed birthday greetings to Arno Penzias; he was born on this date in 1933. A physicist and radio astronomer, he and Robert Wilson, a colleague at Bell Labs, discovered the cosmic microwave background radiation, which helped establish the Big Bang theory of cosmology– work for which they shared the 1978 Nobel Prize in Physics.
CMB radiation is something that anyone old enough to have watched broadcast (that’s to say, pre-cable/streaming) television has seen:
The way a television works is relatively simple. A powerful electromagnetic wave is transmitted by a tower, where it can be received by a properly sized antenna oriented in the correct direction. That wave has additional signals superimposed atop it, corresponding to audio and visual information that had been encoded. By receiving that information and translating it into the proper format (speakers for producing sound and cathode rays for producing light), we were able to receive and enjoy broadcast programming right in the comfort of our own homes for the first time. Different channels broadcasted at different wavelengths, giving viewers multiple options simply by turning a dial.
Unless, that is, you turned the dial to channel 03.
Channel 03 was — and if you can dig up an old television set, still is — simply a signal that appears to us as “static” or “snow.” That “snow” you see on your television comes from a combination of all sorts of sources:
– human-made radio transmissions,
– the Sun,
– black holes,
– and all sorts of other directional astrophysical phenomena like pulsars, cosmic rays and more.
But if you were able to either block all of those other signals out, or simply took them into account and subtracted them out, a signal would still remain. It would only be about 1% of the total “snow” signal that you see, but there would be no way of removing it. When you watch channel 03, 1% of what you’re watching comes from the Big Bang’s leftover glow. You are literally watching the cosmic microwave background…
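That “about 1%” figure is roughly what a back-of-envelope noise-temperature comparison gives. A minimal sketch, assuming (hypothetically) a system noise temperature of about 290 K for an old analog receiver; the CMB’s measured temperature of 2.725 K is the one firm number here:

```python
# Radio-frequency noise power scales with noise temperature (P = k * T * B),
# so at a fixed bandwidth the CMB's share of the "snow" is approximately
# T_cmb / (T_cmb + T_system).

T_CMB = 2.725      # K -- measured temperature of the cosmic microwave background
T_SYSTEM = 290.0   # K -- assumed noise temperature of an old analog TV front end

cmb_fraction = T_CMB / (T_CMB + T_SYSTEM)
print(f"CMB share of the static: {cmb_fraction:.1%}")  # prints "CMB share of the static: 0.9%"
```

With that assumed system temperature, the CMB accounts for just under 1% of the static, in line with the figure quoted above.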
“This Is How Your Old Television Set Can Prove The Big Bang“