Archive for February 2025
“Reality favors symmetry”*…
Emmy Noether showed that fundamental physical laws are themselves a consequence of simple symmetries. As Shalma Wegsman explains, a century later, her insights continue to shape physics…
In the fall of 1915, the foundations of physics began to crack. Einstein’s new theory of gravity seemed to imply that it should be possible to create and destroy energy, a result that threatened to upend two centuries of thinking in physics.
Einstein’s theory, called general relativity, radically transformed the meaning of space and time. Rather than being fixed backdrops to the events of the universe, space and time were now characters in their own right, able to curve, expand and contract in the presence of matter and energy.
One problem with this shifting space-time is that as it stretches and shrinks, the density of the energy inside it changes. As a consequence, the classical energy conservation law that previously described all of physics didn’t fit this framework. David Hilbert, one of the most prominent mathematicians at the time, quickly identified this issue and set out with his colleague Felix Klein to try to resolve this apparent failure of relativity. After they were stumped, Hilbert passed the problem on to his assistant, the 33-year-old Emmy Noether.
Noether was an assistant in name only. She was already a formidable mathematician when, in early 1915, Hilbert and Klein invited her to join them at the University of Göttingen. But other faculty members objected to hiring a woman, and Noether was blocked from joining the faculty. Regardless, she would spend the next three years prodding the fault line separating physics and mathematics, eventually setting off an earthquake that would shake the foundations of fundamental physics.
In 1918, Noether published the results of her investigations in two landmark theorems. One made sense of conservation laws in small regions of space, a mathematical feat that would later prove important for understanding the symmetries of quantum field theory. The other, now just known as Noether’s theorem, says that behind every conservation law lies a deeper symmetry.
In mathematical terms, a symmetry is something you can do to a system that leaves it unchanged. Consider the act of rotation. If you start with an equilateral triangle, you’ll find that you can rotate it by multiples of 120 degrees without changing how it looks. If you start with a circle, you can rotate it by any angle. These actions without consequences reveal the underlying symmetries of these shapes.
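The triangle example can be checked numerically. A quick sketch (my own illustration, not from the article): rotating the vertex set of an equilateral triangle by 120 degrees gives back the same set of points, while an arbitrary angle does not.

```python
import math

def rotate(p, angle_deg):
    """Rotate a 2-D point about the origin by the given angle in degrees."""
    a = math.radians(angle_deg)
    x, y = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Vertices of an equilateral triangle centered on the origin,
# placed at angles 0, 120, and 240 degrees.
triangle = [rotate((1.0, 0.0), 120 * k) for k in range(3)]

def same_set(pts_a, pts_b, tol=1e-9):
    """True if the two point sets coincide up to floating-point tolerance."""
    return all(any(math.dist(p, q) < tol for q in pts_b) for p in pts_a)

# A 120-degree rotation maps the triangle onto itself...
print(same_set([rotate(p, 120) for p in triangle], triangle))  # True
# ...but a generic angle like 50 degrees does not.
print(same_set([rotate(p, 50) for p in triangle], triangle))   # False
```

The "actions without consequences" are exactly the rotations for which the first check succeeds.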
But symmetries go beyond shape. Imagine you do an experiment, then you move 10 meters to the left and do it again. The results of the experiment don’t change, because the laws of physics don’t change from place to place. This is called translation symmetry.
Now wait a few days and repeat your experiment again. The results don’t change, because the laws of physics don’t change as time passes. This is called time-translation symmetry.
Noether started with symmetries like these and explored their mathematical consequences. She worked with established physics using a common mathematical description of a physical system, called a Lagrangian.
This is where Noether’s insight went beyond the symbols on the page. On paper, symmetries seem to have no impact on the physics of the system, since symmetries don’t affect the Lagrangian. But Noether realized that symmetries must be mathematically important, since they constrain how a system can behave. She worked through what this constraint should be, and out of the mathematics of the Lagrangian popped a quantity that can’t change. That quantity corresponds to the physical property that’s conserved. The impact of symmetry had been hiding beneath the equations all along, just out of view.
In the case of translation symmetry, the system’s total momentum should never change. For time-translation symmetry, a system’s total energy is conserved. Noether discovered that conservation laws aren’t fundamental axioms of the universe. Instead, they emerge from deeper symmetries.
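For readers who want to see the machinery, here is the standard one-particle sketch from mechanics textbooks (a supplement to the article, not part of it). For a Lagrangian $L(q, \dot q, t)$, the equation of motion is the Euler–Lagrange equation:

```latex
\frac{d}{dt}\frac{\partial L}{\partial \dot q} = \frac{\partial L}{\partial q}

% Translation symmetry: if L is unchanged by the shift q -> q + \epsilon,
% then \partial L / \partial q = 0, so the momentum
p = \frac{\partial L}{\partial \dot q}
% has zero time derivative -- it is conserved. For the familiar
% L = \tfrac{1}{2} m \dot q^2 - V(q), this gives p = m \dot q.

% Time-translation symmetry: if \partial L / \partial t = 0,
% the same equation of motion implies that the energy function
E = \dot q \,\frac{\partial L}{\partial \dot q} - L
% is constant in time instead.
```

This is the "quantity that can't change" popping out of the Lagrangian: each symmetry of $L$ supplies its own conserved combination.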
The conceptual consequences are hard to overstate. Physicists of the early 20th century were shocked to realize that a system that breaks time-translation symmetry can break energy conservation along with it. We now know that our own universe does this. The cosmos is expanding at an accelerating rate, stretching out the leftover light from the early universe. The process reduces the light’s energy as time passes…
… Noether’s theorem has shaped the quantum world too. In the 1970s, it played a big role in the construction of the Standard Model of particle physics. The symmetries of quantum fields dictate laws that restrict how fundamental particles behave. For instance, a symmetry in the electromagnetic field forces particles to conserve their charge.
The power of Noether’s theorem has inspired physicists to look toward symmetry to discover new physics. Over a century later, Noether’s insights continue to influence the way physicists think…
“How Noether’s Theorem Revolutionized Physics,” from @shalmawegs in @QuantaMagazine.
* Jorge Luis Borges
###
As we contemplate cosmology, we might send insightful birthday greetings to the man who “wrote the book” on perspective, Leon Battista Alberti; he was born on this date in 1404. The archetypical Renaissance humanist polymath, Alberti was an author, artist, architect, poet, priest, linguist, philosopher, cartographer, and cryptographer. He collaborated with Toscanelli on the maps used by Columbus on his first voyage, and he published the first book on cryptography that contained a frequency table.
But he is surely best remembered as the author of the first general treatise– Della Pittura (1434)– on the laws of perspective, which built on and extended Brunelleschi’s work to describe the approach and technique that established the science of projective geometry… and fueled the progress of painting, sculpture, and architecture from the Greek- and Arabic-influenced formalism of the High Middle Ages to the more naturalistic (and Latinate) styles of the Renaissance.


“Look before you ere you leap; / For as you sow, y’ are like to reap”*…
Following on, in a fashion, from Saturday’s post: Robert Wright on the recent AI Summit in Paris…
[Last] week at the Paris AI summit, Vice President JD Vance stood before heads of state and tech titans and said, “When conferences like this convene to discuss a cutting edge technology, oftentimes, I think, our response is to be too self-conscious, too risk-averse. But never have I encountered a breakthrough in tech that so clearly calls us to do precisely the opposite.”
Precisely the opposite of “too risk-averse” would seem to be “not risk-averse enough.” Or maybe, as both ChatGPT and Anthropic’s Claude said when asked for the opposite of “too risk-averse”: “too risk-seeking” or “reckless.” In any event, most people in the AI safety community would agree that such terms capture the Trump administration’s approach to AI regulation. And that includes people who generally share Trump’s and Vance’s laissez faire intuitions. AI researcher Rob Miles posted a video of Vance’s speech on X and commented, “It’s so depressing that the one time when the government takes the right approach to an emerging technology, it’s for basically the only technology where that’s actually a terrible idea.”
The news for AI safety advocates gets worse: The summit’s overall vibe wasn’t all that different from Vance’s. The host, French President Emmanuel Macron, after announcing a big AI infrastructure investment, said that France is “back in the AI race” and that “Europe and France must accelerate their investments.” European Commission President Ursula von der Leyen vowed to “accelerate innovation” and “cut red tape” that now hobbles innovators. China and the US may be the world’s AI leaders, she granted, but “the AI race is far from being over.” All of this sat well with the corporate sector. As Axios reported, “A range of tech leaders, including Google CEO Sundar Pichai and Mistral CEO Arthur Mensch, used their speeches to push the acceleration mantra.”
Seems like only yesterday Sundar Pichai was emphasizing the need for international regulation, saying that AI, for all its benefits, holds great dangers. But, actually, that was back in 2023, when people like OpenAI’s Sam Altman were also saying such things. That was the year world leaders convened in Britain’s Bletchley Park to discuss ways to collectively address AI risks, including catastrophic ones. The idea was to hold annual global summits on the international governance of AI. In theory, the Paris summit was the third of these (after the 2024 summit in Seoul). But you should always read the fine print: Whereas the official name of the first summit was “AI Safety Summit,” this year’s version was “AI Action Summit.” The headline over the Axios story was: “Don’t miss out” replaces “doom is nigh” at Paris’ AI summit.
The statement that came out of the summit did call for AI “safety” (along with “sustainable development, innovation,” and many other virtuous things). But there was no elaboration. Nothing, for example, about preventing people from using AIs to help make bioweapons—the kind of problem you’d think would call for international regulation, since pandemics don’t recognize national borders (and the kind of problem that some knowledgeable observers worry has been posed by OpenAI’s recently released Deep Research model).
MIT physicist Max Tegmark tweeted on Monday that a leaked draft of the summit statement seemed “optimized to antagonize both the US government (with focus on diversity, gender and disinformation) and the UK government (completely ignoring the scientific and political consensus around risks from smarter-than-human AI systems that was agreed at the Bletchley Park Summit).” And indeed, Britain and the US refused to sign the statement. The other 60 attending nations, including China, signed it.
Journalist Shakeel Hashim wrote about the world’s journey from Bletchley Park to Paris: “What was supposed to be a crucial forum for international cooperation has ended as a cautionary tale about how easily serious governance efforts can be derailed by national self-interest.” But, he said, the Paris Summit may have value “as a wake-up call. It has shown, definitively, that the current approach to AI governance is broken. The question now is whether we have time to fix it.”…
The ropes are down; the brakes are off: “AI Accelerationism Goes Global,” from @robertwrighter.bsky.social.
Apposite: the always-illuminating (and amusing) Matt Levine on Elon Musk’s bid to purchase OpenAI (gift link to Bloomberg).
* Samuel Butler, Hudibras
###
As we prioritize prudence, we might spare a thought for Giordano Bruno; he died on this date in 1600. A philosopher, poet, alchemist, astrologer, cosmological theorist, and esotericist (occultist), his theories anticipated modern science. The most notable of these were his theories of the infinite universe and the multiplicity of worlds, in which he rejected the traditional geocentric (or Earth-centred) astronomy and intuitively went beyond the Copernican heliocentric (sun-centred) theory, which still maintained a finite universe with a sphere of fixed stars. Although he was one of the most important philosophers of the Italian Renaissance, Bruno’s various passionate utterances led to intense opposition. In 1592, after a trial by the Roman Inquisition, he was kept imprisoned for eight years and interrogated periodically. When, in the end, he refused to recant, he was burned at the stake in Rome for heresy.
“They will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”*…
Socrates was worried about the impact of a new technology– writing– on the effective intelligence of its users. Similar concerns have surfaced with the rise of other new communications technologies: moveable-type printing, photography, radio, television, and the internet. As Erik Hoel reminds us, AI is next on that list…
Unfortunately, there’s a growing subfield of psychology research pointing to cognitive atrophy from too much AI usage.
Evidence includes a new paper published by a cohort of researchers at Microsoft (not exactly a group predisposed to finding evidence for brain drain). Yet they do indeed see the effect in the critical thinking of knowledge workers who make heavy use of AI in their workflows.
To measure this, the researchers at Microsoft needed a definition of critical thinking. They used one of the oldest and most storied in the academic literature: that of mid-20th century education researcher Benjamin Bloom (the very same Benjamin Bloom who popularized tutoring as the most effective method of education).
Bloom’s taxonomy of critical thinking makes a great deal of sense. What we’d call “the creative act” occupies the top two levels of his pyramid of critical thinking, wherein creativity is a combination of the synthesis of new ideas and then evaluative refinement of them.
To see where AI usage shows up in Bloom’s hierarchy, researchers surveyed a group of 319 knowledge workers who had incorporated AI into their workflow. What makes this survey noteworthy is how in-depth it is. They didn’t just ask for opinions; instead they compiled ~1,000 real-world examples of tasks the workers complete with AI assistance, and then surveyed them specifically about those in all sorts of ways, including qualitative and quantitative judgements.
In general, they found that AI decreased the amount of effort spent on critical thinking when performing a task…
… While the researchers themselves don’t make the connection, their data fits the intuitive idea that positive use of AI tools is when they shift cognitive tasks upward in terms of their level of abstraction.
We can view this through the lens of one of the most cited papers in all of psychology, “The Magical Number Seven, Plus or Minus Two,” which introduced the eponymous Miller’s law: that working memory in humans caps out at 7 (plus or minus 2) different things. But the critical insight from the author, psychologist George Miller, is that experts don’t really have greater working memory. They’re actually still stuck at ~7 things. Instead, their advantage is how they mentally “chunk” the problem up at a higher level of abstraction than non-experts, so their 7 things are worth a lot more when in mental motion. The classic example is that poor chess players think in terms of individual pieces and individual moves, but great chess players think in terms of patterns of pieces, which are the “chunks” shifted around when playing.
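A toy illustration of the recoding trick (my own, not from the essay): a 21-digit string overwhelms a ~7-item working memory if every digit is a separate item, but fits comfortably once regrouped into 7 three-digit chunks.

```python
def chunk(digits: str, size: int) -> list[str]:
    """Group a digit string into fixed-size chunks (Miller-style recoding)."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

number = "149217761941200119690"  # an arbitrary 21-digit string

# As raw digits: 21 items -- roughly three times the ~7-item span.
print(len(number))        # 21

# Recoded into three-digit chunks: 7 items -- within Miller's 7 +/- 2.
chunks = chunk(number, 3)
print(chunks)             # ['149', '217', '761', '941', '200', '119', '690']
print(len(chunks))        # 7
```

The expert’s advantage is analogous: the chunks are richer (patterns of pieces rather than pieces), so the same ~7 slots carry far more.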
I think the positive aspect for AI augmentation of human workflows can be framed in light of Miller’s law: AI usage is cognitively healthy when it allows humans to mentally “chunk” tasks at a higher level of abstraction.
But if that’s the clear upside, the downside is just as clear. As the Microsoft researchers themselves say…
While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.
This negative effect scaled with the worker’s trust in AI: the more they blindly trusted AI results, the more outsourcing of critical thinking they suffered. That’s bad news, especially if these systems ever do permanently solve their hallucination problem, since many users will be shifted into the “high trust” category by dint of sheer competence.
The study isn’t alone. There’s increasing evidence for the detrimental effects of cognitive offloading, like that creativity gets hindered when there’s reliance on AI usage, and that over-reliance on AI is greatest when outputs are difficult to evaluate. Humans are even willing to offload to AI the decision to kill, at least in mock studies on simulated drone warfare decisions. And again, it was participants less confident in their own judgments, and more trusting of the AI when it disagreed with them, who got brain drained the most…
… Admittedly, there’s not yet high-quality causal evidence for lasting brain drain from AI use. But so it goes with subjects of this nature. What makes these debates difficult is that we want mono-causal universality in order to make ironclad claims about technology’s effect on society. It would be a lot easier to point to the downsides of internet and social media use if it simply made everyone’s attention spans equally shorter and everyone’s mental health equally worse, but that obviously isn’t the case. E.g., long-form content, like blogs, has blossomed on the internet.
But it’s also foolish to therefore dismiss the concern about shorter attention spans, because people will literally describe their own attention spans as shortening! They’ll write personal essays about it, or ask for help with dealing with it, or casually describe it as a generational issue, and the effect continues to be found in academic research.
With that caveat in mind, there’s now enough suggestive evidence from self-reports and workflow analysis to take “brAIn drAIn” seriously as a societal downside to the technology (adding to the list of other issues like AI slop and existential risk).
Similarly to how people use the internet in healthy and unhealthy ways, I think we should expect differential effects. For skilled knowledge workers with strong confidence in their own abilities, AI will be a tool to chunk up cognitively-demanding tasks at a higher level of abstraction in accordance with Miller’s law. For others… it’ll be a crutch.
So then what’s the take-away?
For one, I think we should be cautious about AI exposure in children. E.g., there is evidence from another paper in the brain-drain research subfield wherein it was younger AI users who showed the most dependency, and the younger cohort also didn’t match the critical thinking skills of older, more skeptical, AI users. As a young user put it:
It’s great to have all this information at my fingertips, but I sometimes worry that I’m not really learning or retaining anything. I rely so much on AI that I don’t think I’d know how to solve certain problems without it.
What a lovely new concern for parents we’ve invented!
Already nowadays, parents have to weather internal debates and worries about exposure to short-form video content platforms like TikTok. Of course, certain parents hand their kids an iPad essentially the day they’re born. But culturally this raises eyebrows, the same way handing out junk food at every meal does. Parents are a judgy bunch, which is often for the good, as it makes them cautious instead of waiting for some finalized scientific answer. While there’s still ongoing academic debate about the psychological effects of early smartphone usage, in general the results are visceral and obvious enough in real life for parents to make conservative decisions about prohibition, agonizing over when to introduce phones, the kind of phone, how to not overexpose their child to social media or addictive video games, etc.
Similarly, parents (and schools) will need to be careful about whether kids (and students) rely too much on AI early on. I personally am not worried about a graduate student using ChatGPT to code up eye-catching figures to show off their gathered data. There, the graduate student is using the technology appropriately to create a scientific paper via manipulating more abstract mental chunks (trust me, you don’t get into science to plod through the annoying intricacies of Matplotlib). I am, however, very worried about a 7th grader using AI to do their homework, and then, furthermore, coming to it with questions they should be thinking through themselves, because inevitably those questions are going to be about more and more minor things. People already worry enough about a generation of “iPad kids.” I don’t think we want to worry about a generation of brain-drained “meat puppets” next.
For individuals themselves, the main actionable thing to do about brain drain is to internalize a rule-of-thumb the academic literature already shows: Skepticism of AI capabilities—independent of if that skepticism is warranted or not!—makes for healthier AI usage.
In other words, pro-human bias and AI distrust are cognitively beneficial.
It’s said that first we shape our tools, then they shape us. Well, meet the new boss, same as the old boss… Just as, both as individuals and societies, we’ve had to learn our way into effective use of new technologies before, so we will with AI.
The enhancement and atrophy of human cognition go hand in hand: “brAIn drAIn,” from @erikphoel.
Pair with a broad and thoughtful view from Robin Sloan: “Is It OK?“
* “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” – Socrates, in Plato’s dialogue Phaedrus 14, 274c-275b
###
As we think about thinking, we might send carefully-considered birthday greetings to Alfred North Whitehead; he was born on this date in 1861. Whitehead began his career as a mathematician and logician, perhaps most famously co-authoring (with his former student, Bertrand Russell), the three-volume Principia Mathematica (1910–13), one of the twentieth century’s most important works in mathematical logic.
But in the late teens and early 20s, Whitehead shifted his focus to philosophy, the central result of which was a new field called process philosophy, which has found application in a wide variety of disciplines (e.g., ecology, theology, education, physics, biology, economics, and psychology).
“There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have consequences for the world around us.”
“Chess pieces are the block alphabet which shapes thoughts”*…
In Amritsar, at India’s oldest and largest chess manufacturing company, artisans have hand-carved the most complicated pieces in the game for generations. Roxanne Hoorn reports…
In the bustling streets of Amritsar, India, the markets are lined with shops full of colorful tapestries and sweet treats like warm local chai served in clay mugs. But the real treasures are kept behind closed doors. Beyond stacks of gnarled logs, inside unsuspecting brick buildings off the main streets, generations of master craftsmen carefully carve, sand, and polish intricate chess pieces, carrying on a long legacy in the country where the earliest versions of chess were played over 1,500 years ago.
These are no basic sets. The pieces make up elaborate professional and collector’s chess sets that sell for up to $4,000 on the international market. That price is well deserved. Each set is a collective labor of love, with every component handcrafted by a man who specializes in one type of chess piece. (Traditionally, women are not chess carvers.) There are pawn makers, queen craftsmen, and the most coveted—the knight carvers.
“The knight carvers are only knight carvers,” says Rishi Sharma, CEO of the Chess Empire, India’s oldest and largest chess manufacturing company, which was founded in 1962. “The person who is making the queen, we don’t give him the pawn. Otherwise, he’s going to ruin it.”
Of all the chessmen, knights are considered the most difficult and require the most skill to carve. While pawns and other pieces can be shaped under lathes, the knights—resembling horse heads, usually with wild flowing manes—are carved completely by hand. A chess carver won’t graduate from pawn to knight, or from any easier piece to harder ones, but will instead learn his craft from the start of his career, usually from his father or a mentor at one of the well-established chess companies. Surinder Pal, a knight carver at the Chess Empire, learned from his father at 18 years old. Now, he has been working at the craft for over 35 years. With his advanced and highly specialized skill, he can make up to 30 simple knights a day, or spend up to three days on a single ornate knight.
Today, chess pieces are carved from local species like boxwood or imported woods like rosewood and dogwood. But they were once made of a far more elusive and illicit material. Amritsar was originally known for its ivory carvers, who produced everything from hair combs and jewelry to furniture and sculptures. And of course, chess sets. After the international trade of ivory was banned in the 1990s, the craftsmen turned to the similarly smooth but far more accessible medium.
With raw materials readily available, it’s the demand for these fine chess sets that determines how many are produced. And demand has fluctuated in recent years. The COVID-19 pandemic left many people secluded in their homes, leading to a boost in demand for many indoor games, says Sharma. In October 2020, that enthusiasm for chess was compounded by the release of The Queen’s Gambit, a series about a fictional American chess prodigy. “The Queen’s Gambit had a very big role in spreading awareness of chess,” Sharma says. “And after that, we see a big boom.” Despite the show’s creator stating they have no plans for a second season, Sharma stays hopeful. “We hope the next season comes as soon as possible.”…
Equipping the Royal Game: “Masters of the Knight: The Art of Chess Carving in India,” from @atlasobscura.com.
###
As we prize the pieces, we might recall that it was on this date in 1996 that then-world chess champion Garry Kasparov and an IBM supercomputer called Deep Blue played game four of the first of their two six-game chess matches. They played to a draw. Kasparov won the match– but by a margin of only 4–2 (three wins, two draws, and one loss to the computer). They met for a rematch the following year, and Deep Blue beat Kasparov (3½–2½).