Posts Tagged ‘AI’
“Mathematics is the music of reason”*…
New technologies, most centrally AI, are arming scientists with tools that might not just accelerate or enhance their work, but altogether transform it. As Jordana Cepelewicz reports, mathematicians have started to prepare for a profound shift in what it means to do math…
Since the start of the 20th century, the heart of mathematics has been the proof — a rigorous, logical argument for whether a given statement is true or false. Mathematicians’ careers are measured by what kinds of theorems they can prove, and how many. They spend the bulk of their time coming up with fresh insights to make a proof work, then translating those intuitions into step-by-step deductions, fitting different lines of reasoning together like puzzle pieces.
The best proofs are works of art. They’re not just rigorous; they’re elegant, creative and beautiful. This makes them feel like a distinctly human activity — our way of making sense of the world, of sharpening our minds, of testing the limits of thought itself.
But proofs are also inherently rational. And so it was only natural that when researchers started developing artificial intelligence in the mid-1950s, they hoped to automate theorem proving: to design computer programs capable of generating proofs of their own. They had some success. One of the earliest AI programs could output proofs of dozens of statements in mathematical logic. Other programs followed, coming up with ways to prove statements in geometry, calculus and other areas.
Still, these automated theorem provers were limited. The kinds of theorems that mathematicians really cared about required too much complexity and creativity. Mathematical research continued as it always had, unaffected and undeterred.
Now that’s starting to change. Over the past few years, mathematicians have used machine learning models to uncover new patterns, invent new conjectures, and find counterexamples to old ones. They’ve created powerful proof assistants both to verify whether a given proof is correct and to organize their mathematical knowledge.
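To make “verify whether a given proof is correct” concrete, here is a minimal sketch in Lean 4, one widely used proof assistant (the excerpt doesn’t name a particular system). The assistant accepts a file only if every proof in it is airtight; otherwise it pinpoints the gap.

```lean
-- Two tiny machine-checkable statements. If this file compiles,
-- the proofs are correct; if not, Lean reports exactly where they fail.

-- Propositional logic: from proofs of p and q, conclude p ∧ q.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := ⟨hp, hq⟩

-- Arithmetic: 2 + 2 = 4, verified by computation.
example : 2 + 2 = 4 := rfl
```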
They have not, as yet, built systems that can generate proofs from start to finish, but that may be changing. In 2024, Google DeepMind announced that it had developed an AI system that performed at the silver-medal level at the International Mathematical Olympiad, a prestigious proof-based competition for high school students. OpenAI’s more generalized “large language model,” ChatGPT, has made significant headway on reproducing proofs and solving challenging problems, as have smaller-scale bespoke systems. “It’s stunning how much they’re improving,” said Andrew Granville, a mathematician at the University of Montreal who until recently doubted claims that this technology might soon have a real impact on theorem proving. “They absolutely blow apart where I thought the limitations were. The cat’s out of the bag.”
Researchers predict they’ll be able to start outsourcing more tedious sections of proofs to AI within the next few years. They’re mixed on whether AI will ever be able to prove their most important conjectures entirely: Some are willing to entertain the notion, while others think there are insurmountable technological barriers. But it’s no longer entirely out of the question that the more creative aspects of the mathematical enterprise might one day be automated.
Even so, most mathematicians at the moment “have their heads buried firmly in the sand,” Granville said. They’re ignoring the latest developments, preferring to spend their time and energy on their usual jobs.
Continuing to do so, some researchers warn, would be a mistake. Even the ability to outsource boring or rote parts of proofs to AI “would drastically alter what we do and how we think about math over time,” said Akshay Venkatesh, a preeminent mathematician and Fields medalist at the Institute for Advanced Study in Princeton, New Jersey.
He and a relatively small group of other mathematicians are now starting to examine what an AI-powered mathematical future might look like, and how it will change what they value. In such a future, instead of spending most of their time proving theorems, mathematicians will play the role of critic, translator, conductor, experimentalist. Mathematics might draw closer to laboratory sciences, or even to the arts and humanities.
Imagining how AI will transform mathematics isn’t just an exercise in preparation. It has forced mathematicians to reckon with what mathematics really is at its core, and what it’s for…
Absolutely fascinating: “Mathematical Beauty, Truth, and Proof in the Age of AI,” from @jordanacep.bsky.social in @quantamagazine.bsky.social. Eminently worth reading in full.
* James Joseph Sylvester
###
As we wonder about ways of knowing, we might spare a thought for a man whose work helped trigger an earlier iteration of this enhance/transform discussion and laid the groundwork for the one unpacked in the article linked above: J. Presper Eckert; he died on this day in 1995. An electrical engineer, he co-designed (with John Mauchly) the first general-purpose electronic computer, the ENIAC (see here and here) for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.

“‘When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.'”*…
Like today’s large language models, some 16th-century humanists (like Erasmus) had techniques to automate writing. But as Hannah Katznelson explains, others (like Rabelais) cried foul…
The Renaissance scholar and educator Erasmus of Rotterdam opens his polemical treatise The Ciceronian (1528) by describing the utterly dysfunctional writing process of a character named Nosoponus. The Ciceronian is structured as a dialogue, with two mature writers, Bulephorus and Hypologus, trying to talk Nosoponus out of his paralysing obsession with stylistic perfection. Nosoponus explains that it would take him weeks of fruitless writing and rewriting to produce a casual letter in which he asks a friend to return some borrowed books. He says that writing requires such intense concentration that he can do it only at night, when no one else is awake to distract him, and even then his perfectionism is so intense that a single sentence becomes a full night’s work. Nosoponus goes over what he’s written again and again, but remains so dissatisfied with the quality of his language that eventually he just gives up.
Nosoponus’s problem might resonate. Who has not spent too long going over the wording of a simple email, at some point or another? Today there is an easy fix: we have large language models (LLMs) to write our letters for us, helpfully proffering suggestions as to what we might say, and how we might phrase it. When I input Nosoponus’s intended request into GPT-4, it generated the following almost instantly:
Hey [Friend’s Name],
Hope you’re doing well! I just realised I never got those books back that I lent you a while ago. No rush, but whenever you get a chance, I’d love to get them back. Let me know what works for you! Thanks!
Nosoponus
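(For the curious, a minimal sketch of how one might reproduce that query programmatically, assuming the openai Python package (v1+) and an API key; the prompt wording is an illustrative paraphrase of Nosoponus’s request, not Katznelson’s exact input.)

```python
# Sketch: ask GPT-4 to draft Nosoponus's letter, assuming OPENAI_API_KEY
# is set in the environment. The prompt is an illustrative paraphrase.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Write a short, casual note asking a friend to "
                   "return some books I lent them a while ago.",
    }],
)

print(response.choices[0].message.content)
```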
But there was a solution in the 16th century, too. A humanist education on the Erasmian model could train its students to produce letters of any length, on any topic – quickly, easily and eloquently. The French humanist François Rabelais, a contemporary of Erasmus, appears to have understood these compositional techniques as automating the creation of text in a way that, retrospectively, looks a lot like how LLMs function. If we want to understand LLMs, and what they are and aren’t capable of, we can look at earlier versions of the same technology – like Erasmian humanism. We can also read authors like Rabelais, who is already thinking about automatic text-generation along these lines, as someone who appreciates the effectiveness of Erasmian generative technology, but at the same time sees it as vitiating the social force of language and, ultimately, ruining language as a tool for moral and political life…
[Katznelson recounts Erasmus’s efforts and Rabelais’s response, and unpacks the important differences between our own authentic speech and language created to speak for us– and their practical and moral implications…]
What lessons from the 16th century tell us about AI and LLMs: “Methodical banality,” from @aeon.co.
* Lewis Carroll, Through the Looking Glass
###
As we honor authenticity, we might recall that it was on this date in 1886 that three U.S. patents were issued to Alexander Graham Bell’s Volta Labs for “recording and reproducing speech and other sounds.” The Graphophone was an improved (and the first practical) version of the Edison phonograph (from 1877), and became the foundation on which the speech recording (e.g., dictaphone) and recorded music (and spoken word) industries began to grow.
“I think the next century will be the century of complexity”*…
… and as Philip Ball reports, a team of scientists at Carnegie Science agrees…
In 1950 the Italian physicist Enrico Fermi was discussing the possibility of intelligent alien life with his colleagues. If alien civilizations exist, he said, some should surely have had enough time to expand throughout the cosmos. So where are they?
Many answers to Fermi’s “paradox” have been proposed: Maybe alien civilizations burn out or destroy themselves before they can become interstellar wanderers. But perhaps the simplest answer is that such civilizations don’t appear in the first place: Intelligent life is extremely unlikely, and we pose the question only because we are the supremely rare exception.
A new proposal by an interdisciplinary team of researchers challenges that bleak conclusion. They have proposed nothing less than a new law of nature, according to which the complexity of entities in the universe increases over time with an inexorability comparable to the second law of thermodynamics — the law that dictates an inevitable rise in entropy, a measure of disorder. If they’re right, complex and intelligent life should be widespread.
In this new view, biological evolution appears not as a unique process that gave rise to a qualitatively distinct form of matter — living organisms. Instead, evolution is a special (and perhaps inevitable) case of a more general principle that governs the universe. According to this principle, entities are selected because they are richer in a kind of information that enables them to perform some kind of function.
This hypothesis, formulated by the mineralogist Robert Hazen [here] and the astrobiologist Michael Wong [here] of the Carnegie Institution in Washington, D.C., along with a team of others, has provoked intense debate. Some researchers have welcomed the idea as part of a grand narrative about fundamental laws of nature. They argue that the basic laws of physics are not “complete” in the sense of supplying all we need to comprehend natural phenomena; rather, evolution — biological or otherwise — introduces functions and novelties that could not even in principle be predicted from physics alone. “I’m so glad they’ve done what they’ve done,” said Stuart Kauffman, an emeritus complexity theorist at the University of Pennsylvania. “They’ve made these questions legitimate.”…
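(The excerpt doesn’t spell out the measure, but the quantity at the heart of the hypothesis is “functional information,” introduced by Jack Szostak and developed by Hazen and colleagues. A sketch of the definition:)

```latex
% Functional information of a system, relative to a degree of function E_x:
% F(E_x) is the fraction of all possible configurations of the system
% that achieve a degree of function of at least E_x. The rarer the
% function among possible configurations, the more functional
% information it represents.
I(E_x) = -\log_2\!\left[\, F(E_x) \,\right]
```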
[Ball explains the origin and outline of Hazen’s and Wong’s conjecture, explores the critiques– among them, that it’s not clear how to test the hypothesis– and examines the resonant work on Assembly Theory being done by Lee Cronin and Sara Walker…]
… Wong said there is more work still to be done on mineral evolution, and they hope to look at nucleosynthesis and computational “artificial life.” Hazen also sees possible applications in oncology, soil science and language evolution. For example, the evolutionary biologist Frédéric Thomas of the University of Montpellier in France and colleagues have argued that the selective principles governing the way cancer cells change over time in tumors are not like those of Darwinian evolution, in which the selection criterion is fitness, but more closely resemble the idea of selection for function from Hazen and colleagues.
Hazen’s team has been fielding queries from researchers ranging from economists to neuroscientists, who are keen to see if the approach can help. “People are approaching us because they are desperate to find a model to explain their system,” Hazen said.
But whether or not functional information turns out to be the right tool for thinking about these questions, many researchers seem to be converging on similar questions about complexity, information, evolution (both biological and cosmic), function and purpose, and the directionality of time. It’s hard not to suspect that something big is afoot. There are echoes of the early days of thermodynamics, which began with humble questions about how machines work and ended up speaking to the arrow of time, the peculiarities of living matter, and the fate of the universe…
A new suggestion that complexity increases over time, not just in living organisms but in the nonliving world, promises to rewrite notions of time and evolution: “Why Everything in the Universe Turns More Complex,” from @philipcball.bsky.social and @quantamagazine.bsky.social.
See also: Benjamin Bratton’s explanation of the work he and his colleagues are doing at a new institute at UCSD: “Antikythera.” See his recent Long Now Foundation talk on this same subject here.
* Stephen Hawking
###
As we celebrate complication, we might spare a thought for G. N. Ramachandran (Gopalasamudram Narayanan Ramachandran); he died on this date in 2001. A biophysicist, he discovered the triple helical “coiled coil” structure of the collagen molecule, among other remarkable contributions to structural biology.
Ramachandran was a master of X-ray crystallography, and with his colleagues, constructed space-filling models of protein molecules. He devised the Ramachandran Plot, a method to diagram the conformation of polypeptides, polysaccharides and polynucleotides– which remains the international standard to describe protein structures.
Ramachandran, inspired by the ancient Syaad Nyaaya (doctrine of “may be”), also explored artificial intelligence. He developed the Boolean Vector Matrix Formulation, which has important applications in AI software.
“Look before you ere you leap; / For as you sow, y’ are like to reap”*…
Further, in a fashion, to Saturday’s post: Robert Wright on the recent AI Summit in Paris…
[Last] week at the Paris AI summit, Vice President JD Vance stood before heads of state and tech titans and said, “When conferences like this convene to discuss a cutting edge technology, oftentimes, I think, our response is to be too self-conscious, too risk-averse. But never have I encountered a breakthrough in tech that so clearly calls us to do precisely the opposite.”
Precisely the opposite of “too risk-averse” would seem to be “not risk-averse enough.” Or maybe, as both ChatGPT and Anthropic’s Claude said when asked for the opposite of “too risk-averse”: “too risk-seeking” or “reckless.” In any event, most people in the AI safety community would agree that such terms capture the Trump administration’s approach to AI regulation. And that includes people who generally share Trump’s and Vance’s laissez faire intuitions. AI researcher Rob Miles posted a video of Vance’s speech on X and commented, “It’s so depressing that the one time when the government takes the right approach to an emerging technology, it’s for basically the only technology where that’s actually a terrible idea.”
The news for AI safety advocates gets worse: The summit’s overall vibe wasn’t all that different from Vance’s. The host, French President Emmanuel Macron, after announcing a big AI infrastructure investment, said that France is “back in the AI race” and that “Europe and France must accelerate their investments.” European Commission President Ursula von der Leyen vowed to “accelerate innovation” and “cut red tape” that now hobbles innovators. China and the US may be the world’s AI leaders, she granted, but “the AI race is far from being over.” All of this sat well with the corporate sector. As Axios reported, “A range of tech leaders, including Google CEO Sundar Pichai and Mistral CEO Arthur Mensch, used their speeches to push the acceleration mantra.”
Seems like only yesterday Sundar Pichai was emphasizing the need for international regulation, saying that AI, for all its benefits, holds great dangers. But, actually, that was back in 2023, when people like OpenAI’s Sam Altman were also saying such things. That was the year world leaders convened in Britain’s Bletchley Park to discuss ways to collectively address AI risks, including catastrophic ones. The idea was to hold annual global summits on the international governance of AI. In theory, the Paris summit was the third of these (after the 2024 summit in Seoul). But you should always read the fine print: Whereas the official name of the first summit was “AI Safety Summit,” this year’s version was “AI Action Summit.” The headline over the Axios story was: “Don’t miss out” replaces “doom is nigh” at Paris’ AI summit.
The statement that came out of the summit did call for AI “safety” (along with “sustainable development, innovation,” and many other virtuous things). But there was no elaboration. Nothing, for example, about preventing people from using AIs to help make bioweapons—the kind of problem you’d think would call for international regulation, since pandemics don’t recognize national borders (and the kind of problem that some knowledgeable observers worry has been posed by OpenAI’s recently released Deep Research model).
MIT physicist Max Tegmark tweeted on Monday that a leaked draft of the summit statement seemed “optimized to antagonize both the US government (with focus on diversity, gender and disinformation) and the UK government (completely ignoring the scientific and political consensus around risks from smarter-than-human AI systems that was agreed at the Bletchley Park Summit).” And indeed, Britain and the US refused to sign the statement. The other 60 attending nations, including China, signed it.
Journalist Shakeel Hashim wrote about the world’s journey from Bletchley Park to Paris: “What was supposed to be a crucial forum for international cooperation has ended as a cautionary tale about how easily serious governance efforts can be derailed by national self-interest.” But, he said, the Paris Summit may have value “as a wake-up call. It has shown, definitively, that the current approach to AI governance is broken. The question now is whether we have time to fix it.”…
The ropes are down; the brakes are off: “AI Accelerationism Goes Global,” from @robertwrighter.bsky.social.
Apposite: the always-illuminating (and amusing) Matt Levine on Elon Musk’s bid to purchase OpenAI (gift link to Bloomberg).
* Samuel Butler, Hudibras
###
As we prioritize prudence, we might spare a thought for Giordano Bruno; he died on this date in 1600. A philosopher, poet, alchemist, astrologer, cosmological theorist, and esotericist (occultist), his theories anticipated modern science. The most notable of these were his theories of the infinite universe and the multiplicity of worlds, in which he rejected the traditional geocentric (or Earth-centred) astronomy and intuitively went beyond the Copernican heliocentric (sun-centred) theory, which still maintained a finite universe with a sphere of fixed stars. Although Bruno was one of the most important philosophers of the Italian Renaissance, his various passionate utterances led to intense opposition. In 1592, after a trial by the Roman Inquisition, he was kept imprisoned for eight years and interrogated periodically. When, in the end, he refused to recant, he was burned at the stake in Rome for heresy.
“They will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”*…
Socrates was worried about the impact of a new technology– writing– on the effective intelligence of its users. Similar concerns have surfaced with the rise of other new communications technologies: moveable-type printing, photography, radio, television, and the internet. As Erik Hoel reminds us, AI is next on that list…
Unfortunately, there’s a growing subfield of psychology research pointing to cognitive atrophy from too much AI usage.
Evidence includes a new paper published by a cohort of researchers at Microsoft (not exactly a group predisposed to finding evidence for brain drain). Yet they do indeed see the effect in the critical thinking of knowledge workers who make heavy use of AI in their workflows.
To measure this, the researchers at Microsoft needed a definition of critical thinking. They used one of the oldest and most storied in the academic literature: that of mid-20th century education researcher Benjamin Bloom (the very same Benjamin Bloom who popularized tutoring as the most effective method of education).
Bloom’s taxonomy of critical thinking makes a great deal of sense. In his pyramid of critical thinking, what we’d call “the creative act” occupies the top two entries, wherein creativity is a combination of the synthesis of new ideas and then evaluative refinement over them.
To see where AI usage shows up in Bloom’s hierarchy, researchers surveyed a group of 319 knowledge workers who had incorporated AI into their workflow. What makes this survey noteworthy is how in-depth it is. They didn’t just ask for opinions; instead they compiled ~1,000 real-world examples of tasks the workers complete with AI assistance, and then surveyed them specifically about those in all sorts of ways, including qualitative and quantitative judgements.
In general, they found that AI decreased the amount of effort spent on critical thinking when performing a task…
… While the researchers themselves don’t make the connection, their data fits the intuitive idea that positive use of AI tools is when they shift cognitive tasks upward in terms of their level of abstraction.
We can view this through the lens of one of the most cited papers in all of psychology, “The Magical Number Seven, Plus or Minus Two,” which introduced the eponymous Miller’s law: that working memory in humans caps out at 7 (plus or minus 2) different things. But the critical insight from the author, psychologist George Miller, is that experts don’t really have greater working memory. They’re actually still stuck at ~7 things. Instead, their advantage is how they mentally “chunk” the problem up at a higher level of abstraction than non-experts, so their 7 things are worth a lot more when in mental motion. The classic example is that poor chess players think in terms of individual pieces and individual moves, but great chess players think in terms of patterns of pieces, which are the “chunks” shifted around when playing.
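A toy sketch (mine, not Hoel’s) of the arithmetic behind chunking: the same twelve digits occupy twelve working-memory slots as raw items, but only three when grouped into familiar years (1492, 1776, 2001).

```python
# Toy illustration of Miller-style "chunking": twelve raw digits are
# twelve working-memory items, but grouped into familiar years they
# collapse into three richer chunks.
digits = "149217762001"

raw_items = list(digits)                                      # 12 items
chunks = [digits[i:i + 4] for i in range(0, len(digits), 4)]  # 3 chunks

print(len(raw_items), "unchunked items:", raw_items)
print(len(chunks), "chunked items:", chunks)
```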
I think the positive aspect for AI augmentation of human workflows can be framed in light of Miller’s law: AI usage is cognitively healthy when it allows humans to mentally “chunk” tasks at a higher level of abstraction.
But if that’s the clear upside, the downside is just as clear. As the Microsoft researchers themselves say…
While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.
This negative effect scaled with the worker’s trust in AI: the more they blindly trusted AI results, the more outsourcing of critical thinking they suffered. That’s bad news, especially if these systems ever do permanently solve their hallucination problem, since many users will be shifted into the “high trust” category by dint of sheer competence.
The study isn’t alone. There’s increasing evidence for the detrimental effects of cognitive offloading: for example, that creativity is hindered by reliance on AI, and that over-reliance on AI is greatest when outputs are difficult to evaluate. Humans are even willing to offload to AI the decision to kill, at least in mock studies on simulated drone warfare decisions. And again, it was participants less confident in their own judgments, and more trusting of the AI when it disagreed with them, who got brain drained the most…
… Admittedly, there’s not yet high-quality causal evidence for lasting brain drain from AI use. But so it goes with subjects of this nature. What makes these debates difficult is that we want mono-causal universality in order to make ironclad claims about technology’s effect on society. It would be a lot easier to point to the downsides of internet and social media use if it simply made everyone’s attention spans equally shorter and everyone’s mental health equally worse, but that obviously isn’t the case. E.g., long-form content, like blogs, has blossomed on the internet.
But it’s also foolish to therefore dismiss the concern about shorter attention spans, because people will literally describe their own attention spans as shortening! They’ll write personal essays about it, or ask for help with dealing with it, or casually describe it as a generational issue, and the effect continues to be found in academic research.
With that caveat in mind, there’s now enough suggestive evidence from self-reports and workflow analysis to take “brAIn drAIn” seriously as a societal downside to the technology (adding to the list of other issues like AI slop and existential risk).
Similarly to how people use the internet in healthy and unhealthy ways, I think we should expect differential effects. For skilled knowledge workers with strong confidence in their own abilities, AI will be a tool to chunk up cognitively-demanding tasks at a higher level of abstraction in accordance with Miller’s law. For others… it’ll be a crutch.
So then what’s the take-away?
For one, I think we should be cautious about AI exposure in children. E.g., there is evidence from another paper in the brain-drain research subfield wherein it was younger AI users who showed the most dependency, and the younger cohort also didn’t match the critical thinking skills of older, more skeptical, AI users. As a young user put it:
It’s great to have all this information at my fingertips, but I sometimes worry that I’m not really learning or retaining anything. I rely so much on AI that I don’t think I’d know how to solve certain problems without it.
What a lovely new concern for parents we’ve invented!
Already, parents have to weather internal debates and worries about exposure to short-form video content platforms like TikTok. Of course, certain parents hand their kids an iPad essentially the day they’re born. But culturally this raises eyebrows, the same way handing out junk food at every meal does. Parents are a judgy bunch, which is often for the good, as it makes them cautious instead of waiting for some finalized scientific answer. While there’s still ongoing academic debate about the psychological effects of early smartphone usage, in general the results are visceral and obvious enough in real life for parents to make conservative decisions about prohibition, agonizing over when to introduce phones, the kind of phone, how to not overexpose their child to social media or addictive video games, etc.
Similarly, parents (and schools) will need to be careful about whether kids (and students) rely too much on AI early on. I personally am not worried about a graduate student using ChatGPT to code up eye-catching figures to show off their gathered data. There, the graduate student is using the technology appropriately to create a scientific paper via manipulating more abstract mental chunks (trust me, you don’t get into science to plod through the annoying intricacies of Matplotlib). I am, however, very worried about a 7th grader using AI to do their homework, and then, furthermore, coming to it with questions they should be thinking through themselves, because inevitably those questions are going to be about more and more minor things. People already worry enough about a generation of “iPad kids.” I don’t think we want to worry about a generation of brain-drained “meat puppets” next.
For individuals themselves, the main actionable thing to do about brain drain is to internalize a rule-of-thumb the academic literature already supports: Skepticism of AI capabilities—whether or not that skepticism is warranted!—makes for healthier AI usage.
In other words, pro-human bias and AI distrust are cognitively beneficial.
It’s said that first we shape our tools, then they shape us. Well, meet the new boss, same as the old boss… Just as, both as individuals and societies, we’ve had to learn our way into effective use of new technologies before, so we will with AI.
The enhancement and atrophy of human cognition go hand in hand: “brAIn drAIn,” from @erikphoel.
Pair with a broad and thoughtful view from Robin Sloan: “Is It OK?“
* “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” – Socrates, in Plato’s dialogue Phaedrus 14, 274c-275b
###
As we think about thinking, we might send carefully-considered birthday greetings to Alfred North Whitehead; he was born on this date in 1861. Whitehead began his career as a mathematician and logician, perhaps most famously co-authoring (with his former student Bertrand Russell) the three-volume Principia Mathematica (1910–13), one of the twentieth century’s most important works in mathematical logic.
But in the late 1910s and early 1920s, Whitehead shifted his focus to philosophy, the central result of which was a new field called process philosophy, which has found application in a wide variety of disciplines (e.g., ecology, theology, education, physics, biology, economics, and psychology).
“There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have consequences for the world around us.”