(Roughly) Daily

“Mathematics is the music of reason”*…

An illustration of a mathematician engaged in work, drawing geometric shapes and formulas on paper, with a three-dimensional geometric object and interconnected lines of mathematical concepts in the background.

New technologies, most centrally AI, are arming scientists with tools that might not just accelerate or enhance their work, but altogether transform it. As Jordana Cepelewicz reports, mathematicians have started to prepare for a profound shift in what it means to do math…

Since the start of the 20th century, the heart of mathematics has been the proof — a rigorous, logical argument for whether a given statement is true or false. Mathematicians’ careers are measured by what kinds of theorems they can prove, and how many. They spend the bulk of their time coming up with fresh insights to make a proof work, then translating those intuitions into step-by-step deductions, fitting different lines of reasoning together like puzzle pieces.

The best proofs are works of art. They’re not just rigorous; they’re elegant, creative and beautiful. This makes them feel like a distinctly human activity — our way of making sense of the world, of sharpening our minds, of testing the limits of thought itself.

But proofs are also inherently rational. And so it was only natural that when researchers started developing artificial intelligence in the mid-1950s, they hoped to automate theorem proving: to design computer programs capable of generating proofs of their own. They had some success. One of the earliest AI programs could output proofs of dozens of statements in mathematical logic. Other programs followed, coming up with ways to prove statements in geometry, calculus and other areas.

Still, these automated theorem provers were limited. The kinds of theorems that mathematicians really cared about required too much complexity and creativity. Mathematical research continued as it always had, unaffected and undeterred.

Now that’s starting to change. Over the past few years, mathematicians have used machine learning models to uncover new patterns, invent new conjectures, and find counterexamples to old ones. They’ve created powerful proof assistants both to verify whether a given proof is correct and to organize their mathematical knowledge.

They have not, as yet, built systems that can generate the proofs from start to finish, but that may be changing. In 2024, Google DeepMind announced that they had developed an AI system that scored a silver medal in the International Mathematical Olympiad, a prestigious proof-based exam for high school students. OpenAI’s more generalized “large language model,” ChatGPT, has made significant headway on reproducing proofs and solving challenging problems, as have smaller-scale bespoke systems. “It’s stunning how much they’re improving,” said Andrew Granville, a mathematician at the University of Montreal who until recently doubted claims that this technology might soon have a real impact on theorem proving. “They absolutely blow apart where I thought the limitations were. The cat’s out of the bag.”

Researchers predict they’ll be able to start outsourcing more tedious sections of proofs to AI within the next few years. They’re mixed on whether AI will ever be able to prove their most important conjectures entirely: Some are willing to entertain the notion, while others think there are insurmountable technological barriers. But it’s no longer entirely out of the question that the more creative aspects of the mathematical enterprise might one day be automated.

Even so, most mathematicians at the moment “have their heads buried firmly in the sand,” Granville said. They’re ignoring the latest developments, preferring to spend their time and energy on their usual jobs.

Continuing to do so, some researchers warn, would be a mistake. Even the ability to outsource boring or rote parts of proofs to AI “would drastically alter what we do and how we think about math over time,” said Akshay Venkatesh, a preeminent mathematician and Fields medalist at the Institute for Advanced Study in Princeton, New Jersey.

He and a relatively small group of other mathematicians are now starting to examine what an AI-powered mathematical future might look like, and how it will change what they value. In such a future, instead of spending most of their time proving theorems, mathematicians will play the role of critic, translator, conductor, experimentalist. Mathematics might draw closer to laboratory sciences, or even to the arts and humanities.

Imagining how AI will transform mathematics isn’t just an exercise in preparation. It has forced mathematicians to reckon with what mathematics really is at its core, and what it’s for…

Absolutely fascinating: “Mathematical Beauty, Truth, and Proof in the Age of AI,” from @jordanacep.bsky.social in @quantamagazine.bsky.social. Eminently worth reading in full.

* James Joseph Sylvester

###

As we wonder about ways of knowing, we might spare a thought for a man whose work helped trigger an earlier iteration of this enhance/transform discussion and laid the groundwork for the one unpacked in the article linked above: J. Presper Eckert; he died on this day in 1995. An electrical engineer, he co-designed (with John Mauchly) the first general-purpose electronic computer, the ENIAC (see here and here), for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.

Three men interacting with a large vintage computer console, with tape reels in the background.
Eckert (standing and gesturing) and Mauchly (at the console), demonstrating the UNIVAC to Walter Cronkite (source)

“In mathematics, the art of proposing a question must be held of higher value than solving it”*…

Matteo Wong talks with mathematician Terence Tao about the advent of AI in mathematical research and finds that Tao has some very big questions indeed…

Terence Tao, a mathematics professor at UCLA, is a real-life superintelligence. The “Mozart of Math,” as he is sometimes called, is widely considered the world’s greatest living mathematician. He has won numerous awards, including the equivalent of a Nobel Prize for mathematics, for his advances and proofs. Right now, AI is nowhere close to his level.

But technology companies are trying to get it there. Recent, attention-grabbing generations of AI—even the almighty ChatGPT—were not built to handle mathematical reasoning. They were instead focused on language: When you asked such a program to answer a basic question, it did not understand and execute an equation or formulate a proof, but instead presented an answer based on which words were likely to appear in sequence. For instance, the original ChatGPT can’t add or multiply, but has seen enough examples of algebra to solve x + 2 = 4: “To solve the equation x + 2 = 4, subtract 2 from both sides …” Now, however, OpenAI is explicitly marketing a new line of “reasoning models,” known collectively as the o1 series, for their ability to problem-solve “much like a person” and work through complex mathematical and scientific tasks and queries. If these models are successful, they could represent a sea change for the slow, lonely work that Tao and his peers do.

After I saw Tao post his impressions of o1 online—he compared it to a “mediocre, but not completely incompetent” graduate student—I wanted to understand more about his views on the technology’s potential. In a Zoom call last week, he described a kind of AI-enabled, “industrial-scale mathematics” that has never been possible before: one in which AI, at least in the near future, is not a creative collaborator in its own right so much as a lubricant for mathematicians’ hypotheses and approaches. This new sort of math, which could unlock terra incognitae of knowledge, will remain human at its core, embracing how people and machines have very different strengths that should be thought of as complementary rather than competing…

A sample of what follows…

The classic idea of math is that you pick some really hard problem, and then you have one or two people locked away in the attic for seven years just banging away at it. The types of problems you want to attack with AI are the opposite. The naive way you would use AI is to feed it the most difficult problem that we have in mathematics. I don’t think that’s going to be super successful, and also, we already have humans that are working on those problems.

Tao: The type of math that I’m most interested in is math that doesn’t really exist. The project that I launched just a few days ago is about an area of math called universal algebra, which is about whether certain mathematical statements or equations imply that other statements are true. The way people have studied this in the past is that they pick one or two equations and they study them to death, like how a craftsperson used to make one toy at a time, then work on the next one. Now we have factories; we can produce thousands of toys at a time. In my project, there’s a collection of about 4,000 equations, and the task is to find connections between them. Each is relatively easy, but there’s a million implications. There’s like 10 points of light, 10 equations among these thousands that have been studied reasonably well, and then there’s this whole terra incognita.
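
[To make “implication” concrete: it means that any structure satisfying one equation must also satisfy another. Below is a minimal sketch in Lean of one such implication; the equations and the name eq_implies_eq are my own toy choices, not drawn from Tao’s project. A magma, meaning a set with one binary operation op, that satisfies x = op x y for all x and y must also satisfy x = op x x, simply by substituting y := x.]

```lean
-- Hypothetical illustration (not from Tao's project): one magma
-- equation implying another. `op` is an arbitrary binary operation.
-- If every x, y satisfy  x = op x y,  then taking y := x shows
-- that every x satisfies  x = op x x.
theorem eq_implies_eq {M : Type} (op : M → M → M)
    (h : ∀ x y : M, x = op x y) :
    ∀ x : M, x = op x x :=
  fun x => h x x
```

[Tao’s project asks this question for every ordered pair among the roughly 4,000 equations, which is why the number of candidate implications to settle runs into the millions.]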

There are other fields where this transition has happened, like in genetics. It used to be that if you wanted to sequence a genome of an organism, this was an entire Ph.D. thesis. Now we have these gene-sequencing machines, and so geneticists are sequencing entire populations. You can do different types of genetics that way. Instead of narrow, deep mathematics, where an expert human works very hard on a narrow scope of problems, you could have broad, crowdsourced problems with lots of AI assistance that are maybe shallower, but at a much larger scale. And it could be a very complementary way of gaining mathematical insight.

Wong: It reminds me of how an AI program made by Google DeepMind, called AlphaFold, figured out how to predict the three-dimensional structure of proteins, which was for a long time something that had to be done one protein at a time.

Tao: Right, but that doesn’t mean protein science is obsolete. You have to change the problems you study. A hundred and fifty years ago, mathematicians’ primary usefulness was in solving partial differential equations. There are computer packages that do this automatically now. Six hundred years ago, mathematicians were building tables of sines and cosines, which were needed for navigation, but these can now be generated by computers in seconds.

I’m not super interested in duplicating the things that humans are already good at. It seems inefficient. I think at the frontier, we will always need humans and AI. They have complementary strengths. AI is very good at converting billions of pieces of data into one good answer. Humans are good at taking 10 observations and making really inspired guesses…

Terence Tao, the world’s greatest living mathematician, has a vision for AI: “We’re Entering Uncharted Territory for Math,” from @matteo_wong in @TheAtlantic.

* Georg Cantor

###

As we go figure, we might think recursively about Benoit Mandelbrot; he died on this date in 2010. A mathematician (and polymath), his interest in “the art of roughness” of physical phenomena and “the uncontrolled element in life” led to work (which included coining the word “fractal”, as well as developing a theory of “self-similarity” in nature) for which he is known as “the father of fractal geometry.”

Written by (Roughly) Daily

October 14, 2024 at 1:00 am

“Why, sometimes I’ve believed as many as six impossible things before breakfast”*…

We live in ontologically challenged times, what some call a “metacrisis”… but then, we humans mostly always have. Peter Kaufman reviews an important new book, Carlos Eire’s They Flew, that explores our capacity to believe the unbelievable…

This past July, the University of Chicago’s National Opinion Research Center (NORC) and the Associated Press published the results of a May 2023 poll. Of the 1,680 American adults whom they interviewed (a representative sampling of the population at large), 69 percent said they believe in angels, 69 percent said they believe in heaven, 58 percent believe in hell, 56 percent in the existence of the devil, and 50 percent that spirits from the dead can communicate with the living. Reincarnation? The power of prayer? Astrology? Karma? Yoga as a spiritual practice? The idea that spiritual energy can be rooted in physical objects? Just six percent of us believe in none of these things. With this poll’s margin of error at plus or minus four points and other national polls over the years indicating much the same thing, it’s safe to say that 90 percent of all Americans today believe in something supernatural.

There is so much that is urgent for us to learn from what the social sciences call the dynamics of belief. Belief in things that come without any ostensible proof or evidence—that presidential elections are stolen, that horse tranquilizers cure COVID-19—is a root cause of the epistemic chaos that we find ourselves in today. How timely, then, for this extraordinary new history of belief to appear, even if the author seems to be focusing on the 16th and 17th centuries. And what a time that was. One accomplished British historian of the period, Keith Thomas, tells us, in his 1971 study Religion and the Decline of Magic, that most people back then were, by our contemporary standards anyway, “exceedingly liable to pain, sickness, and premature death”—also to poverty, “sudden disaster,” and “utter despair.” There was, understandably, a concomitant preoccupation, among the general public and elites alike, with the explanation and the relief of human misfortune; devil’s curses and God’s possible antidotes thus played a supersized role, especially before the so-called Age of Reason and Enlightenment learning started to take hold. The natural world of that time, Carlos Eire tells us, “constantly pulsated with the possibility of the miraculous.” It’s a patch of human history that seems very long ago. Yet today, under the existential terrors of climate change, a global pandemic rivaling the medieval plagues, and media that bombards us with news of one catastrophe after another, yesteryear has a familiar ring.

Eire is a master storyteller…

What the past can teach us about our troubled present: “Trances, Ecstasies, Raptures, and Levitations: On Carlos Eire’s ‘They Flew’,” from @pbkauf in @LAReviewofBooks.

Apposite: “This Simple Philosophical Puzzle Shows How Difficult It Is to Know Something” (read to the end).

* The White Queen, to Alice, in Lewis Carroll’s Through the Looking-Glass

###

As we ponder pondering the imponderable, we might recall that today is the Feast of Saint Nicholas (in Eastern Christian countries using the old church calendar). It celebrates Saint Nicholas of Myra, an early Christian bishop, who attended the First Council of Nicaea; because of the many miracles attributed to his intercession, he is also known as Nicholas the Wonderworker. His legendary habit of secret gift-giving gave rise to the traditional model of Santa Claus (“Saint Nick”) through Sinterklaas.

A depiction of Saint Nicholas with his sack standing next to a Nativity Scene (source)

Written by (Roughly) Daily

December 19, 2023 at 1:00 am

“A proof tells us where to concentrate our doubts”*…

Andrew Granville at work

Number theorist Andrew Granville on what mathematics really is, on why objectivity is never quite within reach, and on the role that AI might play…

… What is a mathematical proof? We tend to think of it as a revelation of some eternal truth, but perhaps it is better understood as something of a social construct.

Andrew Granville, a mathematician at the University of Montreal, has been thinking about that a lot recently. After being contacted by a philosopher about some of his writing, “I got to thinking about how we arrive at our truths,” he said. “And once you start pushing at that door, you find it’s a vast subject.”

“How mathematicians go about research isn’t generally portrayed well in popular media. People tend to see mathematics as this pure quest, where we just arrive at great truths by pure thought alone. But mathematics is about guesses — often wrong guesses. It’s an experimental process. We learn in stages…”

Quanta spoke with Granville about the nature of mathematical proof — from how proofs work in practice to popular misconceptions about them, to how proof-writing might evolve in the age of artificial intelligence…

[excerpts from that interview follow…]

The culture of mathematics is all about proof. We sit around and think, and 95% of what we do is proof. A lot of the understanding we gain is from struggling with proofs and interpreting the issues that come up when we struggle with them…

The main point of a proof is to persuade the reader of the truth of an assertion. That means verification is key. The best verification system we have in mathematics is that lots of people look at a proof from different perspectives, and it fits well in a context that they know and believe. In some sense, we’re not saying we know it’s true. We’re saying we hope it’s correct, because lots of people have tried it from different perspectives. Proofs are accepted by these community standards.

Then there’s this notion of objectivity — of being sure that what is claimed is right, of feeling like you have an ultimate truth. But how can we know we’re being objective? It’s hard to take yourself out of the context in which you’ve made a statement — to have a perspective outside of the paradigm that has been put in place by society. This is just as true for scientific ideas as it is for anything else…

[Granville runs through a history of the proof, from Aristotle, through Euclid, to Hilbert, then Russell and Whitehead, ending with Gödel…]

To discuss mathematics, you need a language, and a set of rules to follow in that language. In the 1930s, Gödel proved that no matter how you select your language, there are always statements in that language that are true but that can’t be proved from your starting axioms. It’s actually more complicated than that, but still, you have this philosophical dilemma immediately: What is a true statement if you can’t justify it? It’s crazy.
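
[For reference, a standard precise form of the result Granville is glossing; the side conditions are what his “it’s actually more complicated” points to. If T is any consistent, effectively axiomatized theory that interprets basic arithmetic, then there is an arithmetic sentence G_T that is true in the standard natural numbers but unprovable in T:]

```latex
\[
  \mathbb{N} \models G_T
  \qquad\text{yet}\qquad
  T \nvdash G_T .
\]
```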

So there’s a big mess. We are limited in what we can do.

Professional mathematicians largely ignore this. We focus on what’s doable. As Peter Sarnak likes to say, “We’re working people.” We get on and try to prove what we can…

[Granville then turns to computers…]

We’ve moved to a different place, where computers can do some wild things. Now people say, oh, we’ve got this computer, it can do things people can’t. But can it? Can it actually do things people can’t? Back in the 1950s, Alan Turing said that a computer is designed to do what humans can do, just faster. Not much has changed.

For decades, mathematicians have been using computers — to make calculations that can help guide their understanding, for instance. What AI can do that’s new is to verify what we believe to be true. Some terrific developments have happened with proof verification. Like [the proof assistant] Lean, which has allowed mathematicians to verify many proofs, while also helping the authors better understand their own work, because they have to break down some of their ideas into simpler steps to feed into Lean for verification.
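
[A toy example of the “simpler steps” Granville mentions; the theorem name my_add_comm is mine, and this fact is of course already in Lean’s library as Nat.add_comm. The point is only that Lean accepts nothing as obvious: even this schoolbook identity has to be broken into an induction and explicit, individually justified steps before it is verified.]

```lean
-- Hypothetical illustration: proving a + b = b + a from scratch.
-- (Lean's library already provides this as Nat.add_comm; the point
-- is that each step must be justified before Lean accepts the proof.)
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction a with
  | zero =>
    -- base case: 0 + b = b and b = b + 0, chained together
    exact (Nat.zero_add b).trans (Nat.add_zero b).symm
  | succ n ih =>
    -- inductive step: (n + 1) + b = (n + b) + 1 = (b + n) + 1 = b + (n + 1)
    exact (Nat.succ_add n b).trans
      ((congrArg Nat.succ ih).trans (Nat.add_succ b n).symm)
```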

But is this foolproof? Is a proof a proof just because Lean agrees it’s one? In some ways, it’s as good as the people who convert the proof into inputs for Lean. Which sounds very much like how we do traditional mathematics. So I’m not saying that I believe something like Lean is going to make a lot of errors. I’m just not sure it’s any more secure than most things done by humans…

Perhaps it could assist in creating a proof. Maybe in five years’ time, I’ll be saying to an AI model like ChatGPT, “I’m pretty sure I’ve seen this somewhere. Would you check it out?” And it’ll come back with a similar statement that’s correct.

And then once it gets very, very good at that, perhaps you could go one step further and say, “I don’t know how to do this, but is there anybody who’s done something like this?” Perhaps eventually an AI model could find skilled ways to search the literature to bring tools to bear that have been used elsewhere — in a way that a mathematician might not foresee.

However, I don’t understand how ChatGPT can go beyond a certain level to do proofs in a way that outstrips us. ChatGPT and other machine learning programs are not thinking. They are using word associations based on many examples. So it seems unlikely that they will transcend their training data. But if that were to happen, what will mathematicians do? So much of what we do is proof. If you take proofs away from us, I’m not sure who we become…

Eminently worth reading in full: “Why Mathematical Proof Is a Social Compact,” in @QuantaMagazine.

* Morris Kline

###

As we add it up, we might send carefully calculated birthday greetings to Edward G. Begle; he was born on this date in 1914. A mathematician who was an accomplished topologist, he is best remembered for his role as the director of the School Mathematics Study Group (SMSG), the primary group credited for developing what came to be known as The New Math (a pedagogical response to Sputnik, taught in American grade schools from the late 1950s through the 1970s)… which will be well-known to (if not necessarily fondly recalled by) readers of a certain age.

Meditations on First Philosophy/The Love Below…

So, it turns out that René Descartes and André Benjamin of Outkast have rather unconvincing proofs in common…

From (medical student and nice guy) Sanjay Kulkarni’s wonderful web comic Cowbirds in Love.

As we ponder the depths of demonstrability, we might recall that it was on this date in 1962 that Izvestia informed its Russian readership that baseball had actually been invented by Russians (and transmitted to the U.S. by émigrés who’d brought a 14th-century game, lapta, with them).

A game of lapta, c. 1900 (source)