Posts Tagged ‘Mathematics’
“The present is pregnant with the future”*…
The estimable Tim O’Reilly uses scenario planning to create an insightful look at AI, our futures, and the choices that will define them…
We all read it in the daily news. The New York Times reports that economists who once dismissed the AI job threat are now taking it seriously. In February, Jack Dorsey cut 40% of Block’s workforce, telling shareholders that “intelligence tools have changed what it means to build and run a company.” Block’s stock rose 20%. Salesforce has shed thousands of customer support workers, saying AI was already doing half the work. And a Stanford study found that software developers aged 22 to 25 saw employment drop nearly 20% from its peak, while developers over 26 were doing fine.
But how are we to square this news with a Vanguard study that found that the 100 occupations most exposed to AI were actually outperforming the rest of the labor market in both job growth and wages, and a rigorous NBER study of 25,000 Danish workers that found zero measurable effect of AI on earnings or hours?
Other studies could contribute to either side of the argument. For example, PwC’s 2025 Global AI Jobs Barometer, analyzing close to a billion job ads across six continents, found that workers with AI skills earn a 56% wage premium, and that productivity growth has nearly quadrupled in the industries most exposed to AI.
This is exactly the kind of contradictory, uncertain landscape that scenario planning was designed for. Scenario planning doesn’t ask you to predict what the future will be. It asks you to imagine divergent possible futures and to develop a strategy that improves your odds of success across all of them. I’ve used it many times at O’Reilly and have written about it before with COVID and climate change as illustrative examples. The argument between those who say AI will cause mass unemployment and those who insist technology always creates more jobs than it destroys is a debate that will only be resolved by time. Both sides have evidence. Both are probably right at some level. And neither framing is terribly helpful for anyone trying to figure out what to do next…
[O’Reilly explains the scenario approach, then applies it to our future with AI (see the image above), astutely assessing the conflicting signals that we’ve been experiencing; he explores the “robust strategy” for our uncertain future (strategic choices that make sense regardless of which future unfolds); then he concludes…]
… I’ll return to the theme that I sounded in my book WTF? What’s the Future and Why It’s Up To Us.
Every time a company uses AI to do what it was already doing with fewer people, it is making a choice for the lower half of the scenario grid. Every time a company uses AI to do something that wasn’t previously possible, to serve a customer who wasn’t previously served, to solve a problem that wasn’t previously solvable, it is making a choice for the upper half. These choices compound, for good or ill. An economy that uses AI primarily for efficiency will slowly hollow itself out.
Looking at the news from the future, both sets of signals are present. The question is which will dominate. AI will give us both the Augmentation Economy and the Displacement Crisis, in different measures in different places, depending on the choices we make.
Scenario planning teaches us that we don’t have to predict which future we’ll get. We do have to prepare for a very uncertain future. But the robust strategy, the one that works across every quadrant, is to focus on doing more, not just doing the same with less, and to find ways that human taste still matters in what is created. As long as there is unmet demand, as long as there are problems we haven’t solved and people we haven’t served, AI will augment human work rather than replacing it. It’s only when we stop looking for new things to do that the machines come for the jobs…
Eminently worth reading in full. Indeed, speaking as a long-time scenario planner, your correspondent can only wish that everyone who wields “scenarios” applies the approach as appropriately, adroitly, and acutely as Tim has: “Scenario Planning for AI and the ‘Jobless Future’,” from @timoreilly.bsky.social.
* Voltaire
###
As we take the long view, we might send formative birthday greetings to Mark Pinsker; he was born on this date in 1923. A mathematician, he made important contributions to the fields of information theory, probability theory, coding theory, ergodic theory, mathematical statistics, and communication networks. This work, which helped lay the foundation for AI-as-we-know-it, earned him the IEEE Claude E. Shannon Award in 1978, and the IEEE Richard W. Hamming Medal in 1996, among other honors.
“It’s the bell curve again”*…
Joseph Howlett on how the central limit theorem, which started as a bar trick for 18th-century gamblers, became something on which scientists rely every day…
No matter where you look, a bell curve is close by.
Place a measuring cup in your backyard every time it rains and note the height of the water when it stops: Your data will conform to a bell curve. Record 100 people’s guesses at the number of jelly beans in a jar, and they’ll follow a bell curve. Measure enough women’s heights, men’s weights, SAT scores, marathon times — you’ll always get the same smooth, rounded hump that tapers at the edges.
Why does the bell curve pop up in so many datasets?
The answer boils down to the central limit theorem, a mathematical truth so powerful that it often strikes newcomers as impossible, like a magic trick of nature. “The central limit theorem is pretty amazing because it is so unintuitive and surprising,” said Daniela Witten, a biostatistician at the University of Washington. Through it, the most random, unimaginable chaos can lead to striking predictability.
It’s now a pillar on which much of modern empirical science rests. Almost every time a scientist uses measurements to infer something about the world, the central limit theorem is buried somewhere in the methods. Without it, it would be hard for science to say anything, with any confidence, about anything.
“I don’t think the field of statistics would exist without the central limit theorem,” said Larry Wasserman, a statistician at Carnegie Mellon University. “It’s everything.”
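The mechanism Witten calls “unintuitive and surprising” is easy to watch in action. Here’s a minimal simulation sketch in Python (standard library only; the dice setup is my own illustration, not from the article): each individual roll is flat and uniform, yet averages of many rolls pile up in a bell shape.

```python
import random
import statistics

random.seed(42)

# One "measurement": the average of 50 independent rolls of a fair die.
# Each individual roll is uniform -- nothing bell-shaped about it.
def sample_mean(n_rolls=50):
    return sum(random.randint(1, 6) for _ in range(n_rolls)) / n_rolls

# Collect 10,000 such averages.
means = [sample_mean() for _ in range(10_000)]

# The central limit theorem predicts they cluster around the die's
# expected value (3.5), with spread sigma / sqrt(n), where
# sigma ~ 1.708 is the standard deviation of a single roll.
print(statistics.mean(means))   # close to 3.5
print(statistics.stdev(means))  # close to 1.708 / sqrt(50), i.e. ~0.24
```

A histogram of `means` shows the familiar hump, and the same shape appears whatever distribution the individual rolls follow, so long as they are independent and have finite variance.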
Perhaps it shouldn’t come as a surprise that the push to find regularity in randomness came from the study of gambling…
Read on for the fascinating story of: “The Math That Explains Why Bell Curves Are Everywhere,” from @quantamagazine.bsky.social.
Howlett concludes by observing that “The central limit theorem is a pillar of modern science, ultimately, because it’s a pillar of the world around us. When we combine lots of independent measurements, we get clusters. And if we’re clever enough, we can use those clusters to find out something interesting about the processes that made them”– which follows from the story he shares.
Still, we’d do well to remember that there are limits to its applicability, both descriptively (as Nassim Nicholas Taleb points out, the bell curve “ignores large deviations, cannot handle them, yet makes us confident that we have tamed uncertainty”) and prescriptively (as Benjamin Bloom argues, “The bell-shaped curve is not sacred. It describes the outcome of a random process. Since education is a purposeful activity… the achievement distribution should be very different from the normal curve if our instruction is effective.”).
For (much) more, see Peter Bernstein‘s wonderful Against the Gods: The Remarkable Story of Risk.
* Robert A. Heinlein, Time Enough for Love
###
As we noodle on the normal distribution, we might send curve-shattering birthday greetings to Norman Borlaug; he was born on this date in 1914. An agronomist, he developed and led initiatives worldwide that contributed to the voluminous increases in agricultural production we call “the Green Revolution.” Borlaug was awarded multiple honors for his work, including the Nobel Peace Prize, the Presidential Medal of Freedom, and the Congressional Gold Medal; he’s one of only seven people to have received all three of those awards.
“Beauty is the first test: there is no permanent place in this world for ugly mathematics”*…
Is mathematical beauty real? Or is it just a subjective, human ‘wow’ that is becoming redundant in an AI age? Rita Ahmadi explores…
It is a hot July day in London and I take the bus to Bloomsbury. I often come here for the British Library, the British Museum or the London Review Bookshop. More than a location, Bloomsbury feels like stepping into a work of art – maybe one of Virginia Woolf’s stories, or Duncan Grant’s paintings.
This time, I am here for mathematics: the Hardy Lecture at the London Mathematical Society (LMS), named after G H Hardy, a professor of mathematics at the University of Cambridge, a member of the Bloomsbury Group, and a president of the LMS. You may know him from the film The Man Who Knew Infinity (2015), in which he’s played by Jeremy Irons.
The 2025 lecture is by Emily Riehl of Johns Hopkins University in Baltimore, who is talking about a complex mathematical ‘language’ known as infinity category theory: could we teach it to computers so that they could understand it? If successful, computer programs could verify proofs and construct complex structures in this area.
A few seats to my left, I recognise Kevin Buzzard, wearing the multi-coloured, patterned trousers he’s known for among mathematicians. Based at Imperial College London, Buzzard is working on a computer proof assistant called Lean. His interest is personal: after long disputes with a colleague over a flawed proof, he lost trust, as he often puts it, in ‘human mathematicians’. His mission now is to convince all mathematicians to write their proofs in Lean. In the Q&A after one of his talks, he said of the debate between truth and beauty in mathematics: ‘I reject beauty, I want rigour’ – though his vibrant sense of fashion suggests otherwise.
Interest in an AI-driven approach to mathematics has grown exponentially, and many mathematicians have left traditional academic research to explore its potential. Recently, one group of distinguished mathematicians designed 10 active, research-level questions for AI to tackle. At the time of writing, various AI companies and researchers had claimed to find solutions, which were under evaluation by the community.
Sitting in the room in Bloomsbury, I stared at the Hardy plaque and wondered: would Hardy find proofs generated by AI beautiful? I wasn’t sure. He believed there should be a strong aesthetic judgment in mathematics, drawing parallels with poetry, and argued that beauty is the first test of good mathematics. He went as far as to say that there is no permanent place in the world for ugly mathematics.
If asked, many mathematicians today still talk about the aesthetic appeal of one approach over another.
Yet we live in a different century to Hardy and his Bloomsbury peers, with different technologies and techniques, so perhaps we need a clearer definition of what mathematical beauty actually is. Over the history of mathematics, we can find examples where both rigour and the pursuit of beauty have shaped mathematics itself. So, if we’re completely replacing this with a computer-assisted quest for truth and rigour, we ought to know what we’d be abandoning, if anything. Is mathematical beauty like the beauty in literature and art – or is it something else?…
[Ahmadi explores the idea of “beauty,” generally and in mathematics; traces the rise of AI as a tool, and concludes…]
… my own definition of beauty in mathematics would be as follows:
“A simple mathematical structure that surprises even the most experienced mathematicians and transfers a sense of vitality.”
But is an AI-assisted proof simple or surprising? How do we define vitality in a machine? On these questions, the jury is out. Myself, I am torn. Maybe models just need more training to match our creativity. But I also wonder whether our limbic system is required. Can we write proofs without emotional kicks? I am also unsure if perfectly efficient brains can come up with novel revolutionary ideas.
Ultimately, this debate is about more than aesthetics; it is closely tied to the development of AI-assisted mathematics. If AI models can produce novel mathematical structures, how should we direct them? Is it a search for beautiful or truthful structures? A question that possibly guides the years to come.
Some mathematicians say they prefer the ‘truth’ and only the ‘truth’. However, my recent discussions with mathematicians showed me that most immediately recognise, enjoy, and even wholeheartedly smile at a beautiful piece of maths. In fact, they spend their whole lives in search of one…
Fascinating: “The eye of the mathematician,” from @ritaahmadi.bsky.social in @aeon.co.
###
As we embrace elegance, we might send gracefully-calculated birthday greetings to Eduard Heine; he was born on this date in 1821. A mathematician, he is best remembered for his introduction of the concept of uniform continuity, for the Mehler–Heine formula, and for the Heine–Cantor theorem… all of them, quite beautiful.
“I am never forget the day I first meet the great Lobachevsky. / In one word he told me secret of success in mathematics: / Plagiarize!”*…
In an 1874 paper, Georg Cantor proved that there are different sizes of infinity and changed math forever. But as Joseph Howlett reports, a trove of newly unearthed letters shows that it was also an act of plagiarism…
When Demian Goos followed Karin Richter into her office on March 12 of last year, the first thing he noticed was the bust. It sat atop a tall pedestal in the corner of the room, depicting a bald, elderly gentleman with a stoic countenance. Goos saw no trace of the anxious, lonely man who had obsessed him for over a year.
Instead, this was Georg Cantor as history saw him. An intellectual giant: steadfast, strong-willed, determined to bring about a mathematical revolution over the clamorous objections of his peers.
It was here, at the University of Halle in Germany, that Cantor launched his revolution 150 years ago. Here, in 1874, he published one of the most important papers in math’s 4,000-year history. That paper crystallized a concept that had long been viewed as a mathematical malignancy to be shunned at all costs: infinity. It forced mathematicians to question some of their longest-held assumptions, rocking mathematics to its very foundations. And it gave rise to a new field of study that would eventually bring about a rewriting of the entire subject.
Now Goos, a 35-year-old mathematician and journalist, had come to Halle — a five-hour train ride from his home in Mainz — to look at some letters from Cantor’s estate. He’d seen a scan of one and was pretty sure he knew what the others would say. But he wanted to see them in person.
Richter — who, like Cantor, had spent her entire career here, first as a research mathematician and then, after retiring, as a lecturer on the history of mathematics — gestured for Goos to sit. She lifted a thin blue binder from the scattered piles of books and papers on her desk. Inside were dozens of plastic sheet protectors, each one containing an old, handwritten letter.
Goos began flipping through, contemplating the letters with the relish of an archaeologist entering a long-lost tomb. Then he reached a particular page and froze. He struggled to catch his breath.
It wasn’t the handwriting. At this point in his research on Cantor, he’d become accustomed to the strange, nearly indecipherable Gothic script known as kurrentschrift, which Germans used until around 1900.
It wasn’t the signature. He knew that the German mathematician Richard Dedekind had been a key player in Cantor’s quest to understand infinity and solidify math’s foundations, and that the two had exchanged many letters.
It was the date: November 30, 1873.
He’d never seen this letter before. No one had. It was believed to be lost, destroyed in the tumult of World War II or perhaps by Cantor himself.
This was the letter that had the power to rewrite Cantor’s legacy. The letter that proved once and for all that Cantor’s famous 1874 paper, the one that would go on to reshape all of mathematics, had been an act of plagiarism…
The extraordinary story of unearthing this extraordinary story: “The Man Who Stole Infinity,” from @quantamagazine.bsky.social.
See also: “How Can Infinity Come in Many Sizes?“
* Tom Lehrer (not just a glorious songwriter, but also a gifted mathematician), “Lobachevsky” (referring to the mathematician Nikolai Ivanovich Lobachevsky — the song was “not intended as a slur on [Lobachevsky’s] character,” the name being chosen “solely for prosodic reasons”)
###
As we confer credit where credit is due, we might spare a thought for Charles-Jean Étienne Gustave Nicolas, baron de la Vallée Poussin; he died on this date in 1962. A Belgian mathematician, he is best known for proving the prime number theorem (which formalized the intuitive idea that primes become less common as they become larger by precisely quantifying the rate at which this occurs). So great was the contribution that the King of Belgium ennobled him with the title of baron.
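The theorem’s quantitative claim — that the count of primes up to x, written π(x), approaches x/ln(x) — can be checked directly. A minimal sketch in Python (standard library only; my own illustration, not part of de la Vallée Poussin’s proof, of course):

```python
import math

def prime_count(n):
    """Count the primes <= n with a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Strike out every multiple of p, starting at p*p.
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

# The prime number theorem: pi(x) ~ x / ln(x), so this ratio tends to 1.
for x in (10_000, 100_000, 1_000_000):
    print(x, prime_count(x) / (x / math.log(x)))  # ratio drifts toward 1
```

The ratio’s slow drift toward 1 is exactly the “primes become less common at a precisely quantifiable rate” of the theorem; de la Vallée Poussin (and, independently, Hadamard) proved that the drift really does converge.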
“Never tell me the odds!”*…
How likely is it that one will be born on a Leap Day? That one will find a pearl in an oyster? That one will solve Wordle on the first guess? That one will die in a tornado? That two people will share the same fingerprint?
The good folks at R74n (@r74n.com) have these probabilities– and so many more: “What Are The Odds?”
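The first of those odds yields to a short calculation. A sketch using Python’s `fractions` module (my own arithmetic, and it assumes births fall uniformly across the calendar, which real birth records don’t quite satisfy):

```python
from fractions import Fraction

# Back-of-the-envelope: one Feb 29 in every 4-year stretch of days.
naive = Fraction(1, 4 * 365 + 1)  # 1/1461

# Gregorian calendar: 97 leap years per 400-year cycle
# (century years are skipped unless divisible by 400).
days_in_cycle = 400 * 365 + 97    # 146,097 days
gregorian = Fraction(97, days_in_cycle)

print(naive, float(naive))          # 1/1461, about 0.068%
print(gregorian, float(gregorian))  # slightly lower, about 0.066%
```

The exact-fraction arithmetic makes the tiny gap between the folk answer (1 in 1,461) and the true calendar odds visible without any floating-point fuzz.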
(Image above– and tutorial on the odds ratio: source)
* Han Solo (Harrison Ford) in Star Wars: Episode V– The Empire Strikes Back
###
As we place our bets, we might spare a thought for Harvey Kurtzman; he died on this date in 1993. A cartoonist and editor, he is best known for writing and editing the parodic comic book Mad from 1952 until 1956. Kurtzman scripted every story in the first twenty-three issues. (The New York Times‘ obituary for Kurtzman in 1993, alluding to the role of publisher William Gaines, said Kurtzman had “helped found Mad Magazine.” This prompted an angry response to the newspaper from Art Spiegelman, who complained that awarding Kurtzman partial credit for starting Mad was “like saying Michelangelo helped paint the Sistine Chapel just because some Pope owned the ceiling.”)
Kurtzman, who mentored many younger cartoonists (including Terry Gilliam and Robert Crumb), is considered, with cartoonists like Will Eisner, Jack Kirby, and Carl Barks, one of the defining creators of the Golden Age of American comic books. The prestigious Harvey Awards (for achievement in comic books) are named in his honor.