Posts Tagged ‘Discovery’
“In mathematics, the art of proposing a question must be held of higher value than solving it”*…
Matteo Wong talks with mathematician Terence Tao about the advent of AI in mathematical research and finds that Tao has some very big questions indeed…
Terence Tao, a mathematics professor at UCLA, is a real-life superintelligence. The “Mozart of Math,” as he is sometimes called, is widely considered the world’s greatest living mathematician. He has won numerous awards, including the equivalent of a Nobel Prize for mathematics, for his advances and proofs. Right now, AI is nowhere close to his level.
But technology companies are trying to get it there. Recent, attention-grabbing generations of AI—even the almighty ChatGPT—were not built to handle mathematical reasoning. They were instead focused on language: When you asked such a program to answer a basic question, it did not understand and execute an equation or formulate a proof, but instead presented an answer based on which words were likely to appear in sequence. For instance, the original ChatGPT can’t add or multiply, but has seen enough examples of algebra to solve x + 2 = 4: “To solve the equation x + 2 = 4, subtract 2 from both sides …” Now, however, OpenAI is explicitly marketing a new line of “reasoning models,” known collectively as the o1 series, for their ability to problem-solve “much like a person” and work through complex mathematical and scientific tasks and queries. If these models are successful, they could represent a sea change for the slow, lonely work that Tao and his peers do.
After I saw Tao post his impressions of o1 online—he compared it to a “mediocre, but not completely incompetent” graduate student—I wanted to understand more about his views on the technology’s potential. In a Zoom call last week, he described a kind of AI-enabled, “industrial-scale mathematics” that has never been possible before: one in which AI, at least in the near future, is not a creative collaborator in its own right so much as a lubricant for mathematicians’ hypotheses and approaches. This new sort of math, which could unlock terrae incognitae of knowledge, will remain human at its core, embracing how people and machines have very different strengths that should be thought of as complementary rather than competing…
A sample of what follows…
The classic idea of math is that you pick some really hard problem, and then you have one or two people locked away in the attic for seven years just banging away at it. The types of problems you want to attack with AI are the opposite. The naive way you would use AI is to feed it the most difficult problem that we have in mathematics. I don’t think that’s going to be super successful, and also, we already have humans that are working on those problems.
… Tao: The type of math that I’m most interested in is math that doesn’t really exist. The project that I launched just a few days ago is about an area of math called universal algebra, which is about whether certain mathematical statements or equations imply that other statements are true. The way people have studied this in the past is that they pick one or two equations and they study them to death, like how a craftsperson used to make one toy at a time, then work on the next one. Now we have factories; we can produce thousands of toys at a time. In my project, there’s a collection of about 4,000 equations, and the task is to find connections between them. Each is relatively easy, but there’s a million implications. There’s like 10 points of light, 10 equations among these thousands that have been studied reasonably well, and then there’s this whole terra incognita.
There are other fields where this transition has happened, like in genetics. It used to be that if you wanted to sequence a genome of an organism, this was an entire Ph.D. thesis. Now we have these gene-sequencing machines, and so geneticists are sequencing entire populations. You can do different types of genetics that way. Instead of narrow, deep mathematics, where an expert human works very hard on a narrow scope of problems, you could have broad, crowdsourced problems with lots of AI assistance that are maybe shallower, but at a much larger scale. And it could be a very complementary way of gaining mathematical insight.
Wong: It reminds me of how an AI program made by Google DeepMind, called AlphaFold, figured out how to predict the three-dimensional structure of proteins, which was for a long time something that had to be done one protein at a time.
Tao: Right, but that doesn’t mean protein science is obsolete. You have to change the problems you study. A hundred and fifty years ago, mathematicians’ primary usefulness was in solving partial differential equations. There are computer packages that do this automatically now. Six hundred years ago, mathematicians were building tables of sines and cosines, which were needed for navigation, but these can now be generated by computers in seconds.
I’m not super interested in duplicating the things that humans are already good at. It seems inefficient. I think at the frontier, we will always need humans and AI. They have complementary strengths. AI is very good at converting billions of pieces of data into one good answer. Humans are good at taking 10 observations and making really inspired guesses…
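The implication-hunting Tao describes can be illustrated in miniature. One workhorse technique in such projects is brute-force search over tiny models: a single finite counterexample refutes an implication between equations outright (while proving an implication takes more work). The sketch below is purely illustrative — it is not the project’s actual code — and asks whether commutativity implies associativity over all binary operations on a two-element set:

```python
from itertools import product

# A binary operation on {0, 1} is just a lookup table: op[(x, y)] -> value.
def all_ops(n=2):
    pairs = list(product(range(n), repeat=2))
    for values in product(range(n), repeat=len(pairs)):
        yield dict(zip(pairs, values))

def commutative(op, n=2):
    return all(op[(x, y)] == op[(y, x)]
               for x in range(n) for y in range(n))

def associative(op, n=2):
    return all(op[(op[(x, y)], z)] == op[(x, op[(y, z)])]
               for x in range(n) for y in range(n) for z in range(n))

# Does "x*y = y*x" imply "(x*y)*z = x*(y*z)"?  Any operation that is
# commutative but not associative refutes the implication.
counterexamples = [op for op in all_ops()
                   if commutative(op) and not associative(op)]
print(len(counterexamples))  # prints 2: NOR and NAND are commutative but not associative
```

A search like this only refutes; when no counterexample turns up on small models, the implication stays open and must be settled by proof — which is where the crowdsourced, machine-checked side of such a project comes in.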
Terence Tao, the world’s greatest living mathematician, has a vision for AI: “We’re Entering Uncharted Territory for Math,” from @matteo_wong in @TheAtlantic.
* Georg Cantor
###
As we go figure, we might think recursively about Benoit Mandelbrot; he died on this date in 2010. A mathematician (and polymath), his interest in “the art of roughness” of physical phenomena and “the uncontrolled element in life” led to work (which included coining the word “fractal”, as well as developing a theory of “self-similarity” in nature) for which he is known as “the father of fractal geometry.”
“The greatest obstacle to discovery is not ignorance – it is the illusion of knowledge”*…
Learning from the past: as John Thornhill explains in his consideration of Jason Roberts‘ Every Living Thing, the rivalry between Buffon and Linnaeus has lessons about disrupters and exploitation…
The aristocratic French polymath Georges-Louis Leclerc, Comte de Buffon chose a good year to die: 1788. Reflecting his status as a star of the Enlightenment and author of 35 popular volumes on natural history, Buffon’s funeral carriage drawn by 14 horses was watched by an estimated 20,000 mourners as it processed through Paris. A grateful Louis XVI had earlier erected a statue of a heroic Buffon in the Jardin du Roi, over which the naturalist had masterfully presided. “All nature bows to his genius,” the inscription read.
The next year the French Revolution erupted. As a symbol of the ancien régime, Buffon was denounced as an enemy of progress, his estates in Burgundy seized, and his son, known as the Buffonet, guillotined. In further insult to his memory, zealous revolutionaries marched through the king’s gardens (nowadays known as the Jardin des Plantes) with a bust of Buffon’s great rival, Carl Linnaeus. They hailed the Swedish scientific revolutionary as a true man of the people.
The intense intellectual rivalry between Buffon and Linnaeus, which still resonates today, is fascinatingly told by the author Jason Roberts in his book Every Living Thing, my holiday reading while staying near Buffon’s birthplace in Burgundy. Natural history, like all history, might be written by the victors, as Roberts argues. And for a long time, Linnaeus’s highly influential, but flawed, views held sway. But the book makes a sympathetic case for the further rehabilitation of the much-maligned Buffon.
The two men were, as Roberts writes, exact contemporaries and polar opposites. While Linnaeus obsessed about classifying all biological species into neat categories with fixed attributes and Latin names (Homo sapiens, for example), Buffon emphasised the vast diversity and constantly changing nature of every living thing.
In Roberts’s telling, Linnaeus emerges as a brilliant but ruthless dogmatist, who ignored inconvenient facts that did not fit his theories and gave birth to racial pseudoscience. But it was Buffon’s painstaking investigations and acceptance of complexity that helped inspire the evolutionary theories of Charles Darwin, who later acknowledged that the Frenchman’s ideas were “laughably like mine”.
In two aspects, at least, this 18th-century scientific clash rhymes with our times. The first is to show how intellectual knowledge can often be a source of financial gain. The discovery of crops and commodities in other parts of the world and the development of new methods of cultivation had a huge impact on the economy in that era. “All that is useful to man originates from these natural objects,” Linnaeus wrote. “In one word, it is the foundation of every industry.”
Great wealth was generated from trade in sugar, potatoes, coffee, tea and cochineal while Linnaeus himself explored ways of cultivating pineapples, strawberries and freshwater pearls.
“In many ways, the discipline of natural history in the 18th century was roughly analogous to technology today: a means of disrupting old markets, creating new ones, and generating fortunes in the process,” Roberts writes. As a former software engineer at Apple and a West Coast resident, Roberts knows the tech industry.
Then as now, the addition of fresh inputs into the economy — whether natural commodities back then or digital data today — can lead to astonishing progress, benefiting millions. But it can also lead to exploitation. As Roberts tells me in a telephone interview, it was the scaling up of the sugar industry in the West Indies that led to the slave trade. “Sometimes we think we are inventing the future when we are retrofitting the past,” he says.
The second resonance with today is the danger of believing we know more than we do. Roberts compares Buffon’s state of “curious unknowing” to the concept of “negative capability” described by the English poet John Keats. In a letter written in 1817, Keats argued that we should resist the temptation to explain away things we do not properly understand and accept “uncertainties, mysteries, doubts, without any irritable reaching after fact and reason.”
Armed today with instant access to information and smart machines, the temptation is to ascribe a rational order to everything, as Linnaeus did. But scientific progress depends on a humble acceptance of relative ignorance and a relentless study of the fabric of reality. The spooky nature of quantum mechanics would have blown Linnaeus’s mind. If Buffon still teaches us anything, it is to study the peculiarity of things as they are, not as we might wish them to be…
“What an epic 18th-century scientific row teaches us today,” @johnthornhillft on @itsJason in @FT (gift link)
Pair with “Frameworks,” from Céline Henne (@celinehenne): “Knowledge is often a matter of discovery. But when the nature of an enquiry itself is at question, it is an act of creation.”
* Daniel J. Boorstin
###
As we embrace the exceptions, we might send carefully coded birthday greetings to John McCarthy; he was born on this date in 1927. An eminent computer and cognitive scientist– he was awarded both the Turing Award and the National Medal of Science– McCarthy coined the phrase “artificial intelligence” to describe the field of which he was a founder.
It was McCarthy’s 1979 article, “Ascribing Mental Qualities to Machines” (in which he wrote, “Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance”) that provoked John Searle‘s 1980 disagreement in the form of his famous Chinese Room Argument… provoking a broad debate that continues to this day.

“If someone separated the art of counting and measuring and weighing from all the other arts, what was left of each (of the others) would be, so to speak, insignificant”*…
Mathematics, Bo Malmberg and Hannes Malmberg argue, was the cornerstone of the Industrial Revolution. A new paradigm of measurement and calculation, more than scientific discovery, built industry, modernity, and the world we inhabit today…
In school, you might have heard that the Industrial Revolution was preceded by the Scientific Revolution, when Newton uncovered the mechanical laws underlying motion and Galileo learned the true shape of the cosmos. Armed with this newfound knowledge and the scientific method, the inventors of the Industrial Revolution created machines – from watches to steam engines – that would change everything.
But was science really the key? Most of the significant inventions of the Industrial Revolution were not undergirded by a deep scientific understanding, and their inventors were not scientists.
The standard chronology ignores many of the important events of the previous 500 years. Widespread trade expanded throughout Europe. Artists began using linear perspective and mathematicians learned to use derivatives. Financiers started joint stock corporations and ships navigated the open seas. Fiscally powerful states were conducting warfare on a global scale.
There is an intellectual thread that runs through all of these advances: measurement and calculation. Geometric calculations led to breakthroughs in painting, astronomy, cartography, surveying, and physics. The introduction of mathematics in human affairs led to advancements in accounting, finance, fiscal affairs, demography, and economics – a kind of social mathematics. All reflect an underlying ‘calculating paradigm’ – the idea that measurement, calculation, and mathematics can be successfully applied to virtually every domain. This paradigm spread across Europe through education, which we can observe by the proliferation of mathematics textbooks and schools. It was this paradigm, more than science itself, that drove progress. It was this mathematical revolution that created modernity…
The fascinating story: “How mathematics built the modern world,” from @bomalmb and @HannesMalmberg1 in @WorksInProgMag.
* Plato
###
As we muse on measurement, we might recall that it was on this date in 1790, early in the French Revolution, that the French Assembly, acting on the urging of Bishop Charles Maurice de Talleyrand, moved to create a new system of weights and measures based on natural units– what we now know as the metric system.
“Foul cankering rust the hidden treasure frets, but gold that’s put to use more gold begets.”*…
The scientific literature is vast. No individual human can fully know all the published research findings, even within a single field of science. As Ulkar Aghayeva explains, regardless of how much time a scientist spends reading the literature, there’ll always be what the information scientist Don Swanson called ‘undiscovered public knowledge’: knowledge that exists and is published somewhere, but still remains largely unknown.
Some scientific papers receive very little attention after their publication – some, indeed, receive no attention whatsoever. Others, though, can languish with few citations for years or decades, but are eventually rediscovered and become highly cited. These are the so-called ‘sleeping beauties’ of science.
The reasons for their hibernation vary. Sometimes it is because contemporaneous scientists lack the tools or practical technology to test the idea. Other times, the scientific community does not understand or appreciate what has been discovered, perhaps because of a lack of theory. Yet other times it’s a more sublunary reason: the paper is simply published somewhere obscure and it never makes its way to the right readers.
What can sleeping beauties tell us about how science works? How do we rediscover information the scientific body of knowledge already contains but that is not widely known? Is it possible that, if we could understand sleeping beauties in a more systematic way, we might be able to accelerate scientific progress?
Sleeping beauties are more common than you might expect.
The term sleeping beauties was coined by Anthony van Raan, a researcher in quantitative studies of science, in 2004. In his study, he identified sleeping beauties between 1980 and 2000 based on three criteria: first, the length of their ‘sleep’, during which they received few if any citations; second, the depth of that sleep – the average number of citations during the sleeping period; and third, the intensity of their awakening – the number of citations that came in the four years after the sleeping period ended. Equipped with (somewhat arbitrarily chosen) thresholds for these criteria, van Raan identified sleeping beauties at a rate of about 0.01 percent of all published papers in a given year.
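Van Raan’s three criteria amount to a simple classifier over a paper’s citation history. A minimal sketch, with illustrative threshold values standing in for his exact 2004 cutoffs (which are not given here):

```python
def is_sleeping_beauty(citations_per_year, sleep_years=10,
                       max_avg_asleep=1.0, min_avg_awake=5.0):
    """Classify a yearly citation series (starting the year after
    publication) using van Raan-style criteria.  The thresholds are
    illustrative stand-ins, not van Raan's actual values."""
    if len(citations_per_year) < sleep_years + 4:
        return False  # not enough history to judge a 4-year awakening
    asleep = citations_per_year[:sleep_years]                # length of sleep
    awake = citations_per_year[sleep_years:sleep_years + 4]  # awakening window
    deep_sleep = sum(asleep) / len(asleep) <= max_avg_asleep   # depth of sleep
    intense_wake = sum(awake) / len(awake) >= min_avg_awake    # intensity of awakening
    return deep_sleep and intense_wake

# A decade of silence followed by a citation surge qualifies;
# a paper cited steadily from the start does not.
print(is_sleeping_beauty([0] * 10 + [8, 12, 15, 20]))  # True
print(is_sleeping_beauty([3] * 14))                    # False
```

However the thresholds are set, the classification stays somewhat arbitrary — which is exactly the caveat van Raan attached to his own estimate.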
Later studies hinted that sleeping beauties are even more common than that. A systematic study in 2015, using data from 384,649 papers published in American Physical Society journals, along with 22,379,244 papers from the search engine Web of Science, found a wide, continuous range of delayed recognition of papers in all scientific fields. This raised the estimated share of sleeping beauties at least 100-fold compared with van Raan’s figure.
Many of those papers became highly influential many decades after their publication – far longer than the typical time windows for measuring citation impact. For example, Herbert Freundlich’s paper ‘Concerning Adsorption in Solutions’ (though its original title is in German) was published in 1907, but began being regularly cited in the early 2000s due to its relevance to new water purification technologies. William Hummers and Richard Offeman’s ‘Preparation of Graphitic Oxide’, published in 1958, also didn’t ‘awaken’ until the 2000s: in this case because it was very relevant to the creation of the soon-to-be Nobel Prize–winning material graphene…
Indeed, one of the most famous physics papers, Albert Einstein, Boris Podolsky, and Nathan Rosen (EPR)’s ‘Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?’ (1935) is a classic example of a sleeping beauty…
More examples, and explanation of why they slumber, and thoughts on how to awaken them sooner: “Waking up science’s sleeping beauties,” from @ulkar_aghayeva in @WorksInProgMag.
* Shakespeare, “Venus and Adonis”
###
As we dwell on discovery, we might send healing birthday greetings to a woman whose scientific work thankfully rarely napped, Gertrude Elion; she was born on this date in 1918. A pharmacologist, she shared the 1988 Nobel Prize in Physiology or Medicine with George H. Hitchings and Sir James Black for their use of innovative methods of rational drug design (focused on understanding the target of the drug rather than simply using trial-and-error) in the development of new drugs. Her work led to the creation of the anti-retroviral drug AZT, which was the first drug widely used against AIDS. Her well-known and widely deployed creations also include the first immunosuppressive drug, azathioprine, used to fight rejection in organ transplants; the first successful antiviral drug, acyclovir (ACV), used in the treatment of herpes infection; and a number of drugs used in cancer treatment.