(Roughly) Daily

Posts Tagged ‘Gödel’

“For what man in the natural state or course of thinking did ever conceive it in his power to reduce the notions of all mankind exactly to the same length, and breadth, and height of his own? Yet this is the first humble and civil design of all innovators in the empire of reason.”*…

A “theory of everything” (a Grand Unified Theory on steroids)– a (still hypothetical) coherent theoretical framework of physics containing and explaining all physical principles– is the holy grail of physicists. Natalie Wolchover checks in on the most recent front-runner in the hunt…

Fifty-eight years after it first appeared, string theory remains the most popular candidate for the “theory of everything,” the unified mathematical framework for all matter and forces in the universe. This is much to the chagrin of its rather vocal critics. “String theory is not dead; it’s undead and now walks around like a zombie eating people’s brains,” the former physicist Sabine Hossenfelder said on her popular YouTube channel in 2024.

String theory is a “failure,” the mathematical physicist and blogger Peter Woit often says. His complaint is not that string theory is wrong — it’s that it’s “not even wrong,” as he titled a 2006 book. The theory says that, on scales of billionths of trillionths of trillionths of a centimeter, extra curled-up spatial dimensions reveal themselves and particles resolve into extended objects — strands and loops of energy — rather than points. But this alleged substructure is too small to detect, probably ever. The prediction is untestable.

A further problem is that uncountably many different configurations of dimensions and strings are permitted at those tiny scales; the theory can give rise to a limitless variety of universes. Amid this vast landscape of solutions, no one can hope to find a precise microscopic configuration that undergirds our particular macroscopic world.

These issues are profound indeed. Yet in my experience, the typical high-energy theorist in a prestigious university physics department still thinks string theory has a good chance of being correct, at least in part. The field has become siloed between those who deem it worth studying and those who don’t.

Recently, a new angle of attack has opened up. An approach called bootstrapping has allowed physicists to calculate that, under various starting assumptions about the universe, a key equation from string theory naturally follows. For some experts, these findings support the notion of “string uniqueness,” the idea that it is the only mathematically consistent quantum description of gravity and everything else.

Responding to one bootstrap paper on her YouTube channel, mere weeks after the “undead” comment, Hossenfelder said it was “string theorists do[ing] something sensible for once.” She added, “I’d say this paper strengthens the argument for string theory.”

Not everyone agrees, but the findings are reviving an important question. “This question of ‘Does string theory describe the world?’ has just been so taboo,” said Cliff Cheung, a physicist at the California Institute of Technology and an author of the paper discussed by Hossenfelder. Now, “people are actually thinking about it for the first time in decades.”

Getting wind of this work, I wanted to drill down on the logic and examine how the string hypothesis is faring these days…

And so she does: “Are Strings Still Our Best Hope for a Theory of Everything?” from @nattyover.bsky.social in @quantamagazine.bsky.social. Eminently worth reading in full.

Compare/contrast with: “Where Some See Strings, She Sees a Space-Time Made of Fractals.”

* Jonathan Swift, A Tale of a Tub

###

As we grapple with Gödel, we might spare a thought for Hermann Rorschach; he died on this date in 1922. A psychiatrist and psychoanalyst, his education in art helped to spur the development of a set of inkblots that were used experimentally to measure various unconscious parts of the subject’s personality. Rorschach knew the human tendency to project interpretations and feelings onto ambiguous stimuli and believed that the subjective responses of his subjects enabled him to distinguish among them on the basis of their perceptive abilities, intelligence, and emotional characteristics. His method has come to be known as the Rorschach test, iterations of which have continued to be used over the years to help identify personality, psychotic, and neurological disorders.

Perhaps his insight that we humans tend “to project interpretations and feelings onto ambiguous stimuli” can inform our understanding of physicists trying to construct mental/conceptual models of our reality, which they’ve been doing for a very long time, and of the limitations of that quest.

source

“This incompleteness is all we have”*…


The impulse to “systematize” morality is as old as philosophy. Many now hope that AI will discover and organize moral truths. But Elad Uzan suggests that Kurt Gödel’s work on incompleteness demonstrates that deciding what is right will always be our burden…

Imagine a world in which artificial intelligence is entrusted with the highest moral responsibilities: sentencing criminals, allocating medical resources, and even mediating conflicts between nations. This might seem like the pinnacle of human progress: an entity unburdened by emotion, prejudice or inconsistency, making ethical decisions with impeccable precision. Unlike human judges or policymakers, a machine would not be swayed by personal interests or lapses in reasoning. It does not lie. It does not accept bribes or pleas. It does not weep over hard decisions.

Yet beneath this vision of an idealised moral arbiter lies a fundamental question: can a machine understand morality as humans do, or is it confined to a simulacrum of ethical reasoning? AI might replicate human decisions without improving on them, carrying forward the same biases, blind spots and cultural distortions from human moral judgment. In trying to emulate us, it might only reproduce our limitations, not transcend them. But there is a deeper concern. Moral judgment draws on intuition, historical awareness and context – qualities that resist formalisation. Ethics may be so embedded in lived experience that any attempt to encode it into formal structures risks flattening its most essential features. If so, AI would not merely reflect human shortcomings; it would strip morality of the very depth that makes ethical reflection possible in the first place.

Still, many have tried to formalise ethics, by treating certain moral claims not as conclusions, but as starting points. A classic example comes from utilitarianism, which often takes as a foundational axiom the principle that one should act to maximise overall wellbeing. From this, more specific principles can be derived, for example, that it is right to benefit the greatest number, or that actions should be judged by their consequences for total happiness. As computational resources increase, AI becomes increasingly well-suited to the task of starting from fixed ethical assumptions and reasoning through their implications in complex situations.
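The division of labor described here– fixed axioms in, derived verdicts out– can be made concrete with a toy sketch. In this hypothetical Python fragment (the actions, people, and wellbeing numbers are all invented for illustration; nothing here is a real ethics engine), a single utilitarian axiom is applied mechanically:

```python
# A deliberately crude sketch: one fixed utilitarian axiom ("maximize total
# wellbeing"), applied mechanically. Actions and wellbeing scores are
# invented for illustration.

def total_wellbeing(outcomes):
    """Sum an action's (hypothetical) wellbeing effects across people."""
    return sum(outcomes.values())

def choose_action(actions):
    """The axiom, applied without judgment or the capacity to question it:
    pick whichever action maximizes total wellbeing."""
    return max(actions, key=lambda name: total_wellbeing(actions[name]))

actions = {
    "fund_clinic": {"alice": 5, "bob": 3, "carol": 1},  # total 9
    "fund_school": {"alice": 2, "bob": 4, "carol": 4},  # total 10
    "do_nothing":  {"alice": 0, "bob": 0, "carol": 0},  # total 0
}

print(choose_action(actions))  # → fund_school
```

The machine reasons flawlessly *within* the axiom– the question the essay presses is whether the axiom itself can ever be fully justified from inside the system.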

But what, exactly, does it mean to formalise something like ethics? The question is easier to grasp by looking at fields in which formal systems have long played a central role. Physics, for instance, has relied on formalisation for centuries. There is no single physical theory that explains everything. Instead, we have many physical theories, each designed to describe specific aspects of the Universe: from the behaviour of quarks and electrons to the motion of galaxies. These theories often diverge. Aristotelian physics, for instance, explained falling objects in terms of natural motion toward Earth’s centre; Newtonian mechanics replaced this with a universal force of gravity. These explanations are not just different; they are incompatible. Yet both share a common structure: they begin with basic postulates – assumptions about motion, force or mass – and derive increasingly complex consequences. Isaac Newton’s laws of motion and James Clerk Maxwell’s equations are classic examples: compact, elegant formulations from which wide-ranging predictions about the physical world can be deduced.

Ethical theories have a similar structure. Like physical theories, they attempt to describe a domain – in this case, the moral landscape. They aim to answer questions about which actions are right or wrong, and why. These theories also diverge and, even when they recommend similar actions, such as giving to charity, they justify them in different ways. Ethical theories also often begin with a small set of foundational principles or claims, from which they reason about more complex moral problems. A consequentialist begins with the idea that actions should maximise wellbeing; a deontologist starts from the idea that actions must respect duties or rights. These basic commitments function similarly to their counterparts in physics: they define the structure of moral reasoning within each ethical theory.

Just as AI is used in physics to operate within existing theories – for example, to optimise experimental designs or predict the behaviour of complex systems – it can also be used in ethics to extend moral reasoning within a given framework. In physics, AI typically operates within established models rather than proposing new physical laws or conceptual frameworks. It may calculate how multiple forces interact and predict their combined effect on a physical system. Similarly, in ethics, AI does not generate new moral principles but applies existing ones to novel and often intricate situations. It may weigh competing values – fairness, harm minimisation, justice – and assess their combined implications for what action is morally best. The result is not a new moral system, but a deepened application of an existing one, shaped by the same kind of formal reasoning that underlies scientific modelling. But is there an inherent limit to what AI can know about morality? Could there be true ethical propositions that no machine, no matter how advanced, can ever prove?

These questions echo a fundamental discovery in mathematical logic, probably the most fundamental insight ever to be proven: Kurt Gödel’s incompleteness theorems. They show that any logical system powerful enough to describe arithmetic is either inconsistent or incomplete. In this essay, I argue that this limitation, though mathematical in origin, has deep consequences for ethics, and for how we design AI systems to reason morally…

Eminently worth reading in full: “The incompleteness of ethics,” from @aeon.co.

And as if that were not enough, consider the cultural challenge implicit in this chart:

More background at “Cultural Bias in LLMs” (and here and here).

* Charles Bukowski

###

As we own up to it, we might recall that it was on this date in 1942 that actress Hedy Lamarr and musician George Antheil received a patent (#2,292,387) for a frequency-hopping radio communication system which later became the basis for modern technologies like Bluetooth, wireless telephones, and Wi-Fi.

Hedy Lamarr made it big in acting before ever moving to the United States. Her role in the Czech film Ecstasy got international attention in 1933 for containing scandalous, intimate scenes that were unheard of in the movie industry up until then.

Backlash from her early acting career was the least of her worries, however, as tensions began to rise in Europe. Lamarr, born Hedwig Eva Maria Kiesler, grew up in a Catholic household in Austria, but both of her parents had a Jewish heritage. In addition, she was married to Friedrich Mandl, a rich ammunition manufacturer with connections to both Fascist Italy and Nazi Germany.  

Her time with Friedrich Mandl was bittersweet. While the romance quickly died and Mandl became very possessive of his young wife, Lamarr was often taken to meetings on scientific innovations in the military world. These meetings are said to have been the spark that led to her becoming an inventor. As tensions in both her household and in the world around her became overwhelming, she fled Europe and found her way to the United States through a job offer from Hollywood’s MGM Studios.

Lamarr became one of the most sought-after leading women in Hollywood and starred in popular movies like the 1939 film Algiers, but once the United States began helping the Allies and preparing to possibly enter the war, Lamarr almost left Hollywood forever. Her eyes were no longer fixed on the bright lights of the film set but on the flashes of bombs and gunfire. Lamarr wanted to join the Inventors’ Council in Washington, DC, where she thought she would be of better service to the war effort.

Lamarr’s path to inventing the cornerstone of Wi-Fi began when she heard about the Navy’s difficulties with radio-controlled torpedoes. She recruited George Antheil, a composer she met through MGM Studios, in order to create what was known as a Secret Communication System.

The idea behind the invention was to create a system that constantly changed frequencies, making it difficult for the Axis powers to decode the radio messages. The invention would help the Navy make their torpedo systems become more stealthy and make it less likely for the torpedoes to be rendered useless by enemies. 
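The scheme is easy to sketch in software terms, though the 1942 patent was of course electromechanical– synchronized by piano-roll-like mechanisms, not code. In this toy Python sketch (the message, seed, and API are illustrative), sender and receiver share a secret seed and so generate the same hop sequence, while an eavesdropper camped on any single frequency hears only fragments:

```python
import random

# Minimal illustration of frequency hopping. The shared seed plays the role
# of the patent's synchronized piano rolls: both ends derive the same hop
# sequence over 88 channels (the number of piano keys, per the patent).

N_CHANNELS = 88

def hop_sequence(seed, length):
    rng = random.Random(seed)
    return [rng.randrange(N_CHANNELS) for _ in range(length)]

def transmit(message, seed):
    """Pair each symbol with the channel it is broadcast on."""
    return list(zip(hop_sequence(seed, len(message)), message))

def receive(bursts, seed):
    """A receiver with the same seed knows where to listen at every step."""
    expected = hop_sequence(seed, len(bursts))
    return "".join(sym for (ch, sym), exp in zip(bursts, expected) if ch == exp)

def eavesdrop(bursts, channel):
    """An eavesdropper parked on one channel catches only stray symbols."""
    return "".join(sym for ch, sym in bursts if ch == channel)

bursts = transmit("ATTACK AT DAWN", seed=42)
print(receive(bursts, seed=42))  # → ATTACK AT DAWN
```

The synchronized-seed trick is the whole idea: intercepting any one channel yields noise, while the intended receiver reconstructs the message perfectly.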

Lamarr was the brains behind the invention, with her background knowledge in ammunition, and Antheil was the artist that brought it to life, using the piano for inspiration. In 1942, under her then-married name, Hedy Kiesler Markey, she filed for a patent for the Secret Communication System, patent case file 2,292,387, and proposed it to the Navy.

The first part of Lamarr and Antheil’s Secret Communication System story did not see a happy Hollywood ending. The Navy refused to accept the new technology during World War II. Not only did the invention come from a civilian, but it was complex and ahead of its time.  

As the invention sat unused, Lamarr continued on in Hollywood and found other ways to help with the war effort, such as working with the USO. It wasn’t until Lamarr’s Hollywood career came to an end that her invention started gaining notice.  

Around the time Lamarr filmed her last scene with the 1958 film The Female Animal, her patented invention caught the attention of other innovators in technology. The Secret Communication System saw use in the 1950s during the development of CDMA network technology in the private sector, while the Navy officially adopted the technology in the 1960s around the time of the Cuban Missile Crisis. The methods described in the patent assisted greatly in the development of Bluetooth and Wi-Fi.

Despite the world finally embracing the methods of the patent as early as the mid-to-late 1950s, the Lamarr-Antheil duo were not recognized and awarded for their invention until the late 1990s and early 2000s. They both received the Electronic Frontier Foundation Pioneer Award and the Bulbie Gnass Spirit of Achievement Bronze Award, and in 2014 they were inducted into the National Inventors Hall of Fame…

– National Archive


source

Patent illustration for the Secret Communication System invented by Hedy Kiesler Markey and George Antheil, featuring technical drawings and specifications, filed on June 10, 1941, and issued on August 11, 1942.

source

“As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality”*…


As Gregory Barber explains, two new notions of infinity challenge a long-standing plan to define the mathematical universe…

It was minus 20 degrees Celsius, and while some went cross-country skiing, Juan Aguilera, a set theorist at the Vienna University of Technology, preferred to linger in the cafeteria, tearing pieces of pulla pastry and debating the nature of two new notions of infinity. The consequences, Aguilera believed, were grand. “We just don’t know what they are yet,” he said.

Infinity, counterintuitively, comes in many shapes and sizes. This has been known since the 1870s, when the German mathematician Georg Cantor proved that the set of real numbers (all the numbers on the number line) is larger than the set of whole numbers, even though both sets are infinite. (The short version: No matter how you try to match real numbers to whole numbers, you’ll always end up with more real numbers.) The two sets, Cantor argued, represented entirely different flavors of infinity and therefore had profoundly different properties.
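The “short version” above is Cantor’s diagonal argument, and a finite fragment of it can be sketched in a few lines of Python (the listed digit strings are arbitrary stand-ins for an attempted enumeration of the reals):

```python
# A finite glimpse of Cantor's diagonal argument: given any attempted list
# matching whole numbers 0, 1, 2, ... to reals (shown as digit strings),
# build a number that differs from the n-th entry at its n-th digit --
# so it cannot appear anywhere in the list.

def diagonal_escape(listed_reals):
    """Digits of a real missing from the list: flip each diagonal digit
    (any rule that avoids the original digit works)."""
    return "".join("5" if digits[i] != "5" else "6"
                   for i, digits in enumerate(listed_reals))

attempt = [
    "141592653",   # truncated digit strings, purely for illustration
    "718281828",
    "414213562",
    "302585092",
]

escapee = diagonal_escape(attempt)
# The new number disagrees with entry n at position n:
assert all(escapee[i] != attempt[i][i] for i in range(len(attempt)))
print(escapee)  # → 5556
```

No matter how the list is chosen, the escapee slips through– which is why the reals outnumber the whole numbers.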

From there, Cantor constructed larger infinities, too. He took the set of real numbers, built a new set out of all of its subsets, then proved that this new set was bigger than the original set of real numbers. And when he took all the subsets of this new set, he got an even bigger set. In this way, he built infinitely many sets, each larger than the last. He referred to the different sizes of these infinite sets as cardinal numbers (not to be confused with the ordinary cardinals 1, 2, 3…).

Set theorists have continued to define cardinals that are far more exotic and difficult to describe than Cantor’s. In doing so, they’ve discovered something surprising: These “large cardinals” fall into a remarkably neat hierarchy. They can be clearly defined in terms of size and complexity. Together, they form a massive tower of infinities that set theorists then use to probe the boundaries of what’s mathematically possible.

But the two new cardinals that Aguilera was pondering in the Arctic cold behaved oddly. He had recently constructed them, along with Joan Bagaria of the University of Barcelona and Philipp Lücke of the University of Hamburg, only to find that they didn’t quite fit into the usual hierarchy. Instead, they “exploded,” Aguilera said, creating a new class of infinities that their colleagues hadn’t bargained on — and implying that far more chaos abounds in mathematics than expected.

It’s a provocative claim. The prospect is, to some, exciting. “I love this paper,” said Toby Meadows, a logician and philosopher at the University of California, Irvine. “It seems like real progress — a really interesting insight that we didn’t have before.”

But it’s also difficult to really know whether the claim is true. That’s the nature of studying infinity. If mathematics is a tapestry sewn together by traditional assumptions that everyone agrees on, the higher reaches of the infinite are its tattered fringes. Set theorists working in these extreme areas operate in a space where the traditional axioms used to write mathematical proofs do not always apply, and where new axioms must be written — and often break down.

Up here, most questions are fundamentally unprovable, and uncertainty reigns. And so to some, the new cardinals don’t change anything. “I don’t buy it at all,” said Hugh Woodin, a set theorist at Harvard University who is currently leading the quest to fully define the mathematical universe. Woodin was Bagaria’s doctoral adviser 35 years ago and Aguilera’s in the 2010s. But his students are cutting their own path through infinity’s thickets. “Your children grow up and defy you,” Woodin said…

More on the fascinating state of play at: “Is Mathematics Mostly Chaos or Mostly Order?” from @GregoryJBarber in @quantamagazine.bsky.social.

* Albert Einstein

###

As we get down with Gödel, we might send insightful birthday greetings to John Allen Paulos; he was born on this date in 1945. A mathematician, he is best known as an advocate for– and a skilled teacher of– mathematical literacy. His book Innumeracy: Mathematical Illiteracy and its Consequences (1988) was a bestseller, and A Mathematician Reads the Newspaper (1995) extended the critique. Paulos was a regular columnist for both The Guardian and ABC News. And in 2001 he created and taught a course on quantitative literacy for journalists at the Columbia University School of Journalism– an exercise that stimulated further programs at Columbia and elsewhere in precision and data-driven journalism.


source

Happy 4th of July to readers in the U.S… but are we commemorating the right day?

Written by (Roughly) Daily

July 4, 2025 at 1:00 am

“A proof tells us where to concentrate our doubts”*…

Andrew Granville at work

Number theorist Andrew Granville on what mathematics really is, on why objectivity is never quite within reach, and on the role that AI might play…

… What is a mathematical proof? We tend to think of it as a revelation of some eternal truth, but perhaps it is better understood as something of a social construct.

Andrew Granville, a mathematician at the University of Montreal, has been thinking about that a lot recently. After being contacted by a philosopher about some of his writing, “I got to thinking about how we arrive at our truths,” he said. “And once you start pushing at that door, you find it’s a vast subject.”

“How mathematicians go about research isn’t generally portrayed well in popular media. People tend to see mathematics as this pure quest, where we just arrive at great truths by pure thought alone. But mathematics is about guesses — often wrong guesses. It’s an experimental process. We learn in stages…

Quanta spoke with Granville about the nature of mathematical proof — from how proofs work in practice to popular misconceptions about them, to how proof-writing might evolve in the age of artificial intelligence…

[excerpts from that interview follow…]


The culture of mathematics is all about proof. We sit around and think, and 95% of what we do is proof. A lot of the understanding we gain is from struggling with proofs and interpreting the issues that come up when we struggle with them…

The main point of a proof is to persuade the reader of the truth of an assertion. That means verification is key. The best verification system we have in mathematics is that lots of people look at a proof from different perspectives, and it fits well in a context that they know and believe. In some sense, we’re not saying we know it’s true. We’re saying we hope it’s correct, because lots of people have tried it from different perspectives. Proofs are accepted by these community standards.

Then there’s this notion of objectivity — of being sure that what is claimed is right, of feeling like you have an ultimate truth. But how can we know we’re being objective? It’s hard to take yourself out of the context in which you’ve made a statement — to have a perspective outside of the paradigm that has been put in place by society. This is just as true for scientific ideas as it is for anything else…

[Granville runs through a history of the proof, from Aristotle, through Euclid, to Hilbert, then Russell and Whitehead, ending with Gödel…]

To discuss mathematics, you need a language, and a set of rules to follow in that language. In the 1930s, Gödel proved that no matter how you select your language, there are always statements in that language that are true but that can’t be proved from your starting axioms. It’s actually more complicated than that, but still, you have this philosophical dilemma immediately: What is a true statement if you can’t justify it? It’s crazy.

So there’s a big mess. We are limited in what we can do.

Professional mathematicians largely ignore this. We focus on what’s doable. As Peter Sarnak likes to say, “We’re working people.” We get on and try to prove what we can…

[Granville then turns to computers…]

We’ve moved to a different place, where computers can do some wild things. Now people say, oh, we’ve got this computer, it can do things people can’t. But can it? Can it actually do things people can’t? Back in the 1950s, Alan Turing said that a computer is designed to do what humans can do, just faster. Not much has changed.

For decades, mathematicians have been using computers — to make calculations that can help guide their understanding, for instance. What AI can do that’s new is to verify what we believe to be true. Some terrific developments have happened with proof verification. Like [the proof assistant] Lean, which has allowed mathematicians to verify many proofs, while also helping the authors better understand their own work, because they have to break down some of their ideas into simpler steps to feed into Lean for verification.
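For readers curious what that breaking-down into simpler steps looks like, here is a toy Lean 4 proof (illustrative only– not drawn from any of the verification projects Granville mentions). Even the schoolroom fact that 0 + n = n must be justified step by step, because Lean’s definition of addition recurses on the right-hand argument:

```lean
-- A toy Lean 4 proof, illustrative only. "0 + n = n" is not true *by
-- definition* (Nat addition recurses on its right argument), so even this
-- schoolroom fact must be built up by induction on n.
theorem zero_add' : ∀ n : Nat, 0 + n = n
  | 0     => rfl                               -- base case: 0 + 0 = 0 holds by definition
  | n + 1 => congrArg Nat.succ (zero_add' n)   -- step: lift the result for n through succ
```

Lean’s kernel accepts only such fully spelled-out chains of reasoning– which is exactly the decomposition into simpler steps that Granville says helps authors better understand their own proofs.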

But is this foolproof? Is a proof a proof just because Lean agrees it’s one? In some ways, it’s as good as the people who convert the proof into inputs for Lean. Which sounds very much like how we do traditional mathematics. So I’m not saying that I believe something like Lean is going to make a lot of errors. I’m just not sure it’s any more secure than most things done by humans…

Perhaps it could assist in creating a proof. Maybe in five years’ time, I’ll be saying to an AI model like ChatGPT, “I’m pretty sure I’ve seen this somewhere. Would you check it out?” And it’ll come back with a similar statement that’s correct.

And then once it gets very, very good at that, perhaps you could go one step further and say, “I don’t know how to do this, but is there anybody who’s done something like this?” Perhaps eventually an AI model could find skilled ways to search the literature to bring tools to bear that have been used elsewhere — in a way that a mathematician might not foresee.

However, I don’t understand how ChatGPT can go beyond a certain level to do proofs in a way that outstrips us. ChatGPT and other machine learning programs are not thinking. They are using word associations based on many examples. So it seems unlikely that they will transcend their training data. But if that were to happen, what will mathematicians do? So much of what we do is proof. If you take proofs away from us, I’m not sure who we become…

Eminently worth reading in full: “Why Mathematical Proof Is a Social Compact,” in @QuantaMagazine.

* Morris Kline

###

As we add it up, we might send carefully calculated birthday greetings to Edward G. Begle; he was born on this date in 1914. A mathematician who was an accomplished topologist, he is best remembered for his role as the director of the School Mathematics Study Group (SMSG), the primary group credited for developing what came to be known as The New Math (a pedagogical response to Sputnik, taught in American grade schools from the late 1950s through the 1970s)… which will be well-known to (if not necessarily fondly recalled by) readers of a certain age.

source

“Machines take me by surprise with great frequency”*…

In search of universals in the 17th century, Gottfried Leibniz imagined the calculus ratiocinator, a theoretical logical calculation framework aimed at universal application, that led Norbert Wiener to suggest that Leibniz should be considered the patron saint of cybernetics. In the 19th century, Charles Babbage and Ada Lovelace took a pair of whacks at making it real.

Ironically, it was confronting the impossibility of a universal calculator that led to modern computing. In 1936 (the same year that Charlie Chaplin released Modern Times) Alan Turing (following on Gödel’s demonstration that mathematics is incomplete and addressing Hilbert’s “decision problem,” querying the limits of computation) published the (notional) design of a “machine” that elegantly demonstrated those limits– and, as Sheon Han explains, birthed computing as we know it…

… [Hilbert’s] question would lead to a formal definition of computability, one that allowed mathematicians to answer a host of new problems and laid the foundation for theoretical computer science.

The definition came from a 23-year-old grad student named Alan Turing, who in 1936 wrote a seminal paper that not only formalized the concept of computation, but also proved a fundamental question in mathematics and created the intellectual foundation for the invention of the electronic computer. Turing’s great insight was to provide a concrete answer to the computation question in the form of an abstract machine, later named the Turing machine by his doctoral adviser, Alonzo Church. It’s abstract because it doesn’t (and can’t) physically exist as a tangible device. Instead, it’s a conceptual model of computation: If the machine can calculate a function, then the function is computable.

With his abstract machine, Turing established a model of computation to answer the Entscheidungsproblem, which formally asks: Given a set of mathematical axioms, is there a mechanical process — a set of instructions, which today we’d call an algorithm — that can always determine whether a given statement is true?…

… in 1936, Church and Turing — using different methods — independently proved that there is no general way of solving every instance of the Entscheidungsproblem. For example, some games, such as John Conway’s Game of Life, are undecidable: No algorithm can determine whether a certain pattern will appear from an initial pattern.
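Conway’s Game of Life makes the point vivid: its update rule fits in a dozen lines of Python, yet the question of whether a pattern will ever appear is undecidable. A minimal sketch:

```python
from collections import Counter

# Conway's Game of Life in a few lines: the rule is trivial to *simulate*,
# yet (per Church and Turing) no algorithm can decide, for every starting
# pattern, whether a given configuration will ever show up.

def step(live):
    """Advance a set of live (x, y) cells by one generation."""
    neighbors = Counter((x + dx, y + dy)
                        for x, y in live
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
    # A cell is alive next turn if it has 3 neighbors, or 2 and is alive now.
    return {cell for cell, n in neighbors.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2 -- easy to verify by running forward...
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
# ...but running forward can never settle the question for all patterns.
```

Simulation answers any *particular* bounded question; what Church and Turing ruled out is a single procedure that answers them all.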

Beyond answering these fundamental questions, Turing’s machine also led directly to the development of modern computers, through a variant known as the universal Turing machine. This is a special kind of Turing machine that can simulate any other Turing machine on any input. It can read a description of other Turing machines (their rules and input tapes) and simulate their behaviors on its own input tape, producing the same output that the simulated machine would produce, just as today’s computers can read any program and execute it. In 1945, John von Neumann proposed a computer architecture — called the von Neumann architecture — that made the universal Turing machine concept possible in a real-life machine…
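The abstract machine itself is small enough to sketch. Below is a minimal Python simulator of the model (the rule-table format and the bit-flipping example machine are illustrative conveniences, not Turing’s 1936 notation):

```python
# A minimal Turing machine simulator. A machine is just a rule table:
# (state, symbol) -> (symbol to write, head move, next state).
# This example machine inverts a binary string, then halts at the blank.

def run(tape, rules, state="start", halt="halt", max_steps=10_000):
    cells = dict(enumerate(tape))   # sparse tape; "_" is the blank symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

invert = {  # flip every bit, halt at the first blank
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("1011", invert))  # → 0100
```

A *universal* machine is then one whose rule table, given a description of `invert` (or any other table) on its tape, reproduces that machine’s behavior– precisely the read-any-program-and-execute-it trick that today’s computers perform.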

As Turing said, “if a machine is expected to be infallible, it cannot also be intelligent.” On the importance of thought experiments: “The Most Important Machine That Was Never Built,” from @sheonhan in @QuantaMagazine.

* Alan Turing

###

As we sum it up, we might spare a thought for Martin Gardner; he died on this date in 2010.  Though not an academic, nor ever a formal student of math or science, he wrote widely and prolifically on both subjects in such popular books as The Ambidextrous Universe and The Relativity Explosion and as the “Mathematical Games” columnist for Scientific American. Indeed, his elegant– and understandable– puzzles delighted professional and amateur readers alike, and helped inspire a generation of young mathematicians.

Gardner’s interests were wide; in addition to the math and science that were his power alley, he studied and wrote on topics that included magic, philosophy, religion, and literature (cf., especially, his work on Lewis Carroll– including the delightful Annotated Alice– and on G.K. Chesterton). And he was a fierce debunker of pseudoscience: a founding member of CSICOP, and contributor of a monthly column (“Notes of a Fringe Watcher,” from 1983 to 2002) in Skeptical Inquirer, that organization’s monthly magazine.

 source

Written by (Roughly) Daily

May 22, 2023 at 1:00 am