Posts Tagged ‘Godel’
“For what man in the natural state or course of thinking did ever conceive it in his power to reduce the notions of all mankind exactly to the same length, and breadth, and height of his own? Yet this is the first humble and civil design of all innovators in the empire of reason.”*…
A “theory of everything” (a Grand Unified Theory on steroids)– a (still hypothetical) coherent theoretical framework of physics containing and explaining all physical principles– is the holy grail of physicists. Natalie Wolchover checks in on the most recent front-runner in the hunt…
Fifty-eight years after it first appeared, string theory remains the most popular candidate for the “theory of everything,” the unified mathematical framework for all matter and forces in the universe. This is much to the chagrin of its rather vocal critics. “String theory is not dead; it’s undead and now walks around like a zombie eating people’s brains,” the former physicist Sabine Hossenfelder said on her popular YouTube channel in 2024.
String theory is a “failure,” the mathematical physicist and blogger Peter Woit often says. His complaint is not that string theory is wrong — it’s that it’s “not even wrong,” as he titled a 2006 book. The theory says that, on scales of billionths of trillionths of trillionths of a centimeter, extra curled-up spatial dimensions reveal themselves and particles resolve into extended objects — strands and loops of energy — rather than points. But this alleged substructure is too small to detect, probably ever. The prediction is untestable.
A further problem is that countless different configurations of dimensions and strings are permitted at those tiny scales; the theory can give rise to a limitless variety of universes. Amid this vast landscape of solutions, no one can hope to find the precise microscopic configuration that undergirds our particular macroscopic world.
These issues are profound indeed. Yet in my experience, the typical high-energy theorist in a prestigious university physics department still thinks string theory has a good chance of being correct, at least in part. The field has become siloed between those who deem it worth studying and those who don’t.
Recently, a new angle of attack has opened up. An approach called bootstrapping has allowed physicists to calculate that, under various starting assumptions about the universe, a key equation from string theory naturally follows. For some experts, these findings support the notion of “string uniqueness,” the idea that it is the only mathematically consistent quantum description of gravity and everything else.
Responding to one bootstrap paper on her YouTube channel, mere weeks after the “undead” comment, Hossenfelder said it was “string theorists do[ing] something sensible for once.” She added, “I’d say this paper strengthens the argument for string theory.”
Not everyone agrees, but the findings are reviving an important question. “This question of ‘Does string theory describe the world?’ has just been so taboo,” said Cliff Cheung, a physicist at the California Institute of Technology and an author of the paper discussed by Hossenfelder. Now, “people are actually thinking about it for the first time in decades.”
Getting wind of this work, I wanted to drill down on the logic and examine how the string hypothesis is faring these days…
And so she does: “Are Strings Still Our Best Hope for a Theory of Everything?” from @nattyover.bsky.social in @quantamagazine.bsky.social. Eminently worth reading in full.
Compare/contrast with: “Where Some See Strings, She Sees a Space-Time Made of Fractals.”
* Jonathan Swift, A Tale of a Tub
###
As we grapple with Gödel, we might spare a thought for Hermann Rorschach; he died on this date in 1922. A psychiatrist and psychoanalyst, he drew on his education in art in developing a set of inkblots that were used experimentally to measure unconscious aspects of a subject’s personality. Rorschach understood the human tendency to project interpretations and feelings onto ambiguous stimuli, and believed that his subjects’ subjective responses enabled him to distinguish among them on the basis of their perceptive abilities, intelligence, and emotional characteristics. His method has come to be known as the Rorschach test, iterations of which have continued to be used over the years to help identify personality, psychotic, and neurological disorders.
Perhaps his insight that we humans tend “to project interpretations and feelings onto ambiguous stimuli” can inform our understanding of physicists trying to construct mental/conceptual models of our reality, which they’ve been doing for a very long time, and of the limitations of that quest.
“This incompleteness is all we have”*…
The impulse to “systematize” morality is as old as philosophy. Many now hope that AI will discover and organize moral truths. But Elad Uzan suggests that Kurt Gödel’s work on incompleteness demonstrates that deciding what is right will always be our burden…
Imagine a world in which artificial intelligence is entrusted with the highest moral responsibilities: sentencing criminals, allocating medical resources, and even mediating conflicts between nations. This might seem like the pinnacle of human progress: an entity unburdened by emotion, prejudice or inconsistency, making ethical decisions with impeccable precision. Unlike human judges or policymakers, a machine would not be swayed by personal interests or lapses in reasoning. It does not lie. It does not accept bribes or pleas. It does not weep over hard decisions.
Yet beneath this vision of an idealised moral arbiter lies a fundamental question: can a machine understand morality as humans do, or is it confined to a simulacrum of ethical reasoning? AI might replicate human decisions without improving on them, carrying forward the same biases, blind spots and cultural distortions from human moral judgment. In trying to emulate us, it might only reproduce our limitations, not transcend them. But there is a deeper concern. Moral judgment draws on intuition, historical awareness and context – qualities that resist formalisation. Ethics may be so embedded in lived experience that any attempt to encode it into formal structures risks flattening its most essential features. If so, AI would not merely reflect human shortcomings; it would strip morality of the very depth that makes ethical reflection possible in the first place.
Still, many have tried to formalise ethics, by treating certain moral claims not as conclusions, but as starting points. A classic example comes from utilitarianism, which often takes as a foundational axiom the principle that one should act to maximise overall wellbeing. From this, more specific principles can be derived, for example, that it is right to benefit the greatest number, or that actions should be judged by their consequences for total happiness. As computational resources increase, AI becomes increasingly well-suited to the task of starting from fixed ethical assumptions and reasoning through their implications in complex situations.
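To make that structure concrete, here is a toy sketch in Python (an editorial illustration, not anything from Uzan’s essay): a single utilitarian axiom, encoded as a function, mechanically applied to rank candidate actions. The actions, affected parties, and utility numbers are all invented.

```python
# A toy "fixed axiom -> derived verdicts" sketch. Everything named here
# (actions, parties, utility values) is hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utilities: dict[str, float]  # utility change per affected party

def total_wellbeing(action: Action) -> float:
    """The utilitarian axiom, formalised: sum utility over everyone affected."""
    return sum(action.utilities.values())

def choose(actions: list[Action]) -> Action:
    # Mechanical derivation from the axiom: pick the maximiser.
    return max(actions, key=total_wellbeing)

actions = [
    Action("fund_clinic", {"patients": 8.0, "taxpayers": -2.0}),
    Action("fund_road",   {"commuters": 5.0, "taxpayers": -1.0}),
]

best = choose(actions)
print(best.name, total_wellbeing(best))  # -> fund_clinic 6.0
```

The point is structural: once the axiom is fixed, everything downstream is derivation, and derivation is exactly the kind of reasoning machines scale well.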
But what, exactly, does it mean to formalise something like ethics? The question is easier to grasp by looking at fields in which formal systems have long played a central role. Physics, for instance, has relied on formalisation for centuries. There is no single physical theory that explains everything. Instead, we have many physical theories, each designed to describe specific aspects of the Universe: from the behaviour of quarks and electrons to the motion of galaxies. These theories often diverge. Aristotelian physics, for instance, explained falling objects in terms of natural motion toward Earth’s centre; Newtonian mechanics replaced this with a universal force of gravity. These explanations are not just different; they are incompatible. Yet both share a common structure: they begin with basic postulates – assumptions about motion, force or mass – and derive increasingly complex consequences. Isaac Newton’s laws of motion and James Clerk Maxwell’s equations are classic examples: compact, elegant formulations from which wide-ranging predictions about the physical world can be deduced.
Ethical theories have a similar structure. Like physical theories, they attempt to describe a domain – in this case, the moral landscape. They aim to answer questions about which actions are right or wrong, and why. These theories also diverge and, even when they recommend similar actions, such as giving to charity, they justify them in different ways. Ethical theories also often begin with a small set of foundational principles or claims, from which they reason about more complex moral problems. A consequentialist begins with the idea that actions should maximise wellbeing; a deontologist starts from the idea that actions must respect duties or rights. These basic commitments function similarly to their counterparts in physics: they define the structure of moral reasoning within each ethical theory.
Just as AI is used in physics to operate within existing theories – for example, to optimise experimental designs or predict the behaviour of complex systems – it can also be used in ethics to extend moral reasoning within a given framework. In physics, AI typically operates within established models rather than proposing new physical laws or conceptual frameworks. It may calculate how multiple forces interact and predict their combined effect on a physical system. Similarly, in ethics, AI does not generate new moral principles but applies existing ones to novel and often intricate situations. It may weigh competing values – fairness, harm minimisation, justice – and assess their combined implications for what action is morally best. The result is not a new moral system, but a deepened application of an existing one, shaped by the same kind of formal reasoning that underlies scientific modelling. But is there an inherent limit to what AI can know about morality? Could there be true ethical propositions that no machine, no matter how advanced, can ever prove?
These questions echo a fundamental discovery in mathematical logic, probably the most fundamental insight ever to be proven: Kurt Gödel’s incompleteness theorems. They show that any logical system powerful enough to describe arithmetic is either inconsistent or incomplete. In this essay, I argue that this limitation, though mathematical in origin, has deep consequences for ethics, and for how we design AI systems to reason morally…
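For reference, here is the first theorem in its standard modern (Rosser-strengthened) textbook form, not quoted from the essay:

```latex
% First incompleteness theorem, standard modern statement.
\textbf{Theorem (G\"odel, 1931).} Let $T$ be a consistent, effectively
axiomatizable formal theory that interprets elementary arithmetic. Then
there is a sentence $G_T$ of the language of $T$ such that
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \lnot G_T ,
\]
so $T$ is incomplete: it can express statements that it can neither
prove nor refute.
```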
Eminently worth reading in full: “The incompleteness of ethics,” from @aeon.co.
And as if that were not enough, consider the cultural challenge implicit in this chart:
More background at “Cultural Bias in LLMs” (and here and here).
* Charles Bukowski
###
As we own up to it, we might recall that it was on this date in 1942 that actress Hedy Lamarr and musician George Antheil received a patent (#2,292,387) for a frequency-hopping radio communication system which later became the basis for modern technologies like Bluetooth, wireless telephones, and Wi-Fi.
Hedy Lamarr made it big in acting before ever moving to the United States. Her role in the Czech film Ecstasy got international attention in 1933 for containing scandalous, intimate scenes that were unheard of in the movie industry up until then.
Backlash from her early acting career was the least of her worries, however, as tensions began to rise in Europe. Lamarr, born Hedwig Eva Maria Kiesler, grew up in a Catholic household in Austria, but both of her parents were of Jewish heritage. In addition, she was married to Friedrich Mandl, a rich ammunition manufacturer with connections to both Fascist Italy and Nazi Germany.
Her time with Friedrich Mandl was bittersweet. While the romance quickly died and Mandl became very possessive of his young wife, Lamarr was often taken to meetings on scientific innovations in the military world. These meetings are said to have been the spark that led to her becoming an inventor. As tensions in both her household and in the world around her became overwhelming, she fled Europe and found her way to the United States through a job offer from Hollywood’s MGM Studios.
Lamarr became one of the most sought-after leading women in Hollywood and starred in popular movies like the 1939 film Algiers, but once the United States began helping the Allies and preparing to possibly enter the war, Lamarr almost left Hollywood forever. Her eyes were no longer fixed on the bright lights of the film set but on the flashes of bombs and gunfire. Lamarr wanted to join the Inventors’ Council in Washington, DC, where she thought she would be of better service to the war effort.
Lamarr’s path to inventing the cornerstone of Wi-Fi began when she heard about the Navy’s difficulties with radio-controlled torpedoes. She recruited George Antheil, a composer she met through MGM Studios, in order to create what was known as a Secret Communication System.
The idea behind the invention was to create a system that constantly changed frequencies, making it difficult for the Axis powers to decode the radio messages. The invention would make the Navy’s radio-controlled torpedoes stealthier and less likely to be rendered useless by enemy jamming.
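The scheme is easy to sketch in software terms. Below is an illustrative Python toy (mine, not the patent’s mechanism: Lamarr and Antheil synchronized their hops with piano-roll-style tapes across 88 channels, not with a seeded random-number generator). Transmitter and receiver derive the same channel schedule from a shared secret, so they stay in step while an eavesdropper parked on one frequency hears only fragments.

```python
# Minimal frequency-hopping sketch (illustrative only; the 1942 patent used
# synchronized mechanical tapes, not a software PRNG). Message and seed are
# invented for the example.
import random

CHANNELS = 88  # the patent used 88 carrier frequencies, matching piano keys

def hop_sequence(seed: int, n_hops: int) -> list[int]:
    """Derive a pseudorandom channel schedule from a shared secret seed."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(n_hops)]

message = "FIRE TORPEDO"   # hypothetical payload, one symbol per hop
shared_seed = 1942         # the shared secret

# Transmitter: put each symbol on the air on its scheduled channel.
schedule = hop_sequence(shared_seed, len(message))
airwaves = {(hop, ch): sym for hop, (ch, sym) in enumerate(zip(schedule, message))}

# Receiver: re-derive the same schedule and follow the hops to reassemble.
received = "".join(
    airwaves[(hop, ch)]
    for hop, ch in enumerate(hop_sequence(shared_seed, len(message)))
)
assert received == message

# An eavesdropper fixed on one channel catches at most a character or two.
overheard = [sym for (hop, ch), sym in airwaves.items() if ch == 40]
print(received, "| overheard on channel 40:", overheard)
```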
Lamarr, drawing on her background knowledge of munitions, was the brains behind the invention; Antheil, taking his inspiration from the player piano, was the artist who brought it to life. In 1942, under her then-married name, Hedy Kiesler Markey, she filed for a patent for the Secret Communication System, patent case file 2,292,387, and proposed it to the Navy.
The first part of Lamarr and Antheil’s Secret Communication System story did not see a happy Hollywood ending. The Navy refused to accept the new technology during World War II. Not only did the invention come from a civilian, but it was complex and ahead of its time.
As the invention sat unused, Lamarr continued on in Hollywood and found other ways to help with the war effort, such as working with the USO. It wasn’t until Lamarr’s Hollywood career came to an end that her invention started gaining notice.
Around the time Lamarr filmed her last scene in the 1958 film The Female Animal, her patented invention caught the attention of other innovators in technology. The Secret Communication System saw use in the 1950s during the development of CDMA network technology in the private sector, while the Navy officially adopted the technology in the 1960s around the time of the Cuban Missile Crisis. The methods described in the patent assisted greatly in the development of Bluetooth and Wi-Fi.
Despite the world embracing the methods of the patent as early as the mid-to-late 1950s, the Lamarr-Antheil duo were not recognized or rewarded for their invention until the late 1990s and early 2000s. They both received the Electronic Frontier Foundation Pioneer Award and the Bulbie Gnass Spirit of Achievement Bronze Award, and in 2014 they were inducted into the National Inventors Hall of Fame…

“As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality”*…
As Gregory Barber explains, two new notions of infinity challenge a long-standing plan to define the mathematical universe…
It was minus 20 degrees Celsius, and while some went cross-country skiing, Juan Aguilera, a set theorist at the Vienna University of Technology, preferred to linger in the cafeteria, tearing pieces of pulla pastry and debating the nature of two new notions of infinity. The consequences, Aguilera believed, were grand. “We just don’t know what they are yet,” he said.
Infinity, counterintuitively, comes in many shapes and sizes. This has been known since the 1870s, when the German mathematician Georg Cantor proved that the set of real numbers (all the numbers on the number line) is larger than the set of whole numbers, even though both sets are infinite. (The short version: No matter how you try to match real numbers to whole numbers, you’ll always end up with more real numbers.) The two sets, Cantor argued, represented entirely different flavors of infinity and therefore had profoundly different properties.
From there, Cantor constructed larger infinities, too. He took the set of real numbers, built a new set out of all of its subsets, then proved that this new set was bigger than the original set of real numbers. And when he took all the subsets of this new set, he got an even bigger set. In this way, he built infinitely many sets, each larger than the last. He referred to the different sizes of these infinite sets as cardinal numbers (not to be confused with the ordinary cardinals 1, 2, 3…).
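Both steps rest on the same diagonal argument, which is short enough to state in full (standard textbook form, not Cantor’s original notation):

```latex
% Cantor's theorem and the diagonal argument, in modern notation.
\textbf{Theorem (Cantor).} For every set $S$ there is no surjection
$f \colon S \to \mathcal{P}(S)$; hence $|S| < |\mathcal{P}(S)|$.

\emph{Proof sketch.} Given any $f$, form the diagonal set
\[
  D = \{\, x \in S : x \notin f(x) \,\}.
\]
If $D = f(d)$ for some $d \in S$, then $d \in D \iff d \notin f(d) = D$,
a contradiction; so $D$ is not in the range of $f$. Iterating
$S,\ \mathcal{P}(S),\ \mathcal{P}(\mathcal{P}(S)),\ \dots$ produces the
strictly growing tower of infinities described above. \qed
```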
Set theorists have continued to define cardinals that are far more exotic and difficult to describe than Cantor’s. In doing so, they’ve discovered something surprising: These “large cardinals” fall into a remarkably neat hierarchy. They can be clearly defined in terms of size and complexity. Together, they form a massive tower of infinities that set theorists then use to probe the boundaries of what’s mathematically possible.
But the two new cardinals that Aguilera was pondering in the Arctic cold behaved oddly. He had recently constructed them, along with Joan Bagaria of the University of Barcelona and Philipp Lücke of the University of Hamburg, only to find that they didn’t quite fit into the usual hierarchy. Instead, they “exploded,” Aguilera said, creating a new class of infinities that their colleagues hadn’t bargained on — and implying that far more chaos abounds in mathematics than expected.
It’s a provocative claim. The prospect is, to some, exciting. “I love this paper,” said Toby Meadows, a logician and philosopher at the University of California, Irvine. “It seems like real progress — a really interesting insight that we didn’t have before.”
But it’s also difficult to really know whether the claim is true. That’s the nature of studying infinity. If mathematics is a tapestry sewn together by traditional assumptions that everyone agrees on, the higher reaches of the infinite are its tattered fringes. Set theorists working in these extreme areas operate in a space where the traditional axioms used to write mathematical proofs do not always apply, and where new axioms must be written — and often break down.
Up here, most questions are fundamentally unprovable, and uncertainty reigns. And so to some, the new cardinals don’t change anything. “I don’t buy it at all,” said Hugh Woodin, a set theorist at Harvard University who is currently leading the quest to fully define the mathematical universe. Woodin was Bagaria’s doctoral adviser 35 years ago and Aguilera’s in the 2010s. But his students are cutting their own path through infinity’s thickets. “Your children grow up and defy you,” Woodin said…
More on the fascinating state of play at: “Is Mathematics Mostly Chaos or Mostly Order?” from @GregoryJBarber in @quantamagazine.bsky.social.
* Albert Einstein
###
As we get down with Gödel, we might send insightful birthday greetings to John Allen Paulos; he was born on this date in 1945. A mathematician, he is best known as an advocate for– and a skilled teacher of– mathematical literacy. His book Innumeracy: Mathematical Illiteracy and its Consequences (1988) was a bestseller, and A Mathematician Reads the Newspaper (1995) extended the critique. Paulos was a regular columnist for both The Guardian and ABC News. And in 2001 he created and taught a course on quantitative literacy for journalists at the Columbia University School of Journalism– an exercise that stimulated further programs in precision and data-driven journalism at Columbia and elsewhere.
Happy 4th of July to readers in the U.S… but are we commemorating the right day?
“Machines take me by surprise with great frequency”*…
In search of universals in the 17th century, Gottfried Leibniz imagined the calculus ratiocinator, a theoretical logical calculation framework aimed at universal application, that led Norbert Wiener to suggest that Leibniz should be considered the patron saint of cybernetics. In the 19th century, Charles Babbage and Ada Lovelace took a pair of whacks at making it real.
Ironically, it was confronting the impossibility of a universal calculator that led to modern computing. In 1936 (the same year that Charlie Chaplin released Modern Times), Alan Turing– following on Gödel’s demonstration that mathematics is incomplete, and addressing Hilbert’s “decision problem” about the limits of computation– published the (notional) design of a “machine” that elegantly demonstrated those limits– and, as Sheon Han explains, birthed computing as we know it…
… [Hilbert’s] question would lead to a formal definition of computability, one that allowed mathematicians to answer a host of new problems and laid the foundation for theoretical computer science.
The definition came from a 23-year-old grad student named Alan Turing, who in 1936 wrote a seminal paper that not only formalized the concept of computation, but also settled a fundamental question in mathematics and created the intellectual foundation for the invention of the electronic computer. Turing’s great insight was to provide a concrete answer to the computation question in the form of an abstract machine, later named the Turing machine by his doctoral adviser, Alonzo Church. It’s abstract because it doesn’t (and can’t) physically exist as a tangible device. Instead, it’s a conceptual model of computation: If the machine can calculate a function, then the function is computable.
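To make the abstraction tangible, here is a minimal Turing machine simulator in Python (an editorial sketch; Turing’s paper of course used no such notation). A machine is just a finite transition table acting on an unbounded tape; the toy machine below flips every bit of its input and halts.

```python
# A minimal Turing machine simulator (illustrative sketch, not Turing's
# original formalism). A machine is a transition table:
#   (state, symbol_read) -> (symbol_to_write, head_move, next_state)
from collections import defaultdict

def run(transitions, tape_input, start="q0", halt="halt", max_steps=10_000):
    tape = defaultdict(lambda: "_", enumerate(tape_input))  # "_" = blank cell
    state, head = start, 0
    for _ in range(max_steps):  # step bound: a nod to the halting problem
        if state == halt:
            break
        symbol = tape[head]
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells).strip("_")

# Example machine: flip every bit, halt at the first blank.
flip = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
print(run(flip, "10110"))  # -> 01001
```

Note that the simulator itself is, in miniature, the “universal machine” idea discussed below: one program that reads any transition table and executes it.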
…
With his abstract machine, Turing established a model of computation to answer the Entscheidungsproblem, which formally asks: Given a set of mathematical axioms, is there a mechanical process — a set of instructions, which today we’d call an algorithm — that can always determine whether a given statement is true?…
… in 1936, Church and Turing — using different methods — independently proved that there is no general way of solving every instance of the Entscheidungsproblem. For example, some games, such as John Conway’s Game of Life, are undecidable: No algorithm can determine whether a certain pattern will appear from an initial pattern.
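The flavor of such undecidability proofs fits in a few lines. The sketch below is the standard diagonal argument rendered in Python syntax (not drawn from the article): assume a perfect decider halts() exists, then build a program that defeats it. Nothing here is meant to execute; halts() is a hypothetical oracle assumed only for the sake of contradiction.

```python
# The classic diagonalization behind the halting problem, as a sketch.
def halts(program, argument) -> bool:
    """HYPOTHETICAL oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("no total, correct decider can exist")

def trouble(program):
    # Do the opposite of whatever halts() predicts about program(program).
    if halts(program, program):
        while True:       # predicted to halt? loop forever instead
            pass
    return "done"         # predicted to loop? halt immediately

# Ask: does trouble(trouble) halt? If halts(trouble, trouble) is True,
# trouble(trouble) loops forever; if False, it halts at once. Either way
# halts() is wrong -- so no algorithm can decide halting in general.
```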
…
Beyond answering these fundamental questions, Turing’s machine also led directly to the development of modern computers, through a variant known as the universal Turing machine. This is a special kind of Turing machine that can simulate any other Turing machine on any input. It can read a description of other Turing machines (their rules and input tapes) and simulate their behaviors on its own input tape, producing the same output that the simulated machine would produce, just as today’s computers can read any program and execute it. In 1945, John von Neumann proposed a computer architecture — called the von Neumann architecture — that made the universal Turing machine concept possible in a real-life machine…
As Turing said, “if a machine is expected to be infallible, it cannot also be intelligent.” On the importance of thought experiments: “The Most Important Machine That Was Never Built,” from @sheonhan in @QuantaMagazine.
* Alan Turing
###
As we sum it up, we might spare a thought for Martin Gardner; he died on this date in 2010. Though not an academic, nor ever a formal student of math or science, he wrote widely and prolifically on both subjects in such popular books as The Ambidextrous Universe and The Relativity Explosion and as the “Mathematical Games” columnist for Scientific American. Indeed, his elegant– and understandable– puzzles delighted professional and amateur readers alike, and helped inspire a generation of young mathematicians.
Gardner’s interests were wide; in addition to the math and science that were his power alley, he studied and wrote on topics that included magic, philosophy, religion, and literature (cf. especially his work on Lewis Carroll– including the delightful Annotated Alice— and on G.K. Chesterton). And he was a fierce debunker of pseudoscience: a founding member of CSICOP, and contributor of a monthly column (“Notes of a Fringe Watcher,” from 1983 to 2002) to Skeptical Inquirer, that organization’s magazine.