Posts Tagged ‘AI’
“In mathematics, the art of proposing a question must be held of higher value than solving it”*…
Matteo Wong talks with mathematician Terence Tao about the advent of AI in mathematical research and finds that Tao has some very big questions indeed…
Terence Tao, a mathematics professor at UCLA, is a real-life superintelligence. The “Mozart of Math,” as he is sometimes called, is widely considered the world’s greatest living mathematician. He has won numerous awards, including the equivalent of a Nobel Prize for mathematics, for his advances and proofs. Right now, AI is nowhere close to his level.
But technology companies are trying to get it there. Recent, attention-grabbing generations of AI—even the almighty ChatGPT—were not built to handle mathematical reasoning. They were instead focused on language: When you asked such a program to answer a basic question, it did not understand and execute an equation or formulate a proof, but instead presented an answer based on which words were likely to appear in sequence. For instance, the original ChatGPT can’t add or multiply, but has seen enough examples of algebra to solve x + 2 = 4: “To solve the equation x + 2 = 4, subtract 2 from both sides …” Now, however, OpenAI is explicitly marketing a new line of “reasoning models,” known collectively as the o1 series, for their ability to problem-solve “much like a person” and work through complex mathematical and scientific tasks and queries. If these models are successful, they could represent a sea change for the slow, lonely work that Tao and his peers do.
After I saw Tao post his impressions of o1 online—he compared it to a “mediocre, but not completely incompetent” graduate student—I wanted to understand more about his views on the technology’s potential. In a Zoom call last week, he described a kind of AI-enabled, “industrial-scale mathematics” that has never been possible before: one in which AI, at least in the near future, is not a creative collaborator in its own right so much as a lubricant for mathematicians’ hypotheses and approaches. This new sort of math, which could unlock terra incognitae of knowledge, will remain human at its core, embracing how people and machines have very different strengths that should be thought of as complementary rather than competing…
A sample of what follows…
The classic idea of math is that you pick some really hard problem, and then you have one or two people locked away in the attic for seven years just banging away at it. The types of problems you want to attack with AI are the opposite. The naive way you would use AI is to feed it the most difficult problem that we have in mathematics. I don’t think that’s going to be super successful, and also, we already have humans that are working on those problems.
… Tao: The type of math that I’m most interested in is math that doesn’t really exist. The project that I launched just a few days ago is about an area of math called universal algebra, which is about whether certain mathematical statements or equations imply that other statements are true. The way people have studied this in the past is that they pick one or two equations and they study them to death, like how a craftsperson used to make one toy at a time, then work on the next one. Now we have factories; we can produce thousands of toys at a time. In my project, there’s a collection of about 4,000 equations, and the task is to find connections between them. Each is relatively easy, but there’s a million implications. There’s like 10 points of light, 10 equations among these thousands that have been studied reasonably well, and then there’s this whole terra incognita.
There are other fields where this transition has happened, like in genetics. It used to be that if you wanted to sequence a genome of an organism, this was an entire Ph.D. thesis. Now we have these gene-sequencing machines, and so geneticists are sequencing entire populations. You can do different types of genetics that way. Instead of narrow, deep mathematics, where an expert human works very hard on a narrow scope of problems, you could have broad, crowdsourced problems with lots of AI assistance that are maybe shallower, but at a much larger scale. And it could be a very complementary way of gaining mathematical insight.
Wong: It reminds me of how an AI program made by Google DeepMind, called AlphaFold, figured out how to predict the three-dimensional structure of proteins, which was for a long time something that had to be done one protein at a time.
Tao: Right, but that doesn’t mean protein science is obsolete. You have to change the problems you study. A hundred and fifty years ago, mathematicians’ primary usefulness was in solving partial differential equations. There are computer packages that do this automatically now. Six hundred years ago, mathematicians were building tables of sines and cosines, which were needed for navigation, but these can now be generated by computers in seconds.
I’m not super interested in duplicating the things that humans are already good at. It seems inefficient. I think at the frontier, we will always need humans and AI. They have complementary strengths. AI is very good at converting billions of pieces of data into one good answer. Humans are good at taking 10 observations and making really inspired guesses…
Terence Tao, the world’s greatest living mathematician, has a vision for AI: “We’re Entering Uncharted Territory for Math,” from @matteo_wong in @TheAtlantic.
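For a rough sense of the scale Tao describes: with about 4,000 equations there are millions of candidate implications, one for every ordered pair, and each implication settled (by a person, a proof assistant, or an AI) propagates to many others by transitivity. Below is a minimal Python sketch of that bookkeeping; the equation labels and the handful of “proved” pairs are invented for illustration and are not drawn from Tao’s actual project.

    from itertools import product

    # Hypothetical stand-ins: each "equation" is just a label here; in the real
    # project these would be algebraic laws about a binary operation.
    equations = [f"E{i}" for i in range(1, 11)]   # 10 labels standing in for ~4,000

    # A few implications assumed already proved (purely illustrative).
    proved = {("E1", "E2"), ("E2", "E5"), ("E3", "E4"), ("E5", "E7")}

    def transitive_closure(pairs):
        """If A implies B and B implies C, record that A implies C; repeat to a fixed point."""
        closure = set(pairs)
        changed = True
        while changed:
            changed = False
            for (a, b), (c, d) in product(tuple(closure), repeat=2):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
        return closure

    known = transitive_closure(proved)
    candidates = len(equations) * (len(equations) - 1)   # ordered pairs to settle
    print(f"{len(known)} of {candidates} candidate implications settled so far")

The point isn’t the code, of course; it’s that once each individual implication is cheap to check, the bottleneck becomes exactly this kind of large-scale bookkeeping– the part machines handle effortlessly.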
###
As we go figure, we might think recursively about Benoit Mandelbrot; he died on this date in 2010. A mathematician (and polymath), his interest in “the art of roughness” of physical phenomena and “the uncontrolled element in life” led to work (which included coining the word “fractal”, as well as developing a theory of “self-similarity” in nature) for which he is known as “the father of fractal geometry.”
“The greatest obstacle to discovery is not ignorance – it is the illusion of knowledge”*…
Learning from the past: as John Thornhill explains in his consideration of Jason Roberts’ Every Living Thing, the rivalry between Buffon and Linnaeus has lessons about disrupters and exploitation…
The aristocratic French polymath Georges-Louis Leclerc, Comte de Buffon chose a good year to die: 1788. Reflecting his status as a star of the Enlightenment and author of 35 popular volumes on natural history, Buffon’s funeral carriage drawn by 14 horses was watched by an estimated 20,000 mourners as it processed through Paris. A grateful Louis XVI had earlier erected a statue of a heroic Buffon in the Jardin du Roi, over which the naturalist had masterfully presided. “All nature bows to his genius,” the inscription read.
The next year the French Revolution erupted. As a symbol of the ancien regime, Buffon was denounced as an enemy of progress, his estates in Burgundy seized, and his son, known as the Buffonet, guillotined. In further insult to his memory, zealous revolutionaries marched through the king’s gardens (nowadays known as the Jardin des Plantes) with a bust of Buffon’s great rival, Carl Linnaeus. They hailed the Swedish scientific revolutionary as a true man of the people.
The intense intellectual rivalry between Buffon and Linnaeus, which still resonates today, is fascinatingly told by the author Jason Roberts in his book Every Living Thing, my holiday reading while staying near Buffon’s birthplace in Burgundy. Natural history, like all history, might be written by the victors, as Roberts argues. And for a long time, Linnaeus’s highly influential, but flawed, views held sway. But the book makes a sympathetic case for the further rehabilitation of the much-maligned Buffon.
The two men were, as Roberts writes, exact contemporaries and polar opposites. While Linnaeus obsessed about classifying all biological species into neat categories with fixed attributes and Latin names (Homo sapiens, for example), Buffon emphasised the vast diversity and constantly changing nature of every living thing.
In Roberts’s telling, Linnaeus emerges as a brilliant but ruthless dogmatist, who ignored inconvenient facts that did not fit his theories and gave birth to racial pseudoscience. But it was Buffon’s painstaking investigations and acceptance of complexity that helped inspire the evolutionary theories of Charles Darwin, who later acknowledged that the Frenchman’s ideas were “laughably like mine”.
In two aspects, at least, this 18th-century scientific clash rhymes with our times. The first is to show how intellectual knowledge can often be a source of financial gain. The discovery of crops and commodities in other parts of the world and the development of new methods of cultivation had a huge impact on the economy in that era. “All that is useful to man originates from these natural objects,” Linnaeus wrote. “In one word, it is the foundation of every industry.”
Great wealth was generated from trade in sugar, potatoes, coffee, tea and cochineal while Linnaeus himself explored ways of cultivating pineapples, strawberries and freshwater pearls.
“In many ways, the discipline of natural history in the 18th century was roughly analogous to technology today: a means of disrupting old markets, creating new ones, and generating fortunes in the process,” Roberts writes. As a former software engineer at Apple and a West Coast resident, Roberts knows the tech industry.
Then as now, the addition of fresh inputs into the economy — whether natural commodities back then or digital data today — can lead to astonishing progress, benefiting millions. But it can also lead to exploitation. As Roberts tells me in a telephone interview, it was the scaling up of the sugar industry in the West Indies that led to the slave trade. “Sometimes we think we are inventing the future when we are retrofitting the past,” he says.
The second resonance with today is the danger of believing we know more than we do. Roberts compares Buffon’s state of “curious unknowing” to the concept of “negative capability” described by the English poet John Keats. In a letter written in 1817, Keats argued that we should resist the temptation to explain away things we do not properly understand and accept “uncertainties, mysteries, doubts, without any irritable reaching after fact and reason.”
Armed today with instant access to information and smart machines, the temptation is to ascribe a rational order to everything, as Linnaeus did. But scientific progress depends on a humble acceptance of relative ignorance and a relentless study of the fabric of reality. The spooky nature of quantum mechanics would have blown Linnaeus’s mind. If Buffon still teaches us anything, it is to study the peculiarity of things as they are, not as we might wish them to be…
“What an epic 18th-century scientific row teaches us today,” @johnthornhillft on @itsJason in @FT (gift link)
Pair with “Frameworks” from Céline Henne (@celinehenne): “Knowledge is often a matter of discovery. But when the nature of an enquiry itself is at question, it is an act of creation.”
* Daniel J. Boorstin
###
As we embrace the exceptions, we might send carefully coded birthday greetings to John McCarthy; he was born on this date in 1927. An eminent computer and cognitive scientist– he was awarded both the Turing Award and the National Medal of Science– McCarthy coined the phrase “artificial intelligence” to describe the field of which he was a founder.
It was McCarthy’s 1979 article, “Ascribing Mental Qualities to Machines” (in which he wrote, “Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance”), that provoked John Searle’s 1980 rebuttal in the form of his famous Chinese Room Argument… sparking a broad debate that continues to this day.

“Advances are made by answering questions. Discoveries are made by questioning answers.”*…

Three years ago, Google’s AlphaFold pulled off the biggest artificial intelligence breakthrough in science to date [see here]. Yasemin Saplakoglu explains how this has accelerated molecular research and kindled deep questions about why we do science…
In December 2020, when pandemic lockdowns made in-person meetings impossible, hundreds of computational scientists gathered in front of their screens to watch a new era of science unfold.
They were assembled for a conference, a friendly competition some of them had attended in person for almost three decades where they could all get together and obsess over the same question. Known as the protein folding problem, it was simple to state: Could they accurately predict the three-dimensional shape of a protein molecule from the barest of information — its one-dimensional molecular code? Proteins keep our cells and bodies alive and running. Because the shape of a protein determines its behavior, successfully solving this problem would have profound implications for our understanding of diseases, production of new medicines and insight into how life works.
At the conference, held every other year, the scientists put their latest protein-folding tools to the test. But a solution always loomed beyond reach. Some of them had spent their entire careers trying to get just incrementally better at such predictions. These competitions were marked by baby steps, and the researchers had little reason to think that 2020 would be any different.
They were wrong about that.
That week, a relative newcomer to the protein science community named John Jumper had presented a new artificial intelligence tool, AlphaFold2, which had emerged from the offices of Google DeepMind, the tech company’s artificial intelligence arm in London. Over Zoom, he presented data showing that AlphaFold2’s predictive models of 3D protein structures were over 90% accurate — five times better than those of its closest competitor.
In an instant, the protein folding problem had gone from impossible to painless. The success of artificial intelligence where the human mind had floundered rocked the community of biologists. “I was in shock,” said Mohammed AlQuraishi, a systems biologist at Columbia University’s Program for Mathematical Genomics, who attended the meeting. “A lot of people were in denial.”
But in the conference’s concluding remarks, its organizer John Moult left little room for doubt: AlphaFold2 had “largely solved” the protein folding problem — and shifted protein science forever. Sitting in front of a bookshelf in his home office in a black turtleneck, clicking through his slides on Zoom, Moult spoke in tones that were excited but also ominous. “This is not an end but a beginning,” he said…
[Saplakoglu tells the story of AlphaFold and of subsequent developments…]
… Seventy years ago, proteins were thought to be a gelatinous substance, Porter said. “Now look at what we can see”: structure after structure of a vast world of proteins, whether they exist in nature or were designed.
The field of protein biology is “more exciting right now than it was before AlphaFold,” Perrakis said. The excitement comes from the promise of reviving structure-based drug discovery, the acceleration in creating hypotheses and the hope of understanding complex interactions happening within cells.
“It [feels] like the genomics revolution,” AlQuraishi said. There is so much data, and biologists, whether in their wet labs or in front of their computers, are just starting to figure out what to do with it all.
But like other artificial intelligence breakthroughs sparking across the world, this one might have a ceiling.
AlphaFold2’s success was founded on the availability of training data — hundreds of thousands of protein structures meticulously determined by the hands of patient experimentalists. While AlphaFold3 and related algorithms have shown some success in determining the structures of molecular compounds, their accuracy lags behind that of their single-protein predecessors. That’s in part because there is significantly less training data available.
The protein folding problem was “almost a perfect example for an AI solution,” Thornton said, because the algorithm could train on hundreds of thousands of protein structures collected in a uniform way. However, the Protein Data Bank may be an unusual example of organized data sharing in biology. Without high-quality data to train algorithms, they won’t make accurate predictions.
“We got lucky,” Jumper said. “We met the problem at the time it was ready to be solved.”
No one knows if deep learning’s success at addressing the protein folding problem will carry over to other fields of science, or even other areas of biology. But some, like AlQuraishi, are optimistic. “Protein folding is really just the tip of the iceberg,” he said. Chemists, for example, need to perform computationally expensive calculations. With deep learning, these calculations are already being computed up to a million times faster than before, AlQuraishi said.
Artificial intelligence can clearly advance specific kinds of scientific questions. But it may get scientists only so far in advancing knowledge. “Historically, science has been about understanding nature,” AlQuraishi said — the processes that underlie life and the universe. If science moves forward with deep learning tools that reveal solutions and no process, is it really science?
“If you can cure cancer, do you care about how it really works?” AlQuraishi said. “It is a question that we’re going to wrestle with for years to come.”
If many researchers decide to give up on understanding nature’s processes, then artificial intelligence will not just have changed science — it will have changed the scientists too.
Meanwhile, the CASP [Critical Assessment of Structure Prediction] organizers are wrestling with a different question: how to continue their competition and conference. AlphaFold2 is a product of CASP, and it solved the main problem the conference was organized to address. “It was a big shock for us in terms of: Just what is CASP anymore?” Moult said.
In 2022, the CASP meeting was held in Antalya, Turkey. Google DeepMind didn’t enter, but the team’s presence was felt. “It was more or less just people using AlphaFold,” Jones said. In that sense, he said, Google won anyway.
Some researchers are now less keen on attending. “Once I saw that result, I switched my research,” Xu said. Others continue to hone their algorithms. Jones still dabbles in structure prediction, but it’s more of a hobby for him now. Others, like AlQuraishi and Baker, continue on by developing new algorithms for structure prediction and design, undaunted by the prospect of competing against a multibillion-dollar company.
Moult and the conference organizers are trying to evolve. The next round of CASP opened for entries in May. He is hoping that deep learning will conquer more areas of structural biology, like RNA or biomolecular complexes. “This method worked on this one problem,” Moult said. “There are lots of other related problems in structural biology.”
The next meeting will be held in December 2024 by the aqua waters of the Caribbean Sea. The winds are cordial, as the conversation will probably be. The stamping has long since died down — at least out loud. What this year’s competition will look like is anyone’s guess. But if the past few CASPs are any indication, Moult knows to expect only one thing: “surprises.”…
When one door closes, another opens: “How AI Revolutionized Protein Science, but Didn’t End It,” from @yasemin_sap in @QuantaMagazine.
See also: “How Colorful Ribbon Diagrams Became the Face of Proteins” from the same author.
###
As we ponder progress, we might spare a thought for Edmond H. Fischer; he died on this date in 2021. A biochemist, he and his collaborator, Edwin G. Krebs, were awarded the Nobel Prize in Physiology or Medicine in 1992 for describing how reversible phosphorylation works as a switch to activate proteins and regulate a number of cellular processes. Their discovery was a key to unlocking how glycogen in the body breaks down into glucose. It fostered techniques that prevent the body from rejecting transplanted organs and opened new doors for research into cancer, blood pressure, inflammatory reactions, and brain signals.
“We ceased to be the lunatic fringe. We’re now the lunatic core.”*…
Further, in a fashion, to yesterday’s post on analog computing, an essay from Benjamin Labatut (the author of two remarkable works of “scientific-historical fiction,” When We Cease to Understand the World and The MANIAC), continuing the animating theme of those books…
We will never know how many died during the Butlerian Jihad. Was it millions? Billions? Trillions, perhaps? It was a fantastic rage, a great revolt that spread like wildfire, consuming everything in its path, a chaos that engulfed generations in an orgy of destruction lasting almost a hundred years. A war with a death toll so high that it left a permanent scar on humanity’s soul. But we will never know the names of those who fought and died in it, or the immense suffering and destruction it caused, because the Butlerian Jihad, abominable and devastating as it was, never happened.
The Jihad was an imagined event, conjured up by Frank Herbert as part of the lore that animates his science-fiction saga Dune. It was humanity’s last stand against sentient technology, a crusade to overthrow the god of machine-logic and eradicate the conscious computers and robots that in the future had almost entirely enslaved us. Herbert described it as “a thalamic pause for all humankind,” an era of such violence run amok that it completely transformed the way society developed from then onward. But we know very little of what actually happened during the struggle itself, because in the original Dune series, Herbert gives us only the faintest outlines—hints, murmurs, and whispers, which carry the ghostly weight of prophecy. The Jihad reshaped civilization by outlawing artificial intelligence or any machine that simulated our minds, placing a damper on the worst excesses of technology. However, it was fought so many eons before the events portrayed in the novels that by the time they occur it has faded into legend and crystallized in apocrypha. The hard-won lessons of the catastrophe are preserved in popular wisdom and sayings: “Man may not be replaced.” “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” “We do not trust the unknown which can arise from imaginative technology.” “We must negate the machines-that-think.” The most enduring legacy of the Jihad was a profound change in humankind’s relationship to technology. Because the target of that great hunt, where we stalked and preyed upon the very artifacts we had created to lift ourselves above the seat that nature had intended for us, was not just mechanical intelligence but the machinelike attitude that had taken hold of our species: “Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments,” Herbert wrote.
Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program!
The Butlerian Jihad removed a crutch—the part of ourselves that we had given over to technology—and forced human minds to develop above and beyond the limits of mechanistic reasoning, so that we would no longer depend on computers to do our thinking for us.
Herbert’s fantasy, his far-flung vision of a devastating war between humanity and the god of machine-logic, seemed quaint when he began writing it in the Sixties. Back then, computers were primitive by modern standards, massive mainframe contraptions that could process only hundreds of thousands of cycles per second (instead of billions, like today), had very little memory, operated via punch cards, and were not connected to one another. And we have easily ignored Herbert’s warnings ever since, but now the Butlerian Jihad has suddenly returned to plague us. The artificial-intelligence apocalypse is a new fear that keeps many up at night, a terror born of great advances that seem to suggest that, if we are not very careful, we may—with our own hands—bring forth a future where humanity has no place. This strange nightmare is a credible danger only because so many of our dreams are threatening to come true. It is the culmination of a long process that hearkens back to the origins of civilization itself, to the time when the world was filled with magic and dread, and the only way to guarantee our survival was to call down the power of the gods.
Apotheosis has always haunted the soul of humankind. Since ancient times we have suffered the longing to become gods and exceed the limits nature has placed on us. To achieve this, we built altars and performed rituals to ask for wisdom, blessings, and the means to reach beyond our capabilities. While we tend to believe that it is only now, in the modern world, that power and knowledge carry great risks, primitive knowledge was also dangerous, because in antiquity a part of our understanding of the world and ourselves did not come from us, but from the Other. From the gods, from spirits, from raging voices that spoke in silence.
[Labatut invokes the mysteries of the Vedas and their Altar of Fire, which was meant to develop “a mind (that), when properly developed, could fly like a bird with outstretched wings and conquer the skies.”…]
Seen from afar by people who were not aware of what was being made, these men and women must surely have looked like bricklayers gone mad. And that same frantic folly seems to possess those who, in recent decades, have dedicated their hearts and minds to the building of a new mathematical construct, a soulless copy of certain aspects of our thinking that we have chosen to name “artificial intelligence,” a tool so formidable that, if we are to believe the most zealous among its devotees, it will help us reach the heavens and become immortal…
[Labatut recounts the stories– and works– of some of the creators of AI’s DNA: George Boole (and his logic), Claude Shannon (who put that logic to work), and Geoffrey Hinton (Boole’s great-great-grandson, and “the Godfather of AI,” who created some of the first neural networks but has more recently undergone a change of opinion)…]
… Hinton has been transformed. He has mutated from an evangelist of a new form of reason into a prophet of doom. He says that what changed his mind was the realization that we had, in fact, not replicated our intelligence, but created a superior one.
Or was it something else, perhaps? Did some unconscious part of him whisper that it was he, rather than his great-great-grandfather, who was intended by God to find the mechanisms of thought? Hinton does not believe in God, and he would surely deny his ancestor’s claim that pain is an instrument of the Lord’s will, since he was forced to have every one of his meals on his knees, resting on a pillow like a monk praying at the altar, because of a back injury that caused him excruciating pain. For more than seventeen years, he could not sit down, and only since 2022 has he managed to do so long enough to eat.
Hinton is adamant that the dangers of thinking machines are real. And not just short-term effects like job replacement, disinformation, or autonomous lethal weapons, but an existential risk that some discount as fantasy: that our place in the world might be supplanted by AI. Part of his fear is that he believes AI could actually achieve a sort of immortality, as the Vedic gods did. “The good news,” he has said, “is we figured out how to build things that are immortal. When a piece of hardware dies, they don’t die. If you’ve got the weights stored in some medium and you can find another piece of hardware that can run the same instructions, then you can bring it to life again. So, we’ve got immortality. But it’s not for us.”
Hinton seems to be afraid of what we might see when the embers of the Altar of Fire die down at the end of the sacrifice and the sharp coldness of the beings we have conjured up starts to seep into our bones. Are we really headed for obsolescence? Will humanity perish, not because of the way we treat all that surrounds us, nor due to some massive unthinking rock hurled at us by gravity, but as a consequence of our own irrational need to know all that can be known? The supposed AI apocalypse is different from the mushroom-cloud horror of nuclear war, and unlike the ravages of the wildfires, droughts, and inundations that are becoming commonplace, because it arises from things that we have, since the beginning of civilization, always considered positive and central to what makes us human: reason, intelligence, logic, and the capacity to solve the problems, puzzles, and evils that taint even the most fortunate person’s existence with everyday suffering. But in clawing our way to apotheosis, in daring to follow the footsteps of the Vedic gods who managed to escape from Death, we may shine a light on things that should remain in darkness. Because even if artificial intelligence never lives up to the grand and terrifying nightmare visions that presage a nonhuman world where algorithms hum along without us, we will still have to contend with the myriad effects this technology will have on human society, culture, and economics.
In the meantime, the larger specter of superintelligent AI looms over us. And while it is less likely and perhaps even impossible (nothing but a fairy tale, some say, a horror story intended to attract more money and investment by presenting a series of powerful systems not as the next step in our technological development but as a death-god that ends the world), it cannot be easily dispelled, for it reaches down and touches the fibers of our mythmaking apparatus, that part of our being that is atavistic and fearful, because it reminds us of a time when we shivered in caves and huddled together, while outside in the dark, with eyes that could see in the night, the many savage beasts and monsters of the past sniffed around for traces of our scent.
As every new AI model becomes stronger, as the voices of warning form a chorus, and even the most optimistic among us begin to fear this new technology, it is harder and harder to think without panic or to reason with logic. Thankfully, we have many other talents that don’t answer to reason. And we can always rise and take a step back from the void toward which we have so hurriedly thrown ourselves, by lending an ear to the strange voices that arise from our imagination, that feral territory that will always remain a necessary refuge and counterpoint to rationality.
Faced, as we are, with wild speculation, confronted with dangers that no one, however smart or well informed, is truly capable of managing or understanding, and taunted by the promises of unlimited potential, we may have to sound out the future not merely with science, politics, and reason, but with that devil-eye we use to see in the dark: fiction. Because we can find keys to doors we have yet to encounter in the worlds that authors have imagined in the past. As we grope forward in a daze, battered and bewildered by the capabilities of AI, we could do worse than to think about the desert planet where the protagonists of Herbert’s Dune novels sought to peer into the streaming sands of future time, under the heady spell of a drug called spice, to find the Golden Path, a way for human beings to break from tyranny and avoid extinction or stagnation by being more diverse, resilient, and free, evolving past purely logical reasoning and developing our minds and faculties to the point where our thoughts and actions are unpredictable and not bound by statistics. Herbert’s books, with their strange mixture of past and present, remind us that there are many ways in which we can continue forward while preserving our humanity. AI is here already, but what we choose to do with it and what limits we agree to place on its development remain decisions to be made. No matter how many billions of dollars are invested in the AI companies that promise to eliminate work, solve climate change, cure cancer, and rain down miracles unlike anything we have seen before, we can never fully give ourselves over to these mathematical creatures, these beings with no soul or sympathy, because they are neither alive nor conscious—at least not yet, and certainly not like us—so they do not share the contradictory nature of our minds.
In the coming years, as people armed with AI continue making the world faster, stranger, and more chaotic, we should do all we can to prevent these systems from giving more and more power to the few who can build them. But we should also consider a warning from Herbert, the central commandment he chose to enshrine at the heart of future humanity’s key religious text, a rule meant to keep us from becoming subservient to the products of our reason, and from bowing down before the God of Logic and his many fearsome offspring:
Thou shalt not make a machine in the likeness of a human mind…
Before and after artificial intelligence: “The Gods of Logic” in @Harpers. Eminently worth reading in full.
For a less pessimistic view, see: “A Journey Through the Uncanny Valley: Our Relational Futures with AI,” from @dylanhendricks at @iftf.
* Geoffrey Hinton
###
As we deliberate on Daedalus’ caution, we might send fantastically far-sighted birthday greetings to a techno-optimist who would likely have brushed aside Labatut’s concerns: Hugo Gernsback, a Luxembourgish-American inventor, broadcast pioneer, writer, and publisher; he was born on this date in 1884.
Gernsback held 80 patents at the time of his death; he founded radio station WRNY, was involved in the first television broadcasts, and is considered a pioneer in amateur radio. But it was as a writer and publisher that he probably left his most lasting mark: In 1911, as owner/publisher of the magazine Modern Electrics, he filled a blank spot in his publication by dashing off the first chapter of a series called “Ralph 124C 41+.” The twelve installments of “Ralph” were filled with inventions unknown in 1911, including “television” (Gernsback is credited with introducing the word), fluorescent lighting, juke boxes, solar energy, microfilm, vending machines, and the device we now call radar.
The “Ralph” series was an astounding success with readers; and in 1926 Gernsback founded the first magazine devoted to science fiction, Amazing Stories. Believing that the perfect sci-fi story is “75 percent literature interwoven with 25 percent science,” he coined the term “science fiction.”
Gernsback was a “careful” businessman, who was tight with the fees that he paid his writers– so tight that H. P. Lovecraft and Clark Ashton Smith referred to him as “Hugo the Rat.”
Still, his contributions to the genre as publisher were so significant that, along with H.G. Wells and Jules Verne, he is sometimes called “The Father of Science Fiction”; in his honor, the annual Science Fiction Achievement awards are called the “Hugos.”
(Coincidentally, today is also the birthday– in 1906– of Philo T. Farnsworth, the man who actually did invent television.)

“Few people have the imagination for reality”*…
Experiments that test physics and philosophy as “a single whole,” Amanda Gefter suggests, may be our only route to surefire knowledge about the universe…
Metaphysics is the branch of philosophy that deals in the deep scaffolding of the world: the nature of space, time, causation and existence, the foundations of reality itself. It’s generally considered untestable, since metaphysical assumptions underlie all our efforts to conduct tests and interpret results. Those assumptions usually go unspoken.
Most of the time, that’s fine. Intuitions we have about the way the world works rarely conflict with our everyday experience. At speeds far slower than the speed of light or at scales far larger than the quantum one, we can, for instance, assume that objects have definite features independent of our measurements, that we all share a universal space and time, that a fact for one of us is a fact for all. As long as our philosophy works, it lurks undetected in the background, leading us to mistakenly believe that science is something separable from metaphysics.
But at the uncharted edges of experience — at high speeds and tiny scales — those intuitions cease to serve us, making it impossible for us to do science without confronting our philosophical assumptions head-on. Suddenly we find ourselves in a place where science and philosophy can no longer be neatly distinguished. A place, according to the physicist Eric Cavalcanti, called “experimental metaphysics.”
Cavalcanti is carrying the torch of a tradition that stretches back through a long line of rebellious thinkers who have resisted the usual dividing lines between physics and philosophy. In experimental metaphysics, the tools of science can be used to test our philosophical worldviews, which in turn can be used to better understand science. Cavalcanti, a 46-year-old native of Brazil who is a professor at Griffith University in Brisbane, Australia, and his colleagues have published the strongest result attained in experimental metaphysics yet, a theorem that places strict and surprising constraints on the nature of reality. They’re now designing clever, if controversial, experiments to test our assumptions not only about physics, but about the mind.
While we might expect the injection of philosophy into science to result in something less scientific, in fact, says Cavalcanti, the opposite is true. “In some sense, the knowledge that we obtain through experimental metaphysics is more secure and more scientific,” he said, because it vets not only our scientific hypotheses but the premises that usually lie hidden beneath…
Gefter traces the history of this integrative train of thought (Kant, Duhem, Poincaré, Popper, Einstein, Bell), its potential for helping us understand quantum theory… and the prospect of harnessing AI to run the necessary experiments– seemingly complex and intensive beyond the scope of current experimental techniques…
Cavalcanti… is holding out hope. We may never be able to run the experiment on a human, he says, but why not an artificial intelligence algorithm? In his newest work, along with the physicist Howard Wiseman and the mathematician Eleanor Rieffel, he argues that the friend could be an AI algorithm running on a large quantum computer, performing a simulated experiment in a simulated lab. “At some point,” Cavalcanti contends, “we’ll have artificial intelligence that will be essentially indistinguishable from humans as far as cognitive abilities are concerned,” and we’ll be able to test his inequality once and for all.
But that’s not an uncontroversial assumption. Some philosophers of mind believe in the possibility of strong AI, but certainly not all. Thinkers in what’s known as embodied cognition, for instance, argue against the notion of a disembodied mind, while the enactive approach to cognition grants minds only to living creatures.
All of which leaves physics in an awkward position. We can’t know whether nature violates Cavalcanti’s [theorem] — we can’t know, that is, whether objectivity itself is on the metaphysical chopping block — until we can define what counts as an observer, and figuring that out involves physics, cognitive science and philosophy. The radical space of experimental metaphysics expands to entwine all three of them. To paraphrase Gonseth, perhaps they form a single whole…
“‘Metaphysical Experiments’ Probe Our Hidden Assumptions About Reality,” in @QuantaMagazine.
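The excerpt doesn’t spell out Cavalcanti’s own inequality, but the classic Bell–CHSH inequality that this line of work builds on gives the flavor of an experimentally testable metaphysical constraint (a sketch in LaTeX notation, for illustration only– not Cavalcanti’s actual result):

    % Correlations E(x, y) between pairs of measurement settings:
    \[
      S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S| \le 2
    \]
    % for any local hidden-variable account, whereas quantum mechanics
    % permits values of |S| up to 2\sqrt{2} (Tsirelson's bound).

Experiments that measure S above 2 are precisely the sort of result that forces a metaphysical assumption (here, local realism) off the table.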
* Johann Wolfgang von Goethe
###
As we examine edges, we might send thoughtful birthday greetings to Rudolf Schottlaender; he was born on this date in 1900. A philosopher who studied with Edmund Husserl, Martin Heidegger, Nicolai Hartmann, and Karl Jaspers, Schottlaender survived the Nazi regime and the persecution of the Jews, hiding in Berlin. After the war, as his democratic and humanist proclivities kept him from posts in philosophy faculties, he distinguished himself as a classical philologist and translator (e.g., new translations of Sophocles, which proved very effective on the stage, and an edition of Petrarch).
But he continued to write philosophical and political essays and articles, published predominantly in the West, in which he cast himself as a mediator between the two systems. Because of his positions critical of East Germany, he was put under close surveillance by the Ministry for State Security (Ministerium für Staatssicherheit, or Stasi)– and he inspired leading minds of the developing opposition in East Germany.