Posts Tagged ‘artificial intelligence’
“I like to think (it has to be) of a cybernetic ecology where we are free of our labors and joined back to nature, returned to our mammal brothers and sisters”*…
A.I. pioneer Dario Amodei with a positive scenario for artificial intelligence…
I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.
In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one…
How AI could transform the world for the better: “Machines of Loving Grace,” from @DarioAmodei. Eminently worth reading in full…
A (similarly positive, but slightly more focused) piece from a team at DeepMind: “AI for Science.”
Apposite (if not opposite): “Shoggoths amongst us,” from Henry Farrell, and an earlier (R)D, “We ceased to be the lunatic fringe. We’re now the lunatic core.”
See also: “AI Isn’t Your God—But It Might Be Your Intern.”
* Richard Brautigan, “All Watched Over By Machines Of Loving Grace” (the source of Amodei’s title)
###
As we ponder the perplexities of progress, we might send carefully-calculated birthday greetings to Grace Brewster Murray Hopper; she was born on this date in 1906. A seminal computer scientist and Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) was one of the first programmers of the Harvard Mark I computer (in 1944), invented the first compiler for a computer programming language, and was one of the leaders in popularizing the concept of machine-independent programming languages– which led to the development of COBOL, one of the first high-level programming languages.
Hopper also (inadvertently) contributed one of the most ubiquitous metaphors in computer science: she found and documented the first computer “bug” (in 1947).
She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.

“One cannot conceive anything so strange and so implausible that it has not already been said by one philosopher or another”*…
Wisdom from the exquisite Existential Comics (“A philosophy comic about the inevitable anguish of living a brief life in an absurd world. Also jokes.”)…
Frege was an early philosopher of language, who formulated a theory of semantics that largely had to do with how we form truth propositions about the world. His theories were enormously influential for people like Russell, Carnap, and even Wittgenstein early in his career. They all recognized that the languages we use are ambiguous, so making exact determinations was always difficult. Most of them were logicians and mathematicians, and wanted to render ordinary language as exact and precise as mathematical language, so we could go about doing empirical science with perfect clarity. Russell, Carnap, and others even vowed to create an exact scientific language (narrator: “they didn’t create an exact scientific language”).
Later on, Wittgenstein and other philosophers such as J.L. Austin came to believe that a fundamental mistake was made about the nature of language itself. Language, they thought, doesn’t pick out truth propositions about the world at all. Speech acts were fundamentally no different from other actions, and were merely used in social situations to bring about certain effects. For example, in asking for a sandwich to be passed across the table, we do not pick out a certain set of facts about the world, we only utter the words with the expectation that it will cause certain behavior in others. Learning what is and isn’t a sandwich is more like learning the rules of a game than making declarations about what exists in the world, so for Wittgenstein, what is or isn’t a sandwich depends only on the success or failure of the word “sandwich” in a social context, regardless of what actual physical properties a sandwich has in common with, say, a hotdog.
“Is a Hotdog a Sandwich? A Definitive Study,” from @existentialcomics.com.
* René Descartes
###
As we add mayonnaise, we might send thoughtful birthday greetings to Norbert Wiener; he was born on this date in 1894. A computer scientist, mathematician, and philosopher, Wiener is considered the originator of cybernetics, the science of communication as it relates to living things and machines– a field that has had implications for a wide variety of disciplines, including engineering, systems control, computer science, biology, neuroscience, and philosophy. (Wiener credited Leibniz as the “patron saint of cybernetics.”)
His work heavily influenced computer pioneer John von Neumann, information theorist Claude Shannon, anthropologists Margaret Mead and Gregory Bateson, and many others. Wiener was one of the first to theorize that all intelligent behavior was the result of feedback mechanisms and could possibly be simulated by machines– an important early step towards the development of modern artificial intelligence.
“I fear the day when the technology overlaps with our humanity. The world will only have a generation of idiots.”*…
Alva Noë on the importance of humans hanging on to their humanity– for all the promise and dangers of AI, computers plainly can’t think. To think is to resist – something no machine does:
Computers don’t actually do anything. They don’t write, or play; they don’t even compute. Which doesn’t mean we can’t play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.
What could ever lead us to take seriously the thought that these devices of our own invention might actually understand, and think, and feel, or that, if not now, then later, they might one day come to open their artificial eyes thus finally to behold a shiny world of their very own? One source might simply be the sense that, now unleashed, AI is beyond our control. Fast, microscopic, distributed and astronomically complex, it is hard to understand this tech, and it is tempting to imagine that it has power over us.
But this is nothing new. The story of technology – from prehistory to now – has always been that of the ways we are entrained by the tools and systems that we ourselves have made. Think of the pathways we make by walking. To every tool there is a corresponding habit, that is, an automatised way of acting and being. From the humble pencil to the printing press to the internet, our human agency is enacted in part by the creation of social and technological landscapes that in turn transform what we can do, and so seem, or threaten, to govern and control us.
Yet it is one thing to appreciate the ways we make and remake ourselves through the cultural transformation of our worlds via tool use and technology, and another to mystify dumb matter put to work by us. If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.
But there is another origin of our impulse to concede mind to devices of our own invention, and this is what I focus on here: the tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life. They rely unchecked on one-sided, indeed, milquetoast conceptions of human activity, skill and cognitive accomplishment. The surreptitious substitution (to use a phrase of Edmund Husserl’s) of this thin gruel version of the mind at work – a substitution that I hope to convince you traces back to Alan Turing and the very origins of AI – is the decisive move in the conjuring trick.
What scientists seem to have forgotten is that the human animal is a creature of disturbance. Or as the mid-20th-century philosopher of biology Hans Jonas wrote: ‘Irritability is the germ, and as it were the atom, of having a world…’ With us there is always, so to speak, a pebble in the shoe. And this is what moves us, turns us, orients us to reorient ourselves, to do things differently, so that we might carry on. It is irritation and disorientation that is the source of our concern. In the absence of disturbance, there is nothing: no language, no games, no goals, no tasks, no world, no care, and so, yes, no consciousness…
[Starting with Turing, Noë considers the relative roles of humans and technology across a number of spheres, including music…]
… The piano was invented, to be sure, but not by you or me. We encounter it. It pre-exists us and solicits our submission. To learn to play is to be altered, made to adapt one’s posture, hands, fingers, legs and feet to the piano’s mechanical requirements. Under the regime of the piano keyboard, it is demanded that we ourselves become player pianos, that is to say, extensions of the machine itself.
But we can’t. And we won’t. To learn to play, to take on the machine, for us, is to struggle. It is hard to master the instrument’s demands.
And this fact – the difficulty we encounter in the face of the keyboard’s insistence – is productive. We make art out of it. It stops us being player pianos, but it is exactly what is required if we are to become piano players.
For it is the player’s fraught relation to the machine, and to the history and tradition that the machine imposes, that supplies the raw material of musical invention. Music and play happen in that entanglement. To master the piano, as only a person can, is not just to conform to the machine’s demands. It is, rather, to push back, to say no, to rage against the machine. And so, for example, we slap and bang and shout out. In this way, the piano becomes not merely a vehicle of habit and control – a mechanism – but rather an opportunity for action and expression.
And, as with the piano, so with the whole of human cultural life. We live in the entanglement between government and resistance. We fight back…
… The telling fact: computers are used to play our games; they are engineered to make moves in the spaces opened up by our concerns. They don’t have concerns of their own, and they make no new games. They invent no new language.
The British philosopher R G Collingwood noticed that the painter doesn’t invent painting, and the musician doesn’t invent the musical culture in which they find themselves. And for Collingwood this served to show that no person is fully autonomous, a God-like fount of creativity; we are always to some degree recyclers and samplers and, at our best, participants in something larger than ourselves.
But this should not be taken to show that we become what we are (painters, musicians, speakers) by doing what, for example, LLMs do – i.e., merely by getting trained up on large data sets. Humans aren’t trained up. We have experience. We learn. And for us, learning a language, for example, isn’t learning to generate ‘the next token’. It’s learning to work, play, eat, love, flirt, dance, fight, pray, manipulate, negotiate, pretend, invent and think. And crucially, we don’t merely incorporate what we learn and carry on; we always resist. Our values are always problematic. We are not merely word-generators. We are makers of meaning.
We can’t help doing this; no computer can do this…
Eminently worth reading in full: “Rage against the machine,” from @alvanoe in @aeonmag.
For more, see Noë’s The Entanglement: How Art and Philosophy Make Us What We Are.
* Albert Einstein
###
As we resolve to wrestle, we might recall that it was on this date in 1969 that UCLA professor Leonard Kleinrock (aided by his student assistant Charley Kline) created the first networked computer-to-computer connection (with SRI programmer Bill Duvall in Menlo Park), via which they sent the first networked computer-to-computer communication… or at least part of it. Duvall’s machine crashed partway through the transmission, meaning the only letters received from the attempted “login” were “lo.” By the end of the year two more nodes had been added (UCSB and the University of Utah), and the network was dubbed ARPANET.
Still, “lo”– perhaps an appropriate way to announce what would grow up to be the internet.

“In mathematics, the art of proposing a question must be held of higher value than solving it”*…
Matteo Wong talks with mathematician Terence Tao about the advent of AI in mathematical research and finds that Tao has some very big questions indeed…
Terence Tao, a mathematics professor at UCLA, is a real-life superintelligence. The “Mozart of Math,” as he is sometimes called, is widely considered the world’s greatest living mathematician. He has won numerous awards, including the equivalent of a Nobel Prize for mathematics, for his advances and proofs. Right now, AI is nowhere close to his level.
But technology companies are trying to get it there. Recent, attention-grabbing generations of AI—even the almighty ChatGPT—were not built to handle mathematical reasoning. They were instead focused on language: When you asked such a program to answer a basic question, it did not understand and execute an equation or formulate a proof, but instead presented an answer based on which words were likely to appear in sequence. For instance, the original ChatGPT can’t add or multiply, but has seen enough examples of algebra to solve x + 2 = 4: “To solve the equation x + 2 = 4, subtract 2 from both sides …” Now, however, OpenAI is explicitly marketing a new line of “reasoning models,” known collectively as the o1 series, for their ability to problem-solve “much like a person” and work through complex mathematical and scientific tasks and queries. If these models are successful, they could represent a sea change for the slow, lonely work that Tao and his peers do.
After I saw Tao post his impressions of o1 online—he compared it to a “mediocre, but not completely incompetent” graduate student—I wanted to understand more about his views on the technology’s potential. In a Zoom call last week, he described a kind of AI-enabled, “industrial-scale mathematics” that has never been possible before: one in which AI, at least in the near future, is not a creative collaborator in its own right so much as a lubricant for mathematicians’ hypotheses and approaches. This new sort of math, which could unlock terrae incognitae of knowledge, will remain human at its core, embracing how people and machines have very different strengths that should be thought of as complementary rather than competing…
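Wong’s description of models choosing “which words were likely to appear in sequence” can be made concrete. Below is a minimal, purely illustrative sketch in Python: a toy bigram model over an invented two-sentence corpus, nothing remotely like ChatGPT’s real architecture or scale, showing how a plausible “solution” can be emitted without any arithmetic ever being performed…

```python
import random
from collections import defaultdict

# An invented toy corpus, standing in for "enough examples of algebra".
corpus = (
    "to solve the equation x + 2 = 4 , subtract 2 from both sides . "
    "to solve the equation y + 3 = 7 , subtract 3 from both sides ."
).split()

# Count which token follows which: the crudest possible version of
# learning "which words were likely to appear in sequence".
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Sample a next token in proportion to how often it followed `prev`."""
    tokens, weights = zip(*follows[prev].items())
    return random.choices(tokens, weights=weights)[0]

# Generate a "solution" one token at a time; no equation is ever solved.
out = ["subtract"]
for _ in range(5):
    out.append(next_token(out[-1]))
print(" ".join(out))  # e.g. "subtract 2 from both sides ."
```

The point of the sketch is only the one Wong makes: sequence continuation can look like algebra while computing nothing at all.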
A sample of what follows…
The classic idea of math is that you pick some really hard problem, and then you have one or two people locked away in the attic for seven years just banging away at it. The types of problems you want to attack with AI are the opposite. The naive way you would use AI is to feed it the most difficult problem that we have in mathematics. I don’t think that’s going to be super successful, and also, we already have humans that are working on those problems.
… Tao: The type of math that I’m most interested in is math that doesn’t really exist. The project that I launched just a few days ago is about an area of math called universal algebra, which is about whether certain mathematical statements or equations imply that other statements are true. The way people have studied this in the past is that they pick one or two equations and they study them to death, like how a craftsperson used to make one toy at a time, then work on the next one. Now we have factories; we can produce thousands of toys at a time. In my project, there’s a collection of about 4,000 equations, and the task is to find connections between them. Each is relatively easy, but there’s a million implications. There’s like 10 points of light, 10 equations among these thousands that have been studied reasonably well, and then there’s this whole terra incognita.
There are other fields where this transition has happened, like in genetics. It used to be that if you wanted to sequence a genome of an organism, this was an entire Ph.D. thesis. Now we have these gene-sequencing machines, and so geneticists are sequencing entire populations. You can do different types of genetics that way. Instead of narrow, deep mathematics, where an expert human works very hard on a narrow scope of problems, you could have broad, crowdsourced problems with lots of AI assistance that are maybe shallower, but at a much larger scale. And it could be a very complementary way of gaining mathematical insight.
Wong: It reminds me of how an AI program made by Google DeepMind, called AlphaFold, figured out how to predict the three-dimensional structure of proteins, which was for a long time something that had to be done one protein at a time.
Tao: Right, but that doesn’t mean protein science is obsolete. You have to change the problems you study. A hundred and fifty years ago, mathematicians’ primary usefulness was in solving partial differential equations. There are computer packages that do this automatically now. Six hundred years ago, mathematicians were building tables of sines and cosines, which were needed for navigation, but these can now be generated by computers in seconds.
I’m not super interested in duplicating the things that humans are already good at. It seems inefficient. I think at the frontier, we will always need humans and AI. They have complementary strengths. AI is very good at converting billions of pieces of data into one good answer. Humans are good at taking 10 observations and making really inspired guesses…
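Tao’s “factory” has a concrete computational shape. Here is a minimal sketch (with hypothetical equation labels, and nothing like the project’s actual tooling): record each directly proved implication as a directed edge between equations, let transitive closure harvest further implications for free, and whatever ordered pairs remain are the terra incognita still needing a proof or a countermodel…

```python
from itertools import product

# Hypothetical equation labels; the real collection has about 4,000.
equations = ["E1", "E2", "E3", "E4"]

# Directly proved implications: (A, B) means "any structure satisfying
# equation A also satisfies equation B".
proved = {("E1", "E2"), ("E2", "E3"), ("E4", "E2")}

def transitive_closure(edges):
    """Keep adding (a, d) whenever (a, b) and (b, d) are already known."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(tuple(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

known = transitive_closure(proved)
print(sorted(known))
# [('E1', 'E2'), ('E1', 'E3'), ('E2', 'E3'), ('E4', 'E2'), ('E4', 'E3')]

# Every ordered pair neither proved nor implied is still terra incognita:
# each needs either a proof or a counterexample.
unknown = [(a, b) for a, b in product(equations, repeat=2)
           if a != b and (a, b) not in known]
print(len(unknown))  # 7 of the 12 ordered pairs remain unsettled
```

At the quoted scale, roughly 4,000 equations yield millions of ordered pairs, which is why this kind of bookkeeping, rather than any single heroic proof, is where machine assistance and crowdsourcing pay off.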
Terence Tao, the world’s greatest living mathematician, has a vision for AI: “We’re Entering Uncharted Territory for Math,” from @matteo_wong in @TheAtlantic.
* Georg Cantor
###
As we go figure, we might think recursively about Benoit Mandelbrot; he died on this date in 2010. A mathematician (and polymath), his interest in “the art of roughness” of physical phenomena and “the uncontrolled element in life” led to work (which included coining the word “fractal”, as well as developing a theory of “self-similarity” in nature) for which he is known as “the father of fractal geometry.”
“The greatest obstacle to discovery is not ignorance – it is the illusion of knowledge”*…
Learning from the past: as John Thornhill explains in his consideration of Jason Roberts’s Every Living Thing, the rivalry between Buffon and Linnaeus has lessons about disrupters and exploitation…
The aristocratic French polymath Georges-Louis Leclerc, Comte de Buffon, chose a good year to die: 1788. Reflecting his status as a star of the Enlightenment and author of 35 popular volumes on natural history, Buffon’s funeral carriage, drawn by 14 horses, was watched by an estimated 20,000 mourners as it processed through Paris. A grateful Louis XVI had earlier erected a statue of a heroic Buffon in the Jardin du Roi, over which the naturalist had masterfully presided. “All nature bows to his genius,” the inscription read.
The next year the French Revolution erupted. As a symbol of the ancien régime, Buffon was denounced as an enemy of progress, his estates in Burgundy seized, and his son, known as the Buffonet, guillotined. In a further insult to his memory, zealous revolutionaries marched through the king’s gardens (nowadays known as the Jardin des Plantes) with a bust of Buffon’s great rival, Carl Linnaeus. They hailed the Swedish scientific revolutionary as a true man of the people.
The intense intellectual rivalry between Buffon and Linnaeus, which still resonates today, is fascinatingly told by the author Jason Roberts in his book Every Living Thing, my holiday reading while staying near Buffon’s birthplace in Burgundy. Natural history, like all history, might be written by the victors, as Roberts argues. And for a long time, Linnaeus’s highly influential, but flawed, views held sway. But the book makes a sympathetic case for the further rehabilitation of the much-maligned Buffon.
The two men were, as Roberts writes, exact contemporaries and polar opposites. While Linnaeus obsessed about classifying all biological species into neat categories with fixed attributes and Latin names (Homo sapiens, for example), Buffon emphasised the vast diversity and constantly changing nature of every living thing.
In Roberts’s telling, Linnaeus emerges as a brilliant but ruthless dogmatist, who ignored inconvenient facts that did not fit his theories and gave birth to racial pseudoscience. But it was Buffon’s painstaking investigations and acceptance of complexity that helped inspire the evolutionary theories of Charles Darwin, who later acknowledged that the Frenchman’s ideas were “laughably like mine”.
In two aspects, at least, this 18th-century scientific clash rhymes with our times. The first is to show how intellectual knowledge can often be a source of financial gain. The discovery of crops and commodities in other parts of the world and the development of new methods of cultivation had a huge impact on the economy in that era. “All that is useful to man originates from these natural objects,” Linnaeus wrote. “In one word, it is the foundation of every industry.”
Great wealth was generated from trade in sugar, potatoes, coffee, tea and cochineal while Linnaeus himself explored ways of cultivating pineapples, strawberries and freshwater pearls.
“In many ways, the discipline of natural history in the 18th century was roughly analogous to technology today: a means of disrupting old markets, creating new ones, and generating fortunes in the process,” Roberts writes. As a former software engineer at Apple and a West Coast resident, Roberts knows the tech industry.
Then as now, the addition of fresh inputs into the economy — whether natural commodities back then or digital data today — can lead to astonishing progress, benefiting millions. But it can also lead to exploitation. As Roberts tells me in a telephone interview, it was the scaling up of the sugar industry in the West Indies that led to the slave trade. “Sometimes we think we are inventing the future when we are retrofitting the past,” he says.
The second resonance with today is the danger of believing we know more than we do. Roberts compares Buffon’s state of “curious unknowing” to the concept of “negative capability” described by the English poet John Keats. In a letter written in 1817, Keats argued that we should resist the temptation to explain away things we do not properly understand and accept “uncertainties, mysteries, doubts, without any irritable reaching after fact and reason.”
Armed today with instant access to information and smart machines, the temptation is to ascribe a rational order to everything, as Linnaeus did. But scientific progress depends on a humble acceptance of relative ignorance and a relentless study of the fabric of reality. The spooky nature of quantum mechanics would have blown Linnaeus’s mind. If Buffon still teaches us anything, it is to study the peculiarity of things as they are, not as we might wish them to be…
“What an epic 18th-century scientific row teaches us today,” @johnthornhillft on @itsJason in @FT (gift link)
Pair with “Frameworks,” from Céline Henne (@celinehenne): “Knowledge is often a matter of discovery. But when the nature of an enquiry itself is at question, it is an act of creation.”
* Daniel J. Boorstin
###
As we embrace the exceptions, we might send carefully-coded birthday greetings to John McCarthy; he was born on this date in 1927. An eminent computer and cognitive scientist– he was awarded both the Turing Award and the National Medal of Science– McCarthy coined the phrase “artificial intelligence” to describe the field of which he was a founder.
It was McCarthy’s 1979 article, “Ascribing Mental Qualities to Machines” (in which he wrote, “Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance”) that provoked John Searle’s famous 1980 rejoinder, the Chinese Room Argument, sparking a broad debate that continues to this day.