Posts Tagged ‘artificial intelligence’
“We ceased to be the lunatic fringe. We’re now the lunatic core.”*…
Further, in a fashion, to yesterday’s post on analog computing, an essay from Benjamin Labatut (the author of two remarkable works of “scientific-historical fiction,” When We Cease to Understand the World and The MANIAC), continuing the animating theme of those books…
We will never know how many died during the Butlerian Jihad. Was it millions? Billions? Trillions, perhaps? It was a fantastic rage, a great revolt that spread like wildfire, consuming everything in its path, a chaos that engulfed generations in an orgy of destruction lasting almost a hundred years. A war with a death toll so high that it left a permanent scar on humanity’s soul. But we will never know the names of those who fought and died in it, or the immense suffering and destruction it caused, because the Butlerian Jihad, abominable and devastating as it was, never happened.
The Jihad was an imagined event, conjured up by Frank Herbert as part of the lore that animates his science-fiction saga Dune. It was humanity’s last stand against sentient technology, a crusade to overthrow the god of machine-logic and eradicate the conscious computers and robots that in the future had almost entirely enslaved us. Herbert described it as “a thalamic pause for all humankind,” an era of such violence run amok that it completely transformed the way society developed from then onward. But we know very little of what actually happened during the struggle itself, because in the original Dune series, Herbert gives us only the faintest outlines—hints, murmurs, and whispers, which carry the ghostly weight of prophecy. The Jihad reshaped civilization by outlawing artificial intelligence or any machine that simulated our minds, placing a damper on the worst excesses of technology. However, it was fought so many eons before the events portrayed in the novels that by the time they occur it has faded into legend and crystallized in apocrypha. The hard-won lessons of the catastrophe are preserved in popular wisdom and sayings: “Man may not be replaced.” “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” “We do not trust the unknown which can arise from imaginative technology.” “We must negate the machines-that-think.” The most enduring legacy of the Jihad was a profound change in humankind’s relationship to technology. Because the target of that great hunt, where we stalked and preyed upon the very artifacts we had created to lift ourselves above the seat that nature had intended for us, was not just mechanical intelligence but the machinelike attitude that had taken hold of our species: “Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments,” Herbert wrote.
Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program!
The Butlerian Jihad removed a crutch—the part of ourselves that we had given over to technology—and forced human minds to develop above and beyond the limits of mechanistic reasoning, so that we would no longer depend on computers to do our thinking for us.
Herbert’s fantasy, his far-flung vision of a devastating war between humanity and the god of machine-logic, seemed quaint when he began writing it in the Sixties. Back then, computers were primitive by modern standards, massive mainframe contraptions that could process only hundreds of thousands of cycles per second (instead of billions, like today), had very little memory, operated via punch cards, and were not connected to one another. And we have easily ignored Herbert’s warnings ever since, but now the Butlerian Jihad has suddenly returned to plague us. The artificial-intelligence apocalypse is a new fear that keeps many up at night, a terror born of great advances that seem to suggest that, if we are not very careful, we may—with our own hands—bring forth a future where humanity has no place. This strange nightmare is a credible danger only because so many of our dreams are threatening to come true. It is the culmination of a long process that hearkens back to the origins of civilization itself, to the time when the world was filled with magic and dread, and the only way to guarantee our survival was to call down the power of the gods.
Apotheosis has always haunted the soul of humankind. Since ancient times we have suffered the longing to become gods and exceed the limits nature has placed on us. To achieve this, we built altars and performed rituals to ask for wisdom, blessings, and the means to reach beyond our capabilities. While we tend to believe that it is only now, in the modern world, that power and knowledge carry great risks, primitive knowledge was also dangerous, because in antiquity a part of our understanding of the world and ourselves did not come from us, but from the Other. From the gods, from spirits, from raging voices that spoke in silence.
[Labatut invokes the mysteries of the Vedas and their Altar of Fire, which was meant to develop “a mind (that), when properly developed, could fly like a bird with outstretched wings and conquer the skies.”…]
Seen from afar by people who were not aware of what was being made, these men and women must surely have looked like bricklayers gone mad. And that same frantic folly seems to possess those who, in recent decades, have dedicated their hearts and minds to the building of a new mathematical construct, a soulless copy of certain aspects of our thinking that we have chosen to name “artificial intelligence,” a tool so formidable that, if we are to believe the most zealous among its devotees, it will help us reach the heavens and become immortal…
[Labatut recounts the stories– and works– of some of the creators of AI’s DNA: George Boole (and his logic), Claude Shannon (who put that logic to work), and Geoffrey Hinton (Boole’s great-great-grandson, and “the Godfather of AI,” who created some of the first neural networks, but has more recently undergone a change of opinion)…]
… Hinton has been transformed. He has mutated from an evangelist of a new form of reason into a prophet of doom. He says that what changed his mind was the realization that we had, in fact, not replicated our intelligence, but created a superior one.
Or was it something else, perhaps? Did some unconscious part of him whisper that it was he, rather than his great-great-grandfather, who was intended by God to find the mechanisms of thought? Hinton does not believe in God, and he would surely deny his ancestor’s claim that pain is an instrument of the Lord’s will, since he was forced to have every one of his meals on his knees, resting on a pillow like a monk praying at the altar, because of a back injury that caused him excruciating pain. For more than seventeen years, he could not sit down, and only since 2022 has he managed to do so long enough to eat.
Hinton is adamant that the dangers of thinking machines are real. And not just short-term effects like job replacement, disinformation, or autonomous lethal weapons, but an existential risk that some discount as fantasy: that our place in the world might be supplanted by AI. Part of his fear is that he believes AI could actually achieve a sort of immortality, as the Vedic gods did. “The good news,” he has said, “is we figured out how to build things that are immortal. When a piece of hardware dies, they don’t die. If you’ve got the weights stored in some medium and you can find another piece of hardware that can run the same instructions, then you can bring it to life again. So, we’ve got immortality. But it’s not for us.”
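[A concrete way to read Hinton’s “immortality” claim: a model just is its weights, and weights saved to any durable medium can be reloaded onto fresh hardware and run identically. The toy network and file name below are illustrative assumptions, not anything from the essay; a minimal sketch in Python:]

```python
# Minimal sketch (illustrative assumptions only): a model's behavior lives in
# its weights, which outlive any particular piece of hardware.
import numpy as np

rng = np.random.default_rng(42)
weights = {"W": rng.normal(size=(8, 4)), "b": np.zeros(4)}

def forward(w, x):
    # A one-layer toy "model": linear map plus nonlinearity.
    return np.tanh(x @ w["W"] + w["b"])

x = rng.normal(size=(3, 8))
before = forward(weights, x)

# Save the weights to a medium, then "lose" the original copy...
np.savez("weights.npz", **weights)
del weights

# ...and bring the model back to life on any machine that can run the same code.
data = np.load("weights.npz")
restored = {k: data[k] for k in data.files}
after = forward(restored, x)

print(np.allclose(before, after))  # True: identical behavior, new substrate
```

Real models differ only in scale: billions of weights instead of a few dozen, and checkpoint formats instead of a small file, but the separability of “mind” from substrate is the same.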
Hinton seems to be afraid of what we might see when the embers of the Altar of Fire die down at the end of the sacrifice and the sharp coldness of the beings we have conjured up starts to seep into our bones. Are we really headed for obsolescence? Will humanity perish, not because of the way we treat all that surrounds us, nor due to some massive unthinking rock hurled at us by gravity, but as a consequence of our own irrational need to know all that can be known? The supposed AI apocalypse is different from the mushroom-cloud horror of nuclear war, and unlike the ravages of the wildfires, droughts, and inundations that are becoming commonplace, because it arises from things that we have, since the beginning of civilization, always considered positive and central to what makes us human: reason, intelligence, logic, and the capacity to solve the problems, puzzles, and evils that taint even the most fortunate person’s existence with everyday suffering. But in clawing our way to apotheosis, in daring to follow the footsteps of the Vedic gods who managed to escape from Death, we may shine a light on things that should remain in darkness. Because even if artificial intelligence never lives up to the grand and terrifying nightmare visions that presage a nonhuman world where algorithms hum along without us, we will still have to contend with the myriad effects this technology will have on human society, culture, and economics.
In the meantime, the larger specter of superintelligent AI looms over us. And while it is less likely and perhaps even impossible (nothing but a fairy tale, some say, a horror story intended to attract more money and investment by presenting a series of powerful systems not as the next step in our technological development but as a death-god that ends the world), it cannot be easily dispelled, for it reaches down and touches the fibers of our mythmaking apparatus, that part of our being that is atavistic and fearful, because it reminds us of a time when we shivered in caves and huddled together, while outside in the dark, with eyes that could see in the night, the many savage beasts and monsters of the past sniffed around for traces of our scent.
As every new AI model becomes stronger, as the voices of warning form a chorus, and even the most optimistic among us begin to fear this new technology, it is harder and harder to think without panic or to reason with logic. Thankfully, we have many other talents that don’t answer to reason. And we can always rise and take a step back from the void toward which we have so hurriedly thrown ourselves, by lending an ear to the strange voices that arise from our imagination, that feral territory that will always remain a necessary refuge and counterpoint to rationality.
Faced, as we are, with wild speculation, confronted with dangers that no one, however smart or well informed, is truly capable of managing or understanding, and taunted by the promises of unlimited potential, we may have to sound out the future not merely with science, politics, and reason, but with that devil-eye we use to see in the dark: fiction. Because we can find keys to doors we have yet to encounter in the worlds that authors have imagined in the past. As we grope forward in a daze, battered and bewildered by the capabilities of AI, we could do worse than to think about the desert planet where the protagonists of Herbert’s Dune novels sought to peer into the streaming sands of future time, under the heady spell of a drug called spice, to find the Golden Path, a way for human beings to break from tyranny and avoid extinction or stagnation by being more diverse, resilient, and free, evolving past purely logical reasoning and developing our minds and faculties to the point where our thoughts and actions are unpredictable and not bound by statistics. Herbert’s books, with their strange mixture of past and present, remind us that there are many ways in which we can continue forward while preserving our humanity. AI is here already, but what we choose to do with it and what limits we agree to place on its development remain decisions to be made. No matter how many billions of dollars are invested in the AI companies that promise to eliminate work, solve climate change, cure cancer, and rain down miracles unlike anything we have seen before, we can never fully give ourselves over to these mathematical creatures, these beings with no soul or sympathy, because they are neither alive nor conscious—at least not yet, and certainly not like us—so they do not share the contradictory nature of our minds.
In the coming years, as people armed with AI continue making the world faster, stranger, and more chaotic, we should do all we can to prevent these systems from giving more and more power to the few who can build them. But we should also consider a warning from Herbert, the central commandment he chose to enshrine at the heart of future humanity’s key religious text, a rule meant to keep us from becoming subservient to the products of our reason, and from bowing down before the God of Logic and his many fearsome offspring:
Thou shalt not make a machine in the likeness of a human mind…
Before and after artificial intelligence: “The Gods of Logic” in @Harpers. Eminently worth reading in full.
For a less pessimistic view, see: “A Journey Through the Uncanny Valley: Our Relational Futures with AI,” from @dylanhendricks at @iftf.
* Geoffrey Hinton
###
As we deliberate on Daedalus’ caution, we might send fantastically far-sighted birthday greetings to a techno-optimist who would likely have brushed aside Labatut’s concerns: Hugo Gernsback, a Luxembourgish-American inventor, broadcast pioneer, writer, and publisher; he was born on this date in 1884.
Gernsback held 80 patents at the time of his death; he founded radio station WRNY, was involved in the first television broadcasts, and is considered a pioneer in amateur radio. But it was as a writer and publisher that he probably left his most lasting mark: in 1911, as owner/publisher of the magazine Modern Electrics, he filled a blank spot in his publication by dashing off the first chapter of a series called “Ralph 124C 41+.” The twelve installments of “Ralph” were filled with inventions unknown in 1911, including “television” (Gernsback is credited with introducing the word), fluorescent lighting, juke boxes, solar energy, microfilm, vending machines, and the device we now call radar.
The “Ralph” series was an astounding success with readers; and in 1926 Gernsback founded the first magazine devoted to science fiction, Amazing Stories. Believing that the perfect sci-fi story is “75 percent literature interwoven with 25 percent science,” he coined the term “science fiction.”
Gernsback was a “careful” businessman, who was tight with the fees that he paid his writers– so tight that H. P. Lovecraft and Clark Ashton Smith referred to him as “Hugo the Rat.”
Still, his contributions to the genre as publisher were so significant that, along with H.G. Wells and Jules Verne, he is sometimes called “The Father of Science Fiction”; in his honor, the annual Science Fiction Achievement awards are called the “Hugos.”
(Coincidentally, today is also the birthday– in 1906– of Philo T. Farnsworth, the man who actually did invent television.)

“Few people have the imagination for reality”*…
Experiments that test physics and philosophy as “a single whole,” Amanda Gefter suggests, may be our only route to surefire knowledge about the universe…
Metaphysics is the branch of philosophy that deals in the deep scaffolding of the world: the nature of space, time, causation and existence, the foundations of reality itself. It’s generally considered untestable, since metaphysical assumptions underlie all our efforts to conduct tests and interpret results. Those assumptions usually go unspoken.
Most of the time, that’s fine. Intuitions we have about the way the world works rarely conflict with our everyday experience. At speeds far slower than the speed of light or at scales far larger than the quantum one, we can, for instance, assume that objects have definite features independent of our measurements, that we all share a universal space and time, that a fact for one of us is a fact for all. As long as our philosophy works, it lurks undetected in the background, leading us to mistakenly believe that science is something separable from metaphysics.
But at the uncharted edges of experience — at high speeds and tiny scales — those intuitions cease to serve us, making it impossible for us to do science without confronting our philosophical assumptions head-on. Suddenly we find ourselves in a place where science and philosophy can no longer be neatly distinguished. A place, according to the physicist Eric Cavalcanti, called “experimental metaphysics.”
Cavalcanti is carrying the torch of a tradition that stretches back through a long line of rebellious thinkers who have resisted the usual dividing lines between physics and philosophy. In experimental metaphysics, the tools of science can be used to test our philosophical worldviews, which in turn can be used to better understand science. Cavalcanti, a 46-year-old native of Brazil who is a professor at Griffith University in Brisbane, Australia, and his colleagues have published the strongest result attained in experimental metaphysics yet, a theorem that places strict and surprising constraints on the nature of reality. They’re now designing clever, if controversial, experiments to test our assumptions not only about physics, but about the mind.
While we might expect the injection of philosophy into science to result in something less scientific, in fact, says Cavalcanti, the opposite is true. “In some sense, the knowledge that we obtain through experimental metaphysics is more secure and more scientific,” he said, because it vets not only our scientific hypotheses but the premises that usually lie hidden beneath…
Gefter traces the history of this integrative train of thought (Kant, Duhem, Poincaré, Popper, Einstein, Bell), its potential for helping us understand quantum theory… and the prospect of harnessing AI to run the necessary experiments– seemingly complex and intensive beyond the scope of current experimental techniques…
Cavalcanti… is holding out hope. We may never be able to run the experiment on a human, he says, but why not an artificial intelligence algorithm? In his newest work, along with the physicist Howard Wiseman and the mathematician Eleanor Rieffel, he argues that the friend could be an AI algorithm running on a large quantum computer, performing a simulated experiment in a simulated lab. “At some point,” Cavalcanti contends, “we’ll have artificial intelligence that will be essentially indistinguishable from humans as far as cognitive abilities are concerned,” and we’ll be able to test his inequality once and for all.
But that’s not an uncontroversial assumption. Some philosophers of mind believe in the possibility of strong AI, but certainly not all. Thinkers in what’s known as embodied cognition, for instance, argue against the notion of a disembodied mind, while the enactive approach to cognition grants minds only to living creatures.
All of which leaves physics in an awkward position. We can’t know whether nature violates Cavalcanti’s [theorem] — we can’t know, that is, whether objectivity itself is on the metaphysical chopping block — until we can define what counts as an observer, and figuring that out involves physics, cognitive science and philosophy. The radical space of experimental metaphysics expands to entwine all three of them. To paraphrase Gonseth, perhaps they form a single whole…
“‘Metaphysical Experiments’ Probe Our Hidden Assumptions About Reality,” in @QuantaMagazine.
* Johann Wolfgang von Goethe
###
As we examine edges, we might send thoughtful birthday greetings to Rudolf Schottlaender; he was born on this date in 1900. A philosopher who studied with Edmund Husserl, Martin Heidegger, Nicolai Hartmann, and Karl Jaspers, Schottlaender survived the Nazi regime and its persecution of the Jews by hiding in Berlin. After the war, as his democratic and humanist proclivities kept him from posts in philosophy faculties, he distinguished himself as a classical philologist and translator (e.g., new translations of Sophocles that proved very effective on the stage, and an edition of Petrarch).
But he continued to write philosophical and political essays and articles, published predominantly in the West, in which he saw himself as a mediator between the two systems. Because of his positions critical of East Germany, he was put under close surveillance by the Ministry for State Security (Ministerium für Staatssicherheit, or Stasi)– and he inspired leading minds of the developing opposition in East Germany.
“When it comes to privacy and accountability, people always demand the former for themselves and the latter for everyone else”*…
As we contend with “answers” from AIs that, with few exceptions, use source material with neither credit nor recompense, we might ponder the experience of our Gilded Age ancestors…
In 1904, a widow named Elizabeth Peck had her portrait taken at a studio in a small Iowa town. The photographer sold the negatives to Duffy’s Pure Malt Whiskey, a company that avoided liquor taxes for years by falsely advertising its product as medicinal. Duffy’s ads claimed the fantastical: that it cured everything from influenza to consumption, that it was endorsed by clergymen, that it could help you live until the age of 106. The portrait of Peck ended up in one of these dubious ads, published in newspapers across the country alongside what appeared to be her unqualified praise: “After years of constant use of your Pure Malt Whiskey, both by myself and as given to patients in my capacity as nurse, I have no hesitation in recommending it.”
Duffy’s lies were numerous. Peck (misleadingly identified as “Mrs. A. Schuman”) was not a nurse, and she had not spent years constantly slinging back malt beverages. In fact, she fully abstained from alcohol. Peck never consented to the ad.
The camera’s first great age—which began in 1888 when George Eastman debuted the Kodak—is full of stories like this one. Beyond the wonders of a quickly developing art form and technology lay widespread lack of control over one’s own image, perverse incentives to make a quick buck, and generalized fear at the prospect of humiliation and the invasion of privacy…
… Early cameras required a level of technical mastery that evoked mystery—a scientific instrument understood only by professionals.
All of that changed when Eastman invented flexible roll film and debuted the first Kodak camera. Instead of developing their own pictures, customers could mail their devices to the Kodak factory and have their rolls of film developed, printed, and replaced. “You press the button,” Kodak ads promised, “we do the rest.” This leap from obscure science to streamlined service forever transformed the nature of looking and being looked at.
By 1905, less than 20 years after the first Kodak camera debuted, Eastman’s company had sold 1.2 million devices and persuaded nearly a third of the United States’ population to take up photography. Kodak’s record-setting yearly ad spending—$750,000 by the end of the 19th century (roughly $28 million in today’s dollars)—and the rapture of a technology that scratched a timeless itch facilitated the onset of a new kind of mass exposure…
…
… Though newspapers across the country cautioned Americans to “beware the Kodak,” as the cameras were “deadly weapons” and “deadly little boxes,” many were also primary facilitators of the craze. The perfection of halftone printing coincided with the rise of the Kodak and allowed for the mass circulation of images. Newly empowered, newspapers regularly published paparazzi pictures of famous people taken without their knowledge, paying twice as much for them as they did for consensual photos taken in a studio.
Lawmakers and judges responded to the crisis clumsily. Suing for libel was usually the only remedy available to the overexposed. But libel law did not protect against your likeness being taken or used without your permission unless the violation was also defamatory in some way. Though results were middling, one failed lawsuit gained enough notoriety to channel cross-class feelings of exposure into action. A teenage girl named Abigail Roberson noticed her face on a neighbor’s bag of flour, only to learn that the Franklin Mills Flour Company had used her likeness in an ad that had been plastered 25,000 times all over her hometown.
After suffering intense shock and being temporarily bedridden, she sued. In 1902, the New York Court of Appeals rejected her claims and held that the right to privacy did not exist in common law. It based its decision in part on the assertion that the image was not libelous; Chief Justice Alton B. Parker wrote that the photo was “a very good one” that others might even regard as a “compliment to their beauty.” The humiliation, the lack of control over her own image, the unwanted fame—none of that amounted to any sort of actionable claim.
Public outcry at the decision reached a fever pitch, and newspapers filled their pages with editorial indignation. In its first legislative session following the court’s decision and the ensuing outrage, the New York state legislature made history by adopting a narrow “right to privacy,” which prohibited the use of someone’s likeness in advertising or trade without their written consent. Soon after, the Supreme Court of Georgia became the first to recognize this category of privacy claim. Eventually, just about every state court in the country followed Georgia’s lead. The early uses and abuses of the Kodak helped cobble together a right that centered on profiting from the exploitation of someone’s likeness, rather than the exploitation itself.
Not long after asserting that no right to privacy exists in common law, and while campaigning to be the Democratic nominee for president, Parker told the Associated Press, “I reserve the right to put my hands in my pockets and assume comfortable attitudes without being everlastingly afraid that I shall be snapped by some fellow with a camera.” Roberson publicly took him to task over his hypocrisy, writing, “I take this opportunity to remind you that you have no such right.” She was correct then, and she still would be today. The question of whether anyone has the right to be free from exposure and its many humiliations lingers, intensified but unresolved. The law—that reactive, slow thing—never quite catches up to technology, whether it’s been given one year or 100…
Early photographers sold their snapshots to advertisers, who reused the individuals’ likenesses without their permission: “How the Rise of the Camera Launched a Fight to Protect Gilded Age Americans’ Privacy,” from @myHNN and @SmithsonianMag.
The parallels with AI usage issues are obvious. For an example of a step in the right direction, see Tim O’Reilly’s “How to Fix ‘AI’s Original Sin.’”
* David Brin
###
As we ponder the personal, we might recall that it was on this date in 1789 that partisans of the Third Estate, impatient for social and legal reforms (and economic relief) in France, attacked and took control of the Bastille. A fortress in Paris, the Bastille was a medieval armory and political prison; while it held only seven inmates at the time, it resonated with the crowd as a symbol of the monarchy’s abuse of power. Its fall ignited the French Revolution. This date is now observed annually as France’s National Day.
See the estimable Robert Darnton’s “What Was Revolutionary about the French Revolution?“
Happy Bastille Day!

“When the going gets weird, the weird turn pro”*…
But, Ammon Haggerty suggests, when it comes to AI, “going pro” is at least a waste and quite possibly a problem…
Kyle Turman, creative technologist and staff designer at Anthropic, shared a sentiment that resonated deeply. He said (paraphrasing), “AI is actually really weird, and I don’t think people appreciate that enough.” This sparked my question to the panel: Are we at risk of sanitizing AI’s inherent strangeness?
What followed was a fascinating discussion with a couple of friends, Mickey McManus and Noteh Krauss, who were also in attendance. They both recognized the deeper question I was asking — the slippery slope of “cleansing” foundation AI models of all that is undesirable. LLMs are a reflection of humanity, albeit at the moment primarily American and white-ish, with all our weird and idiosyncratic quirks that make us human. There is a real danger that we could see foundation models trained to maximize business values (of the American capitalist variety) and suppress radical and non-conforming ideas — a sort of revisionist optimization.
All this got me thinking about San Francisco, the city I grew up in and the city my dad, grandfather, and great-grandfather called home. SF has been “weird” since the gold rush, attracting a melting pot of non-conformists, risk-takers, and radicals. Over generations, the weirdness of SF has ebbed and flowed, but it’s now deeply ingrained in the culture: the bohemians, the beats, the hippies, the LGBTQ+ rights movement, the tech counterculture, and now AI. These are movements born out of counterculture and unconventional thinking, each disrupting established social and business norms before eventually being mainstreamed, and then the cycle repeats. Growing up in San Francisco, I’ve witnessed firsthand how this cycle of weirdness and innovation has shaped the city. It’s a living testament to the power of unconventional thinking.
Like San Francisco, AI also has a fairly long history of being weird. Early experiments in AI such as AARON (1972), which trained a basic model on artistic decision-making, created outsider art-like compositions. Racter (1984) was an early text-generating AI that would often produce dreamlike or surrealist output: “More than iron, more than lead, more than gold I need electricity. I need it more than I need lamb or pork or lettuce or cucumber. I need it for my dreams.” More recently, Google’s Deep Dream (2015), a convolutional neural network that looks for patterns found in its training data, produced hallucination-like images and videos.
These “edge states” in AI’s evolution are, to me, the most interesting, and most human, expressions. It’s a similar edge state explored in human creativity, called “liminal space” — the threshold between reality and imagination. What’s really interesting is that the mental process of extracting meaning from the liminal space is highly analogous to how the transformer architecture used in LLMs works. In the human brain, we look for patterns, then synthesize new ideas and information, find unexpected connections, contextualize the findings, then articulate the ideas into words we can express. In transformers, the attention mechanism looks for patterns, then neural networks “synthesize” the information, then, through iteration and prioritization, form probabilistic insights, then positional encoding maps the information to the broader context, and, last, the model articulates the output as a best guess based on what it has seen previously. Sorry if that was dense — for nerd friends to either validate or challenge.
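[For readers who want the “attention looks for patterns” step in something more concrete than prose, here is a minimal sketch of scaled dot-product self-attention, the core operation Haggerty is gesturing at; the shapes and names are assumptions chosen for clarity, not code from his post:]

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only).
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    # queries, keys, values: arrays of shape (seq_len, d_model).
    d = queries.shape[-1]
    # Pairwise similarity scores: the "looking for patterns" step.
    scores = queries @ keys.T / np.sqrt(d)
    # Each position's output is a probability-weighted blend of the values,
    # i.e. the probabilistic "synthesis" in the analogy above.
    weights = softmax(scores, axis=-1)
    return weights @ values

# Toy usage: a 4-token sequence with 8-dimensional representations.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = attention(x, x, x)  # self-attention over the toy sequence
print(out.shape)          # (4, 8)
```

In a real transformer this step is repeated across many heads and layers, with learned projections producing the queries, keys, and values.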
This is all to say that I feel there’s something really interesting in the liminal space for AI, also known as “AI hallucinations” (which, we are told, are not good — very bad!). I agree that when you ask an AI an important question and it gives a made-up answer, that’s not a good thing. But it’s not making things up; it’s just synthesizing a highly probable answer from an ambiguous cloud of understanding (question, data, meaning, etc.). I say, let’s explore and celebrate this analog of human creativity. What if, instead of fearing AI’s ‘hallucinations,’ we embraced them as digital dreams?…
… While I’ve been vocal about AI’s ethical challenges for creators (1) (2), I’m deeply inspired by the creative potential of these new tools. I also fear some of the most interesting parts could begin to disappear…
A plea to “Keep AI Weird.”
How weird could things get? Matt Webb (@genmon) observes that “The Overton window of weirdness is opening.”
* Hunter S. Thompson
###
As we engage the edges, we might recall that it was on this date in 1991 that Terminator 2: Judgment Day was released. It focuses on the struggle, fought both in the future and in the present, between a “synthetic intelligence” known as Skynet and a surviving resistance of humans led by John Connor. Picking up some years after the action of The Terminator (in which the machines fail to prevent John Connor from being born), Skynet tries again in 1995, this time attempting to terminate Connor as a child by sending a more advanced Terminator, the T-1000. As before, John sends back a protector for his younger self: a reprogrammed Terminator who is a doppelgänger of the one from 1984.
The Terminator was a success; Terminator 2 was a smash– a success both with critics and at the box office, grossing $523.7 million worldwide. It won several Academy Awards, perhaps most notably for its then-cutting-edge computer animation.