(Roughly) Daily

Posts Tagged ‘AI’

“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim”*…

An empty set of stadium seats with a single bright red chair standing out among predominantly white chairs.

Anil Dash, with a grounded view of artificial intelligence…

Even though AI has been the most-talked-about topic in tech for a few years now, we’re in an unusual situation where the most common opinion about AI within the tech industry is barely ever mentioned.

Most people who actually have technical roles within the tech industry, like engineers, product managers, and others who actually make the technologies we all use, are fluent in the latest technologies like LLMs. They aren’t the big, loud billionaires that usually get treated as the spokespeople for all of tech.

And what they all share is an extraordinary degree of consistency in their feelings about AI, which can be pretty succinctly summed up:

Technologies like LLMs have utility, but the absurd way they’ve been over-hyped, the fact they’re being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.

What’s amazing is the reality that virtually 100% of tech experts I talk to in the industry feel this way, yet nobody outside of that cohort will mention this reality. What we all want is for people to just treat AI as a “normal technology,” as Arvind Narayanan and Sayash Kapoor so perfectly put it. I might be a little more angry and a little less eloquent: stop being so goddamn creepy and weird about the technology! It’s just tech, everything doesn’t have to become some weird religion that you beat people over the head with, or gamble the entire stock market on…

Eminently worth reading in full: “The Majority AI View,” from @anildash.com.

Pair with: “Artificial Intelligences, So Far,” from @kevinkelly.bsky.social.

For an explanation of (some of) the dangers of over-hyping, see: “America’s future could hinge on whether AI slightly disappoints,” from @noahpinion.blog.web.brid.gy.

And for a peek at what lies behind each GenAI query: “Cartography of generative AI,” from @tallerestampa.bsky.social via @flowingdata.com.

While the arguments above are practical, note that a plethora of tech experts have weighed in with a note of existential caution: “Statement on Superintelligence.”

Further to which (and finally), a piece from the Federal Reserve Bank of Dallas, projecting the economic impact of AI. It suggests that AI could provide a modest but meaningful boost to GDP over the next 25 years… if the Dallas Fed’s “Goldilocks Scenario” (in which, per Dash’s and Kelly’s comments, AI makes consistent incremental contributions to “keep living standards improving at their historical rate”) plays out. You’ll note that they also considered two other scenarios: a “benign singularity” scenario in which “AI eventually surpasses human intelligence, leading to rapid and unpredictable changes to the economy and society” and an “extinction singularity” in which “machine intelligence overtakes human intelligence at some finite point in the near future, the machines become malevolent, and this eventually leads to human extinction.”

Interesting times in which we live…

A line graph depicting different AI scenario projections for GDP growth from 1870 to 2050, including benign and extinction scenarios, with a log scale on the y-axis.

* Edsger W. Dijkstra

###

As we parse pumped prognostication, we might recall that it was on this date in 4004 BCE that the Universe was created… as per calculations by Archbishop James Ussher in the mid-17th century. Ussher, the head of the Anglican Church of Ireland at the time, attempted to calculate the dates of many important events described in the Old Testament. His calculations, which he published in 1650, were not that far off from many other estimates made at the time. Isaac Newton, for example, believed that the world was created in 4000 BCE.

When Clarence Darrow prepared his famous examination of William Jennings Bryan in the Scopes trial [see here], he chose to focus primarily on a chronology of Biblical events prepared by a seventeenth-century Irish bishop, James Ussher. American fundamentalists in 1925 found—and generally accepted as accurate—Ussher’s careful calculation of dates, going all the way back to Creation, in the margins of their family Bibles.  (In fact, until the 1970s, the Bibles placed in nearly every hotel room by the Gideon Society carried his chronology.)  The King James Version of the Bible introduced into evidence by the prosecution in Dayton contained Ussher’s famous chronology, and Bryan more than once would be forced to resort to the bishop’s dates as he tried to respond to Darrow’s questions.

“Bishop James Ussher Sets the Date for Creation”

Ussher

source

Written by (Roughly) Daily

October 23, 2025 at 1:00 am

“I believe there are more instances of the abridgement of freedom of the people by gradual and silent encroachments by those in power than by violent and sudden usurpations”*…

… so we’d do well to stay focused on those in power– in government, to be sure; but increasingly also on the emerging oligarchs grabbing the reins.

Further, in a fashion, to yesterday’s post… there’s so much going on these days– threats to democracy and freedom and well-being coming from so many directions– that it’s all too easy to miss something important. Allison Stanger calls our attention to one such dynamic: just as, starting in the 17th century, the East India Company’s commercial success gradually justified new powers [see, e.g., here, here, and the almanac entry here], today’s AI firms seek to leverage technical prowess to assume public functions by default…

On December 31, 1600, Queen Elizabeth I signed a royal charter granting the East India Company exclusive rights to conduct trade in the Indian Ocean region. The document was precise in its limitations: The company could establish trading posts, negotiate with local rulers, and defend its commercial interests. Nothing more.

Seventy-seven years later, the same company had acquired the right to mint currency on behalf of the British crown. By 1765, it controlled the tax collection (ruthlessly enforced by its own private army) for the Indian provinces of Bengal, Bihar, and Orissa—territories containing roughly 20 million people. What began as commercial efficiency had become imperial governance. The transformation was so gradual that few contemporaries even noticed sovereignty shifting in the region from local rule to corporation.

A similar pattern can be seen today with national governments and Big Tech—only this time, centuries of drift have been compressed into months. Where the East India Company deployed trading posts and private armies, today’s technology firms and specifically AI development companies use data pipelines, data centers, and algorithmic systems. The medium has changed; the mechanics of private power assuming public functions remain the same.

Consider the trajectory of Elon Musk’s so-called “Department of Government Efficiency” (DOGE). Established in February 2025 with the stated goal of eliminating bureaucratic waste but an unstated aspiration to vacuum up new data to improve Musk’s companies, DOGE began with access to federal payment systems—ostensibly to identify inefficiencies. Within weeks, reports emerged that DOGE personnel had gained the ability to alter government databases, including Social Security records and contractor payments. The justification remained consistent: To deliver efficiency, one must first seize control.

The parallel extends beyond metaphor. Just as the East India Company’s commercial success gradually justified new powers, today’s AI firms seek to leverage technical prowess to assume public functions by default, implicitly assuming that the reallocation of power will serve human flourishing. Each efficiency gain becomes justification for the next transfer of authority, yet the costs of that automation go uncalculated.

What once took generations now takes quarters; the key difference is the ease with which private digital systems can be aligned with the politics of friends and enemies. Communications systems, financial networks, and governance mechanisms are no longer reshaped through military conquest but by software updates. Increasingly, those same systems are being weaponized against the very allies who helped build them.

From content moderation to infrastructure control to monetary governance, AI companies are taking on public operations. As AI becomes a more prominent feature of everyday life, already existing problems in our public life will proliferate exponentially. The transformation before us is likely to proceed through three variants—algorithmic capture of information systems, weaponization of critical infrastructure, and cryptocurrency’s escape from public accountability. Absent immediate intervention, democratic societies risk permanent subordination to unelected digital sovereigns…

[Stanger unpacks the three variants, with examples from Meta, Starlink, and the Trump organization’s World Liberty Financial…]

… The choice is still ours, but the time to act is now. Democracies can reclaim control over critical infrastructure—or continue outsourcing it to corporate entities that increasingly resemble the East India Company: efficient, unaccountable, and sovereign in all but name.

As American allies have discovered, platform dependency is a trap that snaps shut when you least expect it. The question facing democratic societies is whether they will escape this trap while they still can, or whether they will remain subject to the whims of unelected digital sovereigns.

Everything scientists most value—objectivity, truth-seeking, skepticism and transparency—is at stake. These digital sovereigns are no longer merely connecting the world—they are remaking it. Whether this transformation serves public values or corporate profits will decide not only the future of technology—but the fate of self-governance.

“The right to search for truth, implies a duty,” warned Albert Einstein. “One must not conceal any part of what one has recognized to be true.” The true cost of “efficiency” may be democracy itself, which is currently at risk of becoming just another social atavism of the analog age…

“The AI Raj: How tech giants are recolonizing power,” from @allisonstanger.bsky.social in @thebulletin.org.

Oh, and how might all of this work out even if there are no reins? “Longtime Investor Warns the AI Industry Is Set to Collapse for a Basic Financial Reason”: “Each big tech company needs a global monopoly in AI to sustain their success and market value. They are not all going to get one.”… meantime, the damage to society is done…

* James Madison

###

As we take it back, we might recall that the Battle of Gaugamela was fought on this date in 331 BCE. The forces of the Army of Macedon under Alexander the Great and the Persian Army under King Darius III met for the second time. Alexander and the Macedonians were victorious. The battle is considered the final blow to the Achaemenid (Persian) Empire, resulting in its complete conquest by Alexander.

A historical painting depicting the chaotic Battle of Gaugamela, featuring soldiers on horseback in combat, with figures in elaborate armor and banners against a dramatic sky.
Battle of Alexander versus Darius by Pietro da Cortona (source)

“Knowledge without character”*…

An empty canvas framed in gold, titled 'ERASED de KOONING DRAWING' by Robert Rauschenberg, created in 1953.
Robert Rauschenberg, Erased de Kooning Drawing (1953), SFMOMA

Much has been written about AI and its possible consequences, both positive (e.g., productivity, innovation) and negative (e.g., resource consumption, job elimination, and economic inequality).

The estimable Nicholas Carr weighs in on AI’s potential impact on culture…

One day in 1953, a young and at the time little-known experimental artist named Robert Rauschenberg arrived at the studio of the great abstract expressionist Willem de Kooning bearing a bottle of Jack Daniels and a strange request. He wanted the famous artist to give him one of his drawings so he could erase it. De Kooning was taken aback. “I remember that the idea of destruction kept coming into the conversation,” Rauschenberg later recalled, “and I kept trying to show that it wouldn’t be destruction.”

Rauschenberg explained to de Kooning that he wanted to see if a work of art could be created not just through the inscription of marks but through their removal. Could art be erasive as well as inscriptive? After much back-and-forth, and several servings of brown liquor, de Kooning agreed. He chose a drawing he had recently completed — one he was fond of — and gave it to Rauschenberg.

Over the course of the next two months, Rauschenberg slowly, meticulously erased the drawing, taking off layers of grease pencil, charcoal, graphite, and ink. He went through forty erasers. All that remained in the end were a few faint traces of the original sketch. With the help of his friend Jasper Johns, he then carefully matted and framed the work, and Johns wrote a label for it, inscribing the title, artist, and date so precisely that they appeared to have been printed out by a machine:

ERASED de KOONING DRAWING
ROBERT RAUSCHENBERG
1953

“The simple, gilded frame and understated inscription are integral parts of the finished artwork,” writes a curator at the San Francisco Museum of Modern Art, which acquired the work in 1998, “offering the sole indication of the psychologically loaded act central to its creation.” Even a work of erasure demands a frame, Rauschenberg understood, a boundary establishing its place in the world. Erasure cries out for inscription. We want to know the marks were there before they weren’t.

Erasive is an exceptionally uncommon word. It was coined in the seventeenth century but has rarely been used since. Word-processing and messaging spellcheckers underline it with suspicion. Its rarity testifies to our discomfort with, as the SFMOMA writer terms it, the “psychologically loaded act” of erasure. But, thanks to the rise of what tech companies have cheerfully branded “generative AI,” the word seems certain to be used more often in the years to come. Our condition demands it. Behind every act of AI generation lie many acts of erasure. We have entered the erasive age.

Although we assume that media is fundamentally inscriptive, a means of preserving and transmitting human-made marks of one sort or another, communication systems have always also entailed erasure. What they erase are the spatiotemporal boundaries that in nature fix speech to speaker. A person says something, and if there are others in earshot, they hear it. Otherwise it’s gone. But that same person writes those same words down on a sheet of paper, or enters them into a computer network, and the words can travel through space and persist through time. Much of the value of media, cultural and financial, has always stemmed from its power to erase the material world’s physical constraints on the flow of speech, the flow of information.

So long as erasure served our desire to transmit our own marks and receive the marks made by others, we didn’t worry about it. We celebrated it — the death of distance! the transcendence of time! — just as we celebrate other technologies that free us, or at least shield us, from the world’s frictions and constraints. We want our marks, and the marks of others, to flow freely through space and time. We want the speech of distant people to arrive in our mailbox, to issue forth from our radio and TV, to hang on the walls of a museum, to appear on the screen of our phone. Take away such freedom of movement, return us to the original communication system of mouth and ear, and you take away knowledge, culture, entertainment, pretty much the entirety of modernity.

Erasure is good for business. The more that media has erased the world, the more dependent society has become on the systems and services of media companies and the more profits those companies have earned. That’s why people like Mark Zuckerberg have been so eager to promote the benefits of “frictionlessness” in communication and social relations. What we failed to appreciate is that the pursuit of profit would lead the companies beyond the erasure of spatiotemporal boundaries. They would seek to erase the greatest source of friction in their operations: their reliance on human creativity and expression. They would seek to replace the human source of the information they transmit — speakers and their speech — with highly efficient machines capable of creating “content” cheaply and on demand.

In creating tradable derivatives of human speech, AI erases the human voice, the human hand. First, it turns the works of culture into numbers, then it compresses those numbers into a generalized statistical model. Of the originals only traces remain. If Rauschenberg sought to show that erasure can be a generative act, AI bots have the opposite goal: to show that generation can be an erasive act. Fulfilling de Kooning’s fears, generation turns destructive…

… The more we draw on AI to shape our perception and understanding of the world, to structure our thoughts and words, to express ourselves, the more complicit we become in erasing culture, the past, others, ourselves. Eventually, should we continue down the path, even the memory of what’s been erased will be erased. No frame, no matting, no inscription. Only the empty revelation of erasure…

Generation as destruction: “The Erasive Age,” an entry in Carr’s ongoing series, Dead Speech, on the cultural and economic consequences of AI.

Pair with Rob Horning on AI’s commodification of language: “The reified mind.”

See also Henry Farrell’s “Large language models are cultural technologies. What might that mean?” and “A.I. Is Coming for Culture,” from Joshua Rothman.

* “The Seven Social Sins are:

Wealth without work.
Pleasure without conscience.
Knowledge without character.
Commerce without morality.
Science without humanity.
Worship without sacrifice.
Politics without principle.”

– From a sermon given by Frederick Lewis Donaldson in Westminster Abbey, 1925

###

As we husband our humanity, we might recall that it was on this date in 410 that the three-day Sack of Rome by the barbarian Visigoths, led by Alaric, ended.

Rome was at the time no longer the capital of the Western Roman Empire (it had moved to Mediolanum and then to Ravenna); but it remained the Empire’s spiritual and cultural center, “the eternal city.”  And it had not fallen to an enemy in almost 800 years (the Gauls sacked Rome in 387 BCE). As St. Jerome, living in Bethlehem at the time, wrote: “The City which had taken the whole world was itself taken.”

A 15th-century depiction of the Sack of Rome (with anachronistic details)

source

Written by (Roughly) Daily

August 27, 2025 at 1:00 am

“In science, it happens every few years that something that was previously considered a mistake suddenly reverses all views, or that an inconspicuous and despised idea becomes the ruler of a new realm of thought.”*…

A robotic hand holding a globe, symbolizing the relationship between technology and Earth, with a digital network background.

Today’s post is, in essence, a recommendation of the current issue of a publication referenced here often, Noema. Its editor, Nathan Gardels, previews its contents…

When a concept that organizes our reality is replaced by an entirely different and incommensurate worldview, it is called a “paradigm shift.”

The theme of this edition of Noema was conceived in early 2024. At that time, we had in mind the epochal shift from the paradigm of globalization, in which markets, trade and technology cross borders, to “the Planetary,” where we recognize that the whole Earth system embeds and entangles human civilization in its habitat.

This deeper awareness has been enabled by the emergence of a technological exoskeleton of satellites, sensors and cloud computation that expands the heretofore limited scope of human understanding of the world, repositioning our place in the natural order. Neither above nor apart from nature, we have now come to realize we are part and parcel of one interdependent organism comprised of multiple intelligences striving for sustainable equilibrium.

The disclosure of climate change as a destabilizing consequence of human endeavor was enabled in the first place by planetary-scale computation. This capacity holds out the evolutionary prospect that human, machine and Earth intelligence might one day merge into a kind of planetary sapience that restores and maintains the ecological balance.

As we have written often in Noema, this conceptual reorientation would entail a redefinition of what realism means in geopolitics. This new condition calls not for the old “realpolitik” that seeks to secure the interests of nation-states against each other but for a “Gaiapolitik” aimed at securing a livable biosphere for all.

As logically compelling as this case for planetary realism may be, the paradigm shift underway is going in the opposite direction. Instead of the global interconnectivity forged in recent decades maturing into a planetary perspective, it is breaking up into a renewed nationalism more emphatically sovereigntist than before the advent of globalization.

In short, the prevailing political temperament around the world today is out of sync with the planetary imperative. This does not diminish its reality but, for the moment, eclipses and derails its emergence as the conscious organizing principle of human civilization.

The paradigm shift we are witnessing today not only marks a move away from a planetary awareness but also signals the last sigh of liberal universalism as the dominant governing philosophy of the postwar order since 1945.

The rules-based liberal international order, underwritten and guaranteed for decades by American might, has been consigned to the ash can of history by the summary defection of its founding architect from its terms and premises.

Under President Donald Trump and his allies, America has effectively joined the revisionist powers of China and Russia by baldly asserting sovereigntist self-interest unencumbered by rules that also encompass the interests of others.

Tariff walls, outright trade wars and unraveling alliances are supplanting the expansive web of global commerce, Western unity and cultural cross-fertilization that characterized times only recently. In a further break from the established order, Team Trump openly contemplates its own Anschluss of other people’s territory in Greenland, the Panama Canal and even Canada, instead of expressing outrage at China’s desire to take Taiwan, Russia’s bloody attempt to seize Ukraine or Israel’s increasing occupation of the Palestinian territories.

As Francis Fukuyama and Niall Ferguson discuss in a collage of commentary in this Noema edition, these developments portend the return to a world not unlike that of the 19th century, when the great powers carved out exclusive domains of influence.

The obvious great powers that would constitute a world apportioned in this way are China and Russia, both grasping at Eurasia, plus the United States and India. Whether Europe falls within the American sphere of influence depends on its capacity to cohere as a continental entity and find its identity as an alternative within a West that is fracturing under the strain of America’s revisionist turn.

Since the future appears to be taking us back to the 19th century, one cannot say we are in “uncharted territory.” On the contrary, we’ve been down this path before and know how it led to world wars that the global rules-based order, for all its well-known faults, was meant to avoid repeating.

On the American home front, and increasingly elsewhere in the West, it appears the “strong gods” of family, faith and nation are prevailing over the culturally liberal sentiments of an open society.

When there is no common agreement on what constitutes the good life, culture is politicized. As Alexandre Lefebvre argues in Noema, who gets to define “the good life” has become the central political question of our time. As in China, Russia, Iran or Turkey, governing authorities in the West are increasingly seeking to assign the moral substance of their vision to the state in place of the neutral proceduralism of liberal regimes that, at least in theory, embrace the diversity of all values without favor.

As the ascendant traditionalists see it, this rights-based liberalism grants a kind of converse moral substance to the state by virtue of the permissive openness it invites, nourishes and protects.

In many ways, liberalism was bound to fail just as Marxism did, and for the same reason. Marxism lacked a theory of politics that accommodated diverse constituencies because it assumed the universality of the interests of one class. Similarly, liberalism has falsely assumed its own universality, believing that there can be a consensus on only one conception of “the good life.” In reality, where some see declaring gender identity as the positive freedom to pursue self-realization, others see it as the corrosion of traditional Christian morality.

Like the British philosopher John Gray, Lefebvre suggests that the liberalism of the future may well entail a constitutionally grounded “modus vivendi” of autonomous jurisdictions as one way to keep the civil peace in diverse societies.

What is stunning in this context is how rapidly the America that elected Trump has tilted toward illiberal democracy under his tumultuous reign. Team Trump has robustly pursued retribution against political enemies, scorned universities as “the enemy,” moved to dismantle the administrative state and climate policies, demeaned the judicial system and cultivated crony corruption. Moreover, in the Orwellian name of free speech, Trump insists on ideological conformity across the board, from college students to corporate law firms.

To base the idea of democracy solely on elections invites this kind of illiberalism because it implies that majoritarian rule is all that is necessary for legitimacy. But, as the American founding fathers well understood, the will of the majority does not embrace all interests in a society, which must be protected equally. That is the reason for constitutional rule as the founding principle of a liberal polity.

In constitutional theory, the imposition of limitations and restraints — the “negative” — is what prevents the majority from absolute domination. It is the negative that makes the Constitution and the “positive” that makes government. One is the power of acting, the other the power of amending or arresting action. The two combined make a constitutional government.

It is this governing arrangement that made America great. The biggest danger of Making America Great Again is that a movement that believes it is the embodiment of the will of the majority will cast aside any constraints on its power as a contrivance by the elites of the ancien régime to keep the masses down.

In Niall Ferguson’s contribution to Noema, the historian raises the specter that “history was always against any republic lasting 250 years. This republic is in its late republican phase, with the intimations of empire much more visible.”…

… As politicized cultural battles and the churning geopolitical economy further unfold, a paradigm shift of a significance similar to planetary awareness is taking place that will redefine what it means to be human.

Across the sciences, we are coming to understand the self-organizing principle of “computation” as the building block of all forms of budding intelligence, from primitive cells to generative AI. This process involves learning from the environment, assembling information and arranging it by sharing functional instructions through “copying and pasting” code, so that an organism can develop, reproduce and sustain itself.

As Google’s Blaise Agüera y Arcas and James Manyika write in this issue, “computing existed in nature long before we built the first ‘artificial computers.’ … Understanding computing as a natural phenomenon will enable fundamental advances not only in computer science and AI, but also in physics and biology.”

More than half a century ago, they note, pioneering computer scientists had the intuition that organic and inorganic intelligence follow the same set of rules for development. “John von Neumann,” write the authors, “realized that for a complex organism to reproduce, it would need to contain instructions for building itself, along with a machine for reading and executing those instructions.” The technical requirements for that “universal constructor” in nature — the tape-like instructions of DNA — “correspond precisely to the technical requirements for the earliest computers.”

“Life,” they continue, “is computational because its existence over time depends on growth, healing or reproduction, and computation itself must evolve to support these essential functions.”

Grasping the correspondence with natural computation and learning from it, they believe, will render AI “brainlike” as it further evolves along the path from mimicking neural computation to predictive intelligence, general intelligence and, ultimately, collective intelligence. “Brains, AI agents and societies can all become more capable through increased scale. However, size alone is not enough. Intelligence is fundamentally social, powered by cooperation and the division of labor among many agents.”

In short, as philosopher of technology Tobias Rees also argues in this issue, the evolution of computation as a symbiosis of human and machine will cause us to rethink what it means to be human as, for the first time in history, a “more than human” intelligence emerges on our planet.

These contradictions and crosscurrents of the profound paradigm shifts we are living through all at once mark what future historians will surely describe as the Age of Upheaval…

FWIW, I worry that the diagnosis of our current political/cultural morass is maybe not dark enough. And as to AI, I’m no wide-eyed believer in the current cycle of hype. Indeed, I worry that AI could contribute to our social ills in the short term both by increasing and amplifying the atomization and misinformation that we suffer and by challenging the economy if, as seems all too plausible, current over-enthusiasm/over-investment occasions a crash. That said, I honor the wisdom of Roy Amara: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

In any case, every link above is eminently worth clicking/reading; better yet, buy the issue.

All change: “Paradigm Shifts,” from @noemamag.com.

[Image above: source]

* Robert Musil, The Man Without Qualities

###

As we buckle up, we might recall that it was on this date in 1944 that IBM dedicated the first program-controlled calculator, the Automatic Sequence Controlled Calculator (known best as the Harvard Mark I)– one of the earliest, if not the earliest, general-purpose electromechanical computers, and the one that laid the groundwork for subsequent development… and thus a catalyst for the string of developments– technical, social, and political– with which we’re wrestling now.

Designer Howard Aiken had enlisted IBM as a partner in 1937; company chairman Thomas Watson Sr. personally approved the project and its funding. On completion it was put to work on a set of war-related tasks, including calculations– overseen by John von Neumann– for the Manhattan Project.

The Mark I was the industry’s largest electromechanical calculator… and it was large: 51 feet long, 8 feet high, and 2 feet deep; it weighed about 9,445 pounds. The basic calculating units had to be synchronized and powered mechanically, so they were operated by a 50-foot (15 m) drive shaft coupled to a 5-horsepower electric motor, which served as the main power source and system clock. It could do 3 additions or subtractions in a second; a multiplication took 6 seconds; a division took 15.3 seconds; and a logarithm or a trigonometric function took over a minute… ridiculously slow by today’s standards, but a huge advance in its time.

Two men working with a large, complex control panel featuring numerous wires and lights, indicative of early computing technology.

source

Written by (Roughly) Daily

August 7, 2025 at 1:00 am

“I’ve been discovering, much to my dismay, that I’m not a criminal mastermind or anything. I’m just brute force and my powers in no way include super-intelligence, which kind of pisses me off.”*…

A young boy with short hair, wearing a collared shirt, is intently reading a book with a focused expression in a dimly lit setting.

How do we accommodate ourselves to the prospect of an intelligence far greater than our own? In a consideration of J.D. Beresford’s The Hampdenshire Wonder (the first recognized appearance of the concept in modern English-language literature), Ted Chiang unspools the intellectual and cultural history of this now-prevalent trope…

J.D. Beresford’s The Hampdenshire Wonder is generally considered to be the first fictional treatment of superhuman intelligence, or “superintelligence.” This is a familiar trope for readers of science fiction today, but when the novel was originally published in 1911 it was anything but. What intellectual soil needed to be tilled before this idea could sprout?

At least since Plato, Western thought has clung to the idea of a Great Chain of Being, also known as the scala naturae, a system of classification in which plants rank below animals; humans rank above animals but below angels; and angels rank above humans but below God. There was no implied movement to this hierarchy; no one expected that plants would turn into animals given enough time, or that humans would turn into angels.

But by the 1800s, naturalists like Lamarck were questioning the assumption that species were immutable; they suggested that over time organisms actually grew more complex, with the human species as the pinnacle of the process. Darwin brought these speculations into public consciousness in 1859 with On the Origin of Species, and while he emphasized that evolution branches in many directions without any predetermined goal in mind, most people came to think of evolution as a linear progression.

Only then, I think, was it possible to conceive of humanity as a point on a line that could keep extending, to imagine something that would be more than human without being supernatural.

Darwin’s half-cousin, Francis Galton, was the first to suggest the idea that mental attributes like intelligence could be quantified. Galton published a volume called Hereditary Genius in 1869, and during the 1880s and ’90s he measured people’s reaction times as a way of gauging their mental ability, pioneering what we now call the field of psychometrics. By 1905, Alfred Binet had introduced a questionnaire to measure children’s intelligence; such questionnaires would evolve into IQ tests. The validity of psychometrics is quite controversial nowadays, as people disagree about what “intelligence” means and to what extent it can be measured. Some modern cognitive scientists do not consider the term intelligence particularly useful, instead preferring to use more specific terms like executive function, attentional control, or theory of mind. In the future “intelligence” may be regarded as a historical curiosity, like phlogiston, but until we develop a more precise vocabulary, we continue to use the term. Our contemporary notion of intelligence first gained currency around the time that Beresford was writing, and one can see how that converged with the idea of the superhuman in The Hampdenshire Wonder.

The titular character of The Hampdenshire Wonder is a boy named Victor Stott…

… Victor is born with an enormous head but an ordinary body, which disappoints his athletic father but also points to certain assumptions we have about the relationship between the mental and the physical. Beresford could have made Victor both an athlete and a genius, but he opted instead to follow a trope perhaps originated by Wells: the idea that evolution is pushing humanity toward a giant-brained phenotype, which is itself implicitly premised on the idea that mental ability and physical ability are in opposition to one another. This has remained a common trope in science fiction, although there are occasional depictions of mental and physical ability going hand in hand…

[Chiang traces the development of the “superintelligence,” the problems it raises, and the ways that they are treated in The Hampdenshire Wonder and elsewhere– “whatever your wisdom, you have to live in a world of comparative ignorance, a world which cannot appreciate you, but which can and will fall back upon the compelling power of the savage—the resort to physical, brute force.”…]

… In 1993 [Vernor] Vinge [here] argued that progress in computer technology would inevitably lead to a machine form of superintelligence. He proposed the term “the singularity” to describe the date—in the next few decades—beyond which events would be impossible to imagine. Since then, the technological singularity has largely replaced biological superintelligence as a trope in science fiction. More than that, it has become a trope in the Silicon Valley tech industry, giving rise to a discourse that is positively eschatological in tone. Superintelligence lies on the other side of a conceptual event horizon. When considered as a purely fictional idea, it imposes a limit on the kind of narratives one can tell about it. But when you start imagining it as something that could exist in reality, it becomes an end to human narratives altogether.

The Hampdenshire Wonder does posit a kind of eschatological scenario, but of a completely different order. After Victor’s downfall, Challis recounts the conclusion he came to after a conversation he’d had with the child, revealing a profound terror about the finiteness of knowledge:

Don’t you see that ignorance is the means of our intellectual pleasure? It is the solving of the problem that brings enjoyment—the solved problem has no further interest. So when all is known, the stimulus for action ceases; when all is known there is quiescence, nothingness. Perfect knowledge implies the peace of death

… The idea that the search for understanding will inevitably lead to a kind of cognitive heat death is an interesting one. I don’t believe it and I doubt any scientist believes it, so it’s curious that Beresford—clearly an admirer of scientists—apparently did. Challis talks about the need for mysteries that elude explanation, which is a surprisingly anti-intellectual stance to find in a novel about superintelligence. While there is arguably a strain of anti-intellectualism in stories where superintelligent characters bring about their own downfall, those can just as easily be understood as warnings about hubris, a literary device employed as far back as the first recorded literature, “The Epic of Gilgamesh.” But The Hampdenshire Wonder, in its final pages, is making an altogether different claim: The pursuit of knowledge itself is ultimately self-defeating.

Nowadays we associate the word “prodigy” with precocious children, but in centuries past the word was used to describe anything monstrous. Victor Stott clearly qualifies as a prodigy in the modern sense, but he qualifies in the older sense too: Not only does he frighten the ignorant and superstitious, he induces a profound terror in the educated and intellectual. Seen in this light, the first novel about superintelligence is actually a work of horror SF, a cautionary tale about the dangers of knowing too much…

Superintelligence and its discontents, from @ted-chiang.bsky.social in @literaryhub.bsky.social.

Another powerful (and not unrelated) piece from Chiang: “Will A.I. Become the New McKinsey?”

* Kelly Thompson, The Girl Who Would Be King

###

As we wrestle with reason, we might wish a Joyeux Anniversaire to silk weaver Joseph Marie Jacquard; he was born on this date in 1752. Jacquard’s 1805 invention of the programmable loom, controlled by a series of punched “instruction” cards and capable of weaving essentially any pattern, ignited a technological revolution in the textile industry… indeed, it set off a chain of revolutions: it inspired Charles Babbage in the design of his “Analytical Engine” (the ur-computer), and later, Herman Hollerith, who used punched cards in the “tabulator” that he created for the 1890 Census… and in so doing, pioneered the use of those cards for computer input… which is to say that Jacquard helped create the preconditions for AI (among all of the other things that computers can do).

Portrait of Joseph Marie Jacquard, the inventor known for creating the programmable loom.

source