“The economic system is, in effect, a mere function of social organization”*…

The AI race is, of course, afoot. But while most headlines focus on the new capabilities and benchmarks achieved by competing developers, Jeremy Shapiro reminds us that the winners in this race won’t necessarily be the most objectively capable, but rather the players who most effectively integrate the technology into their organizations, economies, and societies…
Artificial intelligence has rapidly become a central arena of geopolitical competition. The United States government frames AI as a strategic asset on par with energy or defense and seeks to press its apparent lead in developing the technology. The European Union lags in platform power but seeks influence over AI through regulation, labor protections, and rule-setting. China is racing to catch up and to deploy AI at scale, combining heavy state investment with administrative control and surveillance.
Each of these rivals fears falling behind. Losing the AI race is widely understood to mean slower growth, military disadvantage, technological dependence, and diminished global influence. As a result, governments are pouring money into chips, data centers, and national AI champions, while tightening export controls and treating compute capacity as a strategic resource. But this familiar race narrative obscures a deeper danger. AI is not just another general-purpose technology. It is a force capable of reshaping the very meaning of work, income, and social status. The states that lose control of these social effects may find that technological leadership offers little geopolitical advantage.
History suggests that societies unable to absorb disruptive economic change become politically volatile, strategically erratic, and ultimately weaker competitors. The central question, then, is not only who builds the most powerful AI systems, but who can integrate them into society without triggering a societal backlash or an institutional breakdown.
Karl Polanyi’s The Great Transformation, published in 1944, explains why the capacity to “socially embed” new market forces determines national strength. By “embeddedness,” Polanyi meant that markets have historically been subordinate to social and political institutions, rather than governing them. The nineteenth-century idea of what he called a “self-regulating market” was historically novel precisely because it sought to “disembed” the economy from society and organize social life around price and competition rather than social obligation. As Polanyi put it in his most succinct formulation, “instead of economy being embedded in social relations, social relations are embedded in the economic system.”
Writing in the shadow of the Great Depression, Polanyi argued that the attempt in the nineteenth century to create a self-regulating market society that treated labor, land, and money as commodities generated social dislocation so severe that it provoked authoritarian backlash and geopolitical collapse. Stable orders, he insisted, required markets to be re-embedded in social and political institutions. Where they were not, societies sought protection by other means, which often translated into support for fascist or communist regimes that promised to tame the market. Today, it often means electing populist leaders who promise to break the entire existing order, both domestic and international.
Polanyi insisted that the idea of a “self-adjusting market implied a stark utopia” because such a system could not exist “for any length of time without annihilating the human and natural substance of society.” The interwar gold standard, for example, disciplined states in the name of efficiency, but it did so by transmitting economic shocks directly into social life. When democratic governments proved unable to shield their populations, they either abandoned the liberal economic order or turned authoritarian (or both)…
[Shapiro considers the history of the 20th century, in particular the rise of Nazi Germany, sketches the state of play in the AI arena, considers the challenge of embedding the changes that AI will bring in the U.S., Europe, and China, then teases out the ways in which this “industrial revolution” is different from its predecessors (in particular, the mobility of capital, the services- (as opposed to manufacturing-) heavy character of employment today, and the accelerating pace of tech development). He concludes…]
… Geopolitical competition in the AI age will not take place solely in clean rooms or data centers. It will also involve the less visible realm of social institutions: labor markets, communities, social protections, and political legitimacy. Polanyi teaches us that markets are powerful only when societies can bear them. When they cannot, markets provoke their own undoing, often in rather spectacular fashion.
The West’s success in the Cold War owed much to its ability to reconcile capitalism with social protection. If the AI age is another “great transformation,” the same lesson applies. Chips matter. Data matters. But the ultimate source of power may be the capacity to re-embed technological change in society without sacrificing cohesion.
That is not a liberal-progressive distraction from geopolitical competition. It is its hidden core.
“The Next Great Transformation,” from @jyshapiro.bsky.social and @open-society.bsky.social.
For a complementary perspective (with special focus on the interaction between labor and the supply side of the economy) pair with: “Brave New World- a third industrial divide?” from @thunen.bsky.social in @phenomenalworld.bsky.social.
And see also: “AI and the Futures of Work,” from Johannes Kleske (@jkleske.bsky.social). A response to dramatic predictions of AI’s impact– most recently, Matt Shumer’s viral “Something Big Is Happening”: it’s a possible future, Kleske suggests, but only one possible future– and one that, while plausible, isn’t likely (at least outside the rarefied atmosphere of coding, in which Shumer operates). In a way that echoes Shapiro’s piece above, Kleske suggests that individuals need to better understand the technology in order to retain/regain some agency, and societies need the same kind of rekindled resistance to act clearly and with purpose in re-embedding AI, and markets, in society. Not the other way around… Resonant with the thinking of Tim O’Reilly and Mike Loukides featured here before: “The best way to predict the future is to invent it”; and with Ted Chiang’s “ChatGPT Is a Blurry JPEG of the Web” and “Will A.I. Become the New McKinsey?” And then there’s the ever-illuminating Rusty Foster (riffing on Gideon Lewis-Kraus’ recent New Yorker piece): “A.I. Isn’t People.”
For a look at a high-value, trust-based use case for AI that seems to avoid the objections to AGI (and speak to Shapiro’s points), see “The Middle Game: Routers at the Edge,” from Byrne Hobart.
But back to AGI… as Nicholas Carr observes, we might understand Bostrom’s “paperclip maximizer” “not as a thought experiment but as a fable. It’s not really about AIs making paperclips. It’s about people making AIs. Look around. Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence? Are we not trapped in an ‘AI maximizer’ scenario?”
###
As we digest these developments, we might recall that it was on this date in 1962 that an early precondition for the revolution underway was first achieved: telephone and television signals were first relayed in space via the communications satellite Echo 1– basically a big metallic balloon that simply bounced radio signals off its surface. Simple, but effective.
Inflating the sphere on the ground would have required 40,000 pounds (18,144 kg) of air, so it was inflated in space instead; while in orbit, only a few pounds of gas were needed to keep it inflated.
Fun fact: the Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.

“It is what you read when you don’t have to that determines what you will be when you can’t help it”*…
… What we read– and, librarian Carlo Iacono argues, how we read.
Our inability to focus isn’t a failing. It’s a design problem, and the answer isn’t getting rid of our screen time…
Everyone is panicking about the death of reading. The statistics look damning: the share of Americans who read for pleasure on an average day has fallen by more than 40 per cent over the past 20 years, according to research published in iScience this year. The OECD calls the 2022 decline in educational outcomes ‘unprecedented’ across developed nations. In the OECD’s latest adult-skills survey, Denmark and Finland were the only participating countries where average literacy proficiency improved over the past decade. Your nephew speaks in TikTok references. Democracy itself apparently hangs by the thread of our collective attention span.
This narrative has a seductive simplicity. Screens are destroying civilisation. Children can no longer think. We are witnessing the twilight of the literate mind. A recent Substack essay by James Marriott proclaimed the arrival of a ‘post-literate society’ and invited us to accept this as a fait accompli. (Marriott does also write for The Times.) The diagnosis is familiar: technology has fundamentally degraded our capacity for sustained thought, and there’s nothing to be done except write elegiac essays from a comfortable distance.
I spend my working life in a university library, watching how people actually engage with information. What I observe doesn’t match this narrative. Not because the problems aren’t real, but because the diagnosis is wrong.
The declinist position rests on a category error: treating ‘screen culture’ as a unified phenomenon with inherent cognitive properties. As if the same device that delivers both algorithmically curated rage-bait and the complete works of Shakespeare were itself the problem, rather than how we decide to use it…
[… observing that “people who ‘can’t focus’ on traditional texts can maintain extraordinary concentration when working across modes,” he argues that “we haven’t become post-literate. We’ve become post-monomodal. Text hasn’t disappeared; it’s been joined by a symphony of other channels.”…]
… What troubles me most about the declinist position is not its diagnosis but its conclusion. The commentators who lament the post-literate society often identify the same villains I do. They recognise that technology companies are, in Marriott’s words, ‘actively working to destroy human enlightenment’, that tech oligarchs ‘have just as much of a stake in the ignorance of the population as the most reactionary feudal autocrat.’
And then they surrender. As Marriott says: ‘Nothing will ever be the same again. Welcome to the post-literate society.’
This is the move I cannot follow. To name the actors responsible and then treat the outcome as inevitable is to provide them cover. If the crisis is a force of nature, ‘screens’ destroying civilisation like some technological weather system, then there’s nothing to be done but write elegiac essays from a comfortable distance. But if the crisis is the product of specific design choices made by specific companies for specific economic reasons, then those choices can be challenged, regulated, reversed.
The fatalism, however beautifully expressed, serves the very interests it condemns. The technology companies would very much like us to believe that what they’re doing to human attention is simply the inevitable result of technological progress rather than something they’re doing to us, something that could, with sufficient political will, be stopped.
Your inability to focus isn’t a moral failing. It’s a design problem. You’re trying to think in environments built to prevent thinking. You’re trying to sustain attention in spaces engineered to shatter it. You’re fighting algorithms explicitly optimised to keep you scrolling, not learning.
The solution isn’t discipline. It’s architecture. Build different defaults. Create different spaces. Establish different rhythms. Make depth as easy as distraction currently is. Make thinking feel as natural as scrolling currently does.
What if, instead of mourning some imaginary golden age of pure text, we got serious about designing for depth across all modes? Every video could come with a searchable transcript. Every article could offer multiple entry points for different levels of attention. Our devices could recognise when we’re trying to think and protect that thinking. Schools could teach students to translate between modes the way they once taught translation between languages.
Books aren’t going anywhere. They remain unmatched for certain kinds of sustained, complex thinking. But they’re no longer the only game in town for serious ideas. A well-crafted video essay can carry philosophical weight. A podcast can enable the kind of long-form thinking we associate with written essays. An interactive visualisation can reveal patterns that pages of description struggle to convey.
The future belongs to people who can dance between all modes without losing their balance. Someone who can read deeply when depth is needed, skim efficiently when efficiency matters, listen actively during a commute, and watch critically when images carry the argument. This isn’t about consuming more. It’s about choosing consciously.
We stand at an inflection point. We can drift into a world where sustained thought becomes a luxury good, where only the privileged have access to the conditions that enable deep thinking. Or we can build something unprecedented: a culture that preserves the best of print’s cognitive gifts while embracing the possibilities of a world where ideas travel through light, sound and interaction.
The choice isn’t between books and screens. The choice is between intentional design and profitable chaos. Between habitats that cultivate human potential and platforms that extract human attention.
The civilisations that thrive won’t be the ones that retreat into text or surrender to the feed. They’ll be the ones that understand a simple truth: every idea has a natural form, and wisdom lies in matching the mode to the meaning. Some ideas want to be written. Others need to be seen. Still others must be heard, felt or experienced. The mistake is forcing all ideas through a single channel, whether that channel is a book or a screen.
Your great-grandchildren won’t read less than you do. They’ll read differently, as part of a richer symphony of sense-making. Whether that symphony sounds like music or noise depends entirely on the choices we make right now about the shape of our tools, the structure of our schools, and the design of our days.
The elegant lamenters offer a eulogy. I’m more interested in a fight…
Reunderstanding reading: “Books and screens,” from @carloiacono.bsky.social in @aeon.co.
* Oscar Wilde
###
As we turn the page, we might note that we’ve been here before, and celebrate the emergence of a design, an innovation, a technology that took on a life of its own and changed reading and… well, everything: this day in 1455 is the traditionally-given date of the publication of the Gutenberg Bible, the first Western book printed from movable type.
(Lest we think that there’s actually anything new under the sun, we might recall that The Jikji— the world’s oldest known extant movable metal type printed book– was published in Korea in 1377; and that Bi Sheng created the first known movable type– out of baked clay– in China in 1040.)

“All one wants to do is make a small, finished, polished, burnished, beautiful object”*…

… and if we don’t make them, we can collect them.
Scott Teplin reports (in Paul Lukas’ nifty newsletter, Inconspicuous Consumption) on one remarkable example…
My family and I recently vacationed in Mexico City. Nestled in the heart of the city’s vibrant Roma Norte neighborhood is a hidden gem that nearly escaped our itinerary: The Object Museum (or MODO, short for Museo del Objeto del Objeto, or “the Object of the Object” [see also here]).
To be honest, I was initially hesitant to step inside. Having grown up in Wisconsin, I developed a healthy distaste for the cluttered crap heaps featured in the infamous tourist trap the House on the Rock, and I worried that MODO might be more of the same. However, curiosity eventually won out, and I wandered in one afternoon to discover what turned out to be one of the most delightful museum experiences I’ve ever had.
The museum is a dedicated homage to the “object (i.e., the point) of the object,” showcasing vast collections of everyday items. Originally conceived as the private obsession of the mansion’s resident, Bruno Newman [here], who spent over 40 years collecting packaging and advertising, it has evolved into something of a localized record of material culture. Plus it’s just a well-curated collection of cool stuff…
A celebration of the commonplace. See much more at: “A Museum Devoted to Everyday Items,” from @steplin.bsky.social.
* John Banville
###
As we regard the routine, we might celebrate: today is National Margarita Day.
“Never tell me the odds!”*…
How likely is it that one will be born on a Leap Day? That one will find a pearl in an oyster? That one will solve Wordle on the first guess? That one will die in a tornado? That two people will share the same fingerprint?
The good folks at R74n (@r74n.com) have these probabilities– and so many more: “What Are The Odds?”
(Image above– and tutorial on the odds ratio: source)
* Han Solo (Harrison Ford) in Star Wars: Episode V– The Empire Strikes Back
###
As we place our bets, we might spare a thought for Harvey Kurtzman; he died on this date in 1993. A cartoonist and editor, he is best known for writing and editing the parodic comic book Mad from 1952 until 1956. Kurtzman scripted every story in the first twenty-three issues. (The New York Times’ obituary for Kurtzman in 1993, alluding to the role of publisher William Gaines, said Kurtzman had “helped found Mad Magazine.” This prompted an angry response to the newspaper from Art Spiegelman, who complained that awarding Kurtzman partial credit for starting Mad was “like saying Michelangelo helped paint the Sistine Chapel just because some Pope owned the ceiling.”)
Kurtzman, who mentored many younger cartoonists (including Terry Gilliam and Robert Crumb), is considered, with cartoonists like Will Eisner, Jack Kirby, and Carl Barks, one of the defining creators of the Golden Age of American comic books. The prestigious Harvey Awards (for achievement in comic books) are named in his honor.
“A good photograph is knowing where to stand”*…

Today’s post– commemorating the 124th birthday of a man who knew exactly where to stand– reverses (Roughly) Daily‘s usual format, opening with the almanac entry…
We might send thoughtfully-composed birthday greetings to Ansel Adams; he was born on this date in 1902. A photographer who specialized in landscapes, especially in black-and-white photos of the American West, he was hugely influential both in photography and in environmentalism.
Adams helped found Group f/64, an association of photographers advocating “pure” photography, which favored sharp focus and the use of the full tonal range of a photograph; he was also a key advisor in establishing the photography department at the Museum of Modern Art in New York and a founder of the photography journal Aperture.
His love of photography was born when, at age 12, he visited Yosemite and took his first shots. He became a life-long advocate for environmental conservation, a commitment deeply intertwined with his photographic practice. At one point, he contracted with the United States Department of the Interior to make photographs of national parks. For his work and his persistent advocacy, which helped expand the National Park system, he was awarded the Presidential Medal of Freedom in 1980.
Visit the Ansel Adams Gallery to see more of Adams’ signature landscape and natural wonder work.
Adams, c. 1950 (source)
###
On the occasion of Adams’ birthday, we might note that, working photographer that he was, he took commercial assignments from time to time– assignments focused on subjects not usually associated with Adams. Two of them are especially interesting…
A collection of photos taken for Fortune Magazine in Los Angeles in the run-up to World War II documented the lives of workers in Los Angeles’ booming aviation industry…
More at “Ansel Adams’ Photos of Pre-War Los Angeles.”
And then, from the early 1960s, photos taken by Adams for Stanford’s PACE Program…
“Once it was a rich, sleepy school with rich, sleepy students; now it aims to be the ‘Harvard of the West’.” That was how Time magazine described Stanford University in the fall of 1962. The publication had been reporting on Stanford’s PACE program, a massive fundraising effort that the school launched to strive toward the kind of prominence that its founders Leland and Jane Stanford had originally envisioned. The core drive behind PACE, an acronym for Plan of Action for a Challenging Era, was for Stanford to transcend its “sleepy” backwater reputation (the “rich” part would remain) and emerge as a potential Western rival to the Ivy League universities on the East Coast.
When it came to PACE’s promotional materials for wooing donors, Stanford’s planning department hired Ansel Adams to produce the visuals. Adams was already well known and highly accomplished at the time, having shot the majority of his masterpiece landscapes depicting the natural grandeur of the American West. But in the early 1960s, he was also still a for-hire photographer trying to make a living in the Bay Area. According to archival letters, Adams and his team of photographers were contracted for $3,000 to produce a series of images from around the Stanford campus over a period of two months in early 1961.
The PACE program ultimately proved to be a resounding success, to the tune of $114 million in fundraising (nearly $1.1 billion today), which became foundational to Stanford’s present-day status as an ultra-elite university. In parallel fashion, Adams would eventually be considered the great American photographer of his era, an exceedingly rare household name in the world of photography, and a visual artist still highly celebrated in museums and pricey galleries around the world. However, his series of Stanford photographs was never recorded in his otherwise meticulous photo log and fell into deep obscurity, remaining all but unseen by the general public and unknown even to his biographers and archivists…
More at “Lost California photos from Ansel Adams.”
* Ansel Adams
