“The economic system is, in effect, a mere function of social organization”*…

The AI race is, of course, afoot. But while most headlines focus on the new capabilities and benchmarks achieved by competing developers, Jeremy Shapiro reminds us that the winners in this race won’t necessarily be the most objectively capable, but rather the players who most effectively integrate the technology into their organizations, economies, and societies…
Artificial intelligence has rapidly become a central arena of geopolitical competition. The United States government frames AI as a strategic asset on par with energy or defense and seeks to press its apparent lead in developing the technology. The European Union lags in platform power but seeks influence over AI through regulation, labor protections, and rule-setting. China is racing to catch up and to deploy AI at scale, combining heavy state investment with administrative control and surveillance.
Each of these rivals fears falling behind. Losing the AI race is widely understood to mean slower growth, military disadvantage, technological dependence, and diminished global influence. As a result, governments are pouring money into chips, data centers, and national AI champions, while tightening export controls and treating compute capacity as a strategic resource. But this familiar race narrative obscures a deeper danger. AI is not just another general-purpose technology. It is a force capable of reshaping the very meaning of work, income, and social status. The states that lose control of these social effects may find that technological leadership offers little geopolitical advantage.
History suggests that societies unable to absorb disruptive economic change become politically volatile, strategically erratic, and ultimately weaker competitors. The central question, then, is not only who builds the most powerful AI systems, but who can integrate them into society without triggering a societal backlash or an institutional breakdown.
Karl Polanyi’s The Great Transformation, published in 1944, explains why the capacity to “socially embed” new market forces determines national strength. By “embeddedness,” Polanyi meant that markets have historically been subordinate to social and political institutions, rather than governing them. The nineteenth-century idea of what he called a “self-regulating market” was historically novel precisely because it sought to “disembed” the economy from society and organize social life around price and competition rather than social obligation. As Polanyi put it in his most succinct formulation, “instead of economy being embedded in social relations, social relations are embedded in the economic system.”
Writing in the shadow of the Great Depression, Polanyi argued that the attempt in the nineteenth century to create a self-regulating market society that treated labor, land, and money as commodities generated social dislocation so severe that it provoked authoritarian backlash and geopolitical collapse. Stable orders, he insisted, required markets to be re-embedded in social and political institutions. Where they were not, societies sought protection by other means, which often translated into support for fascist or communist regimes that promised to tame the market. Today, it often means electing populist leaders who promise to break the entire existing order, both domestic and international.
Polanyi insisted that the idea of a “self-adjusting market implied a stark utopia” because such a system could not exist “for any length of time without annihilating the human and natural substance of society.” The interwar gold standard, for example, disciplined states in the name of efficiency, but it did so by transmitting economic shocks directly into social life. When democratic governments proved unable to shield their populations, they either abandoned the liberal economic order or turned authoritarian (or both)…
[Shapiro considers the history of the 20th century, in particular the rise of Nazi Germany, sketches the state of play in the AI arena, considers the challenge of embedding the changes that AI will bring in the U.S., Europe, and China, then teases out the ways in which this “industrial revolution” is different from its predecessors (in particular, the mobility of capital, the services- (as opposed to manufacturing-) heavy character of employment today, and the accelerating pace of tech development). He concludes…]
… Geopolitical competition in the AI age will not take place solely in clean rooms or data centers. It will also involve the less visible realm of social institutions: labor markets, communities, social protections, and political legitimacy. Polanyi teaches us that markets are powerful only when societies can bear them. When they cannot, markets provoke their own undoing, often in rather spectacular fashion.
The West’s success in the Cold War owed much to its ability to reconcile capitalism with social protection. If the AI age is another “great transformation,” the same lesson applies. Chips matter. Data matters. But the ultimate source of power may be the capacity to re-embed technological change in society without sacrificing cohesion.
That is not a liberal-progressive distraction from geopolitical competition. It is its hidden core.
“The Next Great Transformation,” from @jyshapiro.bsky.social and @open-society.bsky.social.
For a complementary perspective (with special focus on the interaction between labor and the supply side of the economy) pair with: “Brave New World- a third industrial divide?” from @thunen.bsky.social in @phenomenalworld.bsky.social.
And see also: “AI and the Futures of Work,” from Johannes Kleske (@jkleske.bsky.social). A response to dramatic predictions of AI’s impact– most recently, Matt Shumer‘s viral “Something Big Is Happening“: it’s a possible future, Kleske suggests, but only one possible future– and one that, while plausible, isn’t likely (at least outside the rarefied atmosphere of coding, in which Shumer operates). In a way that echoes Shapiro’s piece above, Kleske suggests that individuals need to better understand the technology in order to retain/regain some agency, and societies need the same kind of rekindled resistance to act clearly and with purpose in re-embedding AI, and markets, in society. Not the other way around… Resonant with the thinking of Tim O’Reilly and Mike Loukides featured here before: “The best way to predict the future is to invent it“; and with Ted Chiang‘s “ChatGPT Is a Blurry JPEG of the Web” and “Will A.I. Become the New McKinsey?” And then there’s the ever-illuminating Rusty Foster (riffing on Gideon Lewis-Kraus‘ recent New Yorker piece): “A. I. Isn’t People.”
For a look at a high-value, trust-based use case for AI that seems to avoid the objections to AGI (and speak to Shapiro’s points), see “The Middle Game: Routers at the Edge,” from Byrne Hobart.
But back to AGI… as Nicholas Carr observes, we might understand Bostrom’s “paperclip maximizer” “not as a thought experiment but as a fable. It’s not really about AIs making paperclips. It’s about people making AIs. Look around. Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence? Are we not trapped in an “AI maximizer” scenario?”
###
As we digest these developments, we might recall that it was on this date in 1962 that an early precondition for the revolution underway was achieved: telephone and television signals were first relayed in space via the communications satellite Echo 1– basically a big metallic balloon that simply bounced radio signals off its surface. Simple, but effective.
Forty thousand pounds (18,144 kg) of air would have been required to inflate the sphere on the ground, so it was inflated in space; while in orbit, it required only several pounds of gas to stay inflated.
Fun fact: the Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.

“To-day I think / Only with scents”*…

We’ve considered before smell, the unsung hero of the senses. Today, Kaja Šeruga explains how scientists using chemistry, archival records, and AI are reviving the aromas of old libraries, mummies and battlefields…
We often learn about the past visually — through oil paintings and sepia photographs, books and buildings, artifacts displayed behind glass. And sometimes we get to touch historical objects or listen to recordings. But rarely do we use our sense of smell — our oldest, most primal way of learning about the environment — to experience the distant past.
Without access to odor, “you lose that intimacy that smell brings to the interaction between us and objects,” says analytical chemist Matija Strlič. As lead scientist of the Heritage Science Laboratory at the University of Ljubljana in Slovenia and previously deputy director of the Institute for Sustainable Heritage at University College London, Strlič has devoted his career to interdisciplinary research in the field of heritage science. Much of his work has focused on the preservation and reconstruction of culturally significant scents.
Reconstructed scents can enhance museum and gallery exhibits, says Inger Leemans, a cultural historian at the Royal Netherlands Academy of Arts and Sciences. Smell can provide a more inviting entry point, especially for uninitiated visitors, because there’s far less formalized language for describing smell than for interpreting visual art or displays. Since there’s no “right way” of talking about scent, she says, “your own knowledge is as good as the others’.”
Despite their potential to enrich our understanding of history and art, smells are rarely conserved with the same care as buildings or archaeological artifacts. But a small group of researchers, including Strlič and Leemans, is trying to change that — combining chemistry, ethnography, history and other disciplines to document and preserve olfactory heritage…
Read on for the fascinating details: “Recreating the smells of history,” from @knowablemag.bsky.social.
* Edward Thomas, “Digging“
###
As we take a whiff, we might recall that it was on this date in 1924 that Coco Chanel agreed with the Wertheimer brothers Pierre and Paul, directors of the perfume house Bourjois, to create a new corporate entity, Parfums Chanel. Its signature product was Chanel No. 5. She had been selling small quantities of the scent in her boutique since 1921.
Traditionally, fragrances worn by women had fallen into two basic categories. Respectable women favored the essence of a single garden flower while sexually provocative indolic perfumes heavy with animal musk or jasmine were associated with women of the demi-monde. Chanel sought a new scent that would appeal to the flapper and celebrate the seemingly liberated feminine spirit of the 1920s. Her scent was formulated by chemist and perfumer Ernest Beaux, who designed an unprecedented olfactory architecture, a bouquet of 80 scents whose precious notes were blended with high proportions of aldehydes, organic compounds that carry a crisp, soapy, and floral citrusy scent. In late 1920, when presented with small glass vials containing sample scents numbered 1 to 5 and 20 to 24 for her assessment, she chose the fifth vial. Chanel told Beaux, “I present my dress collections on the fifth of May, the fifth month of the year and so we will let this sample number five keep the name it has already, it will bring good luck.”
The first promotion for Chanel No. 5 appeared in The New York Times on December 16, 1924– a small ad for Parfums Chanel announcing the Chanel line of fragrances available at Bonwit Teller, an upscale department store. The fragrance, of course, became a fave. An Andy Warhol subject and worn by everyone from Marilyn Monroe and Catherine Deneuve to Mad Men’s Peggy Olson, the perfume is a foundational part of fragrance history… and still sells a bottle every 30 seconds.
“The new media are not ways of relating to us the ‘real’ world; they are the real world and they reshape what remains of the old world at will.”*…
There is a vortex of forces shaping the future of journalism. Censorship, both direct and indirect, is on the rise in the U.S. and around the world. Concentration of media ownership is homogenizing coverage and creating “news deserts.”
At the same time, new technology and new applications of that technology are reshaping the Fourth Estate. The Reuters Institute at Oxford surveyed 280 digital leaders from 51 countries and territories to learn what they are seeing– and planning. From the Executive Summary…
We are still at the early stages of another big shift in technology (Generative AI) which threatens to upend the news industry by offering more efficient ways of accessing and distilling information at scale. At the same time, creators and influencers (humans) are driving a shift towards personality-led news, at the expense of media institutions that can often feel less relevant, less interesting, and less authentic. In 2026 the news media are likely to be further squeezed by these two powerful forces.
Understanding the impact of these trends, and working out how to combat them, will be high up the ‘to do list’ of media executives this year, despite the unevenly distributed pace of change across countries and demographics.
Existential challenges abound. Declining engagement for traditional media combined with low trust is leading many politicians, businessmen, and celebrities to conclude that they can bypass the media entirely, giving interviews instead to sympathetic podcasters or YouTubers. This Trump 2.0 playbook – now widely copied around the world – often comes bundled with a barrage of intimidating legal threats against publishers and continuing attempts to undermine trust by branding independent media and individual journalists as ‘fake news’. These narratives are finding fertile ground with audiences – especially younger ones – that prefer the convenience of accessing news from platforms, and have weaker connections with traditional news brands. Meanwhile search engines are turning into AI-driven answer engines, where content is surfaced in chat windows, raising fears that referral traffic for publishers could dry up, undermining existing and future business models.
Despite these difficulties many traditional news organisations remain optimistic about their own business – if not about journalism itself. Publishers will be focused this year on re-engineering their businesses for the age of AI, with more distinctive content and a more human face. They will also be looking beyond the article, investing more in multiple formats especially video and adjusting their content to make it more ‘liquid’ and therefore easier to reformat and personalise. At the same time, they’ll be continuing to work out how best to use Generative AI themselves across newsgathering, packaging, and distribution. It’s a delicate balancing act but one that – if they can pull it off – holds out the promise of greater efficiency and more relevant and engaging journalism.
These are the main findings from our industry survey:
- Only slightly more than a third (38%) of our sample of editors, CEOs, and digital executives say they are confident about the prospects for journalism in the year ahead – that’s 22pp lower than four years ago. Stated concerns relate to politically motivated attacks on journalism, loss of USAID money that previously supported independent media in many parts of the world, and significant declines in traffic to many online news sites.
- By contrast, around half (53%) say they are confident about their own business prospects, similar to last year’s figure. Upmarket subscription-based publishers with strong direct traffic can see a path to long-term profitability, even as those that remain dependent on advertising and print worry about sharp declines in revenue and the potential impact of AI-powered search on the bottom line.
- Publishers expect traffic from search engines to decline by more than 40% over the next three years – not quite ‘Google Zero’ but a substantial impact none the less. Data sourced for this report from analytics provider Chartbeat shows that aggregate traffic to hundreds of news sites from Google search has already started to dip, with publishers that rely on lifestyle content saying they have been particularly affected by the roll out of Google’s AI overviews. This comes after substantial falls in referral traffic to news sites from Facebook (-43%) and X, formerly Twitter (-46%) over the last three years.
- In response, publishers say it will be important to focus on more original investigations and on-the-ground reporting (+91 percentage point difference between ‘more’ and ‘less’), contextual analysis and explanation (+82) and human stories (+72). By contrast, they plan to scale back service journalism (-42), evergreen content (-32), and general news (-38), which many expect to become commoditised by AI chatbots. At the same time, they think it will be important to invest in more video (+79) – including ‘watch tabs’ – and more audio formats (+71) such as podcasts, but a bit less in text output.
- In terms of off-platform strategies, YouTube will be the main focus for publishers this year with a net score of +74, up substantially on last year. Other video-led platforms such as TikTok (+56) and Instagram (+41) are also key priorities – along with working out how to navigate distribution through AI platforms (+61) such as OpenAI’s ChatGPT, Google’s Gemini and Perplexity. Google Discover remains a critical (+19), if slightly volatile, source of referral traffic, while some publishers are looking to find new audiences via newsletter platforms such as Substack (+8). By contrast, publishers will be deprioritising effort spent on old-style Google SEO (-25) – as well as traditional social networks Facebook (-23) and X (-52).
- Last year we predicted the emergence of ‘agentic AI’, but this year we can expect to start to see real-world impact of these more advanced technologies. Some sources suggest that there will soon be more bots than people reading publisher websites, as tools like Huxe and OpenAI’s Pulse offer personalised news briefings at scale. Three-quarters of our respondents (75%) expect ‘agentic tools’ to have a ‘large’ or ‘very large’ impact on the news industry in the near future.
- Alongside the traffic disruption from AI, news executives also see opportunities to build new revenue from licensing content (or a share of advertising revenue) within chatbots. Around a fifth (20%) of publisher respondents – mainly from upmarket news companies – expect future revenues to be substantial, with half (49%) saying that they expect a minor contribution. A further fifth (20%), mostly made up of local publishers, public broadcasters, or those from smaller countries, say they do not expect any income from AI deals.
- More widely, subscription and membership remain the biggest revenue focus (76%) for publishers, ahead of both display (68%) and native advertising (64%). Online and physical events (54%) are also becoming more important as part of a diversified revenue strategy. Reliance on philanthropic and foundation support (18%) has declined this year, after cuts to media support budgets in the United States and elsewhere.
- Meanwhile news organisations’ use of AI technologies continues to increase across all categories, with back-end automation considered ‘important’ this year by the vast majority (97%) of publisher respondents, many of whom integrated pilot systems into content management systems in the last year. Newsgathering cases (82%) are now the second most important, with faster coding and product development (81%) also gaining traction.
- Over four in ten (44%) survey respondents say that their newsroom AI initiatives are showing ‘promising’ results, but a similar proportion (42%) describe them as ‘limited’. Two-thirds of respondents (67%) say they have not cut any jobs so far as a result of AI efficiencies. Around one in seven (16%) say they have slightly reduced staff numbers, but a further one in ten (9%) have added new roles/costs.
- The rise of news creators and influencers is a concern for publishers in two ways. More than two-thirds (70%) of our respondents are concerned that they are taking time and attention away from publisher content. Four in ten (39%) worry that they are at risk of losing top editorial talent to the creator ecosystem, which offers more control and potentially higher financial rewards.
- Responding to the increased competition and a shift of trust towards personalities, three-quarters (76%) of publisher respondents say they will be trying to get their staff to behave more like creators this year. Half (50%) say they will be partnering with creators to help distribute content, and around a third (31%) say they will be hiring creators, for example to run their social media accounts. A further 28% are looking to set up creator studios and facilitate joint ventures.
More widely, could 2026 be the year when AI company stock valuations come down to earth with a bump, amid concerns about whether their trillion-dollar bets will pay back their investors? Meanwhile the amount of low-quality AI automated content, including so-called ‘pink slime’ sites, looks set to explode, with platforms struggling to distinguish this from legitimate news.
We can expect more public concern about the role of big tech in our lives. This may include individual acts of ‘Appstinence’ and other forms of digital detox and a desire for more IRL (In Real Life) connection. Governments will also come under pressure to do more to protect young and other vulnerable groups online, even in the United States.
The creator economy will continue to surge, fuelled by investments from video platforms and streamers. At the top end creators will look more like Hollywood moguls with big budgets and their own studio complexes. Within news, we’ll also see the emergence of bigger, more robust, creator-led companies delivering significant revenues as well as value to audiences – offering ever greater competition for traditional journalism…
Read the report in full: “Journalism, media, and technology trends and predictions 2026,” from @reutersinstitute.bsky.social.
* Marshall McLuhan
###
As we ponder the prospects of the press, we might type a birthday note to John Baskerville, a pioneering English printer and typefounder, who was born on this date in 1706. Among Baskerville’s publications in the British Museum’s collection are Aesop’s Fables (1761), the Bible (1763), and the works of Horace (1770)– many printed on a stock he invented, “wove paper”, which was considerably smoother than “laid paper”, allowing for sharper printing results. And as for his fonts, Baskerville’s creations (including the famous “Baskerville,” a predecessor to the very similar Times New Roman) were so successful that his competitors resorted to claims that they damaged the eyes.

“A Wikipedia article is a process, not a product”*…
A quarter of a century ago Jimmy Wales, Wikipedia‘s founder, articulated its vision– one into which it has impressively grown: “Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That’s what we’re doing.”
On the occasion of its birthday this month, Caitlin Dewey takes stock…
Happy birthday to Wikipedia, which is now old enough to rent a car without extra charges … but faces new (and newly urgent) threats from AI and political polarization. As a palate cleanser, should those bum you out (the second, in particular, is very grim/good), may I then suggest this “entirely non-comprehensive list of life principles” learned from 20 years of editing Wikipedia. [Scientific American / Financial Times / The Wikipedian]…
From her wonderful newsletter, Links I Would Gchat You If We Were Friends. All three are eminently worth reading.
* Clay Shirky, who went on to observe that “Wikipedia is forcing people to accept the stone-cold bummer that knowledge is produced and constructed by argument rather than by divine inspiration,” but at the same time that: “We have lived in this world where little things are done for love and big things for money. Now we have Wikipedia. Suddenly big things can be done for love.”
###
As we treasure– and support– treasures, we might recall that it was on this date in 1885 that LaMarcus Adna Thompson received the first patent for a true “switchback railroad”– or, as we know it, a roller coaster. Thompson had designed the ride in 1881, and opened it on Coney Island in 1884. (The “hot dog” had been invented, also at Coney Island, in 1867, so was available to trouble the stomachs of the very first coaster riders.)





