(Roughly) Daily


“Privacy is rarely lost in one fell swoop. It is usually eroded over time, little bits dissolving almost imperceptibly until we finally begin to notice how much is gone.”*…

… And now, indeed, we’re beginning to notice. Hana Lee Goldin surveys the state of play– who’s buying our personal information, what they’re using it for, and how the system works behind the screen– and considers our options…

Sometime in the mid-2000s, most of us started handing over pieces of ourselves to the internet without giving the exchange a second thought. We created email accounts, signed up for social media, bought things online, downloaded apps, swiped loyalty cards, connected fitness trackers, stored photos in the cloud, and agreed to terms of service that almost none of us have ever read in full. We did this thousands of times over two decades and counting, and each interaction felt small enough to be inconsequential.

But the accumulation is enormous. More than 6 billion people now use the internet, and each one makes an estimated 5,000 digital interactions per day. Most of those interactions happen without our conscious awareness: a GPS ping, a page load, an app opening, a browser cookie refreshing, a device checking in with a cell tower. The average person in 2010 made an estimated 298 digital interactions per day. In fifteen years, that number multiplied more than sixteenfold. Those digital interactions produce records that can persist indefinitely, stored, copied, indexed, bought, sold, and combined with other records to build profiles of extraordinary detail.

If we’ve been online since the late 1990s or early 2000s, our data footprint can include social media accounts we’ve created, online purchases we’ve made, forums we’ve posted in, loyalty cards we’ve used, and apps we’ve installed going back decades. Some of that information lives on platforms we’ve long forgotten. Some of it was collected by companies that have since been acquired or dissolved, with our data potentially passing to successor entities we’ve never heard of. The digital life most of us have been living for 15 to 25 years has produced a layered, evolving archive that only grows more valuable to the people who buy and sell it as time goes on.

Most of us sense that something is off about all of this. In a 2023 survey, Pew Research found that roughly eight in ten Americans feel they have little to no control over the data companies collect about them, 71% are concerned about government data use, and 67% say they understand little to nothing about what companies are doing with their personal information. The concern is real and widespread. And so is the feeling of helplessness: 60% of Americans believe it’s impossible to go through daily life without having their data tracked. The unease is there. What’s missing is a clear picture of what’s happening on the other side of the transaction…

[Goldin explains what data is being collected and shared, and by whom; how the data is managed and trafficked; how it’s being used (by insurance and financial companies, employers and landlords, retailers, AI companies, governments, and criminals); and how “inferred” data is used to augment the “hard” data. It’s chilling. She then puts the issue into context, and discusses what we can– and cannot– do about it…]

… The philosopher Helen Nissenbaum has a framework for what’s happening here: contextual integrity. The idea is that privacy isn’t about secrecy. We share information willingly all the time, when the context fits. We tell our doctor about a health condition because we expect that information to stay within the medical relationship. We search for symptoms on a health website because we assume that search won’t follow us into an insurance application. In the current data economy, that’s exactly the kind of boundary that dissolves, because the company collecting the data and the company buying it are operating in completely different contexts.

This is an information literacy problem as much as a privacy problem. Information literacy is usually framed around consumption: evaluating sources, questioning claims, recognizing bias in what we read and watch. But every time we interact with a digital service, we’re also producing information: generating a record that will be read, interpreted, scored, and acted on by organizations we may never interact with directly. Many of us have gotten better at questioning the information that comes at us: checking sources, noticing bias, and recognizing when something is trying to sell us a conclusion. But we haven’t developed equivalent habits around the information that flows from us: where it goes after we hand it over, who reads the record, what incentives they have, and what conclusions they draw. The gap between what we think we’re consenting to and what we’ve agreed to in practice is where the real exposure lives, and the system is designed to keep that gap invisible.

One of the reasons the “so what” question is hard to answer with action is that opting out of data collection often means opting out of participation. Declining a social media platform’s terms of service means not using the platform. Refusing location permissions can mean losing access to navigation, ride-sharing, weather, and delivery apps. Choosing not to create an account can mean paying more, seeing less, or being locked out of services that have become essential infrastructure for work, communication, healthcare, banking, and education.

The architecture of digital consent treats data sharing as a binary: agree to the terms or don’t use the product. There’s rarely a middle option that allows us to use a service while limiting what data gets collected and where it goes. The result is that the “choice” to share data often functions as a condition of entry into daily life rather than an informed negotiation. We’re not handing over data because we’ve weighed the tradeoff and decided it’s fair. We’re handing it over because the alternative is exclusion from services we rely on.

This is the structural context behind the Pew Research Center finding that more than half of Americans believe it’s impossible to go through daily life without being tracked. For many of us, it isn’t possible, at least not without significant inconvenience or sacrifice. The question isn’t whether we can avoid data collection entirely, because for the vast majority of people who participate in modern life, the answer is no. The question is whether we can make more informed decisions within the constraints we’re operating in, and whether the system can be pushed – through regulation, through market pressure, through better tools – toward something more transparent.

California’s Delete Act, which took effect in January 2026, is the strongest example of what’s emerging. It created a platform called DROP (Delete Request and Opt-Out Platform) that lets California residents submit a single deletion request to every registered data broker in the state. Brokers are required to process those requests, maintain suppression lists to prevent re-collection, and check the platform regularly for new requests. The European Union’s GDPR provides similar individual rights, and a handful of other U.S. states have enacted their own privacy laws with varying levels of protection. But the coverage is uneven: what’s available to a California or EU resident may not extend to someone in a state without comparable legislation.
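The delete-then-suppress mechanics the Delete Act mandates can be sketched in a few lines. This is a toy model for illustration only, not the real DROP system (which is a web platform, not a library); the class and method names here are invented.

```python
# Toy model of Delete Act mechanics: a deletion request must both remove a
# person's records and place them on a suppression list that blocks
# re-collection later. All names here are hypothetical, for illustration.

class DataBroker:
    def __init__(self, name):
        self.name = name
        self.records = {}         # person_id -> profile data
        self.suppression = set()  # person_ids that must never be re-collected

    def collect(self, person_id, profile):
        # Re-collection is blocked for anyone on the suppression list.
        if person_id in self.suppression:
            return False
        self.records[person_id] = profile
        return True

    def process_deletion(self, person_id):
        # Delete the record and remember the request permanently.
        self.records.pop(person_id, None)
        self.suppression.add(person_id)

def submit_drop_request(person_id, brokers):
    """One request fans out to every registered broker."""
    for b in brokers:
        b.process_deletion(person_id)

brokers = [DataBroker("Acme Data"), DataBroker("TrackCo")]
brokers[0].collect("alice", {"zip": "94110"})
submit_drop_request("alice", brokers)
assert all("alice" not in b.records for b in brokers)
assert brokers[1].collect("alice", {"zip": "94110"}) is False  # re-collection blocked
```

The suppression list is the key design detail: without it, a broker could simply re-purchase the same record from another source the day after deleting it.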

Some services now automate parts of the opt-out process, submitting removal requests to dozens of brokers on our behalf. These can’t erase the data trail entirely, but they can narrow what’s actively available for sale.

Beyond deletion, there are smaller choices that reduce how much new data we generate. We can audit which apps have permission to track our location or access our contacts, since a surprising amount of behavioral data comes from apps that don’t need those permissions to function. We can treat “sign in with Google” and “sign in with Facebook” buttons as what they are: data-sharing agreements that can link a new service to an existing profile. And we can glance at the first few lines of a privacy policy before agreeing, looking for some version of “we may share your information with our partners,” where “partners” just means anyone willing to pay.

Most of us don’t read privacy policies, and the policies aren’t built to be read. They average thousands of words of dense legal language filled with terms like “legitimate interest,” “data processor,” and “de-identified data.” Studies consistently put them at a late high school to early college reading level (grade 12 to 14), but the difficulty goes beyond reading level: the concepts are abstract, the volume of agreements we encounter is enormous, and the design of the consent process itself pushes us through as fast as possible. Pre-checked boxes, auto-scrolling agreement windows, “accept all” buttons positioned prominently while “customize settings” options sit behind additional clicks. These are dark patterns, design choices that make the path of least resistance the path of maximum data sharing.

The result is a gap between the moment we share a piece of information and the moment that information shapes a decision about our lives. We don’t connect the app to the insurance premium or the loyalty card to the rental application because the chain of custody between them is long, complex, and designed to stay out of view.

The same critical thinking we’ve learned to apply to the information flowing toward us (checking sources, questioning claims, looking for bias) applies to the information flowing from us: who’s collecting this, what will they do with it, who else will see it, and what did we agree to? The difference is that in the data economy, we’re the product being evaluated, and the questions are being asked about us rather than by us.

So can we get it back? Not entirely. Data that’s already been collected, copied, sold, and processed across multiple systems can’t be fully recalled. What we can do is reduce what’s actively available for sale, slow the flow of new data going forward, and take advantage of legal tools that didn’t exist a few years ago. The archive of our past digital lives is too distributed to undo, but the file is still being written, and we have more say over the next page than we did over the last twenty years of them.

So what if they have our data? The tradeoff extends well beyond better ads. It reaches into the prices we’re charged, the credit we’re offered, the jobs we’re considered for, the insurance premiums we pay, the AI systems trained on our behavior, the accuracy of the profiles used to make decisions about our lives, and the degree to which government agencies can monitor our movements without a warrant. Every new service we sign up for, every permission we grant, and every terms-of-service agreement we accept adds another layer to that file. We can’t close the file entirely, but we can make more informed decisions about what goes into it next…

Eminently worth reading in full: “So What if They Have My Data?”

See also: “Why Do We Care So Much About Privacy?” (source of the image above) in which Louis Menand suggests that our concern should be with the “weaponization” of data…

* Daniel J. Solove, Nothing to Hide: The False Tradeoff Between Privacy and Security

###

As we reinforce our rights, we might recall that it was on this date in 1996 that the internet-as-we’ve-come-to-know-it broke big into the mainstream: Yahoo! launched the national campaign that asked “Do You Yahoo?” advertising its web-based search service on national television. The campaign was created by ad agency Black Rocket and Yahoo Marketing Head Karen Edwards (whose many awards for the work include a seat in the Advertising Hall of Achievement).

An early spot from the campaign…

Written by (Roughly) Daily

April 25, 2026 at 1:00 am

“The present is pregnant with the future”*…

The estimable Tim O’Reilly uses scenario planning to create an insightful look at AI, our futures, and the choices that will define them…

We all read it in the daily news. The New York Times reports that economists who once dismissed the AI job threat are now taking it seriously. In February, Jack Dorsey cut 40% of Block’s workforce, telling shareholders that “intelligence tools have changed what it means to build and run a company.” Block’s stock rose 20%. Salesforce has shed thousands of customer support workers, saying AI was already doing half the work. And a Stanford study found that software developers aged 22 to 25 saw employment drop nearly 20% from its peak, while developers over 26 were doing fine.

But how are we to square this news with a Vanguard study that found that the 100 occupations most exposed to AI were actually outperforming the rest of the labor market in both job growth and wages, and a rigorous NBER study of 25,000 Danish workers that found zero measurable effect of AI on earnings or hours?

Other studies could contribute to either side of the argument. For example, PwC’s 2025 Global AI Jobs Barometer, analyzing close to a billion job ads across six continents, found that workers with AI skills earn a 56% wage premium, and that productivity growth has nearly quadrupled in the industries most exposed to AI.

This is exactly the kind of contradictory, uncertain landscape that scenario planning was designed for. Scenario planning doesn’t ask you to predict what the future will be. It asks you to imagine divergent possible futures and to develop a strategy that improves your odds of success across all of them. I’ve used it many times at O’Reilly and have written about it before with COVID and climate change as illustrative examples. The argument between those who say AI will cause mass unemployment and those who insist technology always creates more jobs than it destroys is a debate that will only be resolved by time. Both sides have evidence. Both are probably right at some level. And both framings are not terribly helpful for anyone trying to figure out what to do next…
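The logic O’Reilly describes, evaluating each strategy across every imagined future and preferring the one that holds up in all of them, amounts to a maximin choice, which can be sketched as follows. Note that the scenario names beyond the two quadrants he names, and all the payoff numbers, are invented here purely for illustration.

```python
# Toy illustration of scenario planning as a robust (maximin) choice:
# score each candidate strategy in every scenario, then pick the strategy
# whose worst-case outcome is least bad. Two scenario names come from the
# article ("Augmentation Economy", "Displacement Crisis"); the other two
# scenarios and all payoff values are hypothetical.

scenarios = ["augmentation_economy", "displacement_crisis",
             "slow_diffusion", "regulated_plateau"]

# strategy -> payoff in each scenario (invented numbers)
strategies = {
    "cut_costs_with_ai": {
        "augmentation_economy": 3, "displacement_crisis": -4,
        "slow_diffusion": 1, "regulated_plateau": 0,
    },
    "do_new_things_with_ai": {
        "augmentation_economy": 5, "displacement_crisis": 1,
        "slow_diffusion": 2, "regulated_plateau": 2,
    },
}

def robust_choice(strategies):
    # Maximin: maximize the minimum payoff across all scenarios.
    return max(strategies, key=lambda s: min(strategies[s].values()))

print(robust_choice(strategies))  # -> do_new_things_with_ai
```

The point of the exercise isn’t the numbers, which no one can know; it’s that a strategy judged only by its best-case scenario looks very different from one judged by its worst case.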

[O’Reilly explains the scenario approach, then applies it to our future with AI (see the image above), astutely assessing the conflicting signals that we’ve been experiencing; he explores the “robust strategy” for our uncertain future (strategic choices that make sense regardless of which future unfolds); then he concludes…]

… I’ll return to the theme that I sounded in my book WTF? What’s the Future and Why It’s Up To Us.

Every time a company uses AI to do what it was already doing with fewer people, it is making a choice for the lower half of the scenario grid. Every time a company uses AI to do something that wasn’t previously possible, to serve a customer who wasn’t previously served, to solve a problem that wasn’t previously solvable, it is making a choice for the upper half. These choices compound, for good or ill. An economy that uses AI primarily for efficiency will slowly hollow itself out.

Looking at the news from the future, both sets of signals are present. The question is which will dominate. AI will give us both the Augmentation Economy and the Displacement Crisis, in different measures in different places, depending on the choices we make.

Scenario planning teaches us that we don’t have to predict which future we’ll get. We do have to prepare for a very uncertain future. But the robust strategy, the one that works across every quadrant, is to focus on doing more, not just doing the same with less, and to find ways that human taste still matters in what is created. As long as there is unmet demand, as long as there are problems we haven’t solved and people we haven’t served, AI will augment human work rather than replacing it. It’s only when we stop looking for new things to do that the machines come for the jobs…

Eminently worth reading in full. Indeed, speaking as a long-time scenario planner, your correspondent can only wish that everyone who wields “scenarios” applied the approach as appropriately, adroitly, and acutely as Tim has: “Scenario Planning for AI and the ‘Jobless Future’,” from @timoreilly.bsky.social.

* Voltaire

###

As we take the long view, we might send formative birthday greetings to Mark Pinsker; he was born on this date in 1923. A mathematician, he made important contributions to the fields of information theory, probability theory, coding theory, ergodic theory, mathematical statistics, and communication networks. This work, which helped lay the foundation for AI-as-we-know-it, earned him the IEEE Claude E. Shannon Award in 1978, and the IEEE Richard W. Hamming Medal in 1996, among other honors.

source

“Everything is destroyed by its own particular vice: the destructive power resides within”*…

Government graft in the U.S. has a long (and unbroken) history; but there have been especially corrupt periods, for instance in the Jacksonian era and the Gilded Age… and again today.

Profiteering and insider trading, “pay-to-play”/influence peddling, foreign emoluments, conflicts of interest, regulatory and policy favors, purchased pardons (and commutations)– we’ve got it all, and at epic levels.

The estimable Cory Doctorow uses a telling comparison to drill down on one of the dominant strands: Trump’s (ironic) campaign to fight (what he identifies as) corruption…

… It’s a story about boss-politics anti-corruption, in which anti-corruption is pursued to corrupt ends.

From 2012-2015, Xi Jinping celebrated his second term as the leader of China with a mass purge undertaken in the name of anti-corruption. Officials from every level of Chinese politics were fired, and many were imprisoned. This allowed Xi to consolidate his control over the CCP, which culminated in a rule-change that eliminated term-limits, paving the way for Xi to continue to rule China for so long as he breathes and wills to power.

Xi’s purge exclusively targeted officials in his rivals’ power-base, kneecapping anyone who might have blocked his power-grab. But just because Xi targeted his rivals’ princelings and foot-soldiers, it doesn’t mean that Xi was targeting the innocent. A 2018 paper by an economist (Peter Lorentzen, USF) and a political scientist (Xi Lu, NUS) concluded that Xi’s purge really did target corrupt officials.

The authors reached this conclusion by referencing the data published in the resulting corruption trials, which showed that these officials accepted and offered bribes and feathered their allies’ nests at public expense.

In other words, Xi didn’t cheat by framing innocent officials for crimes they didn’t commit. The way Xi cheated was by exclusively targeting his rivals’ allies. Lorentzen and Lu’s paper makes it clear that Xi could easily have prosecuted many corrupt officials in his own power base, but he left them unmolested.

This is corrupt anti-corruption. In an environment in which everyone in power is crooked, you can exclusively bring legitimate prosecutions, and still be doing corruption. You just need to confine your prosecutions to your political enemies, whether or not they are more guilty than your allies (think here of the GOP dragging the Clintons into Epstein depositions).

14 years later, Xi’s anti-corruption purges continue apace, with 100 empty seats at this year’s National People’s Congress, whose former occupants are freshly imprisoned or awaiting trial.

I don’t know the details of all 100 prosecutions, but China absolutely has a corruption problem that goes all the way to the upper echelon of the state. I find it easy to believe that the officials Xi has targeted are guilty – and I also wouldn’t be surprised to hear that they are all supporters of Xi’s internal rivals for control of the CCP.

As the Epstein files demonstrate, anyone hoping to conduct a purge of America’s elites could easily do so without having to frame anyone for crimes they didn’t commit (remember, Epstein didn’t just commit sex crimes – he was also a flagrant financial criminal and he implicated his network in those crimes).

It’s not just Epstein. As America’s capital classes indulge their incestuous longings with an endless orgy of mergers, it’s corporate Habsburg jaws as far as the eye can see. These mergers are all as illegal as hell, but if you fire a mouthy comedian, you can make serious bank.

And if you pay the right MAGA chud podcaster a million bucks, he’ll grease your $14b merger through the DoJ.

And once these crooks merge to monopoly, they embark on programs of lawlessness that would shame Al Capone, but again, with the right podcaster on your side, you can keep on “robbing them blind, baby!”

The fact that these companies are all guilty is a foundational aspect of Trumpism. Boss-politics antitrust – and anti-corruption – doesn’t need to manufacture evidence or pretexts to attack Trump’s political rivals. When everyone is guilty, you have a target-rich environment for extorting bribes.

Just because the anti-corruption has legit targets, it doesn’t follow that the whole thing isn’t corrupt…

On the practice of selective enforcement and prosecution: “Corrupt anticorruption,” from @pluralistic.net.web.brid.gy.

For thoughts on what we can do about all of this, see “Building political integrity to stamp out corruption: three steps to cleaner politics” (source of the image above)

* Menander

###

As we decide on disinfectants, we might recall that it was on this date in 37 CE, following the death of Tiberius, that the Roman Senate annulled Tiberius’ will and confirmed Caligula, his grandnephew, as the third Roman emperor.  (Tiberius had willed that the reign be shared by Caligula and Tiberius’ own grandson, Tiberius Gemellus.)

While he has been remembered as the poster boy for profligacy and corruption, Caligula (“Little Boots”) is generally agreed to have been a temperate ruler through the first six months of his reign.  His excesses after that– cruelty, self-dealing, extravagance, sexual perversity– are “known” to us via sources increasingly called into question.

Still, historians agree that Caligula did work hard to increase the unconstrained personal power of the emperor at the expense of the Principate’s countervailing institutions; and he oversaw the construction of notoriously luxurious dwellings for himself.  In 41 CE, members of the Roman Senate and of Caligula’s household attempted a coup to restore the Republic.  They enlisted the Praetorian Guard, who killed Caligula– the first Roman Emperor to be assassinated (Julius Caesar was assassinated, but was Dictator, not Emperor).  In the event, the Praetorians thwarted the Republican dream by appointing (and supporting) Caligula’s uncle Claudius as the next Emperor.

 source

Written by (Roughly) Daily

March 18, 2026 at 1:00 am

“Here’s the church, here’s the steeple, open the doors, and see all the people”*…

It’s Sunday, and war is raging (again) in the Middle East. This time around, the strains of fundamentalist Christian thought are hard to miss in the justifications of the role of the U.S. in the conflict. The widely-circulated reports of troops being briefed that the war in Iran is meant to hasten the Biblical End Times may or may not be true. But it seems clear that the millenarian contingent in Trump’s movement is all in on an apocalypse. (And here.) As the media watchdog Media Matters reports, “Christian media figures have claimed that the Iran war could signal ‘the second coming’ or the ‘End Times’ and said ‘we are watching incredible prophecy in this time come to pass’.”

Talia Lavin has reached back to the work she did for her book Wild Faith to help us understand…

As chaos and violence break out across the Middle East in a war led by the US with Israel as junior partner, I wanted to revisit my research on Christian apocalyptic prophecy… about the evangelical Christians eagerly looking forward to the end of the world—and influencing foreign policy to bring it closer. It’s difficult to conceive of willful courting of disaster for religious reasons, but decades of modern Christian prophecy eagerly foresee mass bloodshed in the Middle East as a prelude to Christ’s triumphant return. Evangelicals of this stripe form a crucial part of Trump’s base and governing coalition…

Eminently worth reading in full: “Yearning for the Apocalypse,” from @swordsjew.bsky.social.

And lest we think that this inveigling is in any way unprecedented, Matthew Avery Sutton reminds us that there’s a long history of politics using religion (and vice versa). In an excerpt from his new book, Chosen Land: How Christianity Made America and Americans Remade Christianity, he tells the story of Reconstruction, during which churches were mobilized on both sides of the divide-that-never-went-away…

… In the aftermath of the Civil War, federal leaders sought help from Christian groups… as they sought to reassert their power across the entire United States. The US Army had won on the battlefields, and now governing authorities and their protestant collaborators sought to secure the peace. They aimed to reconstruct the nation, to rebuild Americans’ shattered sense of their nation’s exceptional history and manifest destiny, and to reinvigorate their commitment to the United States’ Christian mission. But to succeed, policymakers knew they needed to limit dissent—including religious dissent.

Christian activists played key roles in every part of postwar reconstruction. In the South, Black ministers and White missionaries welcomed the formerly enslaved into the faith and worked with them to establish independent social and political lives. Defeated Southern Whites launched a multi-generation effort to defend their treason by reimagining the causes of the Civil War and God’s role in it. In the West, a series of Indian wars led to the US government’s creation of a comprehensive reservation system, where government-sponsored missionaries sought to Christianize tribes and “civilize” their children. In Utah Territory the US government cracked down on the Church of Jesus Christ of Latter-day Saints and its impressive theocracy, seeking to quell religious dissent.

Across the nation, Reconstruction policies provided new opportunities for church leaders in collaboration with the government to impose their ideas and values on the land and its peoples. Protestant activists believed that they alone had the tools and expertise to integrate Black and Native peoples, former Confederates, and religious dissenters into the body politic, while bringing healing and reconciliation to all Americans on their terms. Rocked by the split over slavery and then the war, they worked to build unity by identifying common threats and enemies and organizing Christians against them. Their actions demonstrated that after the conflict, just as before, the free exercise clause did not apply to all equally. But minority groups constantly challenged the power of mainstream Christian leaders…

… Only about one-third of enslaved Americans considered themselves Christian at the start of the Civil War. But in the Reconstruction era Black church going skyrocketed. And just about all of those who converted chose to attend Black-led churches. The days of Southern Black Christians submitting to second-class treatment in the house of the Lord had ended. In urban areas, African Americans could usually join churches that Black activists had founded before the war. In rural areas, they had fewer options. They sometimes had to settle for makeshift meetings in vacant buildings or arrange outdoor services until they could build rudimentary houses of worship.

Black clergy became some of the strongest advocates for full equality and rights in the postwar South. Seeing Jesus as a liberator, they aimed to make the egalitarianism of the gospel and the Declaration’s line that “all men are created equal” the reality in the United States. Many engaged directly in politics, understanding that while slavery might have ended, securing political equality required vigilance…

… Black ministers’ political engagement made them targets of violence. Members of the Ku Klux Klan, a [Protestant-led] terrorist organization founded by Southern Whites shortly after the war, burned down churches and threatened Black activists. A journalist testified to the US Senate about his interview with a minister. While “he had been preaching on the circuit,” Klansmen dragged the preacher from bed in the middle of the night and “beat him severely.” They “told him that if he returned to the county he would suffer for it.” This was one example of many. As racial violence escalated in the South, serving as a minister proved dangerous…

… Historian, sociologist, and Black activist W.E.B. Du Bois summarized in 1903 the role that churches played in Black life, especially in the postwar South. “The Negro church of today is the social centre of Negro life in the United States,” he wrote, “and the most characteristic expression of African character.” Postwar Black churches, as Du Bois understood, represented the heart of Black efforts to secure social, political, and religious equality. Church leaders had engineered the Christian faith into a tool of liberation, which made them a threat to the White Christians of the South and much of the rest of the United States.

In addition to working to suppress Black political and religious power, many Southern Whites launched a quasi-religious campaign to reshape the memory of the Civil War. Rather than acknowledge their deep investment in slavery, they recast the conflict as a tragic clash between two honorable forces—the North fighting to preserve the Union, and the South struggling to defend local autonomy and states’ rights. The authors of this revisionist account reduced slavery to a secondary issue, incidental to the “real” causes of the war. As a result, by war’s end, many White Southerners felt they had no reason to repent, no moral reckoning to face, and no obligation to embrace Black equality or suffrage. For them, the war had simply preserved the Union and, almost as an afterthought, ended slavery. Nothing more.

Christianity became central to this new Southern narrative. In defeat, White Southerners cast themselves in the role of Christ, imagining their suffering as redemptive. They claimed they had sacrificed for the greater good of the nation, their values—chivalric protection of White women, paternalistic care for those they enslaved, and Christian devotion—positioned them as the rightful moral leaders of the country. In their view, God had chosen them to guide the nation toward righteousness, but first he had humbled and purified them through the bloodshed of war…

Also eminently worth reading in full: “How Christianity Was Used By the Powerful and the Marginalized to Shape Post-Civil War America,” from @literaryhub.bsky.social.

We are reminded why our founding fathers– so many of them, Deists— so wisely insisted on freedom of religion and separation of church and state.

Apposite: “The ‘Straight White American Jesus’ podcast covers the history, philosophy, theology, and politics of Christian nationalism” (from Boing Boing)

Also, (under the general heading “things aren’t always what they seem”): “The Iran War’s Most Precious Commodity Isn’t Oil,” (gift article from Bloomberg)

And finally: only vaguely related, but fascinating: “Preached Whales“– (landlocked) Central European pulpits shaped like fish, whales, and boats.

* classic children’s fingerplay rhyme

###

As we celebrate separation, we might recall that it was on this date in 1965 that “Subterranean Homesick Blues” by Bob Dylan was released.

Johnny’s in the basement, mixin’ up the medicine / I’m on the pavement, thinkin’ about the government…

The opening sequence of D. A. Pennebaker‘s Dont Look Back (the apostrophe is absent in the title… and yes, that’s Allen Ginsberg in the background)

“The economic system is, in effect, a mere function of social organization”*…

A statue in the likeness of a police officer stands watch over a smart highway in Jinan, China, on April 18, 2024

The AI race is, of course, afoot. But while most headlines focus on the new capabilities and benchmarks achieved by competing developers, Jeremy Shapiro reminds us that the winners in this race won’t necessarily be the most objectively capable, but rather the players who most effectively integrate the technology into their organizations, economies, and societies…

Artificial intelligence has rapidly become a central arena of geopolitical competition. The United States government frames AI as a strategic asset on par with energy or defense and seeks to press its apparent lead in developing the technology. The European Union lags in platform power but seeks influence over AI through regulation, labor protections, and rule-setting. China is racing to catch up and to deploy AI at scale, combining heavy state investment with administrative control and surveillance.

Each of these rivals fears falling behind. Losing the AI race is widely understood to mean slower growth, military disadvantage, technological dependence, and diminished global influence. As a result, governments are pouring money into chips, data centers, and national AI champions, while tightening export controls and treating compute capacity as a strategic resource. But this familiar race narrative obscures a deeper danger. AI is not just another general-purpose technology. It is a force capable of reshaping the very meaning of work, income, and social status. The states that lose control of these social effects may find that technological leadership offers little geopolitical advantage.

History suggests that societies unable to absorb disruptive economic change become politically volatile, strategically erratic, and ultimately weaker competitors. The central question, then, is not only who builds the most powerful AI systems, but who can integrate them into society without triggering a societal backlash or an institutional breakdown.

Karl Polanyi’s The Great Transformation, published in 1944, explains why the capacity to “socially embed” new market forces determines national strength. By “embeddedness,” Polanyi meant that markets have historically been subordinate to social and political institutions, rather than governing them. The nineteenth-century idea of what he called a “self-regulating market” was historically novel precisely because it sought to “disembed” the economy from society and organize social life around price and competition rather than social obligation. As Polanyi put it in his most succinct formulation, “instead of economy being embedded in social relations, social relations are embedded in the economic system.”

Writing in the shadow of the Great Depression, Polanyi argued that the attempt in the nineteenth century to create a self-regulating market society that treated labor, land, and money as commodities generated social dislocation so severe that it provoked authoritarian backlash and geopolitical collapse. Stable orders, he insisted, required markets to be re-embedded in social and political institutions. Where they were not, societies sought protection by other means, which often translated into support for fascist or communist regimes that promised to tame the market. Today, it often means electing populist leaders who promise to break the entire existing order, both domestic and international.

Polanyi insisted that the idea of a “self-adjusting market implied a stark utopia” because such a system could not exist “for any length of time without annihilating the human and natural substance of society.” The interwar gold standard, for example, disciplined states in the name of efficiency, but it did so by transmitting economic shocks directly into social life. When democratic governments proved unable to shield their populations, they either abandoned the liberal economic order or turned authoritarian (or both)…

[Shapiro considers the history of the 20th century, in particular the rise of Nazi Germany, sketches the state of play in the AI arena, considers the challenge of embedding the changes that AI will bring in the U.S., Europe, and China, then teases out the ways in which this “industrial revolution” is different from its predecessors (in particular, the mobility of capital, the services-heavy (as opposed to manufacturing-heavy) character of employment today, and the accelerating pace of tech development). He concludes…]

… Geopolitical competition in the AI age will not take place solely in clean rooms or data centers. It will also involve the less visible realm of social institutions: labor markets, communities, social protections, and political legitimacy. Polanyi teaches us that markets are powerful only when societies can bear them. When they cannot, markets provoke their own undoing, often in rather spectacular fashion.

The West’s success in the Cold War owed much to its ability to reconcile capitalism with social protection. If the AI age is another “great transformation,” the same lesson applies. Chips matter. Data matters. But the ultimate source of power may be the capacity to re-embed technological change in society without sacrificing cohesion.

That is not a liberal-progressive distraction from geopolitical competition. It is its hidden core.

“The Next Great Transformation,” from @jyshapiro.bsky.social and @open-society.bsky.social.

For a complementary perspective (with special focus on the interaction between labor and the supply side of the economy) pair with: “Brave New World- a third industrial divide?” from @thunen.bsky.social in @phenomenalworld.bsky.social.

And see also: “AI and the Futures of Work,” from Johannes Kleske (@jkleske.bsky.social). A response to dramatic predictions of AI’s impact– most recently, Matt Shumer‘s viral “Something Big Is Happening“: it’s a possible future, Kleske suggests, but only one possible future– and one that, while plausible, isn’t likely (at least outside the rarified atmosphere of coding, in which Shumer operates). In a way that echoes Shapiro’s piece above, Kleske suggests that individuals need to better understand the technology in order to retain/regain some agency, and that societies need the same kind of rekindled resistance in order to act clearly and with purpose in re-embedding AI, and markets, in society. Not the other way around… Resonant with the thinking of Tim O’Reilly and Mike Loukides featured here before: “The best way to predict the future is to invent it“; and with Ted Chiang‘s “ChatGPT Is a Blurry JPEG of the Web” and “Will A.I. Become the New McKinsey?” And then there’s the ever-illuminating Rusty Foster (riffing on Gideon Lewis-Kraus‘ recent New Yorker piece): “A. I. Isn’t People.”

For a look at a high-value, trust-based use case for AI that seems to avoid the objections to AGI (and speak to Shapiro’s points), see “The Middle Game: Routers at the Edge,” from Byrne Hobart.

But back to AGI… as Nicholas Carr observes, we might understand Bostrom‘s “paperclip maximizer” “not as a thought experiment but as a fable. It’s not really about AIs making paperclips. It’s about people making AIs. Look around. Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence? Are we not trapped in an “AI maximizer” scenario?”

###

As we digest development, we might recall that it was on this date in 1962 that an early precondition for the revolution underway was first achieved: telephone and television signals were first relayed in space via the communications satellite Echo 1– basically a big metallic balloon that simply bounced radio signals off its surface.  Simple, but effective.

Fully inflating the sphere at ground level would have required forty thousand pounds (18,144 kg) of air, so it was inflated in space instead, where only several pounds of gas were needed to keep it inflated.

Fun fact: the Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.


source