(Roughly) Daily

Posts Tagged ‘social media’

“I tend to think that most fears about A.I. are best understood as fears about capitalism”*…

Further to Wednesday’s and yesterday’s posts (on to other topics again after this, I promise), a powerful piece from Patrick Tanguay (in his always-illuminating Sentiers newsletter).

He begins with a consideration of Peter Wolfendale’s “Geist in the machine”…

… Wolfendale argues that the current AI debate recapitulates an 18th-century conflict between mechanism and romanticism. On one side, naive rationalists (Yudkowsky, Bostrom, much of Silicon Valley) assume intelligence is ultimately reducible to calculation; throw enough computing power at the problem and the gap between human and machine closes. On the other, popular romantics (Bender, Noë, many artists) insist that something about human cognition, whether it’s embodiment, meaning, or consciousness, can never be mechanised. Wolfendale finds both positions insufficient. The rationalists reduce difficult choices to optimisation problems, while the romantics bundle distinct capacities into a single vague essence.

His alternative draws on Kant and Hegel. He separates what we loosely call the “soul” into three capacities: wisdom (the metacognitive ability to reformulate problems, not just solve them), creativity (the ability to invent new rules rather than search through existing ones), and autonomy (the capacity to question and revise our own motivations). Current AI systems show glimmers of the first two but lack the third entirely. Wolfendale treats autonomy as the defining feature of personhood: not a hidden essence steering action, but the ongoing process of asking who we want to be and revising our commitments accordingly. Following Hegel he calls this Geist, spirit as self-reflective freedom.

Wolfendale doesn’t ask whether machines can have souls; he argues we should build them, and that the greater risk lies in not doing so. Machines that handle all our meaningful choices without possessing genuine autonomy would sever us from the communities of mutual recognition through which we pursue truth, beauty, and justice. A perfectly optimised servant that satisfies our preferences while leaving us unchanged is, in his phrase, “a slave so abject it masters us.” Most philosophical treatments of AI consciousness end with a verdict on possibility. Wolfendale ends with an ethical imperative: freedom is best preserved by extending it.

I can’t say I agree, unless “we”… end up with a completely different relationship to our technology and capital. However, his argument up to that point is a worthy reflection, and pairs well with the one below and another from issue No.387. I’m talking about Anil Seth’s The mythology of conscious AI, where he argues that consciousness probably requires biological life and that silicon-based AI is unlikely to achieve it. Seth maps the biological terrain that makes consciousness hard to replicate; Wolfendale maps the philosophical terrain that makes personhood worth pursuing anyway, on entirely different grounds. Seth ends where the interesting problem begins for Wolfendale: even if machines can’t be conscious, the question of whether they can be autonomous persons, capable of self-reflective revision, remains open:

Though GenAI systems can’t usually compete with human creatives on their own, they are increasingly being used as imaginative prosthetics. This symbiosis reveals that what distinguishes human creativity is not the precise range of heuristics embedded in our perceptual systems, but our metacognitive capacity to modulate and combine them in pursuit of novelty. What makes our imaginative processes conscious is our ability to self-consciously intervene in them, deliberately making unusual choices or drawing analogies between disparate tasks. And yet metacognition is nothing on its own. If reason demands revision, new rules must come from somewhere. […]

[Hubert Dreyfus] argues that the comparative robustness of human intelligence lies in our ability to navigate the relationships between factors and determine what matters in any practical situation. He claims that this wouldn’t be possible were it not for our bodies, which shape the range of actions we can perform, and our needs, which unify our various goals and projects into a structured framework. Dreyfus argues that, without bodies and needs, machines will never match us. […]

This is the basic link between self-determination and self-justification. For Hegel, to be free isn’t simply to be oneself – it isn’t enough to play by one’s own rules. We must also be responsive to error, ensuring not just that inconsistencies in our principles and practices are resolved, but that we build frameworks to hold one another mutually accountable. […]

Delegating all our choices to mere automatons risks alienating us from our sources of meaning. If we consume only media optimised for our personal preferences, generated by AIs with no preferences of their own, then we will cease to belong to aesthetic communities in which tastes are assessed, challenged and deepened. We will no longer see ourselves and one another as even passively involved in the pursuit of beauty. Without mutual recognition in science and civic life, we might as easily be estranged from truth and right – told how to think and act by anonymous machines rather than experts we hold to account…

Tanguay then turns to “The Prospect of Butlerian Jihad” by Liam Mullally, in which Mullally uses…

… Herbert’s Dune and the Butlerian Jihad [here] as a lens for what he sees as a growing anti-tech “structure of feeling” (Raymond Williams’s term): the diffuse public unease about AI, enshittification, surveillance, and tech oligarchs that has not yet solidified into coherent politics. The closest thing to a political expression so far is neo-Luddism, which Mullally credits for drawing attention to technological exploitation but finds insufficient. His concern is that the impulse to reject technology wholesale smuggles in essentialist assumptions about human nature, a romantic defence of “pure” humanity against the corruption of machines. He traces this logic back to Samuel Butler’s 1863 essay Darwin Among the Machines, which framed the human-technology relationship as a zero-sum contest for supremacy, and notes that Butler’s framing was “explicitly supremacist,” written from within colonial New Zealand and structured by the same logic of domination it claimed to resist.

The alternative Mullally proposes draws on Bernard Stiegler’s concept of “originary technicity”: the idea that human subjectivity has always been constituted in part by its tools, that there is no pre-technological human to defend. [see here] If that’s right, then opposing technology as such is an “ontological confusion,” a fight against something that is already part of what we are. The real problem is not machines but the economic logic that shapes their development and deployment. Mullally is clear-eyed about this: capital does not have total command over its technologies, and understanding how they work is a precondition for contesting them. He closes by arguing that the anti-tech structure of feeling is “there for the taking,” but only if it can be redirected. The fights ahead are between capital and whatever coalition can form against it, not between humanity and machines. Technology is a terrain in that conflict; abandoning it means losing before the contest begins.

Wolfendale’s Geist in the Machine above arrived at a parallel conclusion from a different direction: where Mullally argues that rejecting technology means defending a false vision of the human, Wolfendale argues that refusing to extend autonomy to machines risks severing us from the self-reflective freedom that makes us persons in the first place. Both reject the romantic position, but for different reasons:

To the extent that neo-Luddites bring critical attention to technology, they are doing useful work. But this anti-tech sentiment frequently cohabitates with something uneasy: the treatment of technology as some abstract and impenetrable evil, and the retreat, against this, into essentialist views of the human. […]

If “humanity” is not a thing-in-itself, but historically, socially and technically mutable, then the sphere of possibility of the human and of our world becomes much broader. Our relationship to the non-human — to technology or to nature — does not need to be one of control, domination and exploitation. […]

As calls for a fight back against technology grow, the left needs to carefully consider what it is advocating for. Are we fighting the exploitation of workers, the hollowing out of culture and the destruction of the earth via technology, or are we rallying in defence of false visions of pure, a-technical humanity? […]

The anti-tech structure of feeling is there for the taking. But if it is to lead anywhere, it must be taken carefully: a fightback against technological exploitation will be found not in the complete rejection of technology, but in the short-circuiting of one kind of technology and the development of another.

As Max Read (scroll down) observes:

… if we understand A.I. as a product of the systems that precede it, I think it’s fair to say ubiquitous A.I.-generated text is “inevitable” in the same way that high-volume blogs were “inevitable” or Facebook fake news pages were “inevitable”: Not because of some “natural” superiority or excellence, but because they follow so directly from the logic of the system out of which they emerge. In this sense A.I. is “inevitable” precisely because it’s not revolutionary…

The question isn’t whether we want a relationship with technology; it’s what kind of relationship we want. We’ve always (at least since we’ve been a conscious species) co-existed with, and been shaped by, tools; we’ve always suffered the “friction” of technological transition as we innovate new tools. As yesterday’s post suggested (in its defense of the open web in the face of a voracious attack from powerful LLM companies), “what matters is power“… power to shape the relationship(s) we have with the technologies we use. That power is currently in the hands of a relatively few companies, all concerned above all else with harvesting as much money as they can from “uses” they design to amplify engagement and ease monetization. It doesn’t, of course, have to be this way.

We’ve lived under modern capitalism for only a few hundred years, and under the hyper-global, hyper-extractive regime we currently inhabit for only a century-and-a-half or so, during which time, in fits and starts, it has grown ever more rapacious. George Monbiot observed that “like coal, capitalism has brought many benefits. But, like coal, it now causes more harm than good.” And Ursula Le Guin, that “we live in capitalism. Its power seems inescapable. So did the divine right of kings.” In many countries, “divine right” monarchy has been replaced by “constitutional monarchy.” Perhaps it’s time for more of the world to consider “constitutional capitalism.” We could start by learning from the successes and failures of Scandinavia and Europe.

Social media, AI, quantum computing– on being clear as to the real issue: “Geist in the machine & The prospect of Butlerian Jihad,” from @inevernu.bsky.social.

Apposite: “The enclosure of the commons inaugurates a new ecological order. Enclosure did not just physically transfer the control over grasslands from the peasants to the lord. It marked a radical change in the attitudes of society toward the environment.”

(All this said, David Chalmers argues that there’s one possibility that might change everything: “Could a Large Language Model be Conscious?” On the other hand, the ARC Prize Foundation suggests, we have some time: a test they devised for benchmarking agentic intelligence recently found that “humans can solve 100% of the environments, in contrast to frontier AI systems which, as of March 2026, score below 1%”… :)

Ted Chiang (gift article; see also here and here and here)

###

As we keep our eyes on the prize, we might spare a thought for a man who wrestled with a version of these same issues in the last century, Pierre Teilhard de Chardin; he died on this date in 1955.  A Jesuit theologian, philosopher, geologist, and paleontologist, he conceived the idea of the Omega Point (a maximum level of complexity and consciousness towards which he believed the universe was evolving) and developed Vladimir Vernadsky‘s concept of noosphere.  Teilhard took part in the discovery of Peking Man, and wrote on the reconciliation of faith and evolutionary theory.  His thinking on both these fronts was censored during his lifetime by the Catholic Church (in particular for its implications for “original sin”); but in 2009, they lifted their ban.

source

“Distracted from distraction by distraction”*…

A man in a formal outfit sits in front of a laptop while looking toward a screen displaying a social media interface with a yellow emoji.

Don Moynihan argues that there has been a shift in the character– the instincts, the motivations, and thus the patterns of decision and action– of our government…

One of the strangest moments to emerge from the U.S. kidnapping of Nicolás Maduro was the flurry of images posted by President Trump on Truth Social. It felt a bit like a student who can’t decide which spring break photos look cutest, so they just upload them all.

The intent seemed to be to create an iconic image reminiscent of the White House Situation Room during the raid that killed Osama bin Laden—a gathering of stoic men (no girls allowed!) staring grimly at some unseen screen. The message: “Look how serious and important our work is!” Yet, the staged nature of these photos undermines that effect, leaving the whole scene feeling less like history in the making and more like an amateur theater production of a Broadway classic.

In one image, the Director of the CIA, the Chair of the Joint Chiefs of Staff, and the Secretary of Defense are grouped around a laptop. Behind them, unmistakably, a screen displays a feed from X—complete with a prominent yellow emoji. In other pictures, “Venezuela” appears to be in the search box.

Three men in professional attire are gathered around a laptop, with one man sitting focused on the screen, another standing and looking off-camera, and a third man seated, observing. A computer screen displays a social media interface in the background.

With the best intelligence systems in the world at their fingertips, they were checking X in the midst of the mission? Combined with the curtains separating some section of Mar‑A‑Lago from the rest of the President’s resort, the images create an almost surreal air. It felt as if a group of twelve-year-old boys in a basement had been handed control of the most lethal military in history—and were using it to boost their online brands.

Trump is undoubtedly the American president who has most effectively wielded social media: drawing attention, reshaping norms, and fueling conspiracy theories. The successful use of social media, for example, turned avowed MAGA isolationists into enthusiastic colonial imperialists overnight.

But I want to suggest that what we are witnessing from the Trump administration is not just skillful manipulation of social media—it’s something more profoundly worrying. Today, we live in a clicktatorship, ruled by a LOLviathan. Our algothracy is governed by poster brains.

It’s worth remembering that social media operates like a drug, feeding us dopamine and rewiring our brains’ reward pathways. The fundamentally unhealthy dynamics are worsened by the fact that standing out online often demands being awful—channeling negative emotions like anger and outrage, usually based on misinformation or conspiracy theories.

None of this is new. Indeed, there is a booming political science literature on the effects of social media on voter behavior. Chris Hayes and others have written persuasively about how toxic attention farming is for us personally and for our democracy. But I want to make the case that we should also consider how social media is affecting how policymakers use public power.

What I’m arguing is that the Trump administration isn’t just using social media to shape a narrative. Many of its members are deeply addicted to it. We would be concerned if a senior government official was an alcoholic or drug addict, knowing it could impair judgment and decisionmaking. But we should be equally concerned about Pete Hegseth and Elon Musk’s social media compulsions—just as much as their alcohol or ketamine use, respectively.

Overexposure to online engagement has cooked the brains of some of the most powerful people in the world. This is not exclusively an American phenomenon. President Yoon Suk Yeol seemed to have genuinely believed online conspiracy theories about election fraud, motivating his declaration of martial law and triggering a constitutional crisis, and his eventual arrest, in Korea.

But in the US government, poster brain feels endemic. The Trump administration is made up of a cabinet of posters. For many, that’s how they won Trump’s attention. The head of the FBI, for example, is a podcaster—that’s his main qualification for the job.

They view the world through a social media lens in a way that is plausibly corrupting their judgment and undermining their performance. Let’s think through how poster brain can affect how people in government operate…

[Moynihan explores, with illustrative examples, online bubbles, conflicts between professional and online identities, the degradation of professional norms and work practices, and the altering of decision-making to be responsive to social media– to create content]

I’m just scratching the surface here. Pick any federal agency, and you can find examples of poster brains making important decisions. This trend is likely to only get worse as digital natives enter key government roles. And there are likely a host of other ways these patterns are undermining the professional behavior of people in government that I have not identified. In particular, the Trump administration represents the intersection of poster brain, personalism, and authoritarianism that seems especially toxic…

… The bottom line is that we need to take more seriously how social media has rewired the brains—and behavior—of those running our country.

Eminently worth reading in full: What happens to government when everything is content? “Life Under a Clicktatorship,” from @donmoyn.bsky.social.

See also: “The Trump-Flavored Content Administration,” from @cooperlund.online, and “How ICE Makes Raids Go Viral,” from @taylorlorenz.bsky.social.

And a bit orthogonal, but apposite: “The year of technoligarchy,” from @molly.wiki.

* T.S. Eliot, Burnt Norton

###

As we recommit to real life, we might recall that today is National Static Electricity Day.

A close-up image of a glowing plasma globe with tendrils of electric light branching out, creating a vibrant display of purple and blue colors.

source

Written by (Roughly) Daily

January 9, 2026 at 1:00 am

“Attention is the rarest and purest form of generosity”*…

An illustration depicting a large black fish with an open mouth, consuming smaller red fish, accompanied by the text 'what price media consolidation?'

… But that most valuable of gifts is being hijacked, subverted/converted into a commodity, and used to mold not just consumer behavior, but society-as-a-whole. We live in an attention economy, and its media/tech ownership landscape is becoming ever more consolidated.

Kyla Scanlon unpacks the way in which concentrated ownership of media and tech and their automated manipulation reshape democracy…

It’s nearly impossible not to get lost in the news right now. I was at a wedding last week, and every conversation eventually drifted back to the same subject: the World We Are in and All That is Happening. The ground feels like it’s moving faster than anyone can feasibly keep up with.

Some people think the shift is progress. Others see collapse. Either way, the line between digital and physical life is increasingly blurry. What happens online is real life. What we consume is what we become.

Plenty of thinkers have circled this before – Postman, Debord, Huxley, Orwell on media; Machiavelli, Tocqueville, Thucydides, Gibbon on human corruptibility during times of uncertainty. The convergence of endless information and a ragebait economy creates the perfect environment for splintering how we understand the world and how we understand each other.

The deeper problem is this: we no longer trust institutions to provide truth, fairness, or mobility. Once, they were scaffolding that helped us climb from raw data to wisdom. And when that scaffolding gives out, people adapt: some over-perform in the status race (because you have to) and others defect from obligations altogether (why would I work for institutions if they don’t work for me?).

There are a few ways to picture our distorted information ecosystem.

  • The DIKW Pyramid (Data → Information → Knowledge → Wisdom): raw posts and clicks at the bottom, trending content in the middle, shared truths above that, and finally wisdom, the rare ability to see causes instead of just symptoms.
  • Or the Ladder of Inference: we start with data, add meaning, make assumptions – and our beliefs tend to affect what data we select. Bots and algorithms hijack that ladder, nudging us toward polarized beliefs before we realize what’s happening.

Taken together, we can combine them into what we might call a hierarchy of information:

  • Raw data: the endless stream of posts, likes, bot spam
  • Information: headlines, hashtags, trending things
  • Knowledge: the narratives we share and fight over.
  • Understanding: recognizing what might not be real (or is hyperreal)
  • Wisdom: systemic analysis, the ability to see causes instead of just symptoms.

Right now, we’re stuck sloshing around in the middle layers of the hierarchy: drowning in outrage, fighting over partisan hot takes, rarely reaching understanding, almost never wisdom.

Chaos always has an architect. And if we want to make sense of American democracy today, we need to understand who those architects are, and how they profit from confusion.

This polarization rests on media concentration. The Telecommunications Act of 1996 was sold as a way to increase competition in media and telecommunications, but in reality, it did quite the opposite. Within five years, four firms controlled ~85% of US telephone infrastructure. That deregulated spine carried today’s consolidation of the entire media environment – not just telephones. Newspapers. Social media. TV stations.

We have the increasing concentration of media ownership, the financialization of attention, and the transformation of information from a public good into a private commodity to be bought, sold, and manipulated…

[Scanlon characterizes and explains the concentration, examines its impacts, and unpacks the roles of bots…]

When manufacturing consensus is both cheap to produce and valuable to those who benefit from confusion, you get industrial-scale manipulation.

Truth becomes whatever can capture the most attention in the shortest amount of time. Traditional journalism, with its slow fact-checking and institutional processes, can’t compete with bot-amplified outrage. Democratic deliberation, which requires shared facts and good faith dialogue, becomes nearly impossible when the information environment is designed to maximize conflict.

We’re living in a speculation economy where perception drives value more than fundamentals. Look at the stock market: Nvidia gained $150 billion in value on the back of a $100 billion OpenAI investment (which OpenAI will use to buy more Nvidia chips). Ten companies pass hundreds of billions back and forth, and the S&P jumps like it’s measuring something real.

It’s all memes wearing suits. Meme stocks and Dogecoin at least looked like jokes; now the same speculative energy runs through the corporate core. Attention, perception, and narrative drive valuation more than production or profit.

We’ve built a world where the hierarchy of information has flipped upside down.

At the bottom, bots flood us with raw noise. In the middle, outrage and team narratives harden into “knowledge.” At the top, the ladders to wisdom (journalism, schools, civic discourse, shared institutions) are weakened. The scaffolding that once helped us climb no longer holds.

The traditional solutions – fact-checking, media literacy, content moderation – assume we’re dealing with a content problem when we’re actually facing an infrastructure problem. You can’t fact-check your way out of a system designed to reward misinformation. You can’t educate your way around algorithms optimized for polarization. You can’t moderate your way past economic incentives that make confusion profitable.

Recognizing this as a market structure problem rather than an information problem changes everything. Instead of focusing on individual bad actors or specific false claims, you start thinking about the underlying systems that make manipulation both profitable and scalable.

The information wars are economic policy, determining how we allocate attention, structure incentives, and organize the flow of information that shapes every other market and political decision we make. I don’t think it’s useful to get on a Substack soapbox about this – but we need to take (1) the power of media seriously and (2) those trying to influence it extremely seriously. There is a way to get to the top of the information hierarchy! We don’t have to be stuck in these middle layers…

Follow the money: “Who’s Getting Rich Off Your Attention?” from @kyla.bsky.social

For more on how the Telecommunications Act of 1996 helped set all of this in motion, see: “On Jimmy Kimmel: It’s Time to Destroy the Censorship Machine and Repeal the Telecommunications Act of 1996” from @matthewstoller.bsky.social.

For more thoughts on why companies are behaving in the ways they are: “Why Corporate America Is Caving to Trump” and “Media consolidation is shaping who folds under political pressure — and who could be next.”

And lest we think that this came out of nowhere: “David Foster Wallace Tried to Warn Us About these Eight Things.”

[Image above: source]

Simone Weil

###

As we reclaim recognition, we might recall that on this date in 1452 an earlier information revolution began: Johannes Gutenberg started work on his Bible (which was completed and published in 1455). An inventor and craftsman, Gutenberg created the movable-type printing press, enabling a much faster (and cheaper) printing process. (Movable type was already in use in East Asia, but was slower and used for smaller jobs.) His Bible was his first major work, and his most impactful.

The printing press later spread across the world, leading to an information revolution– the unprecedented mass-spread of literature throughout Europe. It had a profound impact on the development of the Renaissance, Reformation, and Humanist movements.

A close-up view of an open Gutenberg Bible displayed in a museum, showcasing text on aged paper and illustrating the early printing technique.
Gutenberg Bible in the New York Public Library (source)

“Algorithms are the culprits, influencers are the accomplices, language is the weapon, and readers are the victims”*…

Close-up of computer code displayed on a screen, featuring programming syntax and function definitions.

On the occasion of the publication of his new book, Algospeak: How Social Media is Transforming the Future of Language, Adam Aleksic — aka the “Etymology Nerd” — talks with Liz Mineo about how social media algorithms are transforming language…

… I’m a big believer that the medium is the message. The way the information is being diffused is going to affect how we communicate. For example, with the arrival of writing, there was this big shift away from us telling stories with rhyme and meter. Plato said that writing was going to make us worse at remembering things. With the printing press, information is diffused more quickly, and more people have the ability to be literate, but there are still gatekeepers, which is affecting who gets to tell the story. And then the internet allows us to lose the gatekeepers; anybody can tell the story now, and that’s another paradigm shift in language. Algorithms are a new paradigm shift because the centralization of the internet that occurred in the late 2010s, coupled with how these algorithms push content through personalized recommendation feeds, are changing how we understand the very act of communication…

… Algorithms are shaping the way we speak. Platforms’ priorities play an important role in organizing and shaping how our language develops. The algorithm pushes more trends, creates more in-groups that then create new language. New trending words are amplified by social media; creators replicate words that they know are going viral, because it helps them go more viral, and then they push the words more into existence. This is the cycle that we’re constantly in. I think it’s because of the algorithm, which amplifies trends, that we’re getting more rapid language change than before. The biggest takeaway from my book is that algorithms are deeply affecting our society right now, and we should be paying attention to them…

When I say algorithms are the culprits, I mean that they are, in this metaphor, responsible for the perpetuation of slang at this speed, and influencers are being accomplices because we’re playing a part. The algorithm doesn’t do anything by itself; it doesn’t come up with the words or spread the words by itself. It’s humans who are doing that, with our own ideas of what the algorithm is or should be, and that pushes the words faster than otherwise. Eventually, those words enter your vocabulary, and that, I guess, makes you the victim…

What concerns you about the way social media and its algorithms are changing language?

As a linguist, I have no concerns because language is the means by which humans connect with one another. As a cultural critic, I’m pretty concerned by the way in which language is more commodified than ever before, and I’m concerned that certain groups are influencing our language more than other groups, like incels. Words that are part of the incel vocabulary like “pilled,” “maxxing,” or “sigma” are very popular. For example, if I like burritos, I can say, “I’m so burrito-pilled,” or if I want to eat more burritos, I can say “I’m burrito-maxxing.” The fact that we are using these words is an indicator that this culture is influencing us, and it also indicates that the way ideas spread and percolate in the online space can be dangerous. Incels are incredibly misogynistic and have a worldview that causes them to dehumanize other people. They have been able to spread their ideology because of the nature of the internet right now. If we pay attention to how language is changing, we should also pay attention to how culture is changing.

As a linguist, I’m very excited to see that language is developing faster than before. To me, language is almost a form of resistance. Every single new meme that emerges is a reactive cultural force to the over-organization of society. This summer, the term “clanker,” which is a speculative slur for artificial intelligence, became very popular. In March, we saw “Italian Brain Rot,” a meme that uses AI subversively to generate ridiculous cartoon characters. Both of these memes create a commentary about our current state of technological progress. A lot of memes and slang words are emerging in reflection to our current cultural moment. There’s something really beautiful about that…

“Our viral vocabulary,” from @etymology.substack.com.web.brid.gy (TotH to J O’D)

Apposite: “Understanding the new economics of attention” (gift article from The Economist)

(Image above: source)

* Adam Aleksic, Algospeak: How Social Media is Transforming the Future of Language

###

As we pause to parse, we might note that today is International Talk Like a Pirate Day… that’s to say, a day on which to speak in English with a stereotypical West Country accent.

Created in 1995 by John Baur and Mark Summers of Albany, Oregon, it has since been adopted as an official holiday by the Pastafarian movement.

“Cap’n Slappy” and “Ol’ Chumbucket”, the founders of Talk Like a Pirate Day (source)

Written by (Roughly) Daily

September 19, 2025 at 1:00 am

“Badger hates Society, and invitations, and dinner, and all that sort of thing”*…

Americans are now spending more time alone– mostly at home– than ever. It’s changing our personalities, our politics, and even our relationship to reality. While the pandemic certainly enforced some of that isolation, the post-COVID world remains extraordinarily atomized.

In a bracing essay, Derek Thompson explores the emergence of this widespread isolation, unpacking its drivers, enumerating its considerable (personal and civic) costs, musing on the possible impact of AI, and pondering what might lead to a return to sociability…

… “I have a view that is uncommon among social scientists, which is that moral revolutions are real and they change our culture,” Robert Putnam [author of Bowling Alone] told me. In the early 20th century, a group of liberal Christians, including the pastor Walter Rauschenbusch, urged other Christians to expand their faith from a narrow concern for personal salvation to a public concern for justice. Their movement, which became known as the Social Gospel, was instrumental in passing major political reforms, such as the abolition of child labor. It also encouraged a more communitarian approach to American life, which manifested in an array of entirely secular congregations that met in union halls and community centers and dining rooms. All of this came out of a particular alchemy of writing and thinking and organizing. No one can say precisely how to change a nation’s moral-emotional atmosphere, but what’s certain is that atmospheres do change. Our smallest actions create norms. Our norms create values. Our values drive behavior. And our behaviors cascade.

The anti-social century is the result of one such cascade, of chosen solitude, accelerated by digital-world progress and physical-world regress. But if one cascade brought us into an anti-social century, another can bring about a social century. New norms are possible; they’re being created all the time. Independent bookstores are booming—the American Booksellers Association has reported more than 50 percent growth since 2009—and in cities such as New York City and Washington, D.C., many of them have become miniature theaters, with regular standing-room-only crowds gathered for author readings. More districts and states are banning smartphones in schools, a national experiment that could, optimistically, improve children’s focus and their physical-world relationships. In the past few years, board-game cafés have flowered across the country, and their business is expected to nearly double by 2030. These cafés buck an 80-year trend. Instead of turning a previously social form of entertainment into a private one, they turn a living-room pastime into a destination activity. As sweeping as the social revolution I’ve described might seem, it’s built from the ground up by institutions and decisions that are profoundly within our control: as humble as a café, as small as a new phone locker at school…

On how we spend our time and what that yields: “The Anti-Social Century,” from @dkthomp.bsky.social in @theatlantic.com (gift article).

See also: “You’re Being Alienated From Your Own Attention,” from @chrislhayes.bsky.social (also in @theatlantic.com, also a gift article)

* Kenneth Grahame, The Wind in the Willows

###

As we call a friend, we might recall that it was on this date in 1915 that Alexander Graham Bell placed the first transcontinental phone call, from New York to San Francisco, where the Panama–Pacific International Exposition celebrations were underway and his assistant Thomas Augustus Watson stood by. Bell repeated his famous first telephonic words, “Mr. Watson, come here. I want you,” to which Watson this time replied “It will take me five days to get there now!” Bell’s call officially initiated AT&T’s transcontinental service.

Alexander Graham Bell, about to call San Francisco from New York. (source)

And, on this date 44 years later, in 1959, the first non-stop transcontinental commercial jet trip was made by an American Airlines Boeing 707, from Los Angeles to New York. The sleek silver plane made the flight in an official airline time of 4 hours and 3 minutes, half the usual scheduled time for the prop-driven DC-7Cs then in regular use on that route.

source