(Roughly) Daily

“I tend to think that most fears about A.I. are best understood as fears about capitalism”*…

Further to Wednesday’s and yesterday’s posts (on to other topics again after this, I promise), a powerful piece from Patrick Tanguay (in his always-illuminating Sentiers newsletter).

He begins with a consideration of Peter Wolfendale’s “Geist in the machine”…

… Wolfendale argues that the current AI debate recapitulates an 18th-century conflict between mechanism and romanticism. On one side, naive rationalists (Yudkowsky, Bostrom, much of Silicon Valley) assume intelligence is ultimately reducible to calculation; throw enough computing power at the problem and the gap between human and machine closes. On the other, popular romantics (Bender, Noë, many artists) insist that something about human cognition, whether it’s embodiment, meaning, or consciousness, can never be mechanised. Wolfendale finds both positions insufficient. The rationalists reduce difficult choices to optimisation problems, while the romantics bundle distinct capacities into a single vague essence.

His alternative draws on Kant and Hegel. He separates what we loosely call the “soul” into three capacities: wisdom (the metacognitive ability to reformulate problems, not just solve them), creativity (the ability to invent new rules rather than search through existing ones), and autonomy (the capacity to question and revise our own motivations). Current AI systems show glimmers of the first two but lack the third entirely. Wolfendale treats autonomy as the defining feature of personhood: not a hidden essence steering action, but the ongoing process of asking who we want to be and revising our commitments accordingly. Following Hegel he calls this Geist, spirit as self-reflective freedom.

Wolfendale doesn’t ask whether machines can have souls; he argues we should build them, and that the greater risk lies in not doing so. Machines that handle all our meaningful choices without possessing genuine autonomy would sever us from the communities of mutual recognition through which we pursue truth, beauty, and justice. A perfectly optimised servant that satisfies our preferences while leaving us unchanged is, in his phrase, “a slave so abject it masters us.” Most philosophical treatments of AI consciousness end with a verdict on possibility. Wolfendale ends with an ethical imperative: freedom is best preserved by extending it.

I can’t say I agree, unless “we”… end up with a completely different relationship to our technology and capital. However, his argument up to that point is a worthy reflection, and pairs well with the one below and another from issue No.387. I’m talking about Anil Seth’s The mythology of conscious AI, where he argues that consciousness probably requires biological life and that silicon-based AI is unlikely to achieve it. Seth maps the biological terrain that makes consciousness hard to replicate; Wolfendale maps the philosophical terrain that makes personhood worth pursuing anyway, on entirely different grounds. Seth ends where the interesting problem begins for Wolfendale: even if machines can’t be conscious, the question of whether they can be autonomous persons, capable of self-reflective revision, remains open:

Though GenAI systems can’t usually compete with human creatives on their own, they are increasingly being used as imaginative prosthetics. This symbiosis reveals that what distinguishes human creativity is not the precise range of heuristics embedded in our perceptual systems, but our metacognitive capacity to modulate and combine them in pursuit of novelty. What makes our imaginative processes conscious is our ability to self-consciously intervene in them, deliberately making unusual choices or drawing analogies between disparate tasks. And yet metacognition is nothing on its own. If reason demands revision, new rules must come from somewhere. […]

[Hubert Dreyfus] argues that the comparative robustness of human intelligence lies in our ability to navigate the relationships between factors and determine what matters in any practical situation. He claims that this wouldn’t be possible were it not for our bodies, which shape the range of actions we can perform, and our needs, which unify our various goals and projects into a structured framework. Dreyfus argues that, without bodies and needs, machines will never match us. […]

This is the basic link between self-determination and self-justification. For Hegel, to be free isn’t simply to be oneself – it isn’t enough to play by one’s own rules. We must also be responsive to error, ensuring not just that inconsistencies in our principles and practices are resolved, but that we build frameworks to hold one another mutually accountable. […]

Delegating all our choices to mere automatons risks alienating us from our sources of meaning. If we consume only media optimised for our personal preferences, generated by AIs with no preferences of their own, then we will cease to belong to aesthetic communities in which tastes are assessed, challenged and deepened. We will no longer see ourselves and one another as even passively involved in the pursuit of beauty. Without mutual recognition in science and civic life, we might as easily be estranged from truth and right – told how to think and act by anonymous machines rather than experts we hold to account…

Tanguay then turns to “The Prospect of Butlerian Jihad” by Liam Mullally, in which Mullally uses…

… Herbert’s Dune and the Butlerian Jihad [here] as a lens for what he sees as a growing anti-tech “structure of feeling” (Raymond Williams’s term): the diffuse public unease about AI, enshittification, surveillance, and tech oligarchs that has not yet solidified into coherent politics. The closest thing to a political expression so far is neo-Luddism, which Mullally credits for drawing attention to technological exploitation but finds insufficient. His concern is that the impulse to reject technology wholesale smuggles in essentialist assumptions about human nature, a romantic defence of “pure” humanity against the corruption of machines. He traces this logic back to Samuel Butler’s 1863 essay Darwin Among the Machines, which framed the human-technology relationship as a zero-sum contest for supremacy, and notes that Butler’s framing was “explicitly supremacist,” written from within colonial New Zealand and structured by the same logic of domination it claimed to resist.

The alternative Mullally proposes draws on Bernard Stiegler’s concept of “originary technicity”: the idea that human subjectivity has always been constituted in part by its tools, that there is no pre-technological human to defend. [see here] If that’s right, then opposing technology as such is an “ontological confusion,” a fight against something that is already part of what we are. The real problem is not machines but the economic logic that shapes their development and deployment. Mullally is clear-eyed about this: capital does not have total command over its technologies, and understanding how they work is a precondition for contesting them. He closes by arguing that the anti-tech structure of feeling is “there for the taking,” but only if it can be redirected. The fights ahead are between capital and whatever coalition can form against it, not between humanity and machines. Technology is a terrain in that conflict; abandoning it means losing before the contest begins.

Wolfendale’s Geist in the Machine above arrived at a parallel conclusion from a different direction: where Mullally argues that rejecting technology means defending a false vision of the human, Wolfendale argues that refusing to extend autonomy to machines risks severing us from the self-reflective freedom that makes us persons in the first place. Both reject the romantic position, but for different reasons:

To the extent that neo-Luddites bring critical attention to technology, they are doing useful work. But this anti-tech sentiment frequently cohabitates with something uneasy: the treatment of technology as some abstract and impenetrable evil, and the retreat, against this, into essentialist views of the human. […]

If “humanity” is not a thing-in-itself, but historically, socially and technically mutable, then the sphere of possibility of the human and of our world becomes much broader. Our relationship to the non-human — to technology or to nature — does not need to be one of control, domination and exploitation. […]

As calls for a fight back against technology grow, the left needs to carefully consider what it is advocating for. Are we fighting the exploitation of workers, the hollowing out of culture and the destruction of the earth via technology, or are we rallying in defence of false visions of pure, a-technical humanity? […]

The anti-tech structure of feeling is there for the taking. But if it is to lead anywhere, it must be taken carefully: a fightback against technological exploitation will be found not in the complete rejection of technology, but in the short-circuiting of one kind of technology and the development of another.

As Max Read (scroll down) observes:

… if we understand A.I. as a product of the systems that precede it, I think it’s fair to say ubiquitous A.I.-generated text is “inevitable” in the same way that high-volume blogs were “inevitable” or Facebook fake news pages were “inevitable”: Not because of some “natural” superiority or excellence, but because they follow so directly from the logic of the system out of which they emerge. In this sense A.I. is “inevitable” precisely because it’s not revolutionary…

The question isn’t if we want a relationship with technology; it’s what kind of relationship we want. We’ve always (at least since we’ve been a conscious species) co-existed with, and been shaped by, tools; we’ve always suffered the “friction” of technological transition as we innovate new ones. As yesterday’s post suggested (in its defense of the open web in the face of a voracious attack from powerful LLM companies), “what matters is power”… power to shape the relationship(s) we have with the technologies we use. That power is currently in the hands of a relatively few companies, all concerned above all else with harvesting as much money as they can from “uses” designed to amplify engagement and ease monetization. It doesn’t, of course, have to be this way.

We’ve lived under modern capitalism for only a few hundred years, and under the hyper-global, hyper-extractive regime we currently inhabit for only a century-and-a-half or so, during which time, in fits and starts, it has grown ever more rapacious. George Monbiot observed that “like coal, capitalism has brought many benefits. But, like coal, it now causes more harm than good.” And Ursula Le Guin, that “we live in capitalism. Its power seems inescapable. So did the divine right of kings.” In many countries, “divine right” monarchy has been replaced by “constitutional monarchy.” Perhaps it’s time for more of the world to consider “constitutional capitalism.” We could start by learning from the successes and failures of Scandinavia and Europe.

Social media, AI, quantum computing– on being clear as to the real issue: “Geist in the machine & The prospect of Butlerian Jihad,” from @inevernu.bsky.social.

Apposite: “The enclosure of the commons inaugurates a new ecological order. Enclosure did not just physically transfer the control over grasslands from the peasants to the lord. It marked a radical change in the attitudes of society toward the environment.”

(All this said, David Chalmers argues that there’s one possibility that might change everything: “Could a Large Language Model be Conscious?” On the other hand, the ARC Prize Foundation suggests, we have some time: a test they devised for benchmarking agentic intelligence recently found that “humans can solve 100% of the environments, in contrast to frontier AI systems which, as of March 2026, score below 1%”… :)

* Ted Chiang (gift article; see also here and here and here)

###

As we keep our eyes on the prize, we might spare a thought for a man who wrestled with a version of these same issues in the last century, Pierre Teilhard de Chardin; he died on this date in 1955. A Jesuit theologian, philosopher, geologist, and paleontologist, he conceived the idea of the Omega Point (a maximum level of complexity and consciousness towards which he believed the universe was evolving) and developed Vladimir Vernadsky‘s concept of the noosphere. Teilhard took part in the discovery of Peking Man, and wrote on the reconciliation of faith and evolutionary theory. His thinking on both these fronts was censored during his lifetime by the Catholic Church (in particular for its implications for “original sin”); but in 2009, the Church lifted its ban.

source

“The original idea of the web was that it should be a collaborative space where you can communicate through sharing information”*…

From yesterday’s post on the possible (and promising, but also potentially painful) future of computing to a pressing predicament we face today. The estimable Anil Dash on the threats to the open web…

You must imagine Sam Altman holding a knife to Tim Berners-Lee’s throat.

It’s not a pleasant image. Sir Tim is, rightly, revered as the genial father of the World Wide Web. But, all the signs are pointing to the fact that we might be in endgame for “open” as we’ve known it on the Internet over the last few decades.

The open web is something extraordinary: anybody can use whatever tools they have, to create content following publicly documented specifications, published using completely free and open platforms, and then share that work with anyone, anywhere in the world, without asking for permission from anyone. Think about how radical that is.

Now, from content to code, communities to culture, we can see example after example of that open web under attack. Every single aspect of the radical architecture I just described is threatened, by those who have profited most from that exact system.

Today, the good people who act as thoughtful stewards of the web infrastructure are still showing the same generosity of spirit that has created opportunity for billions of people and connected society in ways too vast to count while —not incidentally— also creating trillions of dollars of value and countless jobs around the world. But the increasingly-extremist tycoons of Big Tech have decided that that’s not good enough.

Now, the hectobillionaires have begun their final assault on the last, best parts of what’s still open, and likely won’t rest until they’ve either brought all of the independent and noncommercial parts of the Internet under their control, or destroyed them. Whether or not they succeed is going to be decided by decisions that we all make as a community in the coming months. Even though there have always been threats to openness on the web, the stakes have never been higher than they are this time.

Right now, too many of the players in the open ecosystem are still carrying on with business as usual, even though those tactics have been failing to stop big tech for years. I don’t say this lightly: it looks to me like 2026 is the year that decides whether the open web as we know it will survive at all, and we have to fight like the threat is existential. Because it is…

[Dash details the threats– largely, but not entirely, driven by AI and its purveyors. He concludes…]

… The threat to the open web is far more profound than just some platforms that are under siege. The most egregious harm is the way that the generosity and grace of the people who keep the web open is being abused and exploited. Those people who maintain open source software? They’re hardly getting rich — that’s thankless, costly work, which they often choose instead of cashing in at some startup. Similarly, volunteering for Wikipedia is hardly profitable. Defining super-technical open standards takes time and patience, sometimes over a period of years, and there’s no fortune or fame in it.

Creators who fight hard to stay independent are often choosing to make less money, to go without winning awards or the other trappings of big media, just in order to maintain control and authority over their content, and because they think it’s the right way to connect with an audience. Publishers who’ve survived through year after year of attacks from tech platforms get rewarded by… getting to do it again the next year. Tim Berners-Lee is no billionaire, but none of those guys with the hundreds of billions of dollars would have all of their riches without him. And the thanks he gets from them is that they’re trying to kill the beautiful gift that he gave to the world, and replace it with a tedious, extortive slop mall.

So, we’re in endgame now. They see their chance to run the playbook again, and do to Wikipedians what Uber did to cab drivers, to get users addicted to closed apps like they are to social media, to force podcasters to chase an algorithm like kids on TikTok. If everyone across the open internet can gather together, and see that we’re all in one fight together, and push back with the same ferocity with which we’re being attacked, then we do have a shot at stopping them.

At one time, it was considered impossibly unlikely that anybody would ever create open technologies that would ever succeed in being useful for people, let alone that they would become a daily part of enabling billions of people to connect and communicate and make their lives better. So I don’t think it’s any more unlikely that the same communities can summon that kind of spirit again, and beat back the wealthiest people in the world, to ensure that the next generation gets to have these same amazing resources to rely on for decades to come.

Alright, if it’s not hopeless, what are the concrete things we can do? The first thing is to directly support organizations in the fight. Either those that are at risk, or those that are protecting those at risk. You can give directly to support the Internet Archive, or volunteer to help them out. Wikipedia welcomes your donation or your community participation. The Electronic Frontier Foundation is fighting for better policy and to defend your rights on virtually all of these issues, and could use your support or provides a list of ways to volunteer or take action. The Mozilla Foundation can also use your donations and is driving change. (And full disclosure — I’m involved in pretty much all of these organizations in some capacity, ranging from volunteer to advisor to board member.) That’s because I’m trying to make sure my deeds match my words! These are the people whom I’ve seen, with my own eyes, stay the hand of those who would hold the knife to the necks of the open web’s defenders. [Further full disclosure: so is your correspondent, and so have I.]

Beyond just what these organizations do, though, we can remember how much the open web matters. I know from my time on the board of Stack Overflow that we got to see the rise of an incredibly generous community built around sharing information openly, under open licenses. There are very few platforms in history that helped more people have more economic mobility than the number of people who got good-paying jobs as coders as a result of the information on that site. And then we got to see the toll that extractive LLMs had when they took advantage of that community without any consideration for the impact it would have when they trained models on the generosity of that site’s members without reciprocating in kind.

The good of the web only exists because of the openness of the web. They can’t just keep on taking and taking without expecting people to finally draw a line and saying “enough”. And interestingly, opportunities might exist where the tycoons least expect it. I saw Mike Masnick’s recent piece where he argued that one of the things that might enable a resurgence of the open web might be… AI. It would seem counterintuitive to anyone who’s read everything I’ve shared here to imagine that anything good could come of these same technologies that have caused so much harm.

But ultimately what matters is power. It is precisely because technologies like LLMs have powers that the authoritarians have rushed to try to take them over and wield them as effectively as they can. I don’t think that platforms owned and operated by those bad actors can be the tools that disrupt their agenda. I do think it might be possible that the creative communities that built the web in the first place could use their same innovative spirit to build what could be, for lack of a better term, called “good AI“. It’s going to take better policy, which may be impossible in the short term at the federal level in the U.S., but can certainly happen at more local levels and in the rest of the world. Though I’m skeptical about putting too much of the burden on individual users, we can certainly change culture and educate people so that more people feel empowered and motivated to choose alternatives to the big tech and big AI platforms that got us into this situation. And we can encourage harm reduction approaches for the people and institutions that are already locked into using these tools, because as we’ve seen, even small individual actions can get institutions to change course.

Ultimately I think, if given the choice, people will pick home-cooked, locally-grown, heart-felt digital meals over factory-farmed fast food technology every time…

Unless we act, it’s “Endgame for the Open Web,” from @anildash.com. Eminently worth reading in full.

* Tim Berners-Lee… who should know.

###

As we protect what’s precious, we might send carefully-calculated birthday greetings to a man whose work helped lay the foundation for both the promise and the peril unpacked in the article linked above: J. Presper Eckert; he was born on this day in 1919. An electrical engineer, he co-designed (with John Mauchly) the first general-purpose computer, the ENIAC (see here and here), for the U.S. Army’s Ballistic Research Laboratory. He and Mauchly went on to found the Eckert–Mauchly Computer Corporation, at which they designed and built the first commercial computer in the U.S., the UNIVAC.

Eckert (standing and gesturing) and Mauchly (at the console), demonstrating the UNIVAC to Walter Cronkite (source)

Written by (Roughly) Daily

April 9, 2026 at 1:00 am

“Quantum computation is … nothing less than a distinctly new way of harnessing nature”*…

As the tools in the world around us change, the world– and we– change with them. The onslaught of AI is the change that seems to be grabbing most of our mindshare these days… and with reason. But there are, of course, other changes (in biotech, in materials science, et al.) that are also going to be hugely impactful.

Today, a look at the computing technology stealing up behind AI: quantum computing. As enthusiasts like David Deutsch (author of the quote above) argue, it could bring tremendous benefits, perhaps especially in our ability to model (and thus better understand) our reality.

But quantum computing will, if/when it arrives, also present huge challenges to us as individuals and as societies– perhaps most prominently in its threat to the ways in which we protect our systems and our information. We’ve felt pretty safe for decades, secure in the knowledge that, while we could lose passwords to phishing or hacks, it would take the “classical” computers we have a billion years to break today’s RSA-2048 encryption. A quantum computer could crack it in as little as a hundred seconds.

The technology has been “somewhere on the horizon” for 30 years… so not something that has seemed urgent to confront. But progress has accelerated; a recent Google paper reports on a programming and architectural breakthrough that greatly reduces the computing resources necessary to break classical cryptography… putting the prospect of “Q-Day”– the point at which quantum computers become powerful enough to break standard encryption methods like RSA and ECC, endangering global digital security– much closer, and with it everything from crypto-wallets to our e-banking accounts at risk.
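
Why does one algorithm threaten all of RSA? Because factoring the public modulus reduces to a period-finding problem, and period finding is the one step quantum hardware does exponentially faster. Below is a minimal, purely classical Python sketch of that reduction; the function names and the toy modulus are illustrative stand-ins (nothing here comes from the Google paper), and the brute-force loop in find_order is exactly the part Shor’s algorithm replaces with polynomial-time quantum period finding.

```python
# A classical sketch of the number-theoretic reduction at the heart of
# Shor's algorithm (illustrative only; standard library throughout).
import math
import random

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n). Brute force here: this is the
    exponentially hard classical step that a quantum computer replaces
    with polynomial-time period finding."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int) -> int:
    """Return a nontrivial factor of an odd composite n (not a prime power)."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                  # lucky draw: a already shares a factor
        r = find_order(a, n)
        if r % 2 == 1:
            continue                  # need an even order; redraw a
        y = pow(a, r // 2, n)         # a square root of 1 mod n
        if y == n - 1:
            continue                  # trivial root (-1 mod n); redraw a
        return math.gcd(y - 1, n)     # (y - 1)(y + 1) = 0 mod n yields a factor

if __name__ == "__main__":
    n = 3233                          # 61 * 53: toy stand-in for an RSA modulus
    p = shor_factor(n)
    print(f"{n} = {p} * {n // p}")    # e.g. 3233 = 53 * 61
```

Everything else in the attack (the modular exponentiation and gcd bookkeeping) is already cheap on classical hardware; that’s why the whole scheme falls as soon as period finding does, and why a 2048-bit modulus safely out of classical reach isn’t out of quantum reach.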

Charlie Wood brings us up to speed…

Some 30 years ago, the mathematician Peter Shor took a niche physics project — the dream of building a computer based on the counterintuitive rules of quantum mechanics — and shook the world.

Shor worked out a way for quantum computers to swiftly solve a couple of math problems that classical computers could complete only after many billions of years. Those two math problems happened to be the ones that secured the then-emerging digital world. The trustworthiness of nearly every website, inbox, and bank account rests on the assumption that these two problems are impossible to solve. Shor’s algorithm proved that assumption wrong.

For 30 years, Shor’s algorithm has been a security threat in theory only. Physicists initially estimated that they would need a colossal quantum machine with billions of qubits — the elements used in quantum calculations — to run it. That estimate has come down drastically over the years, falling recently to a million qubits. But it has still always sat comfortably beyond the modest capabilities of existing quantum computers, which typically have just hundreds of qubits.

However, two different groups of researchers have just announced advances that notably reduce the gap between theoretical estimates and real machines. A star-studded team of quantum physicists at the California Institute of Technology went public with a design for a quantum computer that could break encryption with only tens of thousands of qubits and said that it had formed a company to build the machine. And researchers at Google announced that they had developed an implementation of Shor’s algorithm that is ten times as efficient as the best previous method.

Neither company has the hardware to break encryption today. But the results underscore what some quantum physicists had already come to suspect: that powerful quantum computers may be years away, rather than decades. “If you care about privacy or you have secrets, then you better start looking for alternatives,” said Nikolas Breuckmann, a mathematical physicist at the University of Bristol, who did not work on either of the papers.

While the new results may provide a jolt for the policymakers and corporations that guard our digital infrastructure, they also signal the rapid progress that physicists have made toward building machines that will let them more thoroughly explore the quantum world.

“We’re going to actually do this,” said Dolev Bluvstein, a Caltech physicist and CEO of the new company, Oratomic…

[Wood unpacks the history of the development of the technology and explores the challenges that remain; he concludes…]

… If any group succeeds at building a quantum computer that can realize Shor’s algorithm, it will mark the end of an era — specifically, the “Noisy Intermediate Scale Quantum” era, as Preskill dubbed the pre-error-correction period in a 2018 paper. Each researcher has a vision for what to pursue first with a machine in the new “fault-tolerant” era.

[Robert] Huang said he would start by running Shor’s algorithm, just to prove that the device works. After that, he said he would try to use it to speed up machine learning — an application to be detailed in coming work.

Most of the architects building quantum computers, whether at Oratomic or other startups, are physicists at heart. They’re interested in physics, not cryptography. Specifically, they’re interested in all the things a computer fluent in the language of quantum mechanics could teach them about the quantum realm, such as what sort of materials might become superconductors even at warm temperatures. Preskill, for his part, would like to simulate the quantum nature of space-time.

The Caltech group knows it has years of work ahead before any of its dreams have a chance of coming true. But the researchers can’t wait to get started. “Pick a cooler life quest than building the world’s first quantum computer with your friends!” said a jubilant Bluvstein, reached by phone shortly before their paper went live, before rushing off to celebrate…

Eminently worth reading in full: “New Advances Bring the Era of Quantum Computers Closer Than Ever,” from @walkingthedot.bsky.social in @quantamagazine.bsky.social.

* David Deutsch, The Fabric of Reality

###

As we prepare, we might take a moment to appreciate just how vast and deep the legacy systems challenged by quantum computing run, recalling that on this date in 1959 Mary Hawes, a computer scientist for the Burroughs Corporation, convened a meeting of computer users, manufacturers, and academics at the University of Pennsylvania aimed at creating a common business-oriented programming language. At the meeting, Grace Hopper suggested that they ask the Department of Defense to fund the effort. Also attending was Charles Phillips, director of the Data System Research Staff at the DoD, who was excited by the possibility of a common language streamlining the department’s operations and agreed to sponsor the creation of such a language. This was the genesis of what would eventually become COBOL.

To this day COBOL is still the most common programming language used in business, finance, and administrative systems for companies and governments, primarily on mainframe systems, with around 200 billion lines of code still in production use… all of it protected by classical cryptography that a world of quantum computing would put in question and at risk.

source

“The arts are not a way to make a living. They are a very human way of making life more bearable.”*…

Claude Monet, Caricature of Léon Manchon, 1858.

… Still, there are bills to be paid. Mathilde Montpetit (and here) on how the young Claude Monet made bank…

At the age of fifteen, Claude Monet was, by his own account, one of the most successful artists in Le Havre. Crowds would gather in the Norman port city to gawk at the pictures he sold through a framing shop: not paintings of haystacks or of the sea or water lilies, but slightly cruel caricatures of local bigwigs and minor celebrities. He had already learned to commercialize, charging his customers 20 francs (around 200€ in today’s money). “If I had continued”, he claimed to an interviewer in Le Temps almost fifty years later, “I would have been a millionaire.”

Spurred by profits, the young Monet was productive, creating up to seven or eight of these caricatures a day; a small collection of them is now held at the Art Institute of Chicago, most donated by the former mayor Carter Harrison IV (1860–1953). The French art historian Rodolphe Walter has claimed that his caricatures constituted a “clandestine apprenticeship”, the first attempts by a son of Le Havre’s bourgeois shipbuilders to make his way in the art world.

The earliest are anonymous: the identities of The Man in the Small Hat or The Man with the Big Cigar are now lost, although the framing shop devotees may well have been able to name them. Some of the works are imitations, like the 1859 drawing of the French journalist August Vacquerie (1819–1895) that Monet seems to have copied from Nadar (1820–1910), probably the period’s most famous caricaturist.

Monet’s own 1858 caricature of Léon Manchon, the treasurer of Le Havre’s Société des amis des arts, captures his subject’s appearance but also, in the background, both his love of the arts and his work as a notary. Most fantastical is the 1858 caricature of Jules Didier (1831–1914), which shows the 1857 winner of the Prix de Rome as a “Butterfly Man” being led on a leash by a dog. Monet scholars remain divided as to the symbolic meaning of the iconography, though more obviously derisive is the drawing of a dejected fellow applicant to an 1858 Le Havre art subsidy, Henri Cassinelli. Monet has captioned it “Rufus Croutinelli”: a slightly forced pun on “croute”, meaning a daub of paint. Monet didn’t receive the subsidy either.

Sixty-year-old Monet’s claims about how he could have made his young fortune probably had more to do with his later difficulties in selling Impressionism than the actual fortunes to be made in portraits-charge, but it was the roughly 2,000 francs (20,000€) from selling these caricatures that allowed him to, against his father’s wishes, move to Paris and begin training as an artist. (He also received a pension from his wealthy aunt Marie-Jeanne Lecadre, with whom he had been living since his mother’s death in 1857.)

Perhaps it helped him in other ways as well. In the Le Temps interview, Monet claimed that it was while admiring his admirers at the framing shop window that he first encountered the work of his mentor Eugène Boudin (1824–1898), whose paintings were also hung there. Boudin would later take him en plein air for the first time. Perhaps, too, there’s something in the quickness of the caricature that speaks to what Impressionism would become — a desire to capture not just the literal appearance of a thing, but its true essence…

“Doing Impressions: Monet’s Early Caricatures (ca. late 1850s),” from @mathildegm.bsky.social in @publicdomainrev.bsky.social.

Re: the other end of Monet’s career, readers in (or visiting) the Bay Area might appreciate “Monet and Venice,” over a hundred works– mostly the fruits of Monet’s only visit to the City of Canals, but spiced with Venetian views from artists including Renoir, Sargent, and Canaletto– on display at the de Young Museum in San Francisco through July 26.

* Kurt Vonnegut

###

As we cherish cartoons, we might send pointedly-insightful birthday greetings to Peter Fluck; he was born on this date in 1941. An artist, caricaturist, and puppeteer, he was half of the partnership known as Luck and Flaw (with Roger Law), creators of the epochal British satirical TV puppet show Spitting Image.

The show ran from 1984 through 1996. (It was revived, with a different crew, in 2020.) Here’s a BBC appreciation of the original…

Written by (Roughly) Daily

April 7, 2026 at 1:00 am

“It is not the fact of liberty but the way in which liberty is exercised that ultimately determines whether liberty itself survives”*…

As the U.S. curdles and Ukraine twists in the wind, a look back.

In the summer of 1941, World War II had been raging for almost two years; still, of course, the U.S.– while it had emerged as the “armory” of the Allies– was a non-combatant. A majority of Americans favored continuing “to help Britain, even at the risk of getting into the war.” But stoked by isolationists and Nazi sympathizers (like Henry Ford and Father Coughlin), a third of Americans were opposed.

Into this gamy situation, Dorothy Thompson, the first American journalist to be expelled from Nazi Germany, back in 1934, released a powerful– and ultimately very influential– essay in Harper’s…

It is an interesting and somewhat macabre parlor game to play at a large gathering of one’s acquaintances: to speculate who in a showdown would go Nazi. By now, I think I know. I have gone through the experience many times—in Germany, in Austria, and in France. I have come to know the types: the born Nazis, the Nazis whom democracy itself has created, the certain-to-be fellow-travelers. And I also know those who never, under any conceivable circumstances, would become Nazis.

It is preposterous to think that they are divided by any racial characteristics. Germans may be more susceptible to Nazism than most people, but I doubt it. Jews are barred out, but it is an arbitrary ruling. I know lots of Jews who are born Nazis and many others who would heil Hitler tomorrow morning if given a chance. There are Jews who have repudiated their own ancestors in order to become “Honorary Aryans and Nazis”; there are full-blooded Jews who have enthusiastically entered Hitler’s secret service. Nazism has nothing to do with race and nationality. It appeals to a certain type of mind.

It is also, to an immense extent, the disease of a generation—the generation which was either young or unborn at the end of the last war. This is as true of Englishmen, Frenchmen, and Americans as of Germans. It is the disease of the so-called “lost generation.”

Sometimes I think there are direct biological factors at work—a type of education, feeding, and physical training which has produced a new kind of human being with an imbalance in his nature. He has been fed vitamins and filled with energies that are beyond the capacity of his intellect to discipline. He has been treated to forms of education which have released him from inhibitions. His body is vigorous. His mind is childish. His soul has been almost completely neglected.

At any rate, let us look round the room…

[And so, in a way both enlightening and entertaining, she does, concluding…]

It’s fun—a macabre sort of fun—this parlor game of “Who Goes Nazi?” And it simplifies things—asking the question in regard to specific personalities.

Kind, good, happy, gentlemanly, secure people never go Nazi. They may be the gentle philosopher whose name is in the Blue Book, or Bill from City College to whom democracy gave a chance to design airplanes—you’ll never make Nazis out of them. But the frustrated and humiliated intellectual, the rich and scared speculator, the spoiled son, the labor tyrant, the fellow who has achieved success by smelling out the wind of success—they would all go Nazi in a crisis.

Believe me, nice people don’t go Nazi. Their race, color, creed, or social condition is not the criterion. It is something in them.

Those who haven’t anything in them to tell them what they like and what they don’t—whether it is breeding, or happiness, or wisdom, or a code, however old-fashioned or however modern, go Nazi. It’s an amusing game. Try it at the next big party you go to.

Eminently worth reading in full: “Who Goes Nazi?” from @harpers.bsky.social.

(And in a very effective testament to Thompson’s technique, Rusty Foster– who anchored a recent (R)D– asks “Who Goes AI?”)

See also: “The MAGA Theory of Art,” from Art in America, which reviews the roles that arts and design played in Nazi Germany, then compares them to what’s transpiring today. Also eminently worth reading in full; a sample:

There is a fable that persists in even the most respectable quarters, perhaps because it has retained its power to shock for more than half a century. Get any card-carrying liberal into a sufficiently confessional mood and she will tell you, sotto voce, that there was one domain in which the Nazis were perversely and chillingly formidable: the domain of the aesthetic…

… It is tempting, then, to take one look at the shambolic flailing of the Trump administration—the ham-handed takeover of the Kennedy Center, the tawdry gilding of the Oval Office, the AI slop, the women with too much filler, the men on too many steroids who boast about eating too much meat, the tweets with their erratic capitalization, the general air of carnival grotesquerie—and conclude, as Karl Marx did, that history repeats itself “first as tragedy, then as farce.” 

Of course, there are obvious continuities between MAGA and its antecedent on the Rhine. “Fascism is theater,” Jean Genet wrote of the Nazis, and it is hard to think of a politician with more theatrical flair than Trump, who adores Andrew Lloyd Webber and once harbored ambitions of becoming a Broadway producer. If Hitler fostered “the modern era’s first full-blown media culture,” as the film scholar Eric Rentschler claims, then Trump is surely responsible for the postmodern era’s first full-blown social media bonanza. He has the Führer’s instinct for pageantry, the Führer’s gift for glister and grandiosity.

Trump’s resentments, too, recall those of his forbears. In his study of Nazi art policy, the historian Jonathan Petropoulos writes that art collecting was important to top brass in the party because it served “as a means of assimilation into the traditional elite.” Much to their chagrin, their political ascendency had failed to confer the cultural capital they craved; now they had to seize prestige by other means. The MAGA gentry is more resigned; Trump and his lackeys more or less accept their status as philistines and content themselves with exacting revenge on the gatekeepers, yet their air of wounded arrivism is all too familiar.

Here it may seem that the similarities come to an end… While Trump has hosted motley rallies, and even made one deflating attempt at a military parade, he has yet to produce any of the disciplined displays that so effectively reduced the bodies of their participants to raw geometries. 

Above all, MAGA lacks the aesthetes who are dutifully trotted out as evidence of fascism’s scandalous refinement. Who is the MAGA Hugo Boss, the MAGA Leni Riefenstahl, the MAGA Knut Hamsun, the MAGA Gabriele D’Annunzio, the MAGA Ezra Pound? Mar-a-Lago has more in common with any suburban Cheesecake Factory than it does with the monumental austerities of Albert Speer… 

(Image above: source)

* Dorothy Thompson

###

As we cast our eyes around, we might recall that it was on this date in 1917 that the U.S. entered World War I, formally declaring war on Germany and joining the conflict in Europe, which had been raging since the summer of 1914. It ended in November of 1918– one of the deadliest conflicts in history, resulting in an estimated 15 to 22 million military and civilian deaths and in genocide (and, via the movement of large numbers of people, serving as a major factor in the catastrophic Spanish flu pandemic that followed).

The Paris Peace Conference of 1919–1920 imposed settlements on the defeated powers. Under the Treaty of Versailles, Germany lost significant territories, was disarmed, and was required to pay large war reparations to the Allies. The dissolution of the Russian, German, Austro-Hungarian, and Ottoman empires led to new national boundaries and the creation of new independent states including Poland, Finland, the Baltic states, Czechoslovakia, and Yugoslavia.

The League of Nations was established to maintain world peace, but failed to manage instability during the interwar period, contributing to the outbreak of World War II in 1939. Indeed, those unresolved tensions in the aftermath of World War I created the conditions for the rise of fascism in Europe (and militarism in Japan).

President Woodrow Wilson asking Congress to declare war on Germany on April 2, 1917… it took four days. (source)