(Roughly) Daily

“You get what you measure”*…

CNBC article headline about the S&P 500 closing at a record high, highlighting a rally among tech giants, with live updates from journalists.

Matt Stoller takes the occasion of Trump’s selection of Kevin Warsh to head the Fed (“an orthodox Wall Street GOP pick, though he is married to the billionaire heiress of the Estee Lauder fortune and was named in the Epstein files. He’s perceived not as a Trump loyalist but as an avatar of capital”) to ponder why public satisfaction with the economy is so low (“if you judge solely by consumer sentiment, Trump’s first term was the third best economy Americans experienced since 1960. Trump’s second term is not only worse than his first, it is the worst economic management ever recorded by this indicator”).

Stoller argues that we’re measuring the wrong things (or, in some cases, the right things in the wrong ways)…

… the models underpinning how policymakers think about the economy just don’t reflect the realities of modern commerce. The fundamental dynamic is that those models were constructed in an era where America was one discrete economy, with Wall Street and the public tied together by the housing finance system. But today, Americans increasingly live in tiered bubbles that have less and less to do with one another. Warsh will essentially be looking at the wrong indicators, pushing buttons that are mislabeled.

While corporate America is experiencing good times, much of the country is experiencing recessionary conditions. Let’s contrast consumer sentiment indicators with statistics showing an economic boom. Last week, the government came out with stats on real gross domestic product increasing at a scorching 4.4% in the third quarter of last year. There’s higher consumer spending, corporate investment, government spending, and a better trade balance. Inflation, according to the Consumer Price Index, is low at 2.6% over the past year. And while official numbers aren’t out for the final three months of the year, the Atlanta Fed’s GDPNow model estimates growth at 4.2%. And there are other indicators showing prosperity, from low unemployment to high business formation, which was up about 8% last year, as well as record corporate profits…

… Behavioral economists and psychologists have all sorts of reasons to explain that people don’t really understand the economy particularly well. But in general, when the stats and the public mood conflict, I believe the public is usually correct. Often, there are some weird anomalies with the data used by policymakers. In 2023, I noticed that the consumer price index, the typical measure of inflation, didn’t account for borrowing costs, so the Fed hike cycle, which caused increases in credit card, mortgage, auto loan, payday loans, et al, just wasn’t incorporated. The public wasn’t mad at phantom inflation, they were mad at real inflation that the “experts” didn’t see.

I don’t think that’s the only miscalculation…

[Stoller goes on to explain the ways in which “consumer spending” doesn’t tell us much about consumers anymore, about the painful reality of “spending inequality,” and about the obscure(d) problem of monopoly-driven inflation. He concludes…]

… Finally, there’s a more philosophical point, which I don’t think explains the short-term frustrations people feel, but is directionally correct. Do people actually want what the economy is producing? For most of the 20th century, the answer was yes. When Simon Kuznets invented these measurement statistics in 1934, financial value and the value that Americans placed on products and services were similar. A bigger economy meant things like toilets and electricity spreading across rural America, and cars and food and washing machines.

Today? Well, that’s less clear. According to the Bureau of Labor Statistics, the second-fastest-growing sector of the economy in terms of GDP growth from 2019-2024 was gambling. Philip Pilkington wrote a good essay last summer on the moral assumptions behind our growth statistics. There is no agreed-upon notion of what makes up an economically valuable object or activity, so our stats are inherently subtle moral judgments. Classic moral philosophers like Adam Smith believed in the “use value” of an item, meaning how it could be used, whereas neoclassical economists believed in the “exchange value” of an item, making no judgments about use and simply counting up its market price.

Normal people subscribe on a moral level to use value. Most of us see someone spending money on a gambling addiction as doing something worse than providing Christmas presents for kids, but not because of price. However, our GDP models use the market value basis. Kuznets, presumably, was not amoral, he just thought that our laws would ban immoral activities like gambling, and so use value and market value wouldn’t diverge. But they have.

It’s not just things like gambling or pornography or speculation. A lot of previously unmeasured activity has been turned into data and monetized, which isn’t actually increasing real growth but measuring what already existed. Take the change from meeting someone at a party to using a dating app. One is part of GDP, the other isn’t. Both are real, but only one would show a bigger economy.

Beyond that, much of our economy is now based on intangibles – the fastest-growing sector was software publishing. Is Microsoft moving to a subscription fee model for Office truly some sort of groundbreaking new product? It’s hard to say. While corporate assets used to be hard things like factories, today much of what companies hold is intangible, like intellectual property.

A boomcession, where the rich and corporate America experience a boom while working people feel a recession, is a very unhealthy dynamic. It’s certainly possible to create metrics to measure it, and to help policymakers understand real income growth among different subgroups. You could start looking at real income after non-discretionary consumer spending, or find ways of adjusting for price discrimination.
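[One illustrative way to formalize the first metric Stoller suggests – the notation here is ours, not his: for each income group $g$, track

$$\text{discretionary real income}_g = \frac{Y_g - N_g}{P_g}$$

where $Y_g$ is the group’s nominal income, $N_g$ its non-discretionary spending (housing, food, healthcare, debt service), and $P_g$ a price index built from that group’s own consumption basket rather than a single economy-wide one. A “boomcession” would show up as this quantity rising at the top of the distribution while falling at the bottom, even as aggregate GDP grows.]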

But I think a better approach is to try to knit us into one society again. The kinds of policymakers who could try to create metrics to understand the different experiences of classes, and ameliorate them, don’t have power. Instead, the people in charge still use models which presume one economy and one relatively uniform set of prices, where “consumer spending” means stuff consumers want.

I once noted a speech in 2016 by then-Fed Chair Janet Yellen in which she expressed surprise that powerful rich firms and small weak ones had different borrowing rates, which affected the “monetary transmission channel” the Fed relied on. Sure, it was obvious in the real world, but she preferred theory.

Or they don’t use models at all; Kevin Warsh is not an economist, he’s a lawyer and political operative, and is uninterested in academic theory. He cares about corporate profits and capital formation. That probably won’t work out well either.

At any rate, we have to start measuring what matters again. If we don’t, then we’ll continue to be baffled that normal people hate the economy that looks fine on our charts…

The models used by policymakers to understand wages, economic growth, and consumer spending are misleading. That’s why corporate America is having a party, and everyone else is mad. Eminently worth reading in full: “The Boomcession: Why Americans Hate What Looks Like an Economic Boom,” from @matthewstoller.bsky.social (or @mattstoller.skystack.xyz).

* Richard Hamming (and, apropos also of the article above, see “Goodhart’s law”)

###

As we ponder the pecuniary, we might recall that it was on this date in 1958 that the Benelux Economic Union was founded, creating the seed from which first the European Economic Community, then the European Union, grew.

On that same day, Philadelphia doo-wop group The Silhouettes began a five-week run atop the Billboard R&B chart with their first single, “Get A Job.”

“I call our world Flatland, not because we call it so, but to make its nature clearer to you, my happy readers, who are privileged to live in Space.”*…

A close-up view of a baseball with red stitching against a black background.

Physicists believe a third class of particles – anyons – could exist, but only in 2D. As Elay Shech asks, what kind of existence is that?…

Everything around you – from tables and trees to distant stars and the great diversity of animal and plant life – is built from a small set of elementary particles. According to established scientific theories, these particles fall into two basic and deeply distinct categories: bosons and fermions.

Bosons are sociable. They happily pile into the same quantum state, that is, the same combination of quantum properties such as energy level, like photons do when they form a laser. Fermions, by contrast, are the introverts of the particle world. They flat out refuse to share a quantum state with one another. This reclusive behaviour is what forces electrons to arrange themselves in layered atomic shells, ultimately giving rise to the structure of the periodic table and the rich chemistry it enables.

At least, that’s what we assumed. In recent years, evidence has been accumulating for a third class of particles called ‘anyons’. Their name, coined by the Nobel laureate Frank Wilczek, gestures playfully at their refusal to fit into the standard binary of bosons and fermions – for anyons, anything goes. If confirmed, anyons wouldn’t just add a new member to the particle zoo. They would constitute an entirely novel category – a new genus – that rewrites the rules for how particles move, interact, and combine. And those strange rules might one day engender new technologies.

Although none of the elementary particles that physicists have detected are anyons, it is possible to engineer environments that give rise to them and potentially harness their power. We now think that some anyons wind around one another, weaving paths that store information in a way that’s unusually hard to disturb. That makes them promising candidates for building quantum computers – machines that could revolutionise fields like drug discovery, materials science, and cryptography. Unlike today’s quantum systems that are easily disturbed, anyon-based designs may offer built-in protection and show real promise as building blocks for tomorrow’s computers.

Philosophically, however, there’s a wrinkle in the story. The theoretical foundations make it clear that anyons are possible only in two dimensions, yet we inhabit a three-dimensional world. That makes them seem, in a sense, like fictions. When scientists seek to explore the behaviours of complicated systems, they use what philosophers call ‘idealisations’, which can reveal underlying patterns by stripping away messy real-world details. But these idealisations may also mislead. If a scientific prediction depends entirely on simplification – if it vanishes the moment we take the idealisation away – that’s a warning sign that something has gone wrong in our analysis.

So, if anyons are possible only through two-dimensional idealisations, what kind of reality do they actually possess? Are they fundamental constituents of nature, emergent patterns, or something in between? Answering these questions means venturing into the quantum world, beyond the familiar classes of particles, climbing among the loops and holes of topology, detouring into the strange physics of two-dimensional flatland – and embracing the idea that apparently idealised fictions can reveal deeper truths…

[Shech explains anyons, and considers the various strategies for making sense of them: that “paraparticles” like anyons don’t actually exist; that we simply lack the theoretical framework and experimental work needed to find them; or that, in the physics of ultra-thin materials, we’ve already found them. Considering the latter two possibilities, he concludes…]

So, if anyons exist, what kind of existence is it? None of the elementary particles are anyons. Instead, physicists appeal to the notion of ‘quasiparticles’, in which large numbers of electrons or atoms interact in complex ways and behave, collectively, like a simpler object – one with novel behaviours that you can track.

Picture fans doing ‘the wave’ in a stadium. The wave travels around the arena as if it’s a single thing, even though it’s really just people standing and sitting in sequence. In a solid, the coordinated motion of many particles can act the same way – forming a ripple or disturbance that moves as if it were its own particle. Sometimes, the disturbance centres on an individual particle, like an electron trying to move through a material. As it bumps into nearby atoms and other electrons, they push back, creating a kind of ‘cloud’ around it. The electron plus its cloud behave like a single, heavier, slower particle with new properties. That whole package is also treated as a quasiparticle.

Some quasiparticles behave like bosons or fermions. But for others, when two of them trade places, the system’s quantum state picks up a built-in marker that isn’t limited to the two familiar settings. It can take on intermediate values, which means novel quantum statistics. If the theories describing these systems are right, then the quasiparticles in question aren’t just behaving oddly, they are anyons: the third type of particles.
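[In symbols – an editorial gloss, not Shech’s: exchanging two identical particles multiplies the system’s wavefunction by a phase,

$$\psi \;\to\; e^{i\theta}\,\psi$$

with $\theta = 0$ (a factor of $+1$) for bosons and $\theta = \pi$ (a factor of $-1$) for fermions. In two dimensions, the theory permits any intermediate value of $\theta$ – hence “anyons.”]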

In other words, while none of the elementary particles that physicists have detected are anyons – physicists have never ‘seen’ an anyon in isolation – we can engineer environments that give rise to emergent quasiparticles portraying the quantum statistics of anyons. In this sense, anyons have been experimentally confirmed. But there are different kinds of anyons, and there is still active work being done on the more exotic anyons that we hope to harness for quantum computers.

But even so, are quasiparticles, like anyons, really real? That depends. Some philosophers argue that existence depends on scale. Zoom in close enough, and it makes little sense to talk about tables or trees – those objects show up only at the human scale. In the same way, some particles exist only in certain settings. Anyons don’t appear in the most fundamental theories, but they show up in thin, flat systems where they are the stable patterns that help explain real, measurable effects. From this point of view, they’re as real as anything else we use to explain the world.

Others take a more radical stance. They argue that quasiparticles, fields and even elementary particles aren’t truly real: they’re just useful labels. What really exists is not stuff but structure: relations and patterns. So ‘anyons’ are one way we track the relevant structure when a system is effectively two-dimensional.

Questions about reality take us deep into philosophy, but they also open the door to a broader enquiry: what does the story of anyons reveal about the role of idealisations and fictions in science? Why bother playing in flatland at all?

Often, idealisations are seen as nothing more than shortcuts. They strip away details to make the mathematics manageable, or serve as teaching tools to highlight the essentials, but they aren’t thought to play a substantive role in science. On this view, they’re conveniences, not engines of discovery.

But the story of anyons shows that idealisations can do far more. They open up new possibilities, sharpen our understanding of theory, clarify what a phenomenon is supposed to be in the first place, and sometimes even point the way to new science and engineering.

The first payoff is possibility: idealisation lets us explore a theory’s ‘what ifs’, the range of behaviours it allows even if the world doesn’t exactly realise them. When we move to two dimensions, quantum mechanics suddenly permits a new kind of particle choreography. Not just a simple swap, but wind-and-weave novel rules for how particles can combine and interact. Thinking in this strictly two-dimensional setting is not a parlour trick. It’s a way to see what the theory itself makes possible.

That same detour through flatland also assists us in understanding the theory better. Idealised cases turn up the contrast knobs. In three dimensions, particle exchanges blur into just two familiar options of bosons and fermions. In two dimensions, the picture sharpens. By simplifying the world, the idealisation makes the theory’s structure visible to the naked eye.

Idealisation also helps us pin down what a phenomenon really is. It separates difference-makers from distractions. In the anyon case, the flat setting reveals what would count as a genuine signature, say, a lasting memory of the winding of particles, and what would be a mere lookalike that ordinary bosons or fermions could mimic. It also highlights contrasts with other theoretical possibilities: paraparticles, for example, don’t depend on a two-dimensional world, but anyons seem to. That contrast helps identify what belongs to the essence of anyons and what does not. When we return to real materials, we know what to look for and what to ignore.

Finally, idealisations don’t just help us read a theory – they help write the next one. If experiments keep turning up signatures that seem to exist only in flatland, then what began as an idealisation becomes a compass for discovery. A future theory must build that behaviour into its structure as a genuine, non-idealised possibility. Sometimes, that means showing how real materials effectively enforce the ideal constraint, such as true two-dimensionality. Other times, it means uncovering a new mechanism that reproduces the same exchange behaviour without the fragile assumptions of perfect flatness. In both cases, idealisation serves as a guide for theory-building. It tells us which features must survive, which can bend, and where to look for the next, more general theory.

So, when we venture into flatland to study anyons, we’re not just simplifying – we’re exploring the boundaries where mathematics, matter and reality meet. The journey from fiction to fact may be strange, but it’s also how science moves forward…

Eminently worth reading in full: “Playing in flatland,” from @elayshech.bsky.social in @aeon.co.

Pair with: “Is Particle Physics Dead, Dying, or Just Hard?”

* Edwin A. Abbott, Flatland: A Romance of Many Dimensions

###

As we brood over the boundaries of “being” (and knowing), we might spare a thought for Bertrand Russell; he died on this date in 1970. A philosopher, logician, mathematician, and public intellectual, he influenced mathematics, logic, and several areas of analytic philosophy.

He was one of the early 20th century’s prominent logicians and a founder of analytic philosophy, along with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Ludwig Wittgenstein. Russell, with Moore, led the British “revolt against idealism“. Together with his former teacher Alfred North Whitehead, Russell wrote Principia Mathematica, a milestone in the development of classical logic and a major attempt [if ultimately unsuccessful, pace Gödel] to reduce the whole of mathematics to logic. Russell’s article “On Denoting” is considered a “paradigm of philosophy.”

A black and white portrait of a distinguished man in a suit, holding a pipe and sitting in a chair, with a serious expression on his face.

source

Written by (Roughly) Daily

February 2, 2026 at 1:00 am

“The gambling known as business looks with austere disfavor upon the business known as gambling”*…

A smartphone displaying an online casino game with a slot machine interface, set on a green casino table with poker chips and playing cards featuring aces.

The quote above, from Ambrose Bierce, was true enough until relatively recently. Business has embraced gaming. Before the Supreme Court struck down the federal ban on sports betting in 2018, Americans legally wagered less than $5 billion on sports annually. Last year, they bet $150 billion, most of it online (with the active involvement of leagues and the broadcasters who serve up their games). And now prediction markets are on the scene, widening the aperture for online casino-like wagering to include politics, the Golden Globe awards, the return of Jesus Christ, and virtually anything else… which could be a problem.

Indeed, just this past week, Common Sense Media released a report on gambling by young boys that reveals (among other deeply concerning things) that 1 in 3 American boys ages 11-17 are gambling before they can vote. (Full report here.)

Gambling addiction has been an issue in the U.S. for decades. But with the onslaught of new ways to wager, the problem is surging. And as Benjamin Errett observes in an amusing piece on “McGuffins“ (objects, devices, or events necessary to a plot and to the motivation of its characters, but insignificant, unimportant, or irrelevant in themselves), it’s a particularly problematic problem…

There’s a compelling argument to be made that money is the true MacGuffin. George Ainslie [here], a psychiatrist and behavioural economist, makes that case in a very readable paper on addiction and regrettable choices. He gets right to the weird thing about gambling as a compulsive behaviour: Spending money for a chance of getting more money (with the likelihood of losing it) is illogically direct. (I too got stuck on this paradox in The Wit’s Guide to Gambling, and some part of my brain is still spinning on the roulette table.) If you simply must have cocaine or hot fudge sundaes or hot cocaine fudge sundaes, the immediate pleasure and later pain are in different modalities. And so Ainslie concludes that money is a MacGuffin because it’s “the object of a hedonic game that is justified by its instrumental believability but which is actually shaped by its production of satisfaction in its own right.” Ergo, capitalism is a Hitchcock movie….

source

Ainslie’s essay, prepared for a conference on addiction, is eminently worth reading and pondering.

* Ambrose Bierce

###

As we turn our backs on baccarat, we might recall that it was on this date in 1960 that “Money (That’s What I Want)” by Barrett Strong entered the Billboard Hot 100. Written by Berry Gordy and Janie Bradford, the single was the first hit record by Gordy’s Motown Records (released on Motown’s Tamla label). The song peaked at #23 in April and was the only song recorded by Strong that reached the Hot 100, though Strong went on to write many of Motown’s biggest hits. It was, of course, covered by The Beatles, among many others.

Close-up of a vintage vinyl record label for 'Money (That's What I Want)' by Barrett Strong on the Tamla label.

source

And we might note that today is the first day of a “perfectly square” month…

“Give me a place to stand, and I will move the earth”*…

A colorful illustration depicting a statue labeled 'FAME' amidst a pile of money bags, with a man and child inspecting the bags. In the background, prominent buildings labeled 'LIBRARY' and 'UNIVERSITY' are visible, alongside a scroll displaying a 'Plan of Free Home for Consumptives'. Various characters are interacting in the foreground.

It’s all about leverage… perhaps nowhere more painfully than in the philanthropic sector: so many problems; so little bandwidth!

Dick Tofel (a media advisor who was founding general manager and first employee of ProPublica, and its president from 2013 until 2021) weighs in with a “modest proposal.” It’s largely aimed at his field (public media, writ large), an altogether worthy focus; but the general principle is surely much more broadly applicable…

I read a fascinating history over the recent holidays and it made me wonder about whether we ought to be fundamentally rethinking institutional philanthropy in this challenging moment. Because that philanthropy provides critical support to so much of nonprofit journalism, I think the question is worth exploring here this week.

The book is The Radical Fund: How a Band of Visionaries and a Million Dollars Upended America [here] by John Fabian Witt [here], a professor at Yale Law School. It charts the history of the American Fund for Public Service, a progressive foundation (to use our contemporary lingo) that operated in the 1920s and ‘30s, and produced some remarkable results with fairly limited resources (roughly $36 million over its entire run in current dollars).

The American Fund was rocked by conflicts between what we would now call progressives and literal Communists, and it made a few foolish grants, including some funding for Stalin-era Soviet agriculture, but it also accomplished an astonishing number of big things. It provided critical support for the NAACP, from its early anti-lynching campaign to launching the litigation program that culminated in Brown v. Board of Education, and including the earlier first moves toward salary equalization for public school teachers and desegregation of public graduate schools in the South; funded lifelines for Sidney Hillman’s industrial unionization drive that eventually produced the CIO, and for A. Philip Randolph’s pathbreaking Black union, the Brotherhood of Sleeping Car Porters; and supported the defenses of Sacco and Vanzetti, the Scopes “monkey trial” and the Scottsboro Boys.

In all, as Witt concludes, “People and movements touched by the American Fund did more for twentieth-century American liberalism than all the money of the era’s much larger and more famous foundations.”

Here’s what got me to thinking: Over well more than a decade, the American Fund spent only $67,000 (about $1.25 million today), or 3.5% of its total spending, on its own operations—the rest went to gifts and grants. This was possible because the Fund hired essentially no staff, with its work being done by its many impressive directors, including Roger Baldwin, founder of the ACLU, James Weldon Johnson, leader of the NAACP, Norman Thomas, the perennial Socialist Party presidential candidate (he got almost 900,000 votes in 1932), Freda Kirchwey of The Nation and attorney Morris Ernst. Among the giants they consulted were W.E.B. du Bois, Felix Frankfurter and Reinhold Niebuhr.

And here’s what it made me wonder: Especially in this moment of overwhelming needs across the social sector, as the federal government withdraws from so many crucial activities it had undertaken and supported for a half century, should institutional foundations recast themselves in the model of the American Fund, dispensing with their large staffs and instead restocking their boards with leaders who could directly disburse their largess?

Before you object that that’s simply impractical, you need to reckon with the fact that this is actually the operating model of most of what we call “major donors,” wealthy individuals, occasionally with family foundations, some of them making very large grants. Mackenzie Scott is the overwhelmingly largest funder of this sort, but in our own field such funders have included those who sparked Voice of San Diego, ProPublica, the Texas Tribune, the Marshall Project, CalMatters, Mississippi Today, the Flatwater Free Press, Baltimore Banner, Tulsa Flyer and others. The track record for initiatives spurred by institutional foundation funding is, well, a bit less stellar.

The costs of the current model are also much larger than you may imagine. The Ford Foundation, in 2024 alone, spent more than $212 million on its own operations, while making $840 million in grants and gifts (about 20% of the total). Nor is Ford an outlier in this respect: the MacArthur Foundation spent almost $68 million on itself, while paying out $356 million (16%) and the Knight Foundation incurred $32 million in expenses to grant and gift $148 million (18%).

I’m not complaining about these “overhead” rates as such—they are not at all unreasonable by contemporary foundation standards. (The 2024 rate for the Rockefeller Foundation, where I once worked, was 38%!) But for just these three major news funders, the aggregate cost comes to more than $300 million in one year alone. (Of course, news is just one of many things these giants fund.) That total spent on running three foundations is more than half of the rescinded federal support of public broadcasting. The difference between the American Fund’s 3.5% and the 18% median rate for Ford, MacArthur and Knight would be $250 million available for additional grants each year from these three funders alone.
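[Checking the arithmetic: the three foundations spent $212 million + $68 million + $32 million = $312 million on their own operations against $840 million + $356 million + $148 million = $1,344 million in grants – a combined outlay of roughly $1.66 billion. At the American Fund’s 3.5% rate, operations would run about $58 million, freeing roughly $312 − $58 ≈ $254 million a year, consistent with Tofel’s $250 million figure.]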

I headlined this column a “modest proposal” because I do not expect it to be adopted, nor perhaps to be taken entirely literally. But I do hope it is directionally provocative. As I have said more than once with respect to public broadcasting, revolutionary changes require an extraordinary response. Essentially every objective of the major institutional foundations is under unprecedented pressure. In that setting, doing business in the usual way may no longer make sense. Looking to the American Fund suggests another path might be possible…

Repurposing overhead: “A Modest Proposal for Big Philanthropy in a Tale from the Past,” from @dicktofel.bsky.social.

For a broad history of philanthropy from the 16th century, see here.

[Image above from “Philanthropy on the Defensive,” also worth a read for a conservative take that inches toward some of the same conclusions…]

* Archimedes (brandishing his lever)

###

Lest we even imagine that philanthropy can do it all, we might recall that it was on this date in 1940 that the first Social Security check– for $22.54– was issued to Ida May Fuller.

The Social Security Program had been created in 1935, with qualification for eligibility (covered earnings) beginning in 1937. So Ms. Fuller, a teacher turned legal secretary, had been accumulating credit for three years. She lived to be 100 and collected a total of $22,888.

An elderly woman wearing glasses holds a check in front of a mailbox, smiling softly.

source

Written by (Roughly) Daily

January 31, 2026 at 1:00 am

“The best way to predict the future is to invent it”*…

A vintage futuristic car driving down a tree-lined road with a man and a woman smiling inside.

Dario Amodei, the CEO of AI purveyor Anthropic, has recently published a long (nearly 20,000-word) essay on the risks of artificial intelligence that he fears: Will AI become autonomous (and if so, to what ends)? Will AI be used for destructive purposes (e.g., war or terrorism)? Will AI allow one or a small number of “actors” (corporations or states) to seize power? Will AI cause economic disruption (mass unemployment, radically concentrated wealth, disruption in capital flows)? Will AI’s indirect effects (on our societies and individual lives) be destabilizing? (Perhaps tellingly, he doesn’t explore the prospect of an economic crash on the back of an AI bubble, should one burst– but that might be considered an “indirect effect,” as AI development would likely continue, but in fewer hands [consolidation] and on the heels of destabilizing financial turbulence.)

The essay is worth reading. At the same time, as Matt Levine suggests, we might wonder why pieces like this come not from AI nay-sayers, but from those rushing to build it…

… in fact there seems to be a surprisingly strong positive correlation between noisily worrying about AI and being good at building AI. Probably the three most famous AI worriers in the world are Sam Altman, Dario Amodei, and Elon Musk, who are also the chief executive officers of three of the biggest AI labs; they take time out from their busy schedules of warning about the risks of AI to raise money to build AI faster. And they seem to hire a lot of their best researchers from, you know, worrying-about-AI forums on the internet. You could have different models here too. “Worrying about AI demonstrates the curiosity and epistemic humility and care that make a good AI researcher,” maybe. Or “performatively worrying about AI is actually a perverse form of optimism about the power and imminence of AI, and we want those sorts of optimists.” I don’t know. It’s just a strange little empirical fact about modern workplace culture that I find delightful, though I suppose I’ll regret saying this when the robots enslave us.

Anyway if you run an AI lab and are trying to recruit the best researchers, you might promise them obvious perks like “the smartest colleagues” and “the most access to chips” and “$50 million,” but if you are creative you might promise the less obvious perks like “the most opportunities to raise red flags.” They love that…

– source

In any case, precaution and prudence in the pursuit of AI advances seems wise. But perhaps even more, Tim O’Reilly and Mike Loukides suggest, we’d profit from some disciplined foresight:

The market is betting that AI is an unprecedented technology breakthrough, valuing Sam Altman and Jensen Huang like demigods already astride the world. The slow progress of enterprise AI adoption from pilot to production, however, still suggests at least the possibility of a less earthshaking future. Which is right?

At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present. Every day, news items land, and if you read them with a kind of soft focus, they slowly add up. Trends are vectors with both a magnitude and a direction, and by watching a series of data points light up those vectors, you can see possible futures taking shape…

For AI in 2026 and beyond, we see two fundamentally different scenarios that have been competing for attention. Nearly every debate about AI, whether about jobs, about investment, about regulation, or about the shape of the economy to come, is really an argument about which of these scenarios is correct…

[Tim and Mike explore an “AGI is an economic singularity” scenario (see also here, here, and Amodei’s essay, linked above), then an “AI is a normal technology” future (see also here); they enumerate signs and indicators to track; then consider 10 “what if” questions in order to explore the implications of the scenarios, homing in on “robust” implications for each– answers that are smart whichever way the future breaks. They conclude…]

The future isn’t something that happens to us; it’s something we create. The most robust strategy of all is to stop asking “What will happen?” and start asking “What future do we want to build?”

As Alan Kay once said, “The best way to predict the future is to invent it.” Don’t wait for the AI future to happen to you. Do what you can to shape it. Build the future you want to live in…

Read in full– the essay is filled with deep insight. Taking the long view: “What If? AI in 2026 and Beyond,” from @timoreilly.bsky.social and @mikeloukides.hachyderm.io.ap.brid.gy.

[Image above: source]

* Alan Kay

###

As we pave our own paths, we might send world-changing birthday greetings to a man who personified Alan’s injunction, Doug Engelbart; he was born on this date in 1925.  An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it, and all that computing enables.