(Roughly) Daily

“Without data, you’re just another person with an opinion”*…

Further, in a fashion, to a post ten days ago...

Arguing for the collection of detailed data in the first U.S. census, James Madison argued that comprehensive information was necessary to ensure legislative decisions were based on, in his words, “facts, instead of assertions and conjectures.” Since then, the importance of data has become obvious across a broader array of government activities (e.g., regulatory and judicial) and has proved a crucial service both to research and commerce. Our government, our economy, our education and research, our agriculture, and so much more depend on government-collected data.

But since the early days of his current term, NOTUS reports, President Trump and his appointees have been systematically eliminating much of that data…

Joy Binion worked for the federal government collecting data on emerging substance abuse trends in emergency rooms across the country. Her work was part of the Drug Abuse Warning Network, which President Donald Trump’s first administration funded at the recommendation of his commission on the opioid crisis.

Six months into Trump’s second term, his administration axed the data collection effort entirely, laying off Binion and her division.

“They flat out eliminated DAWN, which was actually surprising to me, because DAWN was kind of the Trump administration’s baby in 2016 as they really looked toward fighting the opioid epidemic,” Binion told NOTUS, adding that healthcare providers no longer have a comprehensive resource to learn about the new drugs that could require emergency medical responses.

Since retaking office, the Trump administration has transformed how the government collects data, cut access to previously public data and stopped collecting some data altogether. This overhaul has left significant holes in data on everything from substance use to maternal mortality.

NOTUS spoke to 18 data experts and researchers who rely on federal data; they said the breadth of information no longer being collected or distributed by the federal government has been nearly impossible to track. Researchers estimate that well over 3,000 data sets have been removed from public access.

The current reality is that the federal government is no longer a reliable source of widespread data collection… [Here] is only a small sample of the data collection the Trump administration has made changes to:

  • The Department of Agriculture terminated a report on household food security in September, claiming it was “redundant, costly, politicized, and extraneous.” Feeding America said it relied on this survey to guide its programs.
  • The Centers for Disease Control and Prevention stopped releasing data on maternal and infant mortality in April 2025 after the administration placed all of the agency staff managing the Pregnancy Risk Assessment Monitoring System on administrative leave. The data collection resumed in at least some states in July 2025, but recent data contains gaps.
  • Trump directed the Justice Department last year to suspend a Biden-era database tracking misconduct by federal law enforcement officers.
  • The administration removed questions on gender identity from the National Crime Victimization Survey, the National Health Interview Survey and other surveys. Homeless shelters, mental health hotlines and substance use recovery programs all used this data for policymaking and planning.
  • The Department of Homeland Security ended public access in October to its public safety and infrastructure dataset, called Homeland Infrastructure Foundation-Level Data.
  • The National Center for Education Statistics missed a mandated deadline to release its annual report on the condition of the American education system, and the materials released were lacking in data compared to previous years.
  • The Health and Human Services Department’s 2024 National Survey on Drug Use and Health omitted information about drug use based on race and ethnicity. HHS laid off the team that collected the data, though the agency is reportedly working with a contractor to resume its collection.
  • The Internal Revenue Service, the Department of Education and the Substance Abuse and Mental Health Services Administration no longer allow researchers to apply to access and study their data.
  • The Bureau of Labor Statistics produces fewer calculations for its producer price index program and has cut down where it collects data from.

Some of these cuts were made without any public fanfare, like the administration’s decision to end DAWN. In other cases, agencies slipped the news into routine announcements. And occasionally, like when the White House mandated that questions about gender identity be removed from federal surveys, the administration touted the deletions as quelling “gender ideology extremism.” [See also here and here.]

Researchers told NOTUS that the federal government’s reasoning for terminating data collection is flawed. And in some cases, the Trump administration has run afoul of congressional mandates to produce data, including by failing to publish required reports on time and removing reports required by law…

More at: “Federal Data Is Disappearing” from @notus.com.

Several not-for-profits (the Internet Archive, libraries, and academic groups) are valiantly trying to preserve data sets that have been removed. But they cannot, of course, preserve data that is never collected…

[Image above: source]

* W. Edwards Deming

###

As we drive with our windshields painted over, we might recall that on this date in 2020 ice fisherman Thomas Knight caught a 40-inch, 37.7-pound lake trout on Big Diamond Pond in West Stewartstown, New Hampshire. It is the largest lake trout on record in New England.

source

Written by (Roughly) Daily

February 25, 2026 at 1:00 am

“The economic system is, in effect, a mere function of social organization”*…

A statue in the likeness of a police officer stands watch over a smart highway in Jinan, China, on April 18, 2024

The AI race is, of course, afoot. But while most headlines focus on the new capabilities and benchmarks achieved by competing developers, Jeremy Shapiro reminds us that the winners in this race won’t necessarily be the most objectively capable, but rather the players who most effectively integrate the technology into their organizations, economies, and societies…

Artificial intelligence has rapidly become a central arena of geopolitical competition. The United States government frames AI as a strategic asset on par with energy or defense and seeks to press its apparent lead in developing the technology. The European Union lags in platform power but seeks influence over AI through regulation, labor protections, and rule-setting. China is racing to catch up and to deploy AI at scale, combining heavy state investment with administrative control and surveillance.

Each of these rivals fears falling behind. Losing the AI race is widely understood to mean slower growth, military disadvantage, technological dependence, and diminished global influence. As a result, governments are pouring money into chips, data centers, and national AI champions, while tightening export controls and treating compute capacity as a strategic resource. But this familiar race narrative obscures a deeper danger. AI is not just another general-purpose technology. It is a force capable of reshaping the very meaning of work, income, and social status. The states that lose control of these social effects may find that technological leadership offers little geopolitical advantage.

History suggests that societies unable to absorb disruptive economic change become politically volatile, strategically erratic, and ultimately weaker competitors. The central question, then, is not only who builds the most powerful AI systems, but who can integrate them into society without triggering a societal backlash or an institutional breakdown.

Karl Polanyi’s The Great Transformation, published in 1944, explains why the capacity to “socially embed” new market forces determines national strength. By “embeddedness,” Polanyi meant that markets have historically been subordinate to social and political institutions, rather than governing them. The nineteenth-century idea of what he called a “self-regulating market” was historically novel precisely because it sought to “disembed” the economy from society and organize social life around price and competition rather than social obligation. As Polanyi put it in his most succinct formulation, “instead of economy being embedded in social relations, social relations are embedded in the economic system.”

Writing in the shadow of the Great Depression, Polanyi argued that the attempt in the nineteenth century to create a self-regulating market society that treated labor, land, and money as commodities generated social dislocation so severe that it provoked authoritarian backlash and geopolitical collapse. Stable orders, he insisted, required markets to be re-embedded in social and political institutions. Where they were not, societies sought protection by other means, which often translated into support for fascist or communist regimes that promised to tame the market. Today, it often means electing populist leaders who promise to break the entire existing order, both domestic and international.

Polanyi insisted that the idea of a “self-adjusting market implied a stark utopia” because such a system could not exist “for any length of time without annihilating the human and natural substance of society.” The interwar gold standard, for example, disciplined states in the name of efficiency, but it did so by transmitting economic shocks directly into social life. When democratic governments proved unable to shield their populations, they either abandoned the liberal economic order or turned authoritarian (or both)…

[Shapiro considers the history of the 20th century, in particular the rise of Nazi Germany; sketches the state of play in the AI arena; considers the challenge of embedding the changes that AI will bring in the U.S., Europe, and China; then teases out the ways in which this “industrial revolution” is different from its predecessors (in particular, the mobility of capital, the services-heavy (as opposed to manufacturing) character of employment today, and the accelerating pace of tech development). He concludes…]

… Geopolitical competition in the AI age will not take place solely in clean rooms or data centers. It will also involve the less visible realm of social institutions: labor markets, communities, social protections, and political legitimacy. Polanyi teaches us that markets are powerful only when societies can bear them. When they cannot, markets provoke their own undoing and often in rather spectacular fashion.

The West’s success in the Cold War owed much to its ability to reconcile capitalism with social protection. If the AI age is another “great transformation,” the same lesson applies. Chips matter. Data matters. But the ultimate source of power may be the capacity to re-embed technological change in society without sacrificing cohesion.

That is not a liberal-progressive distraction from geopolitical competition. It is its hidden core.

“The Next Great Transformation,” from @jyshapiro.bsky.social and @open-society.bsky.social.

For a complementary perspective (with special focus on the interaction between labor and the supply side of the economy) pair with: “Brave New World- a third industrial divide?” from @thunen.bsky.social in @phenomenalworld.bsky.social.

And see also: “AI and the Futures of Work,” from Johannes Kleske (@jkleske.bsky.social). A response to dramatic predictions of AI’s impact– most recently, Matt Shumer‘s viral “Something Big Is Happening“: it’s a possible future, Kleske suggests, but only one possible future– and one that, while plausible, isn’t likely (at least outside the rarefied atmosphere of coding, in which Shumer operates). In a way that echoes Shapiro’s piece above, Kleske suggests that individuals need to better understand the technology in order to retain/regain some agency, and societies need the same kind of rekindled resistance to act clearly and with purpose in re-embedding AI, and markets, in society. Not the other way around… Resonant with the thinking of Tim O’Reilly and Mike Loukides featured here before: “The best way to predict the future is to invent it“; and with Ted Chiang‘s “ChatGPT Is a Blurry JPEG of the Web” and “Will A.I. Become the New McKinsey?” And then there’s the ever-illuminating Rusty Foster (riffing on Gideon Lewis-Kraus‘ recent New Yorker piece): “A.I. Isn’t People.”

For a look at a high-value, trust-based use case for AI that seems to avoid the objections to AGI (and speak to Shapiro’s points), see “The Middle Game: Routers at the Edge,” from Byrne Hobart.

But back to AGI… as Nicholas Carr observes, we might understand Bostrom’s “paperclip maximizer” “not as a thought experiment but as a fable. It’s not really about AIs making paperclips. It’s about people making AIs. Look around. Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence? Are we not trapped in an ‘AI maximizer’ scenario?”

* Karl Polanyi

###

As we digest development, we might recall that it was on this date in 1962 that an early precondition for the revolution underway was first achieved: telephone and television signals were first relayed in space via the communications satellite Echo 1– basically a big metallic balloon that simply bounced radio signals off its surface. Simple, but effective.

Inflating the sphere on the ground would have required forty thousand pounds (18,144 kg) of air, so it was inflated in space, where only several pounds of gas were needed to keep it inflated.

Fun fact: the Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.


source

“It is what you read when you don’t have to that determines what you will be when you can’t help it”*…

… What we read– and, librarian Carlo Iacono argues, how we read.

Our inability to focus isn’t a failing. It’s a design problem, and the answer isn’t getting rid of our screen time…

Everyone is panicking about the death of reading. The statistics look damning: the share of Americans who read for pleasure on an average day has fallen by more than 40 per cent over the past 20 years, according to research published in iScience this year. The OECD calls the 2022 decline in educational outcomes ‘unprecedented’ across developed nations. In the OECD’s latest adult-skills survey, Denmark and Finland were the only participating countries where average literacy proficiency improved over the past decade. Your nephew speaks in TikTok references. Democracy itself apparently hangs by the thread of our collective attention span.

This narrative has a seductive simplicity. Screens are destroying civilisation. Children can no longer think. We are witnessing the twilight of the literate mind. A recent Substack essay by James Marriott proclaimed the arrival of a ‘post-literate society’ and invited us to accept this as a fait accompli. (Marriott does also write for The Times.) The diagnosis is familiar: technology has fundamentally degraded our capacity for sustained thought, and there’s nothing to be done except write elegiac essays from a comfortable distance.

I spend my working life in a university library, watching how people actually engage with information. What I observe doesn’t match this narrative. Not because the problems aren’t real, but because the diagnosis is wrong.

The declinist position rests on a category error: treating ‘screen culture’ as a unified phenomenon with inherent cognitive properties. As if the same device that delivers algorithmically curated rage-bait and also the complete works of Shakespeare is itself the problem rather than how we decide to use it…

[… Observing that “people who ‘can’t focus’ on traditional texts can maintain extraordinary concentration when working across modes,” he argues that “we haven’t become post-literate. We’ve become post-monomodal. Text hasn’t disappeared; it’s been joined by a symphony of other channels.”…]

… What troubles me most about the declinist position is not its diagnosis but its conclusion. The commentators who lament the post-literate society often identify the same villains I do. They recognise that technology companies are, in Marriott’s words, ‘actively working to destroy human enlightenment’, that tech oligarchs ‘have just as much of a stake in the ignorance of the population as the most reactionary feudal autocrat.’

And then they surrender. As Marriott says: ‘Nothing will ever be the same again. Welcome to the post-literate society.’

This is the move I cannot follow. To name the actors responsible and then treat the outcome as inevitable is to provide them cover. If the crisis is a force of nature, ‘screens’ destroying civilisation like some technological weather system, then there’s nothing to be done but write elegiac essays from a comfortable distance. But if the crisis is the product of specific design choices made by specific companies for specific economic reasons, then those choices can be challenged, regulated, reversed.

The fatalism, however beautifully expressed, serves the very interests it condemns. The technology companies would very much like us to believe that what they’re doing to human attention is simply the inevitable result of technological progress rather than something they’re doing to us, something that could, with sufficient political will, be stopped.

Your inability to focus isn’t a moral failing. It’s a design problem. You’re trying to think in environments built to prevent thinking. You’re trying to sustain attention in spaces engineered to shatter it. You’re fighting algorithms explicitly optimised to keep you scrolling, not learning.

The solution isn’t discipline. It’s architecture. Build different defaults. Create different spaces. Establish different rhythms. Make depth as easy as distraction currently is. Make thinking feel as natural as scrolling currently does.

What if, instead of mourning some imaginary golden age of pure text, we got serious about designing for depth across all modes? Every video could come with a searchable transcript. Every article could offer multiple entry points for different levels of attention. Our devices could recognise when we’re trying to think and protect that thinking. Schools could teach students to translate between modes the way they once taught translation between languages.

Books aren’t going anywhere. They remain unmatched for certain kinds of sustained, complex thinking. But they’re no longer the only game in town for serious ideas. A well-crafted video essay can carry philosophical weight. A podcast can enable the kind of long-form thinking we associate with written essays. An interactive visualisation can reveal patterns that pages of description struggle to convey.

The future belongs to people who can dance between all modes without losing their balance. Someone who can read deeply when depth is needed, skim efficiently when efficiency matters, listen actively during a commute, and watch critically when images carry the argument. This isn’t about consuming more. It’s about choosing consciously.

We stand at an inflection point. We can drift into a world where sustained thought becomes a luxury good, where only the privileged have access to the conditions that enable deep thinking. Or we can build something unprecedented: a culture that preserves the best of print’s cognitive gifts while embracing the possibilities of a world where ideas travel through light, sound and interaction.

The choice isn’t between books and screens. The choice is between intentional design and profitable chaos. Between habitats that cultivate human potential and platforms that extract human attention.

The civilisations that thrive won’t be the ones that retreat into text or surrender to the feed. They’ll be the ones that understand a simple truth: every idea has a natural form, and wisdom lies in matching the mode to the meaning. Some ideas want to be written. Others need to be seen. Still others must be heard, felt or experienced. The mistake is forcing all ideas through a single channel, whether that channel is a book or a screen.

Your great-grandchildren won’t read less than you do. They’ll read differently, as part of a richer symphony of sense-making. Whether that symphony sounds like music or noise depends entirely on the choices we make right now about the shape of our tools, the structure of our schools, and the design of our days.

The elegant lamenters offer a eulogy. I’m more interested in a fight…

Reunderstanding reading: “Books and screens,” from @carloiacono.bsky.social in @aeon.co.

* Oscar Wilde

###

As we turn the page, we might note that we’ve been here before, and celebrate the emergence of a design, an innovation, a technology that took on a life of its own and changed reading and… well, everything: this day in 1455 is the traditionally given date of the publication of the Gutenberg Bible, the first Western book printed from movable type.

(Lest we think that there’s actually anything new under the sun, we might recall that The Jikji– the world’s oldest known extant movable metal type printed book– was published in Korea in 1377; and that Bi Sheng created the first known movable type– out of baked clay– in China around 1040.)

Gutenberg Bible on display at the U.S. Library of Congress (source)

Written by (Roughly) Daily

February 23, 2026 at 1:00 am

“All one wants to do is make a small, finished, polished, burnished, beautiful object”*…

… and if we don’t make them, we can collect them.

Scott Teplin reports (in Paul Lukas‘ nifty newsletter, Inconspicuous Consumption) on one remarkable example…

My family and I recently vacationed in Mexico City. Nestled in the heart of the city’s vibrant Roma Norte neighborhood is a hidden gem that nearly escaped our itinerary: The Object Museum (or MODO, short for Museo del Objeto del Objeto, or “the Object of the Object” [see also here]).

To be honest, I was initially hesitant to step inside. Having grown up in Wisconsin, I developed a healthy distaste for the cluttered crap heaps featured in the infamous tourist trap the House on the Rock, and I worried that MODO might be more of the same. However, curiosity eventually won out, and I wandered in one afternoon to discover what turned out to be one of the most delightful museum experiences I’ve ever had.

The museum is a dedicated homage to the “object (i.e., the point) of the object,” showcasing vast collections of everyday items. Originally conceived as the private obsession of the mansion’s resident, Bruno Newman [here], who spent over 40 years collecting packaging and advertising, it has evolved into something of a localized record of material culture. Plus it’s just a well-curated collection of cool stuff…

Mexican matchbooks (all manufactured in Sweden)
Toy figurine heads based on lucha libre masks

A celebration of the commonplace. See much more at: “A Museum Devoted to Everyday Items,” from @steplin.bsky.social.

* John Banville

###

As we regard the routine, we might celebrate: today is National Margarita Day.

source

“Never tell me the odds!”*…

How likely is it that one will be born on a Leap Day? That one will find a pearl in an oyster? That one will solve Wordle on the first guess? That one will die in a tornado? That two people will share the same fingerprint?

The good folks at R74n (@r74n.com) have these probabilities– and so many more: “What Are The Odds?”

(Image above– and tutorial on the odds ratio: source)
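The first of those odds, at least, yields to simple counting. A back-of-envelope sketch (our own, not R74n’s methodology), assuming births fall uniformly across the days of the Gregorian calendar’s 400-year cycle:

```python
# Odds of a Feb 29 birthday, assuming births are uniformly
# distributed over the Gregorian 400-year cycle.
LEAP_DAYS = sum(
    1 for y in range(400)
    if (y % 4 == 0 and y % 100 != 0) or y % 400 == 0
)  # leap years (hence Feb 29ths) per 400-year cycle
TOTAL_DAYS = 400 * 365 + LEAP_DAYS  # days in the cycle

p = LEAP_DAYS / TOTAL_DAYS
print(f"P(born on Feb 29) = {LEAP_DAYS}/{TOTAL_DAYS} "
      f"≈ {p:.6f}, i.e. about 1 in {round(1 / p)}")
```

That comes to 97 leap days in 146,097, or roughly 1 in 1,506– a touch rarer than the 1-in-1,461 that naive four-year counting suggests, and real odds shift further with seasonal birth patterns.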

* Han Solo (Harrison Ford) in Star Wars: Episode V– The Empire Strikes Back

###

As we place our bets, we might spare a thought for Harvey Kurtzman; he died on this date in 1993. A cartoonist and editor, he is best known for writing and editing the parodic comic book Mad from 1952 until 1956. Kurtzman scripted every story in the first twenty-three issues. (The New York Times‘ obituary for Kurtzman in 1993, alluding to the role of publisher William Gaines, said Kurtzman had “helped found Mad Magazine.” This prompted an angry response to the newspaper from Art Spiegelman, who complained that awarding Kurtzman partial credit for starting Mad was “like saying Michelangelo helped paint the Sistine Chapel just because some Pope owned the ceiling.”)

Kurtzman, who mentored many younger cartoonists (including Terry Gilliam and Robert Crumb), is considered, with cartoonists like Will Eisner, Jack Kirby, and Carl Barks, one of the defining creators of the Golden Age of American comic books. The prestigious Harvey Awards (for achievement in comic books) are named in his honor.

source

source

Written by (Roughly) Daily

February 21, 2026 at 1:00 am