(Roughly) Daily

Posts Tagged ‘politics’

“Did you have any orange juice today?”*…

… if so, it’s less and less likely that it was from Florida.

The canonical account of the Florida orange juice industry is John McPhee’s two-parter in The New Yorker from the 1960s. But that was then.

Alex Sammon has picked up the baton, with an article on the brutal, unrelenting decline of that business…

Quiet fell over the room, which was neither full nor very loud to begin with, and the 2026 Florida Citrus Show began.

“It should be a great day,” began the event’s first speaker. “Rain should hold off today, even though we definitely need more rain.” No one laughed.

There was no need to say that things were bad. Everyone knew it. The mood wasn’t sour—citrus farmers could handle sour. It was something else. Postapocalyptic. Florida is in the midst of its worst drought in 25 years, but the dry spell actually ranked far down on the list of challenges these bedraggled growers were facing.

In 2003, the mighty Florida orange industry produced 242 million boxes of fruit, with 90 pounds of oranges per box, most of which went on to become orange juice. Now, not even 25 years later, the United States Department of Agriculture was forecasting a pitiful 12 million boxes of oranges, the least in more than 100 years, the worst year since last. A decline of more than 95 percent.
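Those two harvest figures do imply a drop of just over 95 percent; a quick back-of-envelope check, using only the numbers quoted above:

```python
# Quick check of the decline implied by the harvest figures above.
peak_boxes = 242_000_000     # 2003 harvest, in 90-pound boxes
forecast_boxes = 12_000_000  # USDA forecast cited above

decline = 1 - forecast_boxes / peak_boxes
print(f"Decline: {decline:.1%}")  # prints "Decline: 95.0%"
```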

And everyone knew, more or less, that even that figure was not happening. “Twelve million? I would doubt it,” Matt Joyner, CEO of Florida Citrus Mutual, the state’s largest trade group, told me. There was chatter that even 11 million might be out of reach. Could the total end up being less than that, just seven figures? In Florida, the citrus capital of the world, you are today more likely to see the oranges printed on the state’s 18 million license plates than a box of actual fruit.

Rick Dantzler, chief operating officer of the Citrus Research and Development Foundation, took the podium. He was blunt. “It’s been a dumpster fire of a year,” he said.

On the list of immediate problems: the implementation of tariffs and retaliatory tariffs, then the government shutdown, then a stunning, historic freeze, days long, at the end of January and early February, that besieged the fragile orange trees.

And yet those, too, were just footnotes to the even larger problem. Already, Florida had lost about three-quarters of its citrus growers. The last of them, these spent survivors, these hangers-on, had trudged to the Citrus Show to talk about the real problem, which was the disease.

In 2005, Florida saw the first signs of a new affliction in its groves: citrus greening disease. It also has a Chinese name, Huanglongbing, or HLB, because it came from China, where oranges also came from in the first place.

Citrus greening disease is caused by a bacterial infection that is delivered by the gnawing of the Asian citrus psyllid. (It’s now believed the psyllid first turned up near the Port of Miami in 1998.) The flea-sized psyllid bites the leaves and transmits the disease, which slowly chokes out the tree’s vascular system from the inside, taking years to finally show itself. By the time a tree is displaying symptoms—three to five years, in most cases—it’s too late…

Read on for an explanation of how this catastrophe has materialized and for a consideration of what it means for Central Florida (and the other major supplier, Brazil, which is also suffering).

“Who Killed the Florida Orange?” from @alexsammon.bsky.social in @slate.com.

Other comestible news from Florida: “A deadly bacteria is creeping up the Atlantic Coast. How worried should you be?”

* Harold Brodkey, First Love and Other Sorrows: Stories

###

As we contemplate the consequences of climate change and contagion, we might consider an alternative to orange juice on this, National Raisin Day. But while raisins are richly nutritious, they are not so strong on Vitamin C, so we’ll have to keep looking…

source

“The danger of the past was that men became slaves. The danger of the future is that man may become robots.”*…

Images of Rastus Robot in an issue of Radio-Craft magazine from 1931

… which might be the same thing?

As more and more folks fear obsolescence (if not, indeed, subjugation) by emerging technology, Matthew Wills reminds us that this fear– especially as embodied in androids– has a long (and dark) history here in the U.S…

Our word “robot” comes from Karel Čapek’s 1921 play R.U.R. In it, historian of robots Dustin A. Abnet explains, Čapek repurposed the Czech word for “drudgery” or “servitude” to refer to the artificial workers produced by the play’s Rossumovi Univerzální Roboti (Rossum’s Universal Robots) company. [See also here.] Created from synthetic organic material, and thus more android than mechanical, these worker-roboti ultimately overthrow their human masters.

The play was a sensation in Europe, and then a year later, in America, though something was lost in translation. Čapek used robots to criticize soulless Fordism—the “standardization and regimentation” of American capitalism—and hence the US’s political and cultural power in Europe and around the world. (Other Europeans would conceive of the robot in the same way, notably director Fritz Lang and screenwriter Thea von Harbou in the 1927 German film Metropolis.)

But a funny thing happened to these robotic symbols of American capitalism by the mid-twentieth century. They were Americanized by American capitalism. Americans, as Abnet notes, “turned a figure that initially rebelled against the dehumanizing effects of Fordism into a tamed electro-mechanical slave holding aloft a global empire of consumerism.”

Nowhere was this more literal than in the Westinghouse Electric Company’s “simple remotely controlled mechanical men and women” used to advertise the company’s products from 1927 to 1940. “Technology did not have to run amok, Westinghouse’s robots suggested; it could instead become a tamed slave that empowered each individual consumer to become his or her own master.” In the American context, where the language of master and slave was rooted in racism, Westinghouse “connected robots to romanticized white myths about slavery.”

“Americans had always racialized robot-like creations,” continues Abnet, citing the first American automaton (a caricature of a Native American) and the “grotesque minstrel-like caricatures of Black and Asian bodies” that made up automatons in the late nineteenth century.

Westinghouse’s creations, named Herbert Televox, Karina Van Televox, Telelux, Rastus, Willie Vocalite, and Elektro, were promoted as docile domestic workers. Abnet quotes the New York Times’ science and technology editor extolling the benefits of the first of these “mechanical slaves” in 1927: “it obeys without the usual human arguing, impudence or procrastination.”

Rastus, Westinghouse’s Great Depression-era robot, was the most overtly racialized of these corporate robot slaves. Rastus was modeled on a minstrel show character: “black rubber ‘skin,’ overalls, a white shirt, and a pail hat.” In addition, “the robot had a ‘rich, baritone voice’ that would have been read as unmistakably black.” While “all of Westinghouse’s other robots told jokes…Rastus and its blackness were themselves the joke.”

In 1930, Westinghouse’s President explicitly expressed the prevailing white romanticism of slavery. In the company’s Electric Journal, he argued that without the exploitation of the “muscles of others,” there could be “no art, literature, science, leisure, or comfort for anyone.” Rastus’s “tamed black body,” stresses Abnet, “underscored the larger rhetoric of slavery that shaped the fantasy the company offered white consumers.”

“Ultimately, Westinghouse’s robots were not just about more efficiently accomplishing work or ensuring greater leisure time; they were a symbol that deployed racialized slavery in ways that could reassure white Americans of their own freedom, their own mastery over both technology and the bodies of others.”

Čapek’s robots had successfully rebelled, killing all but one human. In America, that couldn’t happen, at least according to the corporations selling the robot idea. But fear of a robot rebellion, like the fear of slave rebellion before the Civil War, remained. Abnet notes that the “most common robot story in American science fiction during the 1920s and 1930s told a story of white men, using their cunning, strength, and willpower to restore their authority over the robots who should be their slaves.” Movies, especially science fiction serials, often told the same story.

A century after R.U.R. and forty years after The Terminator, the uneasiness engendered by robots (and their droid, cyborg, replicant, and AI cousins) persists, reflecting longstanding concerns about labor, autonomy, and power…

Early automatons in the US evolved from symbols of revolt into racialized figures tied to labor and the legacy of slavery: “How America Racialized the Robot,” from @jstordaily.bsky.social.

* Erich Fromm, The Sane Society

###

As we move on, we might recall that it was on this date in 1967 that Aretha Franklin’s up-tempo cover of Otis Redding’s “Respect” entered the Billboard Hot 100. It rose steadily over the next several weeks, hitting #1 in June, where it stayed for two weeks, and won Franklin two Grammy Awards at the 1968 ceremony, including the first of eight consecutive Grammys for Best Female R&B Vocal Performance. An R&B classic, it has also become a protest anthem, thanks to its connections to both the civil rights movement of the 1960s and the second-wave feminist movement of the 1970s.

Written by (Roughly) Daily

April 29, 2026 at 1:00 am

“The greatest danger in times of turbulence is not the turbulence – it is to act with yesterday’s logic”*…

Jennifer Pahlka— the founder and long-time leader of Code for America, the former US Deputy Chief Technology Officer, the author of Recoding America, and the cofounder and board chair of the Recoding America Fund— has dedicated her life to improving governance and government services. Here, she reflects on a core lesson that she has learned…

I got into government reform sixteen years ago, though I didn’t think of it as reform at the time. I thought of it as just trying to make a few specific things work better. Since then I’ve worked at the local, state, and federal levels, on benefit delivery, on national defense, on a handful of things in between. I’ve worked alongside a lot of people whose own paths in this work have run the gamut. Collectively we’ve seen a lot. I think we’ve learned a lot about what we often call the operating model of government.

But the government we have — the operating model it runs on, the rules and structures and assumptions that shape how it hires, procures, and delivers — was built for a world that no longer exists, and the distance between that world and this one is growing. We are approaching the kind of moment when that gap stops being a management problem and becomes a true legitimacy crisis. (Many will say that moment has already come.) It’s time to start asking whether the theory of change most of us have been operating under — incremental improvements off a pretty poor baseline — was ever going to get us to a government capable of meeting fast-changing needs. It hasn’t yet, and if we don’t do something differently, it won’t.

Kelly Born at the Packard Foundation recently shared with me a framework called the Three Horizons, originally developed by Anthony Hodgson and adapted widely in systems-change work. In it, Horizon 1 is the currently dominant system. It’s functional enough to persist but failing in critical ways, especially for people with less power. Horizon 3 is the future system you’re working toward, already visible in patches of practice that embody different values and different ways of working, but far from the norm. Horizon 2 is the turbulent middle where change agents work.

But the key insight is that not all Horizon 2 work is the same. Some H2 innovations genuinely create the conditions for the new system to emerge. Call those transforming H2, or H2+. Others, however inadvertently, extend the lifespan of the failing system by relieving the pressure that might otherwise force structural change. Call those sustaining H2, or H2-. Both feel like reform, but they have very different long-term implications.

H2- work is attractive because it usually produces real value in the short run. H2+ work can take a long time to pay off, and the path is rarely clear. In a stable environment, you can get away with a lot of H2-. In an environment where the underlying system has become truly untenable, the difference between the two starts to matter a great deal. I think that’s where we are now…

[Jen describes a few projects that illustrate patterns that play out over and over in the category of H2-, the work that sustains the status quo…]

… The H2- work I’m describing has been done in good faith by people. I am one of those people. Code for America, which I founded and where I spent more than a decade, is in important respects capacity substitution. USDR, which I also helped start, is as well. The healthcare.gov rescue (which I didn’t actually work on but tried to provide moral support for) was the rescue-and-rebuild cycle. For much of the past fifteen years, the H2- path was arguably the right call. When there was no political space for structural change, demonstrations were a good way to build the evidence base and develop the field.

I think we are in a different moment now. This moment is defined by disruption. I count three kinds.

Contingent disruption — pandemics, climate events, geopolitical shocks, financial crises — is unpredictable in its specifics but very predictable in its category: large, fast-moving, high-stakes demands that fall disproportionately on government. COVID was not an anomaly. The next version won’t look the same.

The most recent disruption to federal government, however, was political. Whatever the cost of its methods, DOGE made the brittleness of the current operating model impossible to ignore and created political openings for structural arguments that previously had no traction. The reform field did not create this moment. But it can shape what comes out of it.

AI brings structural disruption. This is a transformation already underway in the material conditions of work, economy, and administration. AI is simultaneously changing both the needs and conditions government must respond to and the ways in which it can respond. Yes, I certainly mean a social safety net not nearly fit to handle the levels of unemployment that are likely coming our way, and yes, I mean possible upsets in the balance of power between agencies and the vendors they rely on, but that’s barely scratching the surface.

AI is not only an exogenous shock that government will have to absorb. It is also moving the bar on what counts as acceptable service in the first place. People are already using AI to understand their medical bills, navigate insurance denials, and draft appeals for benefits they were wrongly denied. Soon they will expect to apply for SNAP or file their taxes by uploading a paystub and answering a few plain-language questions, not by filling out even the best-designed web form. The forty-page PDF used to feel intolerable. The well-designed web form will start to feel that way too, and faster than the last transition did.

And service delivery is only the most visible piece. The same expectation shift is going to hit regulation, permitting, enforcement, how quickly an agency can respond to a new problem, how a legislature decides whether a law is working. If a small team with the right tools can map a regulatory regime in a week, the timelines we have now, in which rulemaking takes several years–or even multiple presidential terms–become indefensible. If an advocate can stress-test a policy against thousands of edge cases before it gets enacted, the standard for what counts as due diligence in lawmaking starts to move. The bar is rising on the whole surface of what government does, not just on the forms people fill out.

Not everyone wants this shift to happen. Public sector unions have secured laws in several states forbidding the use of AI in service delivery, won contracts requiring union consent before autonomous vehicles can operate, and pushed legislation mandating staffing levels that the work no longer requires — as my colleagues Robert Gordon and Nick Bagley have documented. The concern for workers caught in this transition is legitimate. But blocking government’s transformation while the world around it moves on is not a strategy for protecting those workers. It exacerbates public frustration with government, weakens the case for investing in it, and leaves the people who most depend on public services with a system increasingly unfit to serve them.

So the gap we have been measuring, between what government delivers and what the public considers a basic level of competence, is widening from both ends at once. The system is straining to clear the old bar at the same moment the bar is rising.

In this environment, the benefits systems that struggled to scale during COVID will be asked to scale again. The regulatory processes that can’t move quickly will be asked to respond to developments they weren’t designed to anticipate. The civil service system that can’t attract the people it needs now will need to attract people with skills that didn’t exist a decade ago.

If I had to pick, it’s AI that drives this disruptive moment. But I don’t have to pick. You could just as easily imagine climate shocks, or the next pandemic, or an escalation of the current war. Truly, some combination of all the above is not that unlikely. Reasonable people may disagree about the size and shape of the disruption AI will bring, but betting against disruption generally seems deeply unwise at the moment.

If you buy that argument, then we must acknowledge that a reform field largely dedicated to H2- work is not what the moment calls for. In a stable environment, H2- work that buys time for a failing system might be much needed, even if it represents a missed opportunity for transformation. In an environment where disruptions of all kinds are accelerating, it becomes a compounding liability. Extending the lifespan of a brittle system just means the system eventually fails more spectacularly. More people get hurt. More people look for alternatives to democracy.

That doesn’t mean we need to throw everything out and start over. For the reform ecosystem, it means existing actors need incentives to align their work toward structural transformation, new actors with adjacent expertise need to be welcomed into the fold (especially advocates and lobbyists, given how little influence muscle the field has today), and connections need to be made both upstream and downstream of where we’ve been focused. It means articulating competing H3 visions from a wide range of ideological and practical perspectives and debating them openly, including through the project that sparked this line of thinking, which Kelly funded and which FAI and New America are currently working on. It means designing funding and partnership structures that reward structural ambition while staying grounded in meaningful near-term progress. Funders and grantees share responsibility for creating the conditions under which a diverse set of actors can aim higher by working together, and connecting the dots upstream.

For this to work, it can’t be a zero sum game. Government capacity is wildly neglected in philanthropy despite its high leverage. (Good luck naming an issue philanthropists care about that doesn’t benefit from increased government capacity.) Could the field stop doing some H2- work? Sure. That would free up some existing resources for more H2+ work, which has commanded too little of the field’s mindshare and resources to date. But that is not the path forward — it wouldn’t get us where we need to be. We need more resources, full stop. We need to make the case to philanthropy for greater investment in the entire field (that’s part of what Recoding America Fund is trying to do) and make the case to government leaders, including electeds, to invest in better plumbing, so that the investment in H2+ work isn’t coming at the expense of the essential life support…

[Jen outlines some of the key principles that animate H2+ efforts, then ponders “doing different things differently”…]

… I realized early last year that while I’d spent the bulk of my career trying to drag government into the Internet Era, that work has to change now. We are entering a new era, and if those of us who fought the last fight don’t adapt to the conditions and expectations of this one, we’ll make exactly the mistake the people who resisted internet-era ways of working made. We’ll become the blockers — the ones holding on to old ways of working because that is what we are used to and that is what we are good at.

None of which means rescue work should stop, or that demonstrations are worthless, or that capacity substitution isn’t helpful and needed. Some H2- work, done deliberately and named honestly, is best understood as experimentation: we’re running it inside the failing system precisely because that’s where we’ll learn what a new operating model has to do. That’s a different kind of work from rescue that produces learning incidentally, but both can be valuable.

But the field needs a shared frame clear-eyed enough to ask, with each investment: does this move the system toward H3, or does it prolong H1? That question should be driving how resources, talent, and attention get allocated now, not because the prior work was mistaken but because the moment is different and the cost of extending the status quo is too high. There will have to be work that sustains the status quo, but what tradeoffs are we willing to make?

But insisting we ask the question does not mean that answering it is easy: there is no objective set of criteria that distinguishes one from the other. What may look like H2+ to some may seem like H2- to others, and part of that depends on your particular vision of that third horizon (more on that in the coming weeks). Some may see a given effort as contributing to a transformation, and therefore H2+, but toward an undesired H3 state. Grappling with how to answer this question is work we all need to be doing…

… Some things haven’t changed. The community is still full of good, smart people with enormous insight into a very difficult problem. We’ve just run out of time to do it the way we’ve been doing it. A brittle system that gets propped up through manageable shocks will eventually meet a shock it can’t survive, and we are moving into a period where the shocks are neither manageable nor hypothetical. Every H2- intervention that returns the system to “good enough” is now a bet that good enough will hold. It’s a bet I no longer think we can afford to make.

The window for H2+ work has not been open like this before. It will not stay open indefinitely.

Eminently worth reading in full.

What DOGE coulda, shoulda been: “A Three Horizons Framework for Government Reform,” from @pahlkadot.bsky.social.

* Peter Drucker

###

As we face forward, we might recall that it was on this date in 1970 that President Richard Nixon formally authorized the commitment of U.S. combat troops, in cooperation with South Vietnamese units, against North Vietnamese troop sanctuaries in Cambodia.

Secretary of State William Rogers and Secretary of Defense Melvin Laird, who had continually argued for a downsizing of the U.S. effort in Vietnam, were excluded from the decision to use U.S. troops in Cambodia. Gen. Earle Wheeler, Chairman of the Joint Chiefs of Staff, cabled Gen. Creighton Abrams, senior U.S. commander in Saigon, informing him of the decision that a “higher authority has authorized certain military actions to protect U.S. forces operating in South Vietnam.” Nixon believed that the operation was necessary as a pre-emptive strike to forestall North Vietnamese attacks from Cambodia into South Vietnam as the U.S. forces withdrew and the South Vietnamese assumed more responsibility for the fighting. Nevertheless, three National Security Council staff members and key aides to presidential assistant Henry Kissinger resigned in protest over what amounted to an invasion of Cambodia.

When Nixon publicly announced the Cambodian incursion on April 30, it set off a wave of antiwar demonstrations. A May 4 protest at Kent State University ended with Ohio National Guard troops killing four students. Another student rally, at Jackson State College in Mississippi, left two students dead and 12 wounded when police opened fire on a women’s dormitory. The incursion angered many in Congress, who felt that Nixon was illegally widening the war; this resulted in a series of congressional resolutions and legislative initiatives that would severely limit the executive power of the president.

– source


Written by (Roughly) Daily

April 28, 2026 at 1:00 am

“Privacy is rarely lost in one fell swoop. It is usually eroded over time, little bits dissolving almost imperceptibly until we finally begin to notice how much is gone.”*…

… And now, indeed, we’re beginning to notice. Hana Lee Goldin surveys the state of play– who’s buying our personal information, what they’re using it for, and how the system works behind the screen– and considers our options…

Sometime in the mid-2000s, most of us started handing over pieces of ourselves to the internet without giving the exchange a second thought. We created email accounts, signed up for social media, bought things online, downloaded apps, swiped loyalty cards, connected fitness trackers, stored photos in the cloud, and agreed to terms of service that almost none of us have ever read in full. We did this thousands of times over two decades and counting, and each interaction felt small enough to be inconsequential.

But the accumulation is enormous. More than 6 billion people now use the internet, and each one makes an estimated 5,000 digital interactions per day. Most of those interactions happen without our conscious awareness: a GPS ping, a page load, an app opening, a browser cookie refreshing, a device checking in with a cell tower. The average person in 2010 made an estimated 298 digital interactions per day. In fifteen years, that number multiplied more than sixteenfold. Those digital interactions produce records that can persist indefinitely, stored, copied, indexed, bought, sold, and combined with other records to build profiles of extraordinary detail.
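The “sixteenfold” figure follows directly from those two per-day estimates; a quick sketch using only the numbers above:

```python
# Quick check of the growth in daily digital interactions cited above.
per_day_2010 = 298    # estimated interactions per person per day, 2010
per_day_now = 5_000   # estimated interactions per person per day, today

growth = per_day_now / per_day_2010
print(f"Growth: {growth:.1f}x")  # prints "Growth: 16.8x"
```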

If we’ve been online since the late 1990s or early 2000s, our data footprint can include social media accounts we’ve created, online purchases we’ve made, forums we’ve posted in, loyalty cards we’ve used, and apps we’ve installed going back decades. Some of that information lives on platforms we’ve long forgotten. Some of it was collected by companies that have since been acquired or dissolved, with our data potentially passing to successor entities we’ve never heard of. The digital life most of us have been living for 15 to 25 years has produced a layered, evolving archive that only grows more valuable to the people who buy and sell it as time goes on.

Most of us sense that something is off about all of this. In a 2023 survey, Pew Research found that roughly eight in ten Americans feel they have little to no control over the data companies collect about them, 71% are concerned about government data use, and 67% say they understand little to nothing about what companies are doing with their personal information. The concern is real and widespread. And so is the feeling of helplessness: 60% of Americans believe it’s impossible to go through daily life without having their data tracked. The unease is there. What’s missing is a clear picture of what’s happening on the other side of the transaction…

[Goldin explains what data is being collected and shared, and by whom; how the data is managed and trafficked; how it’s being used (by insurance and financial companies, employers and landlords, retailers, AI companies, governments, and criminals); and how “inferred” data is used to augment the “hard” data. It’s chilling. She then puts the issue into context, and discusses what we can– and cannot– do about it…]

… The philosopher Helen Nissenbaum has a framework for what’s happening here: contextual integrity. The idea is that privacy isn’t about secrecy. We share information willingly all the time, when the context fits. We tell our doctor about a health condition because we expect that information to stay within the medical relationship. We search for symptoms on a health website because we assume that search won’t follow us into an insurance application. In the current data economy, that’s exactly the kind of boundary that dissolves, because the company collecting the data and the company buying it are operating in completely different contexts.

This is an information literacy problem as much as a privacy problem. Information literacy is usually framed around consumption: evaluating sources, questioning claims, recognizing bias in what we read and watch. But every time we interact with a digital service, we’re also producing information: generating a record that will be read, interpreted, scored, and acted on by organizations we may never interact with directly. Many of us have gotten better at questioning the information that comes at us: checking sources, noticing bias, and recognizing when something is trying to sell us a conclusion. But we haven’t developed equivalent habits around the information that flows from us: where it goes after we hand it over, who reads the record, what incentives they have, and what conclusions they draw. The gap between what we think we’re consenting to and what we’ve agreed to in practice is where the real exposure lives, and the system is designed to keep that gap invisible.

One of the reasons the “so what” question is hard to answer with action is that opting out of data collection often means opting out of participation. Declining a social media platform’s terms of service means not using the platform. Refusing location permissions can mean losing access to navigation, ride-sharing, weather, and delivery apps. Choosing not to create an account can mean paying more, seeing less, or being locked out of services that have become essential infrastructure for work, communication, healthcare, banking, and education.

The architecture of digital consent treats data sharing as a binary: agree to the terms or don’t use the product. There’s rarely a middle option that allows us to use a service while limiting what data gets collected and where it goes. The result is that the “choice” to share data often functions as a condition of entry into daily life rather than an informed negotiation. We’re not handing over data because we’ve weighed the tradeoff and decided it’s fair. We’re handing it over because the alternative is exclusion from services we rely on.

This is the structural context behind the Pew Research Center finding that more than half of Americans believe it’s impossible to go through daily life without being tracked. For many of us, it isn’t possible, at least not without significant inconvenience or sacrifice. The question isn’t whether we can avoid data collection entirely, because for the vast majority of people who participate in modern life, the answer is no. The question is whether we can make more informed decisions within the constraints we’re operating in, and whether the system can be pushed – through regulation, through market pressure, through better tools – toward something more transparent.

California’s Delete Act, which took effect in January 2026, is the strongest example of what’s emerging. It created a platform called DROP (Delete Request and Opt-Out Platform) that lets California residents submit a single deletion request to every registered data broker in the state. Brokers are required to process those requests, maintain suppression lists to prevent re-collection, and check the platform regularly for new requests. The European Union’s GDPR provides similar individual rights, and a handful of other U.S. states have enacted their own privacy laws with varying levels of protection. But the coverage is uneven: what’s available to a California or EU resident may not extend to someone in a state without comparable legislation.

Some services now automate parts of the opt-out process, submitting removal requests to dozens of brokers on our behalf. These can’t erase the data trail entirely, but they can narrow what’s actively available for sale.

Beyond deletion, there are smaller choices that reduce how much new data we generate. We can audit which apps have permission to track our location or access our contacts, since a surprising amount of behavioral data comes from apps that don’t need those permissions to function. We can treat “sign in with Google” and “sign in with Facebook” buttons as what they are: data-sharing agreements that can link a new service to an existing profile. And we can glance at the first few lines of a privacy policy before agreeing, looking for some version of “we may share your information with our partners,” where “partners” just means anyone willing to pay.

Most of us don’t read privacy policies, and the policies aren’t built to be read. They average thousands of words of dense legal language filled with terms like “legitimate interest,” “data processor,” and “de-identified data.” Studies consistently put them at a late high school to early college reading level (grade 12 to 14), but the difficulty goes beyond reading level: the concepts are abstract, the volume of agreements we encounter is enormous, and the design of the consent process itself pushes us through as fast as possible. Pre-checked boxes, auto-scrolling agreement windows, “accept all” buttons positioned prominently while “customize settings” options sit behind additional clicks. These are dark patterns, design choices that make the path of least resistance the path of maximum data sharing.

The result is a gap between the moment we share a piece of information and the moment that information shapes a decision about our lives. We don’t connect the app to the insurance premium or the loyalty card to the rental application because the chain of custody between them is long, complex, and designed to stay out of view.

The same critical thinking we’ve learned to apply to the information flowing toward us (checking sources, questioning claims, looking for bias) applies to the information flowing from us: who’s collecting this, what will they do with it, who else will see it, and what did we agree to? The difference is that in the data economy, we’re the product being evaluated, and the questions are being asked about us rather than by us.

So can we get it back? Not entirely. Data that’s already been collected, copied, sold, and processed across multiple systems can’t be fully recalled. What we can do is reduce what’s actively available for sale, slow the flow of new data going forward, and take advantage of legal tools that didn’t exist a few years ago. The archive of our past digital lives is too distributed to undo, but the file is still being written, and we have more say over the next page than we did over the last twenty years of them.

So what if they have our data? The tradeoff extends well beyond better ads. It reaches into the prices we’re charged, the credit we’re offered, the jobs we’re considered for, the insurance premiums we pay, the AI systems trained on our behavior, the accuracy of the profiles used to make decisions about our lives, and the degree to which government agencies can monitor our movements without a warrant. Every new service we sign up for, every permission we grant, and every terms-of-service agreement we accept adds another layer to that file. We can’t close the file entirely, but we can make more informed decisions about what goes into it next…

Eminently worth reading in full: “So What if They Have My Data?”

See also: “Why Do We Care So Much About Privacy?” (source of the image above) in which Louis Menand suggests that our concern should be with the “weaponization” of data…

Daniel J. Solove, Nothing to Hide: The False Tradeoff Between Privacy and Security

###

As we reinforce our rights, we might recall that it was on this date in 1996 that the internet-as-we’ve-come-to-know-it broke big into the mainstream: Yahoo! launched the national campaign that asked “Do You Yahoo?” advertising its web-based search service on national television. The campaign was created by ad agency Black Rocket and Yahoo Marketing Head Karen Edwards (whose many awards for the work include a seat in the Advertising Hall of Achievement).

An early spot from the campaign…

Written by (Roughly) Daily

April 25, 2026 at 1:00 am

“The present is pregnant with the future”*…

The estimable Tim O’Reilly uses scenario planning to create an insightful look at AI, our futures, and the choices that will define them…

We all read it in the daily news. The New York Times reports that economists who once dismissed the AI job threat are now taking it seriously. In February, Jack Dorsey cut 40% of Block’s workforce, telling shareholders that “intelligence tools have changed what it means to build and run a company.” Block’s stock rose 20%. Salesforce has shed thousands of customer support workers, saying AI was already doing half the work. And a Stanford study found that software developers aged 22 to 25 saw employment drop nearly 20% from its peak, while developers over 26 were doing fine.

But how are we to square this news with a Vanguard study that found that the 100 occupations most exposed to AI were actually outperforming the rest of the labor market in both job growth and wages, and a rigorous NBER study of 25,000 Danish workers that found zero measurable effect of AI on earnings or hours?

Other studies could contribute to either side of the argument. For example, PwC’s 2025 Global AI Jobs Barometer, analyzing close to a billion job ads across six continents, found that workers with AI skills earn a 56% wage premium, and that productivity growth has nearly quadrupled in the industries most exposed to AI.

This is exactly the kind of contradictory, uncertain landscape that scenario planning was designed for. Scenario planning doesn’t ask you to predict what the future will be. It asks you to imagine divergent possible futures and to develop a strategy that improves your odds of success across all of them. I’ve used it many times at O’Reilly and have written about it before with COVID and climate change as illustrative examples. The argument between those who say AI will cause mass unemployment and those who insist technology always creates more jobs than it destroys is a debate that will only be resolved by time. Both sides have evidence. Both are probably right at some level. And both framings are not terribly helpful for anyone trying to figure out what to do next…

[O’Reilly explains the scenario approach, then applies it to our future with AI (see the image above), astutely assessing the conflicting signals that we’re experiencing; he explores the “robust strategy” for our uncertain future (strategic choices that make sense regardless of which future unfolds); then he concludes…]

… I’ll return to the theme that I sounded in my book WTF? What’s the Future and Why It’s Up To Us.

Every time a company uses AI to do what it was already doing with fewer people, it is making a choice for the lower half of the scenario grid. Every time a company uses AI to do something that wasn’t previously possible, to serve a customer who wasn’t previously served, to solve a problem that wasn’t previously solvable, it is making a choice for the upper half. These choices compound, for good or ill. An economy that uses AI primarily for efficiency will slowly hollow itself out.

Looking at the news from the future, both sets of signals are present. The question is which will dominate. AI will give us both the Augmentation Economy and the Displacement Crisis, in different measures in different places, depending on the choices we make.

Scenario planning teaches us that we don’t have to predict which future we’ll get. We do have to prepare for a very uncertain future. But the robust strategy, the one that works across every quadrant, is to focus on doing more, not just doing the same with less, and to find ways that human taste still matters in what is created. As long as there is unmet demand, as long as there are problems we haven’t solved and people we haven’t served, AI will augment human work rather than replacing it. It’s only when we stop looking for new things to do that the machines come for the jobs…

Eminently worth reading in full. Indeed, speaking as a long-time scenario planner, your correspondent can only wish that everyone who wields “scenarios” applied the approach as appropriately, adroitly, and acutely as Tim has: “Scenario Planning for AI and the ‘Jobless Future’,” from @timoreilly.bsky.social.

* Voltaire

###

As we take the long view, we might send formative birthday greetings to Mark Pinsker; he was born on this date in 1925. A mathematician, he made important contributions to the fields of information theory, probability theory, coding theory, ergodic theory, mathematical statistics, and communication networks. This work, which helped lay the foundation for AI-as-we-know-it, earned him the IEEE Claude E. Shannon Award in 1978, and the IEEE Richard W. Hamming Medal in 1996, among other honors.

source