(Roughly) Daily


“The first to arrive is the first to succeed”*…

Is China “pulling up the ladder”? In his valuable newsletter, Ben Evans puts two recent news items on high-tech manufacturing into context…

… First, the FT argues that after the ‘China shock’ of cheap low-value manufacturing, there’s now a growing second China shock of high-value, high-tech manufacturing, where the same model of ferocious, Darwinian competition, backed by subsidies and cheap energy, produces a handful of very efficient and capable winners in each space, plus a lot of overcapacity, that then moves to exports. Second, Bloomberg says that Chinese export controls in those high-tech industries are crippling India’s attempt to build its own tech manufacturing base…

Gift article from the FT: “China shock 2.0: the flood of high-tech goods that will change the world”

Gift article from Bloomberg: “China’s Control Over Tech Is Threatening India’s Manufacturing Dreams”

* (先到先得) Chinese proverb

###

As we dissect the dynamics of dominance, we might recall that it was on this date in 1981 that the computer mouse became a practical part of personal computing, when Xerox released its 8010 (Star) personal computer. The trackball, a related pointing device, had been invented in 1946 by Ralph Benjamin as part of a post-World War II-era fire-control radar plotting system called the Comprehensive Display System (CDS). Then, in the 1960s, Doug Engelbart and Bill English developed the first mouse prototype. They christened the device the mouse because early models had a cord attached to the rear of the hand-held unit; the cord looked like a tail and made the device resemble a common mouse. (According to Roger Bates, a hardware designer under English, another reason for the name was that the on-screen cursor was referred to as the “CAT” at the time.) In 1968, Engelbart premiered the pointer at what has come to be known as “The Mother of All Demos.” There followed, through the ’70s, a pair of personal computers that used a mouse (the Xerox Alto and the Lilith); but while they served as proofs of concept, they sold only in the hundreds of units over the next several years. It was the Star that effectively brought the mouse to market… soon to be followed by Steve Jobs’ Apple Lisa, which foreshadowed the Mac and the user interface that we’ve all come to know.

Apropos the articles above, computer mice are still a $2 billion business. But while the mouse was invented– and originally largely manufactured– in the U.S., as of 2025 most mice are made in Asia (68%, with the lion’s share– 54%– in China); only 8% are made in the U.S.

source

Written by (Roughly) Daily

April 27, 2026 at 1:00 am

“The one who plants trees knowing that he or she will never sit in their shade, has at least started to understand the meaning of life”*…

A long-running experiment is testing tree mixes to develop the healthiest forests

… Yes, and, as John Parker and Justin Nowakowski explain, it turns out that what– and how– we plant matters enormously…

Around the world, people plan to plant more than 1 trillion trees this decade in an ambitious effort to slow climate change and reduce biodiversity loss. But if the past is prologue, many of those planted trees won’t survive. And if they do, they could end up as biological deserts that lack the richness and resilience of healthy forests.

It doesn’t have to be this way.

The United Nations declared 2021-2030 the Decade on Ecosystem Restoration to encourage efforts to repair degraded ecosystems. Tree planting has become a centerpiece of that effort, championed by initiatives such as the Bonn Challenge and the Trillion Trees Campaign.

However, many tree-planting commitments have a critical flaw: They rely too heavily on monoculture plantations – vast areas planted with just a single tree species.

Monoculture plantations are generally one-way tickets to producing wood. But these high-yield plantations are high risk and can be surprisingly fragile. When drought, pests, or forest fires strike, entire monoculture plantations can fail at once. In one example, nearly 90% of 11 million saplings planted in Turkey died within three months due to drought and lack of maintenance.

Forests are more than just timber factories. They regulate water, store carbon, provide habitat for wildlife, cool the landscapes around them and even provide human health benefits.

Rather than gambling on a single species and hoping for the best, science now points to a smarter path that captures both ecological and economic benefits while minimizing risk: mixed-species plantings that mirror the biodiversity of a natural forest, ultimately creating forests that grow faster and are more resilient in the face of constant threats.

We are community and landscape ecologists at the Smithsonian Environmental Research Center. Since 2013, we and our colleagues have been rigorously testing this idea in a large, ecosystem-scale experiment called BiodiversiTREE. The verdict is striking: Trees in mixed forests don’t just survive – they outgrow their monoculture counterparts and support dramatically more biodiversity…

[Parker and Nowakowski outline their project, unpack its (impressive) results, and explore the challenges of scaling their example. They conclude…]

… The stakes are high. Restoration has become a major global investment, with hundreds of billions of dollars already being spent annually. Getting it wrong means wasted resources and missed opportunities to address some of the most pressing environmental challenges of our time.

If the world is going to plant a trillion trees, we believe it needs to do more than just put seedlings in the ground. It needs to rethink what a forest should be.

The goal isn’t just to grow trees. It’s to grow forests that last.

Eminently worth reading in full: “Don’t just plant trees, plant forests to restore biodiversity for the future,” from @johndparker.bsky.social and Justin Nowakowski in @us.theconversation.com.

* Rabindranath Tagore

###

As we see the forest, we might send observant birthday greetings to a man who spent a good bit of time in and around forests, John James Audubon; he was born on this date in 1785.  An ornithologist, naturalist, and artist, Audubon documented all types of American birds with detailed illustrations depicting the birds in their natural habitats.  His The Birds of America (1827–1839), in which he identified 25 new species, is considered one of the most important– and finest– ornithological works ever completed.

Print depicting a raven (Plate 101) from Birds of America

source

“Privacy is rarely lost in one fell swoop. It is usually eroded over time, little bits dissolving almost imperceptibly until we finally begin to notice how much is gone.”*…

… And now, indeed, we’re beginning to notice. Hana Lee Goldin surveys the state of play– who’s buying our personal information, what they’re using it for, and how the system works behind the screen– and considers our options…

Sometime in the mid-2000s, most of us started handing over pieces of ourselves to the internet without giving the exchange a second thought. We created email accounts, signed up for social media, bought things online, downloaded apps, swiped loyalty cards, connected fitness trackers, stored photos in the cloud, and agreed to terms of service that almost none of us have ever read in full. We did this thousands of times over two decades and counting, and each interaction felt small enough to be inconsequential.

But the accumulation is enormous. More than 6 billion people now use the internet, and each one makes an estimated 5,000 digital interactions per day. Most of those interactions happen without our conscious awareness: a GPS ping, a page load, an app opening, a browser cookie refreshing, a device checking in with a cell tower. The average person in 2010 made an estimated 298 digital interactions per day. In fifteen years, that number multiplied more than sixteenfold. Those digital interactions produce records that can persist indefinitely, stored, copied, indexed, bought, sold, and combined with other records to build profiles of extraordinary detail.

If we’ve been online since the late 1990s or early 2000s, our data footprint can include social media accounts we’ve created, online purchases we’ve made, forums we’ve posted in, loyalty cards we’ve used, and apps we’ve installed going back decades. Some of that information lives on platforms we’ve long forgotten. Some of it was collected by companies that have since been acquired or dissolved, with our data potentially passing to successor entities we’ve never heard of. The digital life most of us have been living for 15 to 25 years has produced a layered, evolving archive that only grows more valuable to the people who buy and sell it as time goes on.

Most of us sense that something is off about all of this. In a 2023 survey, Pew Research found that roughly eight in ten Americans feel they have little to no control over the data companies collect about them, 71% are concerned about government data use, and 67% say they understand little to nothing about what companies are doing with their personal information. The concern is real and widespread. And so is the feeling of helplessness: 60% of Americans believe it’s impossible to go through daily life without having their data tracked. The unease is there. What’s missing is a clear picture of what’s happening on the other side of the transaction…

[Goldin explains what data is being collected and shared, and by whom; how the data is managed and trafficked; how it’s being used (by insurance and financial companies, employers and landlords, retailers, AI companies, governments, and criminals); and how “inferred” data is used to augment the “hard” data. It’s chilling. She then puts the issue into context, and discusses what we can– and cannot– do about it…]

… The philosopher Helen Nissenbaum has a framework for what’s happening here: contextual integrity. The idea is that privacy isn’t about secrecy. We share information willingly all the time, when the context fits. We tell our doctor about a health condition because we expect that information to stay within the medical relationship. We search for symptoms on a health website because we assume that search won’t follow us into an insurance application. In the current data economy, that’s exactly the kind of boundary that dissolves, because the company collecting the data and the company buying it are operating in completely different contexts.

This is an information literacy problem as much as a privacy problem. Information literacy is usually framed around consumption: evaluating sources, questioning claims, recognizing bias in what we read and watch. But every time we interact with a digital service, we’re also producing information: generating a record that will be read, interpreted, scored, and acted on by organizations we may never interact with directly. Many of us have gotten better at questioning the information that comes at us: checking sources, noticing bias, and recognizing when something is trying to sell us a conclusion. But we haven’t developed equivalent habits around the information that flows from us: where it goes after we hand it over, who reads the record, what incentives they have, and what conclusions they draw. The gap between what we think we’re consenting to and what we’ve agreed to in practice is where the real exposure lives, and the system is designed to keep that gap invisible.

One of the reasons the “so what” question is hard to answer with action is that opting out of data collection often means opting out of participation. Declining a social media platform’s terms of service means not using the platform. Refusing location permissions can mean losing access to navigation, ride-sharing, weather, and delivery apps. Choosing not to create an account can mean paying more, seeing less, or being locked out of services that have become essential infrastructure for work, communication, healthcare, banking, and education.

The architecture of digital consent treats data sharing as a binary: agree to the terms or don’t use the product. There’s rarely a middle option that allows us to use a service while limiting what data gets collected and where it goes. The result is that the “choice” to share data often functions as a condition of entry into daily life rather than an informed negotiation. We’re not handing over data because we’ve weighed the tradeoff and decided it’s fair. We’re handing it over because the alternative is exclusion from services we rely on.

This is the structural context behind the Pew Research Center finding that more than half of Americans believe it’s impossible to go through daily life without being tracked. For many of us, it isn’t possible, at least not without significant inconvenience or sacrifice. The question isn’t whether we can avoid data collection entirely, because for the vast majority of people who participate in modern life, the answer is no. The question is whether we can make more informed decisions within the constraints we’re operating in, and whether the system can be pushed – through regulation, through market pressure, through better tools – toward something more transparent.

California’s Delete Act, which took effect in January 2026, is the strongest example of what’s emerging. It created a platform called DROP (Delete Request and Opt-Out Platform) that lets California residents submit a single deletion request to every registered data broker in the state. Brokers are required to process those requests, maintain suppression lists to prevent re-collection, and check the platform regularly for new requests. The European Union’s GDPR provides similar individual rights, and a handful of other U.S. states have enacted their own privacy laws with varying levels of protection. But the coverage is uneven: what’s available to a California or EU resident may not extend to someone in a state without comparable legislation.

Some services now automate parts of the opt-out process, submitting removal requests to dozens of brokers on our behalf. These can’t erase the data trail entirely, but they can narrow what’s actively available for sale.

Beyond deletion, there are smaller choices that reduce how much new data we generate. We can audit which apps have permission to track our location or access our contacts, since a surprising amount of behavioral data comes from apps that don’t need those permissions to function. We can treat “sign in with Google” and “sign in with Facebook” buttons as what they are: data-sharing agreements that can link a new service to an existing profile. And we can glance at the first few lines of a privacy policy before agreeing, looking for some version of “we may share your information with our partners,” where “partners” just means anyone willing to pay.

Most of us don’t read privacy policies, and the policies aren’t built to be read. They average thousands of words of dense legal language filled with terms like “legitimate interest,” “data processor,” and “de-identified data.” Studies consistently put them at a late high school to early college reading level (grade 12 to 14), but the difficulty goes beyond reading level: the concepts are abstract, the volume of agreements we encounter is enormous, and the design of the consent process itself pushes us through as fast as possible. Pre-checked boxes, auto-scrolling agreement windows, “accept all” buttons positioned prominently while “customize settings” options sit behind additional clicks. These are dark patterns, design choices that make the path of least resistance the path of maximum data sharing.

The result is a gap between the moment we share a piece of information and the moment that information shapes a decision about our lives. We don’t connect the app to the insurance premium or the loyalty card to the rental application because the chain of custody between them is long, complex, and designed to stay out of view.

The same critical thinking we’ve learned to apply to the information flowing toward us (checking sources, questioning claims, looking for bias) applies to the information flowing from us: who’s collecting this, what will they do with it, who else will see it, and what did we agree to? The difference is that in the data economy, we’re the product being evaluated, and the questions are being asked about us rather than by us.

So can we get it back? Not entirely. Data that’s already been collected, copied, sold, and processed across multiple systems can’t be fully recalled. What we can do is reduce what’s actively available for sale, slow the flow of new data going forward, and take advantage of legal tools that didn’t exist a few years ago. The archive of our past digital lives is too distributed to undo, but the file is still being written, and we have more say over the next page than we did over the last twenty years of them.

So what if they have our data? The tradeoff extends well beyond better ads. It reaches into the prices we’re charged, the credit we’re offered, the jobs we’re considered for, the insurance premiums we pay, the AI systems trained on our behavior, the accuracy of the profiles used to make decisions about our lives, and the degree to which government agencies can monitor our movements without a warrant. Every new service we sign up for, every permission we grant, and every terms-of-service agreement we accept adds another layer to that file. We can’t close the file entirely, but we can make more informed decisions about what goes into it next…
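
To make the Delete Act’s “suppression list” requirement above a bit more concrete: the mechanism Goldin describes amounts to a broker deleting what it holds, then remembering (in hashed form) who asked, so that re-purchased records about the same person are dropped on arrival. Here’s a minimal sketch in Python– the class, the hashing choice, and every name in it are invented for illustration; this is emphatically not the DROP platform’s actual interface…

```python
from dataclasses import dataclass, field
import hashlib

def _fingerprint(identifier: str) -> str:
    # Normalize, then hash: a broker can honor deletions without retaining
    # the raw identifier. (SHA-256 is an illustrative choice, not a spec.)
    return hashlib.sha256(identifier.strip().lower().encode("utf-8")).hexdigest()

@dataclass
class Broker:
    """Toy data broker honoring delete-and-suppress requests."""
    records: dict[str, dict] = field(default_factory=dict)  # keyed by fingerprint
    suppressed: set[str] = field(default_factory=set)

    def process_deletion_request(self, identifier: str) -> None:
        fp = _fingerprint(identifier)
        self.records.pop(fp, None)  # step 1: delete what we currently hold
        self.suppressed.add(fp)     # step 2: block future re-collection

    def ingest(self, identifier: str, record: dict) -> bool:
        # Called for every newly purchased record; suppressed people are dropped.
        fp = _fingerprint(identifier)
        if fp in self.suppressed:
            return False
        self.records[fp] = record
        return True

broker = Broker()
broker.ingest("alice@example.com", {"segment": "fitness-app user"})
broker.process_deletion_request("alice@example.com")
# A freshly purchased data set containing Alice is now rejected on arrival:
assert broker.ingest("Alice@Example.com", {"segment": "insurance shopper"}) is False
```

The point of the design is the second step: without the suppression check, a deleted profile would simply reappear the next time the broker bought a data set that included you.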

Eminently worth reading in full: “So What if They Have My Data?”

See also: “Why Do We Care So Much About Privacy?” (source of the image above) in which Louis Menand suggests that our concern should be with the “weaponization” of data…

* Daniel J. Solove, Nothing to Hide: The False Tradeoff Between Privacy and Security

###

As we reinforce our rights, we might recall that it was on this date in 1996 that the internet-as-we’ve-come-to-know-it broke big into the mainstream: Yahoo! launched the national campaign that asked “Do You Yahoo?” advertising its web-based search service on national television. The campaign was created by ad agency Black Rocket and Yahoo Marketing Head Karen Edwards (whose many awards for the work include a seat in the Advertising Hall of Achievement).

An early spot from the campaign…

Written by (Roughly) Daily

April 25, 2026 at 1:00 am

“The present is pregnant with the future”*…

The estimable Tim O’Reilly uses scenario planning to create an insightful look at AI, our futures, and the choices that will define them…

We all read it in the daily news. The New York Times reports that economists who once dismissed the AI job threat are now taking it seriously. In February, Jack Dorsey cut 40% of Block’s workforce, telling shareholders that “intelligence tools have changed what it means to build and run a company.” Block’s stock rose 20%. Salesforce has shed thousands of customer support workers, saying AI was already doing half the work. And a Stanford study found that software developers aged 22 to 25 saw employment drop nearly 20% from its peak, while developers over 26 were doing fine.

But how are we to square this news with a Vanguard study that found that the 100 occupations most exposed to AI were actually outperforming the rest of the labor market in both job growth and wages, and a rigorous NBER study of 25,000 Danish workers that found zero measurable effect of AI on earnings or hours?

Other studies could contribute to either side of the argument. For example, PwC’s 2025 Global AI Jobs Barometer, analyzing close to a billion job ads across six continents, found that workers with AI skills earn a 56% wage premium, and that productivity growth has nearly quadrupled in the industries most exposed to AI.

This is exactly the kind of contradictory, uncertain landscape that scenario planning was designed for. Scenario planning doesn’t ask you to predict what the future will be. It asks you to imagine divergent possible futures and to develop a strategy that improves your odds of success across all of them. I’ve used it many times at O’Reilly and have written about it before with COVID and climate change as illustrative examples. The argument between those who say AI will cause mass unemployment and those who insist technology always creates more jobs than it destroys is a debate that will only be resolved by time. Both sides have evidence. Both are probably right at some level. And both framings are not terribly helpful for anyone trying to figure out what to do next…

[O’Reilly explains the scenario approach, then applies it to our future with AI (see the image above), astutely assessing the conflicting signals that we’ve been experiencing; he explores the “robust strategy” for our uncertain future (strategic choices that make sense regardless of which future unfolds); then he concludes…]

… I’ll return to the theme that I sounded in my book WTF? What’s the Future and Why It’s Up To Us.

Every time a company uses AI to do what it was already doing with fewer people, it is making a choice for the lower half of the scenario grid. Every time a company uses AI to do something that wasn’t previously possible, to serve a customer who wasn’t previously served, to solve a problem that wasn’t previously solvable, it is making a choice for the upper half. These choices compound, for good or ill. An economy that uses AI primarily for efficiency will slowly hollow itself out.

Looking at the news from the future, both sets of signals are present. The question is which will dominate. AI will give us both the Augmentation Economy and the Displacement Crisis, in different measures in different places, depending on the choices we make.

Scenario planning teaches us that we don’t have to predict which future we’ll get. We do have to prepare for a very uncertain future. But the robust strategy, the one that works across every quadrant, is to focus on doing more, not just doing the same with less, and to find ways that human taste still matters in what is created. As long as there is unmet demand, as long as there are problems we haven’t solved and people we haven’t served, AI will augment human work rather than replacing it. It’s only when we stop looking for new things to do that the machines come for the jobs…
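
For readers who want to see the bones of the method: the “robust strategy” test can be reduced to a small calculation– score each candidate strategy in every quadrant of the scenario grid, then prefer the strategy with the best worst case, rather than the one that wins in your favorite forecast. A minimal sketch (the strategies, quadrant names, and payoffs below are all invented for illustration; this is the generic maximin calculation, not O’Reilly’s actual model)…

```python
# Hypothetical payoffs for three strategies across the four quadrants of a
# 2x2 scenario grid. All names and numbers are invented for illustration.
PAYOFFS = {
    "cut costs with AI":       {"augmentation boom": 2,  "displacement crisis": -3,
                                "slow diffusion": 1,     "regulated plateau": -1},
    "ignore AI":               {"augmentation boom": -2, "displacement crisis": 0,
                                "slow diffusion": 2,     "regulated plateau": 1},
    "use AI to do new things": {"augmentation boom": 3,  "displacement crisis": 1,
                                "slow diffusion": 1,     "regulated plateau": 2},
}

def robust_strategy(payoffs: dict[str, dict[str, int]]) -> str:
    # Maximin: choose the strategy whose *worst* scenario outcome is best.
    # Robustness here means "never does badly," not "wins the most."
    return max(payoffs, key=lambda strategy: min(payoffs[strategy].values()))

print(robust_strategy(PAYOFFS))  # -> "use AI to do new things"
```

Note what the calculation rewards: in this toy grid, “use AI to do new things” wins not because it is best in any single future, but because it is the only strategy that never does badly– which is exactly the structure of O’Reilly’s “do more, not just the same with less” conclusion.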

Eminently worth reading in full. Indeed, speaking as a long-time scenario planner, your correspondent can only wish that everyone who wields “scenarios” applies the approach as appropriately, adroitly, and acutely as Tim has: “Scenario Planning for AI and the ‘Jobless Future‘,” from @timoreilly.bsky.social.

* Voltaire

###

As we take the long view, we might send formative birthday greetings to Mark Pinsker; he was born on this date in 1925. A mathematician, he made important contributions to the fields of information theory, probability theory, coding theory, ergodic theory, mathematical statistics, and communication networks. This work, which helped lay the foundation for AI-as-we-know-it, earned him the IEEE Claude E. Shannon Award in 1978 and the IEEE Richard W. Hamming Medal in 1996, among other honors.

source

“There are few creatures more remarkable than the lowly slime mold”*…

… nor, perhaps, more beautiful…

We’ve looked before at the “intelligent” accomplishments of the humble slime mold, and wondered what they might mean and what they might teach us. Photographer Barry Webb invites us to appreciate their splendor…

Blown wildly out of proportion in large format, the slime molds that British photographer Barry Webb captures seem atmospheric and sculptural. Stemonitis, for example, looks like dozens of thin pieces of wire with their ends coated in colored wax. But this fungi-like form is one of hundreds of kinds of slime mold, and it typically only reaches a height of about two centimeters at the most. Thanks to Webb’s macro photos, we glimpse a phenomenally beautiful world up-close that is otherwise virtually invisible.

Scientists have documented hundreds of these organisms, which aren’t actually related to plants, fungi, animals, or molds—despite the name. They comprise a unique group unto themselves, more closely related to amoebas. And new discoveries are being made all the time. From mottled gray bulbs that look like snow-covered trees to pink, coral-like tendrils, Webb chronicles a huge array of colors and shapes. He also consistently submits images to local and national botanical records so that researchers have access to high-resolution imagery…

“Barry Webb Documents a Marvelous, Macro Array of Colorful Slime Molds,” from @thisiscolossal.com.

More of Webb’s portraits of slime mold on his site.

* Brandon Keim (in “Complexity Theory in Icky Action: Meet the Slime Mold“)

###

As we get small, we might send microscopic greetings to William Ian Beardmore (W. I. B.) Beveridge; he was born on this date in 1908. A microbiologist and veterinarian who served as director of the Institute of Animal Pathology at Cambridge, he identified the origin of the Great Influenza (the Spanish Flu pandemic, 1918–19)– a strain of swine flu.

WIB Beveridge

source

Happy Shakespeare’s Birthday!

While there is no way to know the Bard’s birth date with certainty, his baptism was recorded at Stratford-upon-Avon on April 26, 1564, and three days was the then-customary wait between birth and baptism– suggesting an April 23 birth. In any case, we do know with some certainty that Shakespeare died on this date in 1616.

Written by (Roughly) Daily

April 23, 2026 at 1:00 am