Posts Tagged ‘privacy’
“Privacy is rarely lost in one fell swoop. It is usually eroded over time, little bits dissolving almost imperceptibly until we finally begin to notice how much is gone.”*…
… And now, indeed, we’re beginning to notice. Hana Lee Goldin surveys the state of play– who’s buying our personal information, what they’re using it for, and how the system works behind the screen– and considers our options…
Sometime in the mid-2000s, most of us started handing over pieces of ourselves to the internet without giving the exchange a second thought. We created email accounts, signed up for social media, bought things online, downloaded apps, swiped loyalty cards, connected fitness trackers, stored photos in the cloud, and agreed to terms of service that almost none of us have ever read in full. We did this thousands of times over two decades and counting, and each interaction felt small enough to be inconsequential.
But the accumulation is enormous. More than 6 billion people now use the internet, and each one makes an estimated 5,000 digital interactions per day. Most of those interactions happen without our conscious awareness: a GPS ping, a page load, an app opening, a browser cookie refreshing, a device checking in with a cell tower. The average person in 2010 made an estimated 298 digital interactions per day. In fifteen years, that number multiplied more than sixteenfold. Those digital interactions produce records that can persist indefinitely– stored, copied, indexed, bought, sold, and combined with other records to build profiles of extraordinary detail.
If we’ve been online since the late 1990s or early 2000s, our data footprint can include social media accounts we’ve created, online purchases we’ve made, forums we’ve posted in, loyalty cards we’ve used, and apps we’ve installed going back decades. Some of that information lives on platforms we’ve long forgotten. Some of it was collected by companies that have since been acquired or dissolved, with our data potentially passing to successor entities we’ve never heard of. The digital life most of us have been living for 15 to 25 years has produced a layered, evolving archive that only grows more valuable to the people who buy and sell it as time goes on.
Most of us sense that something is off about all of this. In a 2023 survey, Pew Research found that roughly eight in ten Americans feel they have little to no control over the data companies collect about them, 71% are concerned about government data use, and 67% say they understand little to nothing about what companies are doing with their personal information. The concern is real and widespread. And so is the feeling of helplessness: 60% of Americans believe it’s impossible to go through daily life without having their data tracked. The unease is there. What’s missing is a clear picture of what’s happening on the other side of the transaction…
[Goldin explains what data is being collected and shared, and by whom; how the data is managed and trafficked; how it’s being used (by insurance and financial companies, employers and landlords, retailers, AI companies, governments, and criminals); and how “inferred” data is used to augment the “hard” data. It’s chilling. She then puts the issue into context, and discusses what we can– and cannot– do about it…]
… The philosopher Helen Nissenbaum has a framework for what’s happening here: contextual integrity. The idea is that privacy isn’t about secrecy. We share information willingly all the time, when the context fits. We tell our doctor about a health condition because we expect that information to stay within the medical relationship. We search for symptoms on a health website because we assume that search won’t follow us into an insurance application. In the current data economy, that’s exactly the kind of boundary that dissolves, because the company collecting the data and the company buying it are operating in completely different contexts.
This is an information literacy problem as much as a privacy problem. Information literacy is usually framed around consumption: evaluating sources, questioning claims, recognizing bias in what we read and watch. But every time we interact with a digital service, we’re also producing information: generating a record that will be read, interpreted, scored, and acted on by organizations we may never interact with directly. Many of us have gotten better at questioning the information that comes at us: checking sources, noticing bias, and recognizing when something is trying to sell us a conclusion. But we haven’t developed equivalent habits around the information that flows from us: where it goes after we hand it over, who reads the record, what incentives they have, and what conclusions they draw. The gap between what we think we’re consenting to and what we’ve agreed to in practice is where the real exposure lives, and the system is designed to keep that gap invisible.
One of the reasons the “so what” question is hard to answer with action is that opting out of data collection often means opting out of participation. Declining a social media platform’s terms of service means not using the platform. Refusing location permissions can mean losing access to navigation, ride-sharing, weather, and delivery apps. Choosing not to create an account can mean paying more, seeing less, or being locked out of services that have become essential infrastructure for work, communication, healthcare, banking, and education.
The architecture of digital consent treats data sharing as a binary: agree to the terms or don’t use the product. There’s rarely a middle option that allows us to use a service while limiting what data gets collected and where it goes. The result is that the “choice” to share data often functions as a condition of entry into daily life rather than an informed negotiation. We’re not handing over data because we’ve weighed the tradeoff and decided it’s fair. We’re handing it over because the alternative is exclusion from services we rely on.
This is the structural context behind the Pew Research Center finding that more than half of Americans believe it’s impossible to go through daily life without being tracked. For many of us, it isn’t possible, at least not without significant inconvenience or sacrifice. The question isn’t whether we can avoid data collection entirely, because for the vast majority of people who participate in modern life, the answer is no. The question is whether we can make more informed decisions within the constraints we’re operating in, and whether the system can be pushed – through regulation, through market pressure, through better tools – toward something more transparent.
California’s Delete Act, which took effect in January 2026, is the strongest example of what’s emerging. It created a platform called DROP (Delete Request and Opt-Out Platform) that lets California residents submit a single deletion request to every registered data broker in the state. Brokers are required to process those requests, maintain suppression lists to prevent re-collection, and check the platform regularly for new requests. The European Union’s GDPR provides similar individual rights, and a handful of other U.S. states have enacted their own privacy laws with varying levels of protection. But the coverage is uneven: what’s available to a California or EU resident may not extend to someone in a state without comparable legislation.
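To make the mechanics concrete, here is a minimal, purely illustrative sketch (in Python) of the workflow the law describes on the broker side: pull a batch of deletion requests, delete matching records, and keep a suppression list so the same person isn’t quietly re-added later. Every name below– the class, the record keys, the request format– is invented for illustration; it does not model the actual DROP interface.

```python
# Purely illustrative model of the broker-side workflow the Delete Act describes:
# process deletion requests, delete matching records, and maintain a suppression
# list that blocks re-collection. All names here are hypothetical, not the DROP API.
from dataclasses import dataclass, field


@dataclass
class BrokerDatabase:
    records: dict[str, dict] = field(default_factory=dict)   # consumer_id -> profile
    suppression_list: set[str] = field(default_factory=set)  # ids that must never be re-added

    def process_deletion_requests(self, requests: list[str]) -> None:
        """Handle a batch of deletion requests pulled from the opt-out platform."""
        for consumer_id in requests:
            self.records.pop(consumer_id, None)     # delete what we hold
            self.suppression_list.add(consumer_id)  # and remember not to re-collect

    def ingest(self, consumer_id: str, profile: dict) -> bool:
        """Add newly collected data unless the consumer is on the suppression list."""
        if consumer_id in self.suppression_list:
            return False
        self.records[consumer_id] = profile
        return True


# Example: a request arrives, the record is deleted, and later re-collection is blocked.
db = BrokerDatabase(records={"abc123": {"zip": "94110", "interests": ["fitness"]}})
db.process_deletion_requests(["abc123"])
assert "abc123" not in db.records
assert db.ingest("abc123", {"zip": "94110"}) is False
```

The point of the suppression list is the part that matters most in practice: deletion alone would last only until the next data purchase re-populates the profile.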
Some services now automate parts of the opt-out process, submitting removal requests to dozens of brokers on our behalf. These can’t erase the data trail entirely, but they can narrow what’s actively available for sale.
Beyond deletion, there are smaller choices that reduce how much new data we generate. We can audit which apps have permission to track our location or access our contacts, since a surprising amount of behavioral data comes from apps that don’t need those permissions to function. We can treat “sign in with Google” and “sign in with Facebook” buttons as what they are: data-sharing agreements that can link a new service to an existing profile. And we can glance at the first few lines of a privacy policy before agreeing, looking for some version of “we may share your information with our partners,” where “partners” just means anyone willing to pay.
Most of us don’t read privacy policies, and the policies aren’t built to be read. They average thousands of words of dense legal language filled with terms like “legitimate interest,” “data processor,” and “de-identified data.” Studies consistently put them at a late high school to early college reading level (grade 12 to 14), but the difficulty goes beyond reading level: the concepts are abstract, the volume of agreements we encounter is enormous, and the design of the consent process itself pushes us through as fast as possible. Pre-checked boxes, auto-scrolling agreement windows, “accept all” buttons positioned prominently while “customize settings” options sit behind additional clicks. These are dark patterns, design choices that make the path of least resistance the path of maximum data sharing.
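The grade-12-to-14 figure comes from standard readability formulas applied to policy text. Below is a minimal sketch (in Python) of how such an estimate can be made– and how the “we may share your information with our partners” language can be flagged before we click “accept.” The filename `privacy_policy.txt`, the phrase list, and the crude vowel-group syllable heuristic are all illustrative assumptions, so treat the output as a rough signal rather than a measurement.

```python
# Rough readability and sharing-language check for a locally saved privacy policy.
import re

# Illustrative phrase list; real policies use many variations of this language.
SHARING_PHRASES = [
    "share your information",
    "share your personal information",
    "with our partners",
    "third parties",
    "affiliates",
]

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

def flag_sharing_language(text: str) -> list[str]:
    """Return the sharing-related phrases that appear in the policy text."""
    lower = text.lower()
    return [phrase for phrase in SHARING_PHRASES if phrase in lower]

if __name__ == "__main__":
    # "privacy_policy.txt" is a hypothetical local copy of a policy you're about to accept.
    with open("privacy_policy.txt", encoding="utf-8") as f:
        policy = f.read()
    print(f"Estimated Flesch-Kincaid grade level: {fk_grade(policy):.1f}")
    print("Data-sharing language found:", flag_sharing_language(policy) or "none")
```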
The result is a gap between the moment we share a piece of information and the moment that information shapes a decision about our lives. We don’t connect the app to the insurance premium or the loyalty card to the rental application because the chain of custody between them is long, complex, and designed to stay out of view.
The same critical thinking we’ve learned to apply to the information flowing toward us (checking sources, questioning claims, looking for bias) applies to the information flowing from us: who’s collecting this, what will they do with it, who else will see it, and what did we agree to? The difference is that in the data economy, we’re the product being evaluated, and the questions are being asked about us rather than by us.
So can we get it back? Not entirely. Data that’s already been collected, copied, sold, and processed across multiple systems can’t be fully recalled. What we can do is reduce what’s actively available for sale, slow the flow of new data going forward, and take advantage of legal tools that didn’t exist a few years ago. The archive of our past digital lives is too distributed to undo, but the file is still being written, and we have more say over the next page than we did over the last twenty years of them.
So what if they have our data? The tradeoff extends well beyond better ads. It reaches into the prices we’re charged, the credit we’re offered, the jobs we’re considered for, the insurance premiums we pay, the AI systems trained on our behavior, the accuracy of the profiles used to make decisions about our lives, and the degree to which government agencies can monitor our movements without a warrant. Every new service we sign up for, every permission we grant, and every terms-of-service agreement we accept adds another layer to that file. We can’t close the file entirely, but we can make more informed decisions about what goes into it next…
Eminently worth reading in full: “So What if They Have My Data?”
See also: “Why Do We Care So Much About Privacy?” (source of the image above) in which Louis Menand suggests that our concern should be with the “weaponization” of data…
* Daniel J. Solove, Nothing to Hide: The False Tradeoff Between Privacy and Security
###
As we reinforce our rights, we might recall that it was on this date in 1996 that the internet-as-we’ve-come-to-know-it broke big into the mainstream: Yahoo! launched the national campaign that asked “Do You Yahoo?”, advertising its web-based search service on national television. The campaign was created by ad agency Black Rocket and Yahoo marketing head Karen Edwards (whose many honors for the work include induction into the Advertising Hall of Achievement).
An early spot from the campaign…
“He is seen, but he does not see; he is the object of information, never a subject in communication”*…

We’ve looked before at digital regimes that seem a little too close for comfort to Jeremy Bentham’s notion of the Panopticon. Surveillance has continued to intensify. 404 Media’s Jason Koebler and Joseph Cox bring us up to speed…
It’s nearly impossible not to be watched these days. It can start right at home with your neighbors and their Ring cameras—a company that sold fear to the American public and is now integrating AI to turn entire neighborhoods into networked, automated surveillance systems.
Head out a bit further and you’ll likely be confronted by Flock’s network of cameras that not only track license plates, but also track people’s movements with detailed precision. And as the Trump administration raids cities across the U.S. for undocumented immigrants, tech giants like Palantir are powering tools for ICE, including one called ELITE that helps the agency pick which neighborhoods to raid.
To better understand what exactly we’re looking at in this dystopian hellscape, 404 Media’s Jason Koebler and Joseph Cox joined r/technology for an AMA.
Understandably, people are worried about violations of their privacy by companies and the government. And many wonder, is there any way to go back once we’ve released all this AI-powered, surveillance tech?…
The (lightly edited for clarity) transcript is a bracing– but critically important– read: “From Flock to ICE, Here’s a Breakdown of How You’re Being Watched,” @jasonkoebler.mastodon.social.ap.brid.gy and @josephcox.bsky.social in @404media.co.
* “Bentham’s Panopticon [at top] is the architectural figure of this composition. We know the principle on which it was based: at the periphery, an annular building; at the centre, a tower; this tower is pierced with wide windows that open onto the inner side of the ring; the peripheric building is divided into cells, each of which extends the whole width of the building; they have two windows, one on the inside, corresponding to the windows of the tower; the other, on the outside, allows the light to cross the cell from one end to the other. All that is needed, then, is to place a supervisor in a central tower and to shut up in each cell a madman, a patient, a condemned man, a worker or a schoolboy. By the effect of backlighting, one can observe from the tower, standing out precisely against the light, the small captive shadows in the cells of the periphery… He is seen, but he does not see; he is the object of information, never a subject in communication.” – Michel Foucault, Discipline and Punish: The Birth of the Prison
###
As we feel seen, we might recall that it was on this date in 2000 that the dot-com bust effectively began. Between 1995 and its peak five days earlier, on March 10, 2000, the Nasdaq Composite stock market index rose from 1,006 to 5,048—a 400% gain fueled by the conviction that the internet would render every prior valuation framework obsolete. It did not.
On March 13, 2000, news that Japan had once again entered a recession triggered a global sell-off that disproportionately affected technology stocks. Soon after, Yahoo! and eBay ended merger talks and the Nasdaq fell 2.6%; still, the S&P 500 rose 2.4% as investors shifted from strong-performing technology stocks to poor-performing established stocks. The market held steady on the 14th. Then, on this date 26 years ago, the broader market began to drop… and kept dropping. By the end of the stock market downturn of 2002 (the “second chapter” in the correction that began in 2000), stocks had lost $5 trillion in market capitalization since the peak. At its trough on October 9, 2002, the Nasdaq Composite had dropped to 1,114, down 78% from its peak. It took 15 years for the Nasdaq to regain its March 2000 peak.
“When it comes to privacy and accountability, people always demand the former for themselves and the latter for everyone else”*…
As we contend with “answers” from AIs that, with few exceptions, use source material with neither credit nor recompense, we might ponder the experience of our Gilded Age ancestors…
In 1904, a widow named Elizabeth Peck had her portrait taken at a studio in a small Iowa town. The photographer sold the negatives to Duffy’s Pure Malt Whiskey, a company that avoided liquor taxes for years by falsely advertising its product as medicinal. Duffy’s ads claimed the fantastical: that it cured everything from influenza to consumption, that it was endorsed by clergymen, that it could help you live until the age of 106. The portrait of Peck ended up in one of these dubious ads, published in newspapers across the country alongside what appeared to be her unqualified praise: “After years of constant use of your Pure Malt Whiskey, both by myself and as given to patients in my capacity as nurse, I have no hesitation in recommending it.”
Duffy’s lies were numerous. Peck (misleadingly identified as “Mrs. A. Schuman”) was not a nurse, and she had not spent years constantly slinging back malt beverages. In fact, she fully abstained from alcohol. Peck never consented to the ad.
The camera’s first great age—which began in 1888 when George Eastman debuted the Kodak—is full of stories like this one. Beyond the wonders of a quickly developing art form and technology lay widespread lack of control over one’s own image, perverse incentives to make a quick buck, and generalized fear at the prospect of humiliation and the invasion of privacy…
… Early cameras required a level of technical mastery that evoked mystery—a scientific instrument understood only by professionals.
All of that changed when Eastman invented flexible roll film and debuted the first Kodak camera. Instead of developing their own pictures, customers could mail their devices to the Kodak factory and have their rolls of film developed, printed, and replaced. “You press the button,” Kodak ads promised, “we do the rest.” This leap from obscure science to streamlined service forever transformed the nature of looking and being looked at.
By 1905, less than 20 years after the first Kodak camera debuted, Eastman’s company had sold 1.2 million devices and persuaded nearly a third of the United States’ population to take up photography. Kodak’s record-setting yearly ad spending—$750,000 by the end of the 19th century (roughly $28 million in today’s dollars)—and the rapture of a technology that scratched a timeless itch facilitated the onset of a new kind of mass exposure…
…
… Though newspapers across the country cautioned Americans to “beware the Kodak,” as the cameras were “deadly weapons” and “deadly little boxes,” many were also primary facilitators of the craze. The perfection of halftone printing coincided with the rise of the Kodak and allowed for the mass circulation of images. Newly empowered, newspapers regularly published paparazzi pictures of famous people taken without their knowledge, paying twice as much for them as they did for consensual photos taken in a studio.
Lawmakers and judges responded to the crisis clumsily. Suing for libel was usually the only remedy available to the overexposed. But libel law did not protect against your likeness being taken or used without your permission unless the violation was also defamatory in some way. Though results were middling, one failed lawsuit gained enough notoriety to channel cross-class feelings of exposure into action. A teenage girl named Abigail Roberson noticed her face on a neighbor’s bag of flour, only to learn that the Franklin Mills Flour Company had used her likeness in an ad that had been plastered 25,000 times all over her hometown.
After suffering intense shock and being temporarily bedridden, she sued. In 1902, the New York Court of Appeals rejected her claims and held that the right to privacy did not exist in common law. It based its decision in part on the assertion that the image was not libelous; Chief Judge Alton B. Parker wrote that the photo was “a very good one” that others might even regard as a “compliment to their beauty.” The humiliation, the lack of control over her own image, the unwanted fame—none of that amounted to any sort of actionable claim.
Public outcry at the decision reached a fever pitch, and newspapers filled their pages with editorial indignation. In its first legislative session following the court’s decision and the ensuing outrage, the New York state legislature made history by adopting a narrow “right to privacy,” which prohibited the use of someone’s likeness in advertising or trade without their written consent. Soon after, the Supreme Court of Georgia became the first to recognize this category of privacy claim. Eventually, just about every state court in the country followed Georgia’s lead. The early uses and abuses of the Kodak helped cobble together a right that centered on profiting from the exploitation of someone’s likeness, rather than the exploitation itself.
Not long after asserting that no right to privacy exists in common law, and while campaigning to be the Democratic nominee for president, Parker told the Associated Press, “I reserve the right to put my hands in my pockets and assume comfortable attitudes without being everlastingly afraid that I shall be snapped by some fellow with a camera.” Roberson publicly took him to task over his hypocrisy, writing, “I take this opportunity to remind you that you have no such right.” She was correct then, and she still would be today. The question of whether anyone has the right to be free from exposure and its many humiliations lingers, intensified but unresolved. The law—that reactive, slow thing—never quite catches up to technology, whether it’s been given one year or 100…
Early photographers sold their snapshots to advertisers, who reused the individuals’ likenesses without their permission: “How the Rise of the Camera Launched a Fight to Protect Gilded Age Americans’ Privacy,” from @myHNN and @SmithsonianMag.
The parallels with AI usage issues are obvious. For an example of a step in the right direction, see Tim O’Reilly’s “How to Fix ‘AI’s Original Sin’.”
* David Brin
###
As we ponder the personal, we might recall that it was on this date in 1789 that partisans of the Third Estate, impatient for social and legal reforms (and economic relief) in France, attacked and took control of the Bastille. A fortress in Paris, the Bastille was a medieval armory and political prison; while it held only seven inmates at the time, it resonated with the crowd as a symbol of the monarchy’s abuse of power. Its fall ignited the French Revolution. This date is now observed annually as France’s National Day.
See the estimable Robert Darnton’s “What Was Revolutionary about the French Revolution?”
Happy Bastille Day!

“All human beings have three lives: public, private, and secret”*…
A graphic– and painful– reminder that the latter two are under constant attack…
Think about a personal and private Google search and post it on this website. Something you might not have told the ones dearest to your heart. Google uses these searches to generate a data profile of you to sell on open bidding markets. This website creates a bubble for each search to remind us of all the data collected…
Every time we ask Google, we give it answers about ourselves: “Search TM.”
* Gabriel García Márquez
###
As we Duck (Duck Go), we might recall that it was on this date in 1937 that Hormel introduced Spam. It was the company’s attempt to increase sales of pork shoulder, not at the time a very popular cut. While there are numerous speculations as to the “meaning of the name” (from a contraction of “spiced ham” to “Scientifically Processed Animal Matter”), its true genesis is known to only a small circle of former Hormel Foods executives.
As a result of the difficulty of delivering fresh meat to the front during World War II, Spam became a ubiquitous part of the U.S. soldier’s diet. It became variously referred to as “ham that didn’t pass its physical,” “meatloaf without basic training,” and “Special Army Meat.” Over 150 million pounds of Spam were purchased by the military before the war’s end. During the war and the occupations that followed, Spam was introduced into Guam, Hawaii, Okinawa, the Philippines, and other islands in the Pacific. Immediately absorbed into native diets, it has become a unique part of the history and effect of U.S. influence in the Pacific islands.







