(Roughly) Daily

Posts Tagged ‘privacy’

“He is seen, but he does not see; he is the object of information, never a subject in communication”*…

Plan of Jeremy Bentham’s panopticon prison, drawn by Willey Reveley in 1791 (source)

We’ve looked before at digital regimes that seem a little too close for comfort to Jeremy Bentham‘s notion of the Panopticon. Surveillance has continued to intensify. 404 Media’s Jason Koebler and Joseph Cox bring us up to speed…

It’s nearly impossible not to be watched these days. It can start right at home with your neighbors and their Ring cameras—products of a company that sold fear to the American public and is now integrating AI to turn entire neighborhoods into networked, automated surveillance systems.

Head out a bit further and you’ll likely be confronted by Flock’s network of cameras that not only track license plates, but also track people’s movements with detailed precision. And as the Trump administration raids cities across the U.S. for undocumented immigrants, tech giants like Palantir are powering tools for ICE, including one called ELITE that helps the agency pick which neighborhoods to raid.

To better understand what exactly we’re looking at in this dystopian hellscape, 404 Media’s Jason Koebler and Joseph Cox joined r/technology for an AMA.

Understandably, people are worried about violations of their privacy by companies and the government. And many wonder, is there any way to go back once we’ve released all this AI-powered surveillance tech?…

The (lightly edited for clarity) transcript is a bracing– but critically-important– read: “From Flock to ICE, Here’s a Breakdown of How You’re Being Watched,” @jasonkoebler.mastodon.social.ap.brid.gy and @josephcox.bsky.social in @404media.co.

* “Bentham’s Panopticon [at top] is the architectural figure of this composition. We know the principle on which it was based: at the periphery, an annular building; at the centre, a tower; this tower is pierced with wide windows that open onto the inner side of the ring; the peripheric building is divided into cells, each of which extends the whole width of the building; they have two windows, one on the inside, corresponding to the windows of the tower; the other, on the outside, allows the light to cross the cell from one end to the other. All that is needed, then, is to place a supervisor in a central tower and to shut up in each cell a madman, a patient, a condemned man, a worker or a schoolboy. By the effect of backlighting, one can observe from the tower, standing out precisely against the light, the small captive shadows in the cells of the periphery… He is seen, but he does not see; he is the object of information, never a subject in communication.” – Michel Foucault, Discipline and Punish: The Birth of the Prison

###

As we feel seen, we might recall that it was on this date in 2000 that the dot-com bust effectively began. Between 1995 and its peak on March 10, 2000 (five days before this date), the Nasdaq Composite stock market index rose from 1,006 to 5,048, a roughly 400% gain fueled by the conviction that the internet would render every prior valuation framework obsolete. It did not.

On March 13, 2000, news that Japan had once again entered a recession triggered a global sell-off that disproportionately affected technology stocks. Soon after, Yahoo! and eBay ended merger talks and the Nasdaq fell 2.6%; still, the S&P 500 rose 2.4% as investors shifted from strong-performing technology stocks to poorly performing established stocks. The market held steady on the 14th. Then, on this date 26 years ago, the broader market began to drop… and kept dropping. By the end of the stock market downturn of 2002 (the “second chapter” in the correction that began in 2000), stocks had lost $5 trillion in market capitalization since the peak. At its trough on October 9, 2002, the NASDAQ-100 had dropped to 1,114, down 78% from its peak. It took 15 years for the Nasdaq to regain its March 2000 peak.

source

Written by (Roughly) Daily

March 15, 2026 at 1:00 am

“We live, in fact, in a world starved for solitude, silence, and privacy: and therefore starved for meditation and true friendship”*…

… if then, even more so now. Ben Tarnoff takes off from Lowry Pressly‘s new book to ponder why privacy matters and why we have such trouble even thinking about how to protect it…

… Today, it is harder to keep one’s mind in place. Our thoughts leak through the sieve of our smartphones, where they join the great river of everyone else’s. The consequences, for both our personal and collective lives, are much discussed: How can we safeguard our privacy against state and corporate surveillance? Is Instagram making teen-agers depressed? Is our attention span shrinking?

There is no doubt that an omnipresent Internet connection, and the attendant computerization of everything, is inducing profound changes. Yet the conversation that has sprung up around these changes can sometimes feel a little predictable. The same themes and phrases tend to reappear. As the Internet and the companies that control it have become an object of permanent public concern, the concerns themselves have calcified into clichés. There is an algorithmic quality to our grievances with algorithmic life.

Lowry Pressly’s new book, “The Right to Oblivion: Privacy and the Good Life,” defies this pattern. It is a radiantly original contribution to a conversation gravely in need of new thinking. Pressly, who teaches political science at Stanford, takes up familiar fixations of tech discourse—privacy, mental health, civic strife—but puts them into such a new and surprising arrangement that they are nearly unrecognizable. The effect is like walking through your home town after a tornado: you recognize the buildings, but after some vigorous jumbling they have acquired a very different shape.

Pressly trained as a philosopher, and he has a philosopher’s fondness for sniffing out unspoken assumptions. He finds one that he considers fundamental to our networked era: “the idea that information has a natural existence in human affairs, and that there are no aspects of human life which cannot be translated somehow into data.” This belief, which he calls the “ideology of information,” has an obvious instrumental value to companies whose business models depend on the mass production of data, and to government agencies whose machinery of monitoring and repression relies on the same.

But Pressly also sees the ideology of information lurking in a less likely place—among privacy advocates trying to defend us from digital intrusions. This is because the standard view of privacy assumes there is “some information that already exists,” and what matters is keeping it out of the wrong hands. Such an assumption, for Pressly, is fatal. It “misses privacy’s true value and unwittingly aids the forces it takes itself to be resisting,” he writes. To be clear, Pressly is not opposed to reforms that would give us more power over our data—but it is a mistake “to think that this is what privacy is for.” “Privacy is valuable not because it empowers us to exercise control over our information,” he argues, “but because it protects against the creation of such information in the first place.”

If this idea sounds intriguing but exotic, you may be surprised to learn how common it once was. “A sense that privacy is fundamentally opposed to information has animated public moral discourse on the subject since the very beginning,” Pressly writes…

[Tarnoff recaps Pressly’s brief history of the technologies that changed our relationship to information, from Kodak through CCTV to AI…]

… The reason that Pressly feels so strongly about imposing limits on datafication is not only because of the many ways that data can be used to damage us. It is also because, in his view, we lose something precious when we become information, regardless of how it is used. In the very moment when data are made, Pressly believes, a line is crossed. “Oblivion” is his word for what lies on the other side.

Oblivion is a realm of ambiguity and potential. It is fluid, formless, and opaque. A secret is an unknown that can become known. Oblivion, by contrast, is unknowable: it holds those varieties of human experience which are “essentially resistant to articulation and discovery.” It is also a place beyond “deliberate, rational control,” where we lose ourselves or, as Pressly puts it, “come apart.” Sex and sleep are two of the examples he provides. Both bring us into the “unaccountable regions of the self,” those depths at which our ego dissolves and about which it is difficult to speak in definite terms. Physical intimacy is hard to render in words—“The experience is deflated by description,” Pressly observes—and the same is notoriously true of the dreams we have while sleeping, which we struggle to narrate, or even to remember, on waking.

Oblivion is fragile, however. When it comes into contact with information, it disappears. This is why we need privacy: it is the protective barrier that keeps oblivion safe from information. Such protection insures that “one can actually enter into oblivion from time to time, and that it will form a reliably available part of the structure of one’s society.”

But why do we need to enter into oblivion from time to time, and what good does it do us? Pressly gives a long list of answers, drawn not only from the Victorians but also from the work of Michel Foucault, Roland Barthes, Gay Talese, Jorge Luis Borges, and Hannah Arendt. One is that oblivion is restorative: we come apart in order to come back together. (Sleep is a case in point; without a nightly suspension of our rational faculties, we go nuts.) Another is the notion that oblivion is integral to the possibility of personal evolution. “The main interest in life and work is to become someone else that you were not in the beginning,” Foucault writes. To do so, however, you must believe that the future can be different from the past—a belief that becomes harder to sustain when one is besieged by information, as the obsessive documentation of life makes it “more fixed, more factual, with less ambiguity and life-giving potentiality.” Oblivion, by setting aside a space for forgetting, offers a refuge from this “excess of memory,” and thus a standpoint from which to imagine alternative futures.

Oblivion is also essential for human dignity. Because we cannot be fully known, we cannot be fully instrumentalized. Immanuel Kant urged us to treat others as ends in themselves, not merely as means. For Pressly, our obscurities are precisely what endow us with a sense of value that exceeds our usefulness. This, in turn, helps assure us that life is worth living, and that our fellow human beings are worthy of our trust. “There can be no trust of any sort without some limits to knowledge,” Pressly writes…

… Psychoanalysis first emerged in the late nineteenth century, in parallel with the idea of privacy. This was a period when the boundary between public and private was being redrawn, not only with the onslaught of handheld cameras but also, more broadly, because of the dislocating forces of what historians call the Second Industrial Revolution. Urbanization pulled workers from the countryside and packed them into cities, while mass production meant they could buy (rather than make) most of what they needed. These developments weakened the institution of the family, which lost its primacy as people fled rural kin networks and the production of life’s necessities moved from the household to the factory.

In response, a new freedom appeared. For the first time, the historian Eli Zaretsky observes, “personal identity became a problem and a project for individuals.” If you didn’t have your family to tell you who you were, you had to figure it out yourself. Psychoanalysis helped the moderns to make sense of this question, and to try to arrive at an answer.

More than a century later, the situation looks different. If an earlier stage of capitalism laid the material foundations for a new experience of individuality, the present stage seems to be producing the opposite. In their taverns, theatres, and dance halls, the city dwellers of the Second Industrial Revolution created a culture of social and sexual experimentation. Today’s young people are lonely and sexless. At least part of the reason is the permanent connectivity that, as Pressly argues, conveys the feeling that “one’s time and attention—that is to say, one’s life—are not entirely one’s own.”

The modernist city promised anonymity, reinvention. The Internet is devoid of such pleasures. It is more like a village: a place where your identity is fixed. Online, we are the sum of what we have searched, clicked, liked, and bought. But there are futures beyond those predicted through statistical extrapolations from the present. In fact, the past is filled with the arrival of such futures: those blind corners when no amount of information could tell you what was coming. History has a habit of humbling its participants. Somewhere in its strange rhythms sits the lifelong work of making a life of one’s own…

We often want to keep some information to ourselves. But information itself may be the problem: “What Is Privacy For?” from @bentarnoff in @NewYorker. (Possible paywall; archived link here.)

Pair with the two (marvelous, provocative) documentary series from Adam Curtis and the BBC: The Century of the Self and HyperNormalisation, both of which are available on YouTube.

* C. S. Lewis

###

As we make room, we might send painfully-observant birthday greetings to Lenny Bruce; he was born on this date in 1925. A comedian, social critic, and satirist, he was ranked (in a 2017 Rolling Stone poll) the third best stand-up comic of all time– behind Richard Pryor and George Carlin, both of whom credited Bruce as an influence.

source

Written by (Roughly) Daily

October 13, 2024 at 1:00 am

“When it comes to privacy and accountability, people always demand the former for themselves and the latter for everyone else”*…

As we contend with “answers” from AIs that, with few exceptions, use source material with neither credit nor recompense, we might ponder the experience of our Gilded Age ancestors…

In 1904, a widow named Elizabeth Peck had her portrait taken at a studio in a small Iowa town. The photographer sold the negatives to Duffy’s Pure Malt Whiskey, a company that avoided liquor taxes for years by falsely advertising its product as medicinal. Duffy’s ads claimed the fantastical: that it cured everything from influenza to consumption, that it was endorsed by clergymen, that it could help you live until the age of 106. The portrait of Peck ended up in one of these dubious ads, published in newspapers across the country alongside what appeared to be her unqualified praise: “After years of constant use of your Pure Malt Whiskey, both by myself and as given to patients in my capacity as nurse, I have no hesitation in recommending it.”

Duffy’s lies were numerous. Peck (misleadingly identified as “Mrs. A. Schuman”) was not a nurse, and she had not spent years constantly slinging back malt beverages. In fact, she fully abstained from alcohol. Peck never consented to the ad.

The camera’s first great age—which began in 1888 when George Eastman debuted the Kodak—is full of stories like this one. Beyond the wonders of a quickly developing art form and technology lay widespread lack of control over one’s own image, perverse incentives to make a quick buck, and generalized fear at the prospect of humiliation and the invasion of privacy…

… Early cameras required a level of technical mastery that evoked mystery—a scientific instrument understood only by professionals.

All of that changed when Eastman invented flexible roll film and debuted the first Kodak camera. Instead of developing their own pictures, customers could mail their devices to the Kodak factory and have their rolls of film developed, printed, and replaced. “You press the button,” Kodak ads promised, “we do the rest.” This leap from obscure science to streamlined service forever transformed the nature of looking and being looked at.

By 1905, less than 20 years after the first Kodak camera debuted, Eastman’s company had sold 1.2 million devices and persuaded nearly a third of the United States’ population to take up photography. Kodak’s record-setting yearly ad spending—$750,000 by the end of the 19th century (roughly $28 million in today’s dollars)—and the rapture of a technology that scratched a timeless itch facilitated the onset of a new kind of mass exposure…

… Though newspapers across the country cautioned Americans to “beware the Kodak,” as the cameras were “deadly weapons” and “deadly little boxes,” many were also primary facilitators of the craze. The perfection of halftone printing coincided with the rise of the Kodak and allowed for the mass circulation of images. Newly empowered, newspapers regularly published paparazzi pictures of famous people taken without their knowledge, paying twice as much for them as they did for consensual photos taken in a studio.

Lawmakers and judges responded to the crisis clumsily. Suing for libel was usually the only remedy available to the overexposed. But libel law did not protect against your likeness being taken or used without your permission unless the violation was also defamatory in some way. Though results were middling, one failed lawsuit gained enough notoriety to channel cross-class feelings of exposure into action. A teenage girl named Abigail Roberson noticed her face on a neighbor’s bag of flour, only to learn that the Franklin Mills Flour Company had used her likeness in an ad that had been plastered 25,000 times all over her hometown.

After suffering intense shock and being temporarily bedridden, she sued. In 1902, the New York Court of Appeals rejected her claims and held that the right to privacy did not exist in common law. It based its decision in part on the assertion that the image was not libelous; Chief Judge Alton B. Parker wrote that the photo was “a very good one” that others might even regard as a “compliment to their beauty.” The humiliation, the lack of control over her own image, the unwanted fame—none of that amounted to any sort of actionable claim.

Public outcry at the decision reached a fever pitch, and newspapers filled their pages with editorial indignation. In its first legislative session following the court’s decision and the ensuing outrage, the New York state legislature made history by adopting a narrow “right to privacy,” which prohibited the use of someone’s likeness in advertising or trade without their written consent. Soon after, the Supreme Court of Georgia became the first to recognize this category of privacy claim. Eventually, just about every state court in the country followed Georgia’s lead. The early uses and abuses of the Kodak helped cobble together a right that centered on profiting from the exploitation of someone’s likeness, rather than the exploitation itself.

Not long after asserting that no right to privacy exists in common law, and while campaigning to be the Democratic nominee for president, Parker told the Associated Press, “I reserve the right to put my hands in my pockets and assume comfortable attitudes without being everlastingly afraid that I shall be snapped by some fellow with a camera.” Roberson publicly took him to task over his hypocrisy, writing, “I take this opportunity to remind you that you have no such right.” She was correct then, and she still would be today. The question of whether anyone has the right to be free from exposure and its many humiliations lingers, intensified but unresolved. The law—that reactive, slow thing—never quite catches up to technology, whether it’s been given one year or 100…

Early photographers sold their snapshots to advertisers, who reused the individuals’ likenesses without their permission: “How the Rise of the Camera Launched a Fight to Protect Gilded Age Americans’ Privacy,” from @myHNN and @SmithsonianMag.

The parallels with AI usage issues are obvious. For an example of a step in the right direction, see Tim O’Reilly‘s “How to Fix ‘AI’s Original Sin.’”

* David Brin

###

As we ponder the personal, we might recall that it was on this date in 1789 that partisans of the Third Estate, impatient for social and legal reforms (and economic relief) in France, attacked and took control of the Bastille. A fortress in Paris, the Bastille was a medieval armory and political prison; while it held only seven inmates at the time, it resonated with the crowd as a symbol of the monarchy’s abuse of power. Its fall ignited the French Revolution. This date is now observed annually as France’s National Day.

See the estimable Robert Darnton’s “What Was Revolutionary about the French Revolution?”

Happy Bastille Day!

Storming of the Bastille, by Jean-Pierre Houël

source

Written by (Roughly) Daily

July 14, 2024 at 1:00 am

“All human beings have three lives: public, private, and secret”*…

A graphic– and painful– reminder that the latter two are under constant attack…

Think about a personal and private Google search and post it on this website. Something you might not have told the ones dearest to your heart. Google uses these searches to generate a data profile of you to sell on open bidding markets. This website creates a bubble for each search to remind us of all the data collected…

Every time we ask Google, we give it answers about ourselves: “Search TM.”

* Gabriel García Márquez

###

As we Duck (Duck Go), we might recall that it was on this date in 1937 that Hormel introduced Spam. It was the company’s attempt to increase sales of pork shoulder, not at the time a very popular cut. While there are numerous speculations as to the “meaning of the name” (from a contraction of “spiced ham” to “Scientifically Processed Animal Matter”), its true genesis is known to only a small circle of former Hormel Foods executives.

As a result of the difficulty of delivering fresh meat to the front during World War II, Spam became a ubiquitous part of the U.S. soldier’s diet. It became variously referred to as “ham that didn’t pass its physical,” “meatloaf without basic training,” and “Special Army Meat.” Over 150 million pounds of Spam were purchased by the military before the war’s end. During the war and the occupations that followed, Spam was introduced into Guam, Hawaii, Okinawa, the Philippines, and other islands in the Pacific. Immediately absorbed into native diets, it has become a unique part of the history and effect of U.S. influence in the Pacific islands.

source

Written by (Roughly) Daily

July 5, 2024 at 1:00 am

“Neither privacy nor publicity is dead, but technology will continue to make a mess of both”*…

Indeed, as neurotech advances, privacy concerns grow. Devices that connect brains to computers are increasingly sophisticated. But as Fletcher Reveley asks, can the nascent neurorights movement catch up?…

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT-1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared the prediction to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”
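For the technically curious, the decoding loop described here is essentially a guided search: a language model proposes likely next words, an encoding model predicts the fMRI response each candidate phrase would evoke, and a beam search keeps the candidates whose predicted responses best match the measured scans. Below is a minimal, hypothetical Python sketch of that idea; propose and predict_fmri are illustrative stand-ins, not functions from the researchers’ actual code.

```python
import numpy as np

def decode(observed, propose, predict_fmri, beam_width=10, n_steps=20):
    """Beam-search sketch: find the word sequence whose predicted brain
    responses best match `observed`, an (n_steps x voxels) fMRI array."""
    beam = [([], 0.0)]  # (word list, cumulative score)
    for t in range(n_steps):
        candidates = []
        for words, score in beam:
            # A language model suggests plausible continuations (hypothetical helper).
            for word in propose(words):
                extended = words + [word]
                # An encoding model predicts the scan this phrase would
                # evoke at step t (hypothetical helper).
                predicted = predict_fmri(extended)
                # Score by negative squared error against the real scan;
                # higher is better.
                err = float(np.sum((predicted - observed[t]) ** 2))
                candidates.append((extended, score - err))
        # Keep only the best-scoring partial transcripts.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beam = candidates[:beam_width]
    return " ".join(beam[0][0])
```

The real system is far more elaborate (per-subject training, GPT-1 as the proposer, voxel-wise encoding models), but the structure is the same: generate, predict, compare, keep the best.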

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time…

“Advances in Mind-Decoding Technologies Raise Hopes (and Worries),” from @FletcherReveley in @undarkmag. Eminently worth reading in full.

And to complicate things further (though appropriately), see Erik Hoel‘s “Neuroscience is pre-paradigmatic. Consciousness is why.”

* danah boyd

###

As we ponder the personal, we might send telling birthday greetings to Rolla Harger; he was born on this date in 1890. Chairman of the biochemistry and pharmacology department at the Indiana University School of Medicine, he invented (in 1931) and patented (in 1936) the first field test for inebriation, the Drunkometer (the forerunner of the Breathalyzer), used to test for driving under the influence.

Harger overseeing a test of his kit (source)

Written by (Roughly) Daily

January 14, 2024 at 1:00 am