(Roughly) Daily

Posts Tagged ‘privacy’

“Foresight begins when we accept that we are now creating a civilization of risk”*…

There have been a handful of folks– Vernor Vinge, Don Michael, Sherry Turkle, to name a few– who were, decades ago, exceptionally foresightful about the technologically-mediated present in which we live. Philip Agre belongs in their number…

In 1994 — before most Americans had an email address or Internet access or even a personal computer — Philip Agre foresaw that computers would one day facilitate the mass collection of data on everything in society.

That process would change and simplify human behavior, wrote the then-UCLA humanities professor. And because that data would be collected not by a single, powerful “big brother” government but by lots of entities for lots of different purposes, he predicted that people would willingly part with massive amounts of information about their most personal fears and desires.

“Genuinely worrisome developments can seem ‘not so bad’ simply for lacking the overt horrors of Orwell’s dystopia,” wrote Agre, who has a doctorate in computer science from the Massachusetts Institute of Technology, in an academic paper.

Nearly 30 years later, Agre’s paper seems eerily prescient, a startling vision of a future that has come to pass in the form of a data industrial complex that knows no borders and few laws. Data collected by disparate ad networks and mobile apps for myriad purposes is being used to sway elections or, in at least one case, to out a gay priest. But Agre didn’t stop there. He foresaw the authoritarian misuse of facial recognition technology, he predicted our inability to resist well-crafted disinformation and he foretold that artificial intelligence would be put to dark uses if not subjected to moral and philosophical inquiry.

Then, no one listened. Now, many of Agre’s former colleagues and friends say they’ve been thinking about him more in recent years, and rereading his work, as pitfalls of the Internet’s explosive and unchecked growth have come into relief, eroding democracy and helping to facilitate a violent uprising on the steps of the U.S. Capitol in January.

“We’re living in the aftermath of ignoring people like Phil,” said Marc Rotenberg, who edited a book with Agre in 1998 on technology and privacy, and is now founder and executive director of the Center for AI and Digital Policy…

As Reed Albergotti (@ReedAlbergotti) explains, better late than never: “He predicted the dark side of the Internet 30 years ago. Why did no one listen?”

Agre’s papers are here.

* Jacques Ellul

###

As we consider consequences, we might recall that it was on this date in 1858 that Queen Victoria sent the first official telegraph message across the Atlantic Ocean from London to U.S. President James Buchanan in Washington, D.C.– initiating a new era in global communications.

Transmission of the message began at 10:50am and wasn’t completed until 4:30am the next day, taking nearly eighteen hours to reach Newfoundland, Canada. Ninety-nine words, containing five hundred nine letters, were transmitted at a rate of about two minutes per letter.

After White House staff had satisfied themselves that it wasn’t a hoax, the President sent a reply of 143 words in a relatively rapid ten hours. Without the cable, a dispatch in one direction alone would have taken roughly twelve days by the speediest combination of inland telegraph and fast steamer.
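Those rates are easy to sanity-check. A quick back-of-the-envelope sketch, using only the figures reported above:

```python
# Sanity check of the reported transatlantic telegraph figures.
# All numbers come from the account above; this is plain arithmetic,
# not a model of the cable itself.

queen_letters = 509                # 99 words, 509 letters
queen_minutes = 17 * 60 + 40       # 10:50am to 4:30am the next day

print(f"Queen's message: {queen_minutes / queen_letters:.2f} min/letter")
# -> ~2.08 min/letter, matching "about two minutes per letter"

reply_words = 143
reply_hours = 10
print(f"President's reply: {reply_words / reply_hours:.1f} words/hour")
# -> ~14.3 words/hour: "relatively rapid" only by 1858 standards
```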

source

“The proper definition of a man is an animal that writes letters”*…

If you sent a letter in 17th-century Europe, there was a good chance it would pass through one of the continent’s so-called “Black Chambers”—secret rooms attached to post offices and staffed by intelligence units, where mail was opened, copied, resealed, and sent on its way, with the writer and recipient none the wiser.

Nadine Akkerman, a senior lecturer at Leiden University, is an expert in 16th- and 17th-century espionage. But during her research into the Black Chambers, she ran across something perplexing—a document by Samuel Morland, a British spymaster, in which he bragged about his talent for opening and resealing letters. “Wait,” she thought to herself. “Can’t we all do that?”

It wasn’t until she met Jana Dambrogio and Daniel Starza Smith—researchers studying letters and document security during this period—that Morland’s braggadocio began to make sense. As it turns out, letters in the 1600s didn’t look exactly like letters today. Mass-produced envelopes weren’t invented until the 1830s, meaning that most 17th-century letter writers folded their correspondence in such a way that it became its own envelope—a process Dambrogio had dubbed “letterlocking.” Letterlocks could be simple, just a series of quick folds without any sort of adhesive. But they could also be incredibly complex, even booby-trapped to reveal evidence of tampering…

Postal privacy– “Cracking the Code of Letterlocking”: a tale of Black Chambers, lost correspondence, and high technology.

* Lewis Carroll

###

As we muse on missives, we might recall that it was on this date in 1997 that the U.S. Postal Service issued the Bugs Bunny stamp– the first stamp honoring an animated character.

source

Written by (Roughly) Daily

May 22, 2021 at 1:01 am

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say”*…

There’s a depressing sort of symmetry in the fact that our modern paradigms of privacy were developed in response to the proliferation of photography and its exploitation by tabloids. The seminal 1890 Harvard Law Review article The Right to Privacy—which every essay about data privacy is contractually obligated to cite—argued that the right of an individual to object to the publication of photographs ought to be considered part of a general ‘right to be let alone’.

130 years on, privacy is still largely conceived of as an individual thing, wherein we get to make solo decisions about when we want to be left alone and when we’re comfortable being trespassed upon. This principle undergirds the notice-and-consent model of data management, which you might also know as the pavlovian response to click “I agree” on any popup and login screen with little regard for the forty pages of legalese you might be agreeing to.

The thing is, the right to be left alone makes perfect sense when you’re managing information relationships between individuals, where there are generally pretty clear social norms around what constitutes a boundary violation. Reasonable people can and do disagree as to the level of privacy they expect, but if I invite you into my home and you snoop through my bedside table and read my diary, there isn’t much ambiguity about that being an invasion.

But in the age of ✨ networked computing ✨, this individual model of privacy just doesn’t scale anymore. There are too many exponentially intersecting relationships for any of us to keep in our head. It’s no longer just about what we tell a friend or the tax collector or even a journalist. It’s the digital footprint that we often unknowingly leave in our wake every time we interact with something online, and how all of those websites and apps and their shadowy partners talk to each other behind our backs. It’s the cameras in malls tracking our location and sometimes emotions, and it’s the license plate readers compiling a log of our movements.

At a time when governments and companies are increasingly investing in surveillance mechanisms under the guise of security and transparency, that scale is only going to keep growing. Our individual comfort about whether we are left alone is no longer the only, or even the most salient part of the story, and we need to think about privacy as a public good and a collective value.

I like thinking about privacy as being collective, because it feels like a more true reflection of the fact that our lives are made up of relationships, and information about our lives is social and contextual by nature. The fact that I have a sister also indicates that my sister has at least one sibling: me. If I took a DNA test through 23andme I’m not just disclosing information about me but also about everyone that I’m related to, none of whom are able to give consent. The privacy implications for familial DNA are pretty broad: this information might be used to sell or withhold products and services, expose family secrets, or implicate a future as-yet-unborn relative in a crime. I could email 23andme and ask them to delete my records, and they might eventually comply in a month or three. But my present and future relatives wouldn’t be able to do that, or even know that their privacy had been compromised at all.

Even with data that’s less fraught than our genome, our decisions about what we expose to the world have externalities for the people around us. I might think nothing of posting a photo of going out with my friends and mentioning the name of the bar, but I’ve just exposed our physical location to the internet. If one of my friends has had to deal with a stalker in their past, I could’ve put their physical safety at risk. Even if I’m careful to make the post friends-only, the people I trust are not the same as the people my friends trust. In an individual model of privacy, we are only as private as our least private friend.

Amidst the global pandemic, this might sound not dissimilar to public health. When I decide whether to wear a mask in public, that’s partially about how much the mask will protect me from airborne droplets. But it’s also—perhaps more significantly—about protecting everyone else from me.

Data collection isn’t always bad, but it is always risky. Sometimes that’s due to shoddy design and programming or lazy security practices. But even the best engineers often fail to build risk-free systems, by the very nature of systems.

Systems are easier to attack than they are to defend. If you want to defend a system, you have to make sure every part of it is perfectly implemented to guard against any possible vulnerabilities. Oftentimes, trying to defend a system means adding additional components, which just ends up creating more potential weak points. Whereas if you want to attack, all you have to do is find the one weakness that the systems designer missed. (Or, to paraphrase the IRA, you only have to be lucky once.)

This is true of all systems, digital or analog, but the thing that makes computer systems particularly vulnerable is that the same weaknesses can be deployed across millions of devices, in our phones and laptops and watches and toasters and refrigerators and doorbells. When a vulnerability is discovered in one system, an entire class of devices around the world is instantly a potential target, but we still have to go fix them one by one.

This is how the Equifax data leak happened. Equifax used a piece of open source software that had a security flaw in it, the people who work on that software found it and fixed it, and instead of diligently updating their systems Equifax hit the snooze button for four months and let hackers steal hundreds of millions of customer records. And while Equifax is definitely guilty of aforementioned lazy security practices, this incident also illustrates how fragile computer systems are. From the moment this bug was discovered, every server in the world that ran that software was at risk.

What’s worse, in many cases people weren’t even aware that their data was stored with Equifax. If you’re an adult who has had a job or a phone bill or interacted with a bank in the last seven years, your identifying information is collected by Equifax whether you like it or not. The only way to opt out would have been to be among the small percentage of overwhelmingly young, poor, and racialized people who have no credit histories, which significantly limits the scope of their ability to participate in the economy. How do you notice-and-consent your way out of that?

There unfortunately isn’t one weird trick to save democracy, but that doesn’t mean there aren’t lessons we can learn from history to figure out how to protect privacy as a public good. The scale and ubiquity of computers may be unprecedented, but so is the scale of our collective knowledge…
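Zhang’s Equifax example boils down to patch lag: per the account above, a fix existed for four months before the breach. Below is a minimal sketch of the kind of dependency audit that closes that gap (the package name, version numbers, and advisory entry are hypothetical, not Equifax’s actual stack; real audits pull from feeds such as the OSV or NVD vulnerability databases):

```python
# A minimal dependency-audit sketch: flag installed packages still running
# versions below a known patched release. Advisory data here is hypothetical.

def parse_version(v: str) -> tuple:
    """Turn '2.3.15' into (2, 3, 15) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

# (package, first patched version) -- hypothetical advisory entries
ADVISORIES = [
    ("example-web-framework", "2.3.32"),
]

def audit(installed: dict[str, str]) -> list[tuple[str, str]]:
    """Return (package, version) pairs that predate their patched release."""
    findings = []
    for package, patched in ADVISORIES:
        version = installed.get(package)
        if version and parse_version(version) < parse_version(patched):
            findings.append((package, version))
    return findings

# Four months of hitting the snooze button looks like this:
print(audit({"example-web-framework": "2.3.15"}))
# -> [('example-web-framework', '2.3.15')]
```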

Read the full piece (and you should) for Jenny Zhang’s (@phirephoenix) compelling case that we should treat– and protect– privacy as a public good, and her explanation of how we might do that: “Left alone, together.” TotH to Sentiers.

[image above: source]

* Edward Snowden

###

As we think about each other, we might recall that it was on this date in 1939 that the first government appropriation was made to support the construction of the Harvard Mark I computer.

Designer Howard Aiken had enlisted IBM as a partner in 1937; company chairman Thomas Watson Sr. personally approved the project and its funding. It was completed in 1944 (and put to work on a set of war-related tasks, including calculations– overseen by John von Neumann– for the Manhattan Project).

The Mark I was the industry’s largest electromechanical calculator… and it was large: 51 feet long, 8 feet high, and 2 feet deep; it weighed about 9,445 pounds. The basic calculating units had to be synchronized and powered mechanically, so they were operated by a 50-foot (15 m) drive shaft coupled to a 5 horsepower electric motor, which served as the main power source and system clock. It could do 3 additions or subtractions in a second; a multiplication took 6 seconds; a division took 15.3 seconds; and a logarithm or a trigonometric function took over a minute… ridiculously slow by today’s standards, but a huge advance in its time.
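For a sense of that scale, a quick sketch converting the timings above into daily throughput (figures as reported; “over a minute” is approximated here as sixty seconds):

```python
# Rough daily throughput of the Harvard Mark I, from the timings above.

SECONDS_PER_DAY = 86_400

op_seconds = {
    "addition/subtraction": 1 / 3,   # 3 per second
    "multiplication": 6,
    "division": 15.3,
    "logarithm/trig": 60,            # "over a minute" -- approximated as 60s
}

for op, secs in op_seconds.items():
    print(f"{op}: ~{SECONDS_PER_DAY / secs:,.0f} per day")
# A full day of nothing but additions (~259,200) is less work than a modern
# phone does in a fraction of a millisecond.
```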

source

“If Ancestry or its businesses are acquired… we will share your Personal Information with the acquiring or receiving entity”*…

If you’ve never before considered how valuable an asset your DNA might be, you are far behind. Some of the biggest direct-to-consumer DNA sequencing companies are busy monetizing their large-scale genomics databases, with hopes to shape the burgeoning DNA economy and reap its rewards. And if you spit in a cup for one of these companies, your DNA might already be under the corporate control of some of the richest firms on Wall Street.

With their purchase of Ancestry.com late last year, the private equity firm Blackstone now owns the DNA data of 18 million people. And Blackstone is currently ramping up efforts to monetize the data amassed among the companies it owns. But experts say Wall Street firms’ interest in genomics poses new and unforeseen threats, and risks sowing distrust among DNA donors. Without trust, could we miss out on the genome’s real value?

Since the global financial crisis of 2008, private equity firms—which buy up and reshape diverse private companies—have quietly overtaken traditional investment banks like Goldman Sachs as the “dominant players in the financial world,” according to the Financial Times. It’s been a rough tenure so far. While private equity mega-deal hits have made billions for investors, often the companies acquired pay the price, as with high-profile flops including mismanaged music group EMI and bankrupt retailer Toys R Us. The industry has become “the poster child for financial firms that suck value out of the economy,” said U.S. Senator Elizabeth Warren, while introducing an act to Congress aimed at reining in private equity “vampires.”

In December the biggest, most dominant private equity company of them all, the Blackstone Group, Inc., which boasts half a trillion dollars in assets under management, made a dramatic entry into the genomics space when it bought a controlling stake in Ancestry.com as part of the deal that valued the genealogy and gene testing company at $4.7 billion. And with that one stroke of the pen, the firm acquired the largest trove of DNA data assembled by any consumer gene tester. If your own DNA sequence is included in this collection, it exists on servers somewhere along with the genomes of 18 million people from at least 30 countries.

Announcing the deal, David Kestnbaum, a senior managing director at Blackstone, said he foresees Ancestry growing by “investing behind further data, functionality, and product development.” At the same time, many privacy-concerned watchers had the same question: How does Blackstone aim to monetize Ancestry’s massive database, which includes users’ most sensitive genomic data and family histories?

Those lingering worries were ignited in the final days of 2020 by revelations buried in U.S. Securities and Exchange Commission (SEC) filings, and unearthed by Bloomberg, that showed Blackstone will begin to “package and sell data” from the companies it acquires as a fresh revenue stream. 

For any entrepreneur or investor in the genomics space who knows the industry needs investment to realize its dramatic potential, the question is vexed. Are deals that bring sensitive data under the control of private equity mega-funds a much-needed path to realizing the industry’s goals? Or do they threaten to derail the rapid progress that consumer gene science is making?…

A Wall Street giant’s big bet on Ancestry.com drives home the financial realities– and the privacy challenges– facing the consumer genomic revolution: “Is Your DNA Data Safe in Blackstone’s Hands?”

* from Ancestry.com’s EULA, September 23, 2020 (between Blackstone announcing its plan to buy and the deal completing)

###

As we appraise the personal, we might send carefully-deduced birthday greetings to Samuel “Sam” Loyd; he was born on this date in 1841. A chess player, chess composer, puzzle author, and recreational mathematician, he was a member of the Chess Hall of Fame (for both his play and his exercises, or “problems”). He gained broader posthumous fame when, after his death, his son published a collection of his mathematical and logic puzzles, Cyclopedia of 5000 Puzzles. As readers can see here and here, his puzzles still delight.

Loyd’s most famous puzzle was the 14-15 Puzzle, which he produced in 1878. His claim to original authorship is debated; but in any case, his version created a craze that swept America to such an extent that employers put up notices prohibiting playing the puzzle during office hours.

 source

Written by (Roughly) Daily

January 31, 2021 at 1:01 am

“All human beings have three lives: public, private, and secret”*…

 

[image: A monitor displays the Omron Corp. Okao face- and emotion-detection technology during CES 2020]

Twenty years ago at a Silicon Valley product launch, Sun Microsystems CEO Scott McNealy dismissed concern about digital privacy as a red herring: “You have zero privacy anyway. Get over it.”

“Zero privacy” was meant to placate us, suggesting that we have a fixed amount of stuff about ourselves that we’d like to keep private. Once we realized that stuff had already been exposed and, yet, the world still turned, we would see that it was no big deal. But what poses as unsentimental truth telling isn’t cynical enough about the parlous state of our privacy.

That’s because the barrel of privacy invasion has no bottom. The rallying cry for privacy should begin with the strangely heartening fact that it can always get worse. Even now there’s something yet to lose, something often worth fiercely defending.

For a recent example, consider Clearview AI: a tiny, secretive startup that became the subject of a recent investigation by Kashmir Hill in The New York Times. According to the article, the company scraped billions of photos from social-networking and other sites on the web—without permission from the sites in question, or the users who submitted them—and built a comprehensive database of labeled faces primed for search by facial recognition. Their early customers included multiple police departments (and individual officers), which used the tool without warrants. Clearview has argued they have a right to the data because they’re “public.”

In general, searching by a face to gain a name and then other information is on the verge of wide availability: The Russian internet giant Yandex appears to have deployed facial-recognition technology in its image search tool. If you upload an unlabeled picture of my face into Google image search, it identifies me and then further searches my name, and I’m barely a public figure, if at all.

Given ever more refined surveillance, what might the world look like if we were to try to “get over” the loss of this privacy? Two very different extrapolations might allow us to glimpse some of the consequences of our privacy choices (or lack thereof) that are taking shape even today…

From Jonathan Zittrain (@zittrain), two scenarios for a post-privacy future: “A World Without Privacy Will Revive the Masquerade.”

* Gabriel García Márquez

###

As we get personal, we might send provocatively nonsensical birthday greetings to Hugo Ball; he was born on this date in 1886.  Ball worked as an actor with Max Reinhardt and Hermann Bahr in Berlin until the outbreak of World War I.  A staunch pacifist, Ball made his way to Switzerland, where he turned his hand to poetry in an attempt to express his horror at the conflagration enveloping Europe. (“The war is founded on a glaring mistake, men have been confused with machines.”)

Settling in Zürich, Ball was a co-founder of the Dada movement (and, lore suggests, its namer, having allegedly picked the word at random from a dictionary).  With Tristan Tzara and Jean Arp, among others, he co-founded and presided over the Cabaret Voltaire, the epicenter of Dada.  And in 1916, he created the first Dada Manifesto (Tzara’s came two years later).

 source

 

Written by (Roughly) Daily

February 22, 2020 at 1:01 am
