(Roughly) Daily

Posts Tagged ‘privacy’

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say”*…

There’s a depressing sort of symmetry in the fact that our modern paradigms of privacy were developed in response to the proliferation of photography and its exploitation by tabloids. The seminal 1890 Harvard Law Review article The Right to Privacy—which every essay about data privacy is contractually obligated to cite—argued that the right of an individual to object to the publication of photographs ought to be considered part of a general ‘right to be let alone’.

130 years on, privacy is still largely conceived of as an individual thing, wherein we get to make solo decisions about when we want to be left alone and when we’re comfortable being trespassed upon. This principle undergirds the notice-and-consent model of data management, which you might also know as the Pavlovian response of clicking “I agree” on any popup and login screen with little regard for the forty pages of legalese you might be agreeing to.

The thing is, the right to be let alone makes perfect sense when you’re managing information relationships between individuals, where there are generally pretty clear social norms around what constitutes a boundary violation. Reasonable people can and do disagree as to the level of privacy they expect, but if I invite you into my home and you snoop through my bedside table and read my diary, there isn’t much ambiguity about that being an invasion.

But in the age of ✨ networked computing ✨, this individual model of privacy just doesn’t scale anymore. There are too many exponentially intersecting relationships for any of us to keep in our head. It’s no longer just about what we tell a friend or the tax collector or even a journalist. It’s the digital footprint that we often unknowingly leave in our wake every time we interact with something online, and how all of those websites and apps and their shadowy partners talk to each other behind our backs. It’s the cameras in malls tracking our location and sometimes emotions, and it’s the license plate readers compiling a log of our movements.

At a time when governments and companies are increasingly investing in surveillance mechanisms under the guise of security and transparency, that scale is only going to keep growing. Our individual comfort about whether we are left alone is no longer the only, or even the most salient, part of the story, and we need to think about privacy as a public good and a collective value.

I like thinking about privacy as being collective, because it feels like a more true reflection of the fact that our lives are made up of relationships, and information about our lives is social and contextual by nature. The fact that I have a sister also indicates that my sister has at least one sibling: me. If I took a DNA test through 23andMe, I’m not just disclosing information about me but also about everyone that I’m related to, none of whom are able to give consent. The privacy implications for familial DNA are pretty broad: this information might be used to sell or withhold products and services, expose family secrets, or implicate a future as-yet-unborn relative in a crime. I could email 23andMe and ask them to delete my records, and they might eventually comply in a month or three. But my present and future relatives wouldn’t be able to do that, or even know that their privacy had been compromised at all.

Even with data that’s less fraught than our genome, our decisions about what we expose to the world have externalities for the people around us. I might think nothing of posting a photo from a night out with my friends and mentioning the name of the bar, but I’ve just exposed our physical location to the internet. If one of my friends has had to deal with a stalker in their past, I could’ve put their physical safety at risk. Even if I’m careful to make the post friends-only, the people I trust are not the same as the people my friends trust. In an individual model of privacy, we are only as private as our least private friend.

Amidst the global pandemic, this might sound not dissimilar to public health. When I decide whether to wear a mask in public, that’s partially about how much the mask will protect me from airborne droplets. But it’s also—perhaps more significantly—about protecting everyone else from me.

Data collection isn’t always bad, but it is always risky. Sometimes that’s due to shoddy design and programming or lazy security practices. But even the best engineers often fail to build risk-free systems, by the very nature of systems.

Systems are easier to attack than they are to defend. If you want to defend a system, you have to make sure every part of it is perfectly implemented to guard against any possible vulnerabilities. Oftentimes, trying to defend a system means adding additional components, which just ends up creating more potential weak points. Whereas if you want to attack, all you have to do is find the one weakness that the systems designer missed. (Or, to paraphrase the IRA, you only have to be lucky once.)
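To make that asymmetry concrete, here’s a toy calculation (mine, not the essay’s): treat each component of a system as an independent chance of a missed flaw, and watch how quickly “at least one weakness” becomes a near-certainty as the system grows.

```python
# Toy model of the defender's problem: n independent components, each with
# a small probability p of harboring a flaw the designer missed. The
# independence assumption is a simplification for illustration, not a
# real threat model.

def p_at_least_one_flaw(n_components: int, p_flaw: float) -> float:
    """Probability that at least one component is flawed."""
    return 1 - (1 - p_flaw) ** n_components

for n in (10, 100, 1000):
    print(f"{n:>4} components -> {p_at_least_one_flaw(n, 0.01):.1%} chance of a weakness")
# Output:
#   10 components -> 9.6% chance of a weakness
#  100 components -> 63.4% chance of a weakness
# 1000 components -> 100.0% chance of a weakness (99.996% before rounding)
```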

This is true of all systems, digital or analog, but the thing that makes computer systems particularly vulnerable is that the same weakness can be exploited across millions of devices, in our phones and laptops and watches and toasters and refrigerators and doorbells. When a vulnerability is discovered in one system, an entire class of devices around the world is instantly a potential target, but we still have to go fix them one by one.

This is how the Equifax data breach happened. Equifax used a piece of open-source software that had a security flaw in it; the people who work on that software found it and fixed it, and instead of diligently updating its systems Equifax hit the snooze button for four months and let hackers steal the records of nearly 150 million customers. And while Equifax is definitely guilty of the aforementioned lazy security practices, this incident also illustrates how fragile computer systems are. From the moment this bug was discovered, every server in the world that ran that software was at risk.
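The chore Equifax skipped, checking whether an installed dependency falls in a known-vulnerable version range and patching it, is routinely automated. Here’s a minimal sketch of that check; the advisory entry is illustrative (the version numbers echo the Struts advisory at issue, but treat them as assumptions), and real pipelines consume feeds like OSV or the GitHub Advisory Database rather than a hand-written list.

```python
# A minimal sketch of automated dependency auditing, the chore Equifax skipped.
from packaging.version import Version  # pip install packaging

# (package, first vulnerable version, first fixed version) - illustrative entry
ADVISORIES = [
    ("struts", Version("2.3.5"), Version("2.3.32")),
]

def is_vulnerable(package: str, installed: str) -> bool:
    """True if the installed version falls inside a known-vulnerable range."""
    v = Version(installed)
    return any(pkg == package and first_bad <= v < first_fixed
               for pkg, first_bad, first_fixed in ADVISORIES)

print(is_vulnerable("struts", "2.3.31"))  # True: a fix exists but isn't applied
print(is_vulnerable("struts", "2.3.32"))  # False: patched
```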

What’s worse, in many cases people weren’t even aware that their data was stored with Equifax. If you’re an adult who has had a job or a phone bill or interacted with a bank in the last seven years, your identifying information is collected by Equifax whether you like it or not. The only way to opt out would have been to be among the small percentage of overwhelmingly young, poor, and racialized people who have no credit histories, which significantly limits the scope of their ability to participate in the economy. How do you notice-and-consent your way out of that?

There unfortunately isn’t one weird trick to save democracy, but that doesn’t mean there aren’t lessons we can learn from history to figure out how to protect privacy as a public good. The scale and ubiquity of computers may be unprecedented, but so is the scale of our collective knowledge…

Read the full piece (and you should) for Jenny Zhang’s (@phirephoenix) compelling case that we should treat– and protect– privacy as a public good, and her explanation of how we might do that: “Left alone, together.” TotH to Sentiers.

[image above: source]

* Edward Snowden

###

As we think about each other, we might recall that it was on this date in 1939 that the first government appropriation was made to support the construction of the Harvard Mark I computer.

Designer Howard Aiken had enlisted IBM as a partner in 1937; company chairman Thomas Watson Sr. personally approved the project and its funding. It was completed in 1944 (and put to work on a set of war-related tasks, including calculations– overseen by John von Neumann– for the Manhattan Project).

The Mark I was the industry’s largest electromechanical calculator… and it was large: 51 feet long, 8 feet high, and 2 feet deep; it weighed about 9,445 pounds. The basic calculating units had to be synchronized and powered mechanically, so they were operated by a 50-foot (15 m) drive shaft coupled to a 5 horsepower electric motor, which served as the main power source and system clock. It could do 3 additions or subtractions in a second; a multiplication took 6 seconds; a division took 15.3 seconds; and a logarithm or a trigonometric function took over a minute… ridiculously slow by today’s standards, but a huge advance in its time.
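For a sense of scale, a bit of arithmetic on those figures. The modern comparison point (a ballpark billion multiplications per second for a single ordinary CPU core) is an assumption, not a benchmark.

```python
# Back-of-the-envelope arithmetic on the Mark I's published timings.
MARK_I_SECONDS = {"addition": 1 / 3, "multiplication": 6.0, "division": 15.3}
MODERN_OPS_PER_SECOND = 1e9  # assumed ballpark for one modern CPU core

n = 1_000_000  # a million multiplications
mark_i_days = n * MARK_I_SECONDS["multiplication"] / 86_400
modern_ms = n / MODERN_OPS_PER_SECOND * 1_000

print(f"Mark I:      ~{mark_i_days:.0f} days")  # ~69 days
print(f"Modern core: ~{modern_ms:.0f} ms")      # ~1 ms
```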

source

“If Ancestry or its businesses are acquired… we will share your Personal Information with the acquiring or receiving entity”*…

If you’ve never before considered how valuable an asset your DNA might be, you are far behind. Some of the biggest direct-to-consumer DNA sequencing companies are busy monetizing their large-scale genomics databases, with hopes to shape the burgeoning DNA economy and reap its rewards. And if you spit in a cup for one of these companies, your DNA might already be under the corporate control of some of the richest firms on Wall Street.

With its purchase of Ancestry.com late last year, the private equity firm Blackstone now owns the DNA data of 18 million people. And Blackstone is currently ramping up efforts to monetize the data amassed among the companies it owns. But experts say Wall Street firms’ interest in genomics poses new and unforeseen threats, and risks sowing distrust among DNA donors. Without trust, could we miss out on the genome’s real value?

Since the global financial crisis of 2008, private equity firms—which buy up and reshape diverse private companies—have quietly overtaken traditional investment banks like Goldman Sachs as the “dominant players in the financial world,” according to the Financial Times. It’s been a rough tenure so far. While private equity mega-deal hits have made billions for investors, often the companies acquired pay the price, as with high-profile flops including mismanaged music group EMI and bankrupt retailer Toys R Us. The industry has become “the poster child for financial firms that suck value out of the economy,” said U.S. Senator Elizabeth Warren, while introducing an act to Congress aimed at reining in private equity “vampires.”

In December the biggest, most dominant private equity company of them all, the Blackstone Group, Inc., which boasts half a trillion dollars in assets under management, made a dramatic entry into the genomics space when it bought a controlling stake in Ancestry.com in a deal that valued the genealogy and gene testing company at $4.7 billion. And with that one stroke of the pen, the firm acquired the largest trove of DNA data assembled by any consumer gene tester. If your own DNA sequence is included in this collection, it exists on servers somewhere along with the genomes of 18 million people from at least 30 countries.

Announcing the deal, David Kestnbaum, a senior managing director at Blackstone, said he foresees Ancestry growing by “investing behind further data, functionality, and product development.” At the same time, many privacy-concerned watchers had the same question: How does Blackstone aim to monetize Ancestry’s massive database, which includes users’ most sensitive genomic data and family histories?

Those lingering worries were ignited in the final days of 2020 by revelations buried in U.S. Securities and Exchange Commission (SEC) filings, and unearthed by Bloomberg, that showed Blackstone will begin to “package and sell data” from the companies it acquires as a fresh revenue stream. 

For any entrepreneur or investor in the genomics space who knows the industry needs investment to realize its dramatic potential, the question is vexed. Are deals that bring sensitive data under the control of private equity mega-funds a much-needed path to realizing the industry’s goals? Or do they threaten to derail the rapid progress that consumer gene science is making?…

A Wall Street giant’s big bet on Ancestry.com drives home the financial realities– and the privacy challenges– facing the consumer genomic revolution: “Is Your DNA Data Safe in Blackstone’s Hands?”

* from Ancestry.com’s EULA, September 23, 2020 (between Blackstone announcing its plan to buy and the deal completing)

###

As we appraise the personal, we might send carefully-deduced birthday greetings to Samuel “Sam” Loyd; he was born on this date in 1841. A chess player, chess composer, puzzle author, and recreational mathematician, he was a member of the Chess Hall of Fame (for both his play and for his exercises, or “problems”). He gained broader fame when, after his death, his son published a collection of his mathematical and logic puzzles, Cyclopedia of 5000 Puzzles.  As readers can see here and here, his puzzles still delight.

Loyd’s most famous puzzle was the 14-15 Puzzle, which he produced in 1878. His claim to original authorship is debated; but in any case, his version created a craze that swept America to such an extent that employers put up notices prohibiting playing the puzzle during office hours.

 source

Written by LW

January 31, 2021 at 1:01 am

“All human beings have three lives: public, private, and secret”*…

 


A monitor displays the Omron Corp. Okao face- and emotion-detection technology during CES 2020

 

Twenty years ago at a Silicon Valley product launch, Sun Microsystems CEO Scott McNealy dismissed concern about digital privacy as a red herring: “You have zero privacy anyway. Get over it.”

“Zero privacy” was meant to placate us, suggesting that we have a fixed amount of stuff about ourselves that we’d like to keep private. Once we realized that stuff had already been exposed and, yet, the world still turned, we would see that it was no big deal. But what poses as unsentimental truth-telling isn’t cynical enough about the parlous state of our privacy.

That’s because the barrel of privacy invasion has no bottom. The rallying cry for privacy should begin with the strangely heartening fact that it can always get worse. Even now there’s something yet to lose, something often worth fiercely defending.

For a recent example, consider Clearview AI: a tiny, secretive startup that became the subject of a recent investigation by Kashmir Hill in The New York Times. According to the article, the company scraped billions of photos from social-networking and other sites on the web—without permission from the sites in question, or the users who submitted them—and built a comprehensive database of labeled faces primed for search by facial recognition. Its early customers included multiple police departments (and individual officers), which used the tool without warrants. Clearview has argued it has a right to the data because the photos are “public.”

In general, searching by a face to gain a name and then other information is on the verge of wide availability: The Russian internet giant Yandex appears to have deployed facial-recognition technology in its image search tool. If you upload an unlabeled picture of my face into Google image search, it identifies me and then further searches my name, and I’m barely a public figure, if at all.
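Under the hood, such tools generally reduce every scraped photo to an embedding vector and answer queries by nearest-neighbor search. A minimal sketch of that search step follows; the embedding model itself (embed_face) is a placeholder, since real systems use a trained network.

```python
# A minimal sketch of face search over a scraped database: photos become
# embedding vectors; a query face is matched by cosine-similarity nearest
# neighbors. `embed_face` is a stand-in for a real face-embedding model.
import numpy as np

def embed_face(image) -> np.ndarray:
    """Placeholder: a trained network would map a photo to a vector here."""
    raise NotImplementedError

def nearest_faces(query: np.ndarray, db: np.ndarray, names: list[str], k: int = 5):
    """Return the k labeled faces whose embeddings best match the query."""
    db_n = db / np.linalg.norm(db, axis=1, keepdims=True)
    q_n = query / np.linalg.norm(query)
    scores = db_n @ q_n                  # cosine similarities
    top = np.argsort(scores)[::-1][:k]   # best matches first
    return [(names[i], float(scores[i])) for i in top]
```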

Given ever more refined surveillance, what might the world look like if we were to try to “get over” the loss of this privacy? Two very different extrapolations might allow us to glimpse some of the consequences of our privacy choices (or lack thereof) that are taking shape even today…

From Jonathan Zittrain (@zittrain), two scenarios for a post-privacy future: “A World Without Privacy Will Revive the Masquerade.”

* Gabriel García Márquez

###

As we get personal, we might send provocatively nonsensical birthday greetings to Hugo Ball; he was born on this date in 1886.  Ball worked as an actor with Max Reinhardt and Hermann Bahr in Berlin until the outbreak of World War I.  A staunch pacifist, Ball made his way to Switzerland, where he turned his hand to poetry in an attempt to express his horror at the conflagration enveloping Europe. (“The war is founded on a glaring mistake, men have been confused with machines.”)

Settling in Zürich, Ball was a co-founder of the Dada movement (and, lore suggests, its namer, having allegedly picked the word at random from a dictionary).  With Tristan Tzara and Jean Arp, among others, he co-founded and presided over the Cabaret Voltaire, the epicenter of Dada.  And in 1916, he created the first Dada Manifesto (Tzara’s came two years later).

 source

 

Written by LW

February 22, 2020 at 1:01 am

“I never said, ‘I want to be alone.’ I only said ‘I want to be let alone!’ There is all the difference.”*…

 


 

Someone observing her could assemble in forensic detail her social and familial connections, her struggles and interests, and her beliefs and commitments. From Amazon purchases and Kindle highlights, from purchase records linked with her loyalty cards at the drugstore and the supermarket, from Gmail metadata and chat logs, from search history and checkout records from the public library, from Netflix-streamed movies, and from activity on Facebook and Twitter, dating sites, and other social networks, a very specific and personal narrative is clear.

If the apparatus of total surveillance that we have described here were deliberate, centralized, and explicit, a Big Brother machine toggling between cameras, it would demand revolt, and we could conceive of a life outside the totalitarian microscope. But if we are nearly as observed and documented as any person in history, our situation is a prison that, although it has no walls, bars, or wardens, is difficult to escape.

Which brings us back to the problem of “opting out.” For all the dramatic language about prisons and panopticons, the sorts of data collection we describe here are, in democratic countries, still theoretically voluntary. But the costs of refusal are high and getting higher: A life lived in social isolation means living far from centers of business and commerce, without access to many forms of credit, insurance, or other significant financial instruments, not to mention the minor inconveniences and disadvantages — long waits at road toll cash lines, higher prices at grocery stores, inferior seating on airline flights.

It isn’t possible for everyone to live on principle; as a practical matter, many of us must make compromises in asymmetrical relationships, without the control or consent for which we might wish. In those situations — everyday 21st-century life — there are still ways to carve out spaces of resistance, counterargument, and autonomy.

We are surrounded by examples of obfuscation that we do not yet think of under that name. Lawyers engage in overdisclosure, sending mountains of vaguely related client documents in hopes of burying a pertinent detail. Teenagers on social media — surveilled by their parents — will conceal a meaningful communication to a friend in a throwaway line or a song title surrounded by banal chatter. Literature and history provide many instances of “collective names,” where a population took a single identifier to make attributing any action or identity to a particular person impossible, from the fictional “I am Spartacus” to the real “Poor Conrad” and “Captain Swing” in prior centuries — and “Anonymous,” of course, in ours…

There is real utility in an obfuscation approach, whether that utility lies in bolstering an existing strong privacy system, in covering up some specific action, in making things marginally harder for an adversary, or even in the “mere gesture” of registering our discontent and refusal. After all, those who know about us have power over us. They can deny us employment, deprive us of credit, restrict our movements, refuse us shelter, membership, or education, manipulate our thinking, suppress our autonomy, and limit our access to the good life…
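One of those obfuscation tactics, burying a real signal in plausible noise, is simple enough to sketch. This is my illustration of the idea behind tools like the TrackMeNot browser extension, not code from the book; the decoy vocabulary is made up.

```python
# Obfuscation by decoy traffic: hide one real query in a shuffled stream of
# plausible fakes, so an observer can't tell which search was meant.
import random
import time

DECOY_QUERIES = ["weather radar", "banana bread recipe", "used bikes",
                 "league standings", "flight status", "movie times"]

def issue_query(q: str) -> None:
    print(f"searching: {q}")  # stand-in for sending a real search request

def obfuscated_search(real_query: str, n_decoys: int = 4) -> None:
    """Interleave the real query with decoys at human-ish pacing."""
    batch = random.sample(DECOY_QUERIES, n_decoys) + [real_query]
    random.shuffle(batch)
    for q in batch:
        issue_query(q)
        time.sleep(random.uniform(0.5, 3.0))

obfuscated_search("divorce lawyer near me")
```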

As Finn Brunton and Helen Nissenbaum argue in their new book Obfuscation: A User’s Guide for Privacy and Protest, those who know about us have power over us; obfuscation may be our best digital weapon: “The Fantasy of Opting Out.”

* Greta Garbo

###

As we ponder privacy, we might recall that it was on this date in 1536 that William Tyndale was strangled then burned at the stake for heresy in Antwerp.  An English scholar and leading Protestant reformer, Tyndale effectively replaced Wycliffe’s Middle English translation of the Bible with a vernacular version in what we now call Early Modern English (as also used, for instance, by Shakespeare). Tyndale’s translation was the first English Bible to take advantage of the printing press, and the first of the new English Bibles of the Reformation. Consequently, when it first went on sale in London, authorities gathered up all the copies they could find and burned them.  But after England went Protestant, it received official approval and ultimately became the basis of the King James Version.

Ironically, Tyndale incurred Henry VIII’s wrath after the King’s “conversion” to Protestantism, by writing a pamphlet decrying Henry’s divorce from Catherine of Aragon.  Tyndale moved to Europe, where he continued to advocate Protestant reform, ultimately running afoul of the Holy Roman Empire, which sentenced him to death.

 source

 

Written by LW

October 6, 2019 at 1:01 am

“A better world won’t come about simply because we use data; data has its dark underside.”*…

 


 

Data isn’t the new oil, it’s the new CO2. It’s a common trope in the data/tech field to say that “data is the new oil”. The basic idea being – it’s a new resource that is being extracted, it is valuable, and is a raw product that fuels other industries. But it also implies that data is inherently valuable in and of itself and that “my data” is valuable, a resource that I really should tap into.

In reality, we are more impacted by other people’s data (with whom we are grouped) than we are by data about us. As I have written in the MIT Technology Review – “even if you deny consent to ‘your’ data being used, an organisation can use data about other people to make statistical extrapolations that affect you.” We are bound by other people’s consent. Our own consent (or lack thereof) is becoming increasingly irrelevant. We won’t solve the societal problems pervasive data surveillance is causing by rushing through online consent forms. If you see data as CO2, it becomes clearer that its impacts are societal not solely individual. My neighbour’s car emissions, the emissions from a factory on a different continent, impact me more than my own emissions or lack thereof. This isn’t to abdicate individual responsibility or harm. It’s adding a new lens that we too often miss entirely.

We should not endlessly be defending arguments along the lines that “people choose to willingly give up their freedom in exchange for free stuff online”. The argument is flawed for two reasons. First the reason that is usually given – people have no choice but to consent in order to access the service, so consent is manufactured.  We are not exercising choice in providing data but rather are resigned to the fact that we have no choice in the matter.

The second, less well known but just as powerful, argument is that we are not only bound by other people’s data; we are bound by other people’s consent.  In an era of machine learning-driven group profiling, this effectively renders my denial of consent meaningless. Even if I withhold consent, say I refuse to use Facebook or Twitter or Amazon, the fact that everyone around me has joined means there are just as many data points about me to target and surveil. The issue is systemic, it is not one where a lone individual can make a choice and opt out of the system. We perpetuate this myth by talking about data as our own individual “oil”, ready to sell to the highest bidder. In reality I have little control over this supposed resource which acts more like an atmospheric pollutant, impacting me and others in myriad indirect ways. There are more relations – direct and indirect – between data related to me, data about me, data inferred about me via others than I can possibly imagine, let alone control with the tools we have at our disposal today.
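The mechanics of being bound by other people’s consent are easy to sketch. In this toy example (all data invented), a platform never hears from the target at all, yet confidently infers a sensitive attribute from consenting lookalikes:

```python
# Toy group profiling: infer a non-user's attribute from people who did
# consent and happen to resemble them. All numbers are invented.
import numpy as np

# Features of consenting users (age, neighborhood cluster) and a sensitive
# label the platform learned about each of them.
consenting_X = np.array([[34, 2], [36, 2], [35, 2], [60, 7], [62, 7]])
consenting_y = np.array([1, 1, 1, 0, 0])

def infer(target: np.ndarray, k: int = 3) -> float:
    """k-nearest-neighbor vote: the target's likely label, from 0 to 1."""
    dists = np.linalg.norm(consenting_X - target, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(consenting_y[nearest].mean())

# The target shared nothing and consented to nothing, yet:
print(infer(np.array([35, 2])))  # 1.0 -- profiled via their neighbors
```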

Because of this, we need a social, systemic approach to deal with our data emissions. An environmental approach to data rights as I’ve argued previously. But first let’s all admit that the line of inquiry defending pervasive surveillance in the name of “individual freedom” and individual consent gets us nowhere closer to understanding the threats we are facing.

Martin Tisné argues for an “environmental” approach to data rights: “Data isn’t the new oil, it’s the new CO2.”

Lest one think that we couldn’t/shouldn’t have seen this (and related issues like over-dependence on algorithms, the digital divide, et al.) coming, see also Paul Baran’s prescient 1968 essay, “On the Future Computer Era,” one of the last pieces he did at RAND, before co-leading the spin-off of The Institute for the Future.

* Mike Loukides, Ethics and Data Science

###

As we ponder privacy, we might recall that it was on this date in 1981 that IBM released IBM model number 5150– AKA the IBM PC– the original version and progenitor of the IBM PC compatible hardware platform. Since the machine was based on an open architecture, within a short time of its introduction third-party suppliers of peripheral devices, expansion cards, and software proliferated; the influence of the IBM PC on the personal computer market was substantial in standardizing a platform for personal computers (and creating a market for Microsoft’s operating system– first PC DOS, then Windows– on which the PC platform ran).  “IBM compatible” became an important criterion for sales growth; after the 1980s, only the Apple Macintosh family kept a significant share of the microcomputer market without compatibility with the IBM personal computer.

source

 

Written by LW

August 12, 2019 at 1:01 am
