(Roughly) Daily

Posts Tagged ‘surveillance’

“All human beings have three lives: public, private, and secret”*…

 

Privacy

A monitor displays the Omron Corp. Okao face- and emotion-detection technology during CES 2020

 

Twenty years ago at a Silicon Valley product launch, Sun Microsystems CEO Scott McNealy dismissed concern about digital privacy as a red herring: “You have zero privacy anyway. Get over it.”

“Zero privacy” was meant to placate us, suggesting that we have a fixed amount of stuff about ourselves that we’d like to keep private. Once we realized that stuff had already been exposed and, yet, the world still turned, we would see that it was no big deal. But what poses as unsentimental truth telling isn’t cynical enough about the parlous state of our privacy.

That’s because the barrel of privacy invasion has no bottom. The rallying cry for privacy should begin with the strangely heartening fact that it can always get worse. Even now there’s something yet to lose, something often worth fiercely defending.

For a recent example, consider Clearview AI: a tiny, secretive startup that became the subject of a recent investigation by Kashmir Hill in The New York Times. According to the article, the company scraped billions of photos from social-networking and other sites on the web—without permission from the sites in question, or the users who submitted them—and built a comprehensive database of labeled faces primed for search by facial recognition. Its early customers included multiple police departments (and individual officers), which used the tool without warrants. Clearview has argued that it has a right to the data because the photos are “public.”

In general, searching by a face to gain a name and then other information is on the verge of wide availability: The Russian internet giant Yandex appears to have deployed facial-recognition technology in its image search tool. If you upload an unlabeled picture of my face into Google image search, it identifies me and then further searches my name, and I’m barely a public figure, if at all.

Given ever more refined surveillance, what might the world look like if we were to try to “get over” the loss of this privacy? Two very different extrapolations might allow us to glimpse some of the consequences of our privacy choices (or lack thereof) that are taking shape even today…

From Jonathan Zittrain (@zittrain), two scenarios for a post-privacy future: “A World Without Privacy Will Revive the Masquerade.”

* Gabriel García Márquez

###

As we get personal, we might send provocatively nonsensical birthday greetings to Hugo Ball; he was born on this date in 1886.  Ball worked as an actor with Max Reinhardt and Hermann Bahr in Berlin until the outbreak of World War I.  A staunch pacifist, Ball made his way to Switzerland, where he turned his hand to poetry in an attempt to express his horror at the conflagration enveloping Europe. (“The war is founded on a glaring mistake, men have been confused with machines.”)

Settling in Zürich, Ball was a co-founder of the Dada movement (and, lore suggests, its namer, having allegedly picked the word at random from a dictionary).  With Tristan Tzara and Hans Arp, among others, he co-founded and presided over the Cabaret Voltaire, the epicenter of Dada.  And in 1916, he created the first Dada Manifesto (Tzara’s came two years later).

 source

 

Written by LW

February 22, 2020 at 1:01 am

“Surveillance is permanent in its effects, even if it is discontinuous in its action”*…

 

Facial recognition

China’s facial recognition technology identifies visitors in a display at the Digital China Exhibition in Fuzhou, Fujian province, earlier this year

 

Collective wisdom is that China is becoming a kind of all-efficient Technocratic Leviathan thanks to the combination of machine learning and authoritarianism. Authoritarianism has always been plagued with problems of gathering and collating information and of being sufficiently responsive to its citizens’ needs to remain stable. Now, the story goes, a combination of massive data gathering and machine learning will solve the basic authoritarian dilemma. When every transaction that a citizen engages in is recorded by tiny automatons riding on the devices they carry in their hip pockets, when cameras on every corner collect data on who is going where, who is talking to whom, and use facial recognition technology to distinguish ethnicity and identify enemies of the state, a new and far more powerful form of authoritarianism will emerge. Authoritarianism, then, can emerge as a more efficient competitor that can beat democracy at its home game (some fear this; some welcome it).

The theory behind this is one of strength reinforcing strength – the strengths of ubiquitous data gathering and analysis reinforcing the strengths of authoritarian repression to create an unstoppable juggernaut of nearly perfectly efficient oppression. Yet there is another story to be told – of weakness reinforcing weakness. Authoritarian states were always particularly prone to the deficiencies identified in James Scott’s Seeing Like a State – the desire to make citizens and their doings legible to the state, by standardizing and categorizing them, and reorganizing collective life in simplified ways, for example by remaking cities so that they were not organic structures that emerged from the doings of their citizens, but instead grand chessboards with ordered squares and boulevards, reducing all complexities to a square of planed wood. The grand state bureaucracies that were built to carry out these operations were responsible for multitudes of horrors, but also for the crumbling of the Stalinist state into a Brezhnevian desuetude, where everyone pretended to be carrying on as normal because everyone else was carrying on too. The deficiencies of state action, and its need to reduce the world into something simpler that it could comprehend and act upon created a kind of feedback loop, in which imperfections of vision and action repeatedly reinforced each other.

So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One is that, while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for biases already present in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling out, for example, particular groups regarded as problematic for special police attention, leading them to be more liable to be arrested, and so on), the bias may feed upon itself.
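The self-reinforcing dynamic described here can be illustrated with a toy simulation. This sketch and all its numbers are illustrative assumptions, not Farrell’s model: two neighborhoods have identical true crime rates, but one starts with slightly inflated records, and patrols are dispatched wherever the existing data shows more arrests.

```python
import random

def predictive_patrol(steps=50, seed=1):
    """Toy feedback loop: both neighborhoods share the same true crime
    rate, but neighborhood B starts with a few extra recorded arrests.
    Patrols go greedily wherever past records show more crime, and only
    patrolled places generate new records."""
    rng = random.Random(seed)
    recorded = {"A": 20, "B": 25}  # historical records, B slightly inflated
    true_rate = 0.3                # identical underlying rate in both places
    for _ in range(steps):
        target = max(recorded, key=recorded.get)  # greedy: follow the data
        if rng.random() < true_rate:              # a patrol observes a crime
            recorded[target] += 1
    return recorded

print(predictive_patrol())
```

Because the policy only ever follows its own records, neighborhood A’s file never updates while B’s keeps growing: the small initial bias is amplified rather than corrected, which is the feed-upon-itself pattern the excerpt describes.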

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against it (although they will find it harder to mobilize against algorithms than overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, then there will be ways for people to point out these inefficiencies or problems.

These correction tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist…

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency to bad decision making, and reducing further the possibility of negative feedback that could help correct against errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighurs today. The second will involve more ordinary self-ramifying errors that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but which may also be more pernicious, and more damaging to the political health and viability of the regime for just that reason.

So, in short, this conjecture would suggest that the conjunction of AI and authoritarianism (has someone coined the term ‘aithoritarianism’ yet? I’d really prefer not to take the blame) will have more or less the opposite effects of what people expect. It will not be Singapore writ large, and perhaps more brutal. Instead, it will be both more radically monstrous and more radically unstable…

Henry Farrell (@henryfarrell) makes the case that the “automation of authoritarianism” may backfire on China (and on the regimes to which it is exporting its surveillance technology): “Seeing Like a Finite State Machine.”

See also: “China Government Spreads Uyghur Analytics Across China.”

* Michel Foucault, Discipline and Punish: The Birth of the Prison

###

As we ponder privacy, we might recall that it was on this date in 1769 that the first patent for Venetian blinds was issued (in London, to Edward Bevan).  Invented centuries before in Persia, then brought back to Venice through trade, they became popular in Europe, and then in the U.S., both as a way to manage outside light and as an early privacy technology.

 source

 

Written by LW

December 11, 2019 at 1:01 am

“I never said, ‘I want to be alone.’ I only said ‘I want to be let alone!’ There is all the difference.”*…

 


 

Someone observing her could assemble in forensic detail her social and familial connections, her struggles and interests, and her beliefs and commitments. From Amazon purchases and Kindle highlights, from purchase records linked with her loyalty cards at the drugstore and the supermarket, from Gmail metadata and chat logs, from search history and checkout records from the public library, from Netflix-streamed movies, and from activity on Facebook and Twitter, dating sites, and other social networks, a very specific and personal narrative is clear.

If the apparatus of total surveillance that we have described here were deliberate, centralized, and explicit, a Big Brother machine toggling between cameras, it would demand revolt, and we could conceive of a life outside the totalitarian microscope. But if we are nearly as observed and documented as any person in history, our situation is a prison that, although it has no walls, bars, or wardens, is difficult to escape.

Which brings us back to the problem of “opting out.” For all the dramatic language about prisons and panopticons, the sorts of data collection we describe here are, in democratic countries, still theoretically voluntary. But the costs of refusal are high and getting higher: A life lived in social isolation means living far from centers of business and commerce, without access to many forms of credit, insurance, or other significant financial instruments, not to mention the minor inconveniences and disadvantages — long waits at road toll cash lines, higher prices at grocery stores, inferior seating on airline flights.

It isn’t possible for everyone to live on principle; as a practical matter, many of us must make compromises in asymmetrical relationships, without the control or consent for which we might wish. In those situations — everyday 21st-century life — there are still ways to carve out spaces of resistance, counterargument, and autonomy.

We are surrounded by examples of obfuscation that we do not yet think of under that name. Lawyers engage in overdisclosure, sending mountains of vaguely related client documents in hopes of burying a pertinent detail. Teenagers on social media — surveilled by their parents — will conceal a meaningful communication to a friend in a throwaway line or a song title surrounded by banal chatter. Literature and history provide many instances of “collective names,” where a population took a single identifier to make attributing any action or identity to a particular person impossible, from the fictional “I am Spartacus” to the real “Poor Conrad” and “Captain Swing” in prior centuries — and “Anonymous,” of course, in ours…

There is real utility in an obfuscation approach, whether that utility lies in bolstering an existing strong privacy system, in covering up some specific action, in making things marginally harder for an adversary, or even in the “mere gesture” of registering our discontent and refusal. After all, those who know about us have power over us. They can deny us employment, deprive us of credit, restrict our movements, refuse us shelter, membership, or education, manipulate our thinking, suppress our autonomy, and limit our access to the good life…
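The obfuscation strategy described in these examples can be sketched in a few lines. What follows is a hypothetical illustration of the idea, in the spirit of browser tools like TrackMeNot, with made-up decoy queries; it is not code from Brunton and Nissenbaum’s book:

```python
import random

# Hypothetical decoy topics; a real tool would draw from a much larger,
# evolving pool so the decoys stay plausible over time.
DECOYS = [
    "weather tomorrow", "pasta recipes", "used bicycles",
    "movie showtimes", "garden pests", "local news",
]

def obfuscated_stream(real_queries, decoys_per_query=3, seed=None):
    """Bury each real search query among shuffled decoys, so an observer
    of the stream cannot tell which interests are genuine."""
    rng = random.Random(seed)
    stream = []
    for query in real_queries:
        batch = [query] + rng.sample(DECOYS, decoys_per_query)
        rng.shuffle(batch)  # interleave so position reveals nothing
        stream.extend(batch)
    return stream
```

Note that an observer still sees every real query; what obfuscation degrades is the observer’s confidence that any given query reflects a genuine interest, which is exactly the “making things marginally harder for an adversary” utility described above.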

As Finn Brunton and Helen Nissenbaum argue in their new book Obfuscation: A User’s Guide for Privacy and Protest, those who know about us have power over us; obfuscation may be our best digital weapon: “The Fantasy of Opting Out.”

* Greta Garbo

###

As we ponder privacy, we might recall that it was on this date in 1536 that William Tyndale was strangled, then burned at the stake, for heresy in Antwerp.  An English scholar and leading Protestant reformer, Tyndale effectively replaced Wycliffe’s Middle English translation of the Bible with a vernacular version in what we now call Early Modern English (as also used, for instance, by Shakespeare). Tyndale’s translation was the first English Bible to take advantage of the printing press, and the first of the new English Bibles of the Reformation. Consequently, when it first went on sale in London, authorities gathered up all the copies they could find and burned them.  But after England went Protestant, it received official approval and ultimately became the basis of the King James Version.

Ironically, Tyndale incurred Henry VIII’s wrath after the King’s “conversion” to Protestantism, by writing a pamphlet decrying Henry’s divorce from Catherine of Aragon.  Tyndale moved to Europe, where he continued to advocate Protestant reform, ultimately running afoul of the Holy Roman Empire, which sentenced him to his death.

 source

 

Written by LW

October 6, 2019 at 1:01 am

“Outward show is a wonderful perverter of the reason”*…

 

facial analysis

Humans have long hungered for a short-hand to help in understanding and managing other humans.  From phrenology to the Myers-Briggs Test, we’ve tried dozens of short-cuts… and tended to find that at best they weren’t actually very helpful; at worst, they reinforced inaccurate stereotypes and so led to results that were unfair and ineffective.  Still, the quest continues– these days powered by artificial intelligence.  What could go wrong?…

Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, as it began testing a new surveillance program called Screening of Passengers by Observation Techniques, or Spot for short.

While developing the program, they consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train “behavior detection officers” to scan faces for signs of deception.

But when the program was rolled out in 2007, it was beset with problems. Officers were referring passengers for interrogation more or less at random, and the small number of arrests that came about were on charges unrelated to terrorism. Even more concerning was the fact that the program was allegedly used to justify racial profiling.

Ekman tried to distance himself from Spot, claiming his method was being misapplied. But others suggested that the program’s failure was due to an outdated scientific theory that underpinned Ekman’s method; namely, that emotions can be deduced objectively through analysis of the face.

In recent years, technology companies have started using Ekman’s method to train algorithms to detect emotion from facial expressions. Some developers claim that automatic emotion detection systems will not only be better than humans at discovering true emotions by analyzing the face, but that these algorithms will become attuned to our innermost feelings, vastly improving interaction with our devices.

But many experts studying the science of emotion are concerned that these algorithms will fail once again, making high-stakes decisions about our lives based on faulty science…

“Emotion detection” has grown from a research project to a $20bn industry; learn more about why that’s a cause for concern: “Don’t look now: why you should be worried about machines reading your emotions.”

* Marcus Aurelius, Meditations

###

As we insist on the individual, we might recall that it was on this date in 1989 that Tim Berners-Lee submitted a proposal to CERN for developing a new way of linking and sharing information over the Internet.

It was the first time Berners-Lee proposed a system that would ultimately become the World Wide Web; but his proposal was a relatively vague request to research the details and feasibility of such a system.  He later submitted a proposal, on November 12, 1990, that much more directly detailed the actual implementation of the World Wide Web.

 source

 

“The city’s full of people who you just see around”*…

 

An archaeologist’s reconstruction of Dvin, one of the most ancient settlements of the Armenian Highland and an ancient capital of Armenia [source], and modern day New York City [source]

Much of the history of the city—its built forms and its politics, the urban experience, and the characteristic moral ambivalence that cities arouse—can be written as a tension between the visible and the invisible. What and who gets seen? By whom? Who interprets the city’s meaning? What should remain unseen?

Rulers of cities have always had an interest in visibility, both in representing their power and in controlling people by seeing them. The earliest cities emerged out of the symbiosis of religion and political power, and the temple and the citadel gave early urbanism its most visible elements…

Warren Breckman’s fascinating history of the city as a place to see and be seen: “A Matter of Optics.”

* Terry Pratchett, Men at Arms

###

As we wonder, with Juvenal (and Alan Moore), who watches the watchmen, we might recall that it was on this date in 1752 that Benjamin Franklin and his son tested the relationship between electricity and lightning by flying a kite in a thunderstorm.  Franklin was attempting a (safer) variation on a set of French investigations about which he’d read.  The French had connected lightning rods to a Leyden jar, but one of their experiments electrocuted the investigator.  Franklin– who may have been a wastrel, but was no fool– used a kite; the increased height/distance from the strike reduces the risk of electrocution.  (But it doesn’t eliminate it: Franklin’s experiment is now illegal in many states.)

In fact, (other) French experiments had successfully demonstrated the electrical properties of lightning a month before; but word had not yet reached Philadelphia.

The Treasury’s Bureau of Engraving and Printing created this vignette (c. 1860), which was used on the $10 National Bank Note from the 1860s to 1890s

 source

Written by LW

June 15, 2018 at 1:10 am
