(Roughly) Daily

Posts Tagged ‘Henry Farrell’

“Never call an accountant a credit to his profession; a good accountant is a debit to his profession.”*…

The estimable Henry Farrell on accountancy as a lens on the hidden systems of the world…

When reading Cory Doctorow’s latest novel, The Bezzle [which your correspondent highly recommends], I kept on thinking about another recent book, Bruce Schneier’s A Hacker’s Mind: How the Powerful Bend Society’s Rules and How to Bend Them Back [ditto]. Cory’s book is fiction, and Bruce’s non-fiction, but they are clearly examples of the same broad genre (the ‘pre-apocalyptic systems thriller’?). Both are about hackers, but tell us to pay attention to other things than computers and traditional information systems. We need to go beneath the glossy surfaces of cyberpunk and look closely at the messy, complex systems of power beneath them. And these systems – like those described in the very early cyberpunk of William Gibson and others – are all about money and power.

What Bruce says:

In my story, hacking isn’t just something bored teenagers or rival governments do to computer systems … It isn’t countercultural misbehavior by the less powerful. A hacker is more likely to be working for a hedge fund, finding a loophole in financial regulations that lets her siphon extra profits out of the system. He’s more likely in a corporate office. Or an elected official. Hacking is integral to the job of every government lobbyist. It’s how social media systems keep us on their platforms.

Bruce’s prime example of hacking is Peter Thiel using a Roth IRA to stash his PayPal shares and turn them into $5 billion, tax free.

This underscores his four key points. First, hacking isn’t just about computers. It’s about finding the loopholes; figuring out how to make complex systems of rules do things that they aren’t supposed to. Second, it isn’t countercultural. Most of the hacking you might care about is done by boring-seeming people in boring-seeming clothes (I’m reminded of Sam Anthony’s anecdote about how the costume designer of the film Hackers visited with people at a 2600 conference for background research, but decided that they “were a bunch of boring nerds and went and took pictures of club kids on St. Marks instead”). Third, hacking tends to reinforce power asymmetries rather than undermine them. The rich have far more resources to figure out how to gimmick the rules. Fourth, we should mostly identify ourselves not with the hackers but with the hacked. Because that is who, in fact, we mostly are….

… Still, there are things you can do to fight back. One of the major themes of The Bezzle is that prison is now a profit model. Tyler Cowen, the economist, used to talk a lot about “markets in everything.” I occasionally responded by pointing to “captive markets in everything.” And there isn’t any market that is more literally captive than prisoners. As for-profit corporations (and venal authorities) came to realize this, they started to systematically remake the rules and hack the gaps in the regulatory system to squeeze prisoners and their relatives for as much money as possible, charging extortionate amounts for mail, for phone calls, for books that could only be accessed through proprietary electronic tablets.

That’s changing, in part thanks to ingenious counter hacking. The Appeal published a piece last week on how Securus, “the nation’s largest prison and jail telecom corporation,” had to effectively default on nearly a billion dollars of debt. Part of the reason for the company’s travails is that activists have figured out how to use the system against it…

… In other sectors, where companies doing sketchy things have publicly traded shares, activists have started getting motions passed at shareholder meetings, to challenge their policies. However, the companies have begun in turn to sue, using the legal system in unconventional ways to try to prevent these unconventional tactics. Again, as both Bruce and Cory suggest, the preponderance of hacking muscle is owned by the powerful, not those challenging them.

Even so, the more that ordinary people understand the complexities of the system, the more that they will be able to push back. Perhaps the most magnificent example of this is Max Schrems, an Austrian law student who successfully bollocksed-up the entire system of EU-US data transfers by spotting loopholes and incoherencies and weaponizing them in EU courts. Cory’s Martin Hench books seem to me to be purpose-designed to inspire a thousand Max Schremses – people who are probably past their teenage years, have some grounding in the relevant professions, and really want to see things change.

And in this, the books return to some of the original ambitions of ‘cyberpunk,’ a somewhat ungainly and contested term that has come to emphasize the literary movement’s countercultural cool over its actual intentions…

One word that never appears in Neuromancer, and for good reason: “Internet.” When it was written, the Internet was just one among many information networks, and there was no reason to suspect that it would defeat and devour its rivals, subordinating them to its own logic. Before cyberspace and the Internet became entangled, Gibson’s term was a synecdoche for a much broader set of phenomena. What cyberspace actually referred to back then was more ‘capitalism’ than ‘computerized information.’

So, in a very important sense, The Bezzle returns to the original mission statement – understanding how the hacker mythos is entwined with capitalism. To actually understand hacking, we need to understand the complex systems of finance and how they work. If you really want to penetrate the system, you need to really grasp what money is and what it does. That, I think, is what Cory is trying to tell us. And so too Bruce. The nexus between accountancy and hacking is not a literary trick or artifice. It is an important fact about the world, which both fiction and non-fiction writers need to pay attention to…

Eminently worth reading in full: “Today’s hackers wear green eyeshades, not mirrorshades,” from @henryfarrell in his invaluable newsletter Programmable Mutter.

* Charles Lyell

###

As we ponder power, we might recall that on this date in 1927, a “counter-hacker” in a different domain, Mae West, was sentenced to jail for obscenity.

Her first starring role on Broadway was in a 1926 play entitled Sex, which she wrote, produced, and directed. Although conservative critics panned the show, ticket sales were strong. The production did not go over well with city officials, who had received complaints from some religious groups, and the theater was raided and West arrested along with the cast. She was taken to the Jefferson Market Court House (now Jefferson Market Library), where she was prosecuted on morals charges, and on April 19, 1927, was sentenced to 10 days for “corrupting the morals of youth.” Though West could have paid a fine and been let off, she chose the jail sentence for the publicity it would garner. While incarcerated on Welfare Island (now known as Roosevelt Island), she dined with the warden and his wife; she told reporters that she had worn her silk panties while serving time, in lieu of the “burlap” the other girls had to wear. West got great mileage from this jail stint. She served eight days with two days off for “good behavior”.

Wikipedia


“Surveillance is permanent in its effects, even if it is discontinuous in its action”*…

 

China’s facial recognition technology identifies visitors in a display at the Digital China Exhibition in Fuzhou, Fujian province, earlier this year

The collective wisdom is that China is becoming a kind of all-efficient Technocratic Leviathan thanks to the combination of machine learning and authoritarianism. Authoritarianism has always been plagued with problems of gathering and collating information and of being sufficiently responsive to its citizens’ needs to remain stable. Now, the story goes, a combination of massive data gathering and machine learning will solve the basic authoritarian dilemma. When every transaction that a citizen engages in is recorded by tiny automatons riding on the devices they carry in their hip pockets, when cameras on every corner collect data on who is going where, who is talking to whom, and use facial recognition technology to distinguish ethnicity and identify enemies of the state, a new and far more powerful form of authoritarianism will emerge. Authoritarianism, then, can emerge as a more efficient competitor that can beat democracy at its home game (some fear this; some welcome it).

The theory behind this is one of strength reinforcing strength – the strengths of ubiquitous data gathering and analysis reinforcing the strengths of authoritarian repression to create an unstoppable juggernaut of nearly perfectly efficient oppression. Yet there is another story to be told – of weakness reinforcing weakness. Authoritarian states were always particularly prone to the deficiencies identified in James Scott’s Seeing Like a State – the desire to make citizens and their doings legible to the state, by standardizing and categorizing them, and reorganizing collective life in simplified ways, for example by remaking cities so that they were not organic structures that emerged from the doings of their citizens, but instead grand chessboards with ordered squares and boulevards, reducing all complexities to a square of planed wood. The grand state bureaucracies that were built to carry out these operations were responsible for multitudes of horrors, but also for the crumbling of the Stalinist state into a Brezhnevian desuetude, where everyone pretended to be carrying on as normal because everyone else was carrying on too. The deficiencies of state action, and its need to reduce the world into something simpler that it could comprehend and act upon created a kind of feedback loop, in which imperfections of vision and action repeatedly reinforced each other.

So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One is that, while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling out particular groups regarded as problematic for extra police attention, for example, leading them to be more liable to be arrested and so on), the bias may feed upon itself.
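The feedback mechanism Farrell describes is mechanical enough to sketch in code. The toy simulation below is purely illustrative (the rates, group labels, and the superlinear attention rule are my assumptions, not anything from the newsletter): two groups exhibit identical underlying behavior, but one starts with slightly more scrutiny, and a system that treats its own arrest records as ground truth concentrates attention on whichever group its records flag most.

```python
# Toy sketch of a self-reinforcing bias loop (illustrative assumptions only).
# Both groups have the SAME true rate of detectable behavior; group A merely
# starts with slightly more police attention.
TRUE_RATE = {"A": 0.05, "B": 0.05}   # identical real behavior
attention = {"A": 0.55, "B": 0.45}   # small pre-existing bias in scrutiny
arrests = {"A": 1.0, "B": 1.0}       # seed counts to avoid division by zero

for year in range(20):
    for group in TRUE_RATE:
        # Recorded arrests depend on attention as well as behavior:
        # more scrutiny means more recorded incidents, at identical rates.
        arrests[group] += 1000 * attention[group] * TRUE_RATE[group]
    # The system treats its biased records as ground truth and reallocates
    # attention superlinearly (exponent > 1) toward the most-recorded group.
    weight = {g: arrests[g] ** 2 for g in arrests}
    total = sum(weight.values())
    attention = {g: weight[g] / total for g in arrests}

print({g: round(share, 2) for g, share in attention.items()})
```

Under these assumptions, group A ends up absorbing the overwhelming majority of attention despite behaving identically to group B; the initial tilt, once laundered through the records, looks like evidence for itself. With a linear (exponent 1) reallocation rule the loop is merely neutral; it is the disproportionate weighting of heavily-recorded groups that makes the bias amplify.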

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, then there will be ways for people to point out these inefficiencies or problems.

These correction tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist…

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency to bad decision making, and reducing further the possibility of negative feedback that could help correct against errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighur today. The second will involve more ordinary self-ramifying errors, that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but that may also be more pernicious, and more damaging to the political health and viability of the regime for just that reason.

So in short, this conjecture would suggest that the conjunction of AI and authoritarianism (has someone coined the term ‘aithoritarianism’ yet? I’d really prefer not to take the blame) will have more or less the opposite effects of what people expect. It will not be Singapore writ large, and perhaps more brutal. Instead, it will be both more radically monstrous and more radically unstable…

Henry Farrell (@henryfarrell) makes the case that the “automation of authoritarianism” may backfire on China (and on the regimes to which it is exporting its surveillance technology): “Seeing Like a Finite State Machine.”

See also: “China Government Spreads Uyghur Analytics Across China.”

* Michel Foucault, Discipline and Punish: The Birth of the Prison

###

As we ponder privacy, we might recall that it was on this date in 1769 that the first patent was issued (in London, to John Bevan) for Venetian blinds. Invented centuries before in Persia, then brought back to Venice through trade, they became popular in Europe and then the U.S., both as a manager of outside light and as an early privacy technology.

venetian blinds

 

Written by (Roughly) Daily

December 11, 2019 at 1:01 am