(Roughly) Daily

Posts Tagged ‘AI’

“We must be free not because we claim freedom, but because we practice it”*…

 

algorithm

 

There is a growing sense of unease around algorithmic modes of governance (‘algocracies’) and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology…

From a pre-print of John Danaher’s (@JohnDanaher) chapter in the forthcoming Oxford Handbook on the Philosophy of Technology, edited by Shannon Vallor: “Freedom in an Age of Algocracy”… a little dense, but very useful.

[image above: source]

* William Faulkner

###

As we meet the new boss, same as the old boss, we might recall that it was on this date in 1962 that telephone and television signals were first relayed in space via the communications satellite Echo 1– basically a big metallic balloon that simply bounced radio signals off its surface.  Simple, but effective.

Inflating the sphere on the ground would have required forty thousand pounds (18,144 kg) of air, so it was inflated in space instead; while in orbit, only several pounds of gas were needed to keep it inflated.

Fun fact: Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.

source

 

Written by LW

February 24, 2020 at 1:01 am

“It is forbidden to kill; therefore all murderers are punished unless they kill in large numbers and to the sound of trumpets”*…

 

Pope AI

Francis Bacon, Study after Velazquez’s Portrait of Pope Innocent X, 1953

 

Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done…

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed…

Technological proliferation is not a list of principles. It is a deep, multivalent historical process with many radically different stakeholders over many different time-scales. People who invent technology never get to set the rules for what is done with it. A “non-evil” Google, built by two Stanford dropouts, is just not the same entity as modern Alphabet’s global multinational network, with its extensive planetary holdings in clouds, transmission cables, operating systems, and device manufacturing.

It’s not that Google and Alphabet become evil just because they’re big and rich. Frankly, they’re not even all that “evil.” They’re just inherently involved in huge, tangled, complex, consequential schemes, with much more variegated populations than had originally been imagined. It’s like the ethical difference between being two parish priests and becoming Pope.

Of course the actual Pope will confront Artificial Intelligence. His response will not be “is it socially beneficial to the user-base?” but rather, “does it serve God?” So unless you’re willing to morally out-rank the Pope, you need to understand that religious leaders will use Artificial Intelligence in precisely the way that televangelists have used television.

So I don’t mind the moralizing about AI. I even enjoy it as metaphysical game, but I do have one caveat about this activity, something that genuinely bothers me. The practitioners of AI are not up-front about the genuine allure of their enterprise, which is all about the old-school Steve-Jobsian charisma of denting the universe while becoming insanely great. Nobody does AI for our moral betterment; everybody does it to feel transcendent.

AI activists are not everyday brogrammers churning out grocery-code. These are visionary zealots driven by powerful urges they seem unwilling to confront. If you want to impress me with your moral authority, gaze first within your own soul.

Excerpted from the marvelous Bruce Sterling’s essay “Artificial Morality,” a contribution to the Provocations series, a project of the Los Angeles Review of Books in conjunction with UCI’s “The Future of the Future: The Ethics and Implications of AI” conference.

* Voltaire

###

As we agonize over algorithms, we might recall that it was on this date in 1872 that Luther Crowell patented a machine for the manufacture of accordion-sided, flat-bottomed paper bags (#123,811).  That said, Margaret E. Knight might more accurately be considered the “mother of the modern shopping bag”; she had perfected square bottoms two years earlier.

source

 

“Surveillance is permanent in its effects, even if it is discontinuous in its action”*…

 

Facial recognition

China’s facial recognition technology identifies visitors in a display at the Digital China Exhibition in Fuzhou, Fujian province, earlier this year

 

Collective wisdom is that China is becoming a kind of all-efficient Technocratic Leviathan thanks to the combination of machine learning and authoritarianism. Authoritarianism has always been plagued with problems of gathering and collating information and of being sufficiently responsive to its citizens’ needs to remain stable. Now, the story goes, a combination of massive data gathering and machine learning will solve the basic authoritarian dilemma. When every transaction that a citizen engages in is recorded by tiny automatons riding on the devices they carry in their hip pockets, when cameras on every corner collect data on who is going where, who is talking to whom, and uses facial recognition technology to distinguish ethnicity and identify enemies of the state, a new and far more powerful form of authoritarianism will emerge. Authoritarianism then, can emerge as a more efficient competitor that can beat democracy at its home game (some fear this; some welcome it).

The theory behind this is one of strength reinforcing strength – the strengths of ubiquitous data gathering and analysis reinforcing the strengths of authoritarian repression to create an unstoppable juggernaut of nearly perfectly efficient oppression. Yet there is another story to be told – of weakness reinforcing weakness. Authoritarian states were always particularly prone to the deficiencies identified in James Scott’s Seeing Like a State – the desire to make citizens and their doings legible to the state, by standardizing and categorizing them, and reorganizing collective life in simplified ways, for example by remaking cities so that they were not organic structures that emerged from the doings of their citizens, but instead grand chessboards with ordered squares and boulevards, reducing all complexities to a square of planed wood. The grand state bureaucracies that were built to carry out these operations were responsible for multitudes of horrors, but also for the crumbling of the Stalinist state into a Brezhnevian desuetude, where everyone pretended to be carrying on as normal because everyone else was carrying on too. The deficiencies of state action, and its need to reduce the world into something simpler that it could comprehend and act upon created a kind of feedback loop, in which imperfections of vision and action repeatedly reinforced each other.

So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One – that while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling e.g. particular groups that are regarded as problematic out for particular police attention, leading them to be more liable to be arrested and so on), the bias may feed upon itself.

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against it (although they will find it harder to mobilize against algorithms than overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, then there will be ways for people to point out these inefficiencies or problems.

These correction tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist…

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency to bad decision making, and reducing further the possibility of negative feedback that could help correct against errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighur today. The second will involve more ordinary self-ramifying errors, that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but that may also be more pernicious, and more damaging to the political health and viability of the regime for just that reason.

So in short, this conjecture would suggest that  the conjunction of AI and authoritarianism (has someone coined the term ‘aithoritarianism’ yet? I’d really prefer not to take the blame), will have more or less the opposite effects of what people expect. It will not be Singapore writ large, and perhaps more brutal. Instead, it will be both more radically monstrous and more radically unstable…

Henry Farrell (@henryfarrell) makes the case that the “automation of authoritarianism” may backfire on China (and on the regimes to which it is exporting its surveillance technology): “Seeing Like a Finite State Machine.”
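To make the feedback loop Farrell describes concrete, here is a toy simulation (purely illustrative, invented for this post, not from the essay): two groups behave identically, but one starts out receiving a larger share of enforcement attention, and since next year’s attention is allocated according to this year’s arrests, and arrests depend on where the system looked, the initial imbalance never corrects itself…

import random

# Toy model of a self-reinforcing bias loop (illustrative only): two groups
# with IDENTICAL true offense rates, but group B starts out receiving a
# larger share of enforcement attention.
TRUE_OFFENSE_RATE = 0.05
attention = {"A": 0.4, "B": 0.6}      # share of patrols sent to each group
arrests = {"A": 0, "B": 0}

random.seed(0)
for year in range(10):
    for group in ("A", "B"):
        patrols = int(1000 * attention[group])
        # Arrests scale with how hard you look, not with how people behave.
        arrests[group] += sum(random.random() < TRUE_OFFENSE_RATE
                              for _ in range(patrols))
    # Next year's attention is allocated in proportion to accumulated
    # arrests, so the data can only ever confirm the original allocation.
    total = arrests["A"] + arrests["B"]
    attention = {g: arrests[g] / total for g in arrests}
    print(year, {g: round(share, 2) for g, share in attention.items()})

Even after ten rounds the “data” still say that group B is the problem, though the only real difference was the starting allocation; in a system with no channel for contesting the numbers, that is the whole story.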

See also: “China Government Spreads Uyghur Analytics Across China.”

* Michel Foucault, Discipline and Punish: The Birth of the Prison

###

As we ponder privacy, we might recall that it was on this date in 1769 that the first patent for Venetian blinds was issued (in London, to John Bevan).  Invented centuries before in Persia, then brought to Venice through trade, they became popular in Europe and then the U.S., both as a way to manage outside light and as an early privacy technology.

source

 

Written by LW

December 11, 2019 at 1:01 am

“Not with a bang, but a whimper”*…

 

automation

 

What actually happens to workers when a company deploys automation? The common assumption seems to be that the employee simply disappears wholesale, replaced one-for-one with an AI interface or an array of mechanized arms.

Yet given the extensive punditeering, handwringing, and stump-speeching around the “robots are coming for our jobs” phenomenon—which I will never miss an opportunity to point out is falsely represented—research into what happens to the individual worker remains relatively thin. Studies have attempted to monitor the impact of automation on wages on aggregate or to correlate employment to levels of robotization.

But few in-depth investigations have been made into what happens to each worker after their companies roll out automation initiatives. Earlier this year, though, a paper authored by economists James Bessen, Maarten Goos, Anna Salomons, and Wiljan Van den Berge set out to do exactly that…

What emerges is a portrait of workplace automation that is ominous in a less dramatic manner than we’re typically made to understand. For one thing, there is no ‘robot apocalypse’, even after a major corporate automation event. Unlike mass layoffs, automation does not appear to immediately and directly send workers packing en masse.

Instead, automation increases the likelihood that workers will be driven away from their previous jobs at the companies—whether they’re fired, or moved to less rewarding tasks, or quit—and causes a long-term loss of wages for the employee.

The report finds that “firm-level automation increases the probability of workers separating from their employers and decreases days worked, leading to a 5-year cumulative wage income loss of 11 percent of one year’s earnings.” That’s a pretty significant loss.

Worse still, the study found that even in the Netherlands, which has a comparatively generous social safety net to, say, the United States, workers were only able to offset a fraction of those losses with benefits provided by the state. Older workers, meanwhile, were more likely to retire early—deprived of years of income they may have been counting on.

Interestingly, the effects of automation were felt similarly through all manner of company—small, large, industrial, services-oriented, and so on. The study covered all non-finance sector firms, and found that worker separation and income loss were “quite pervasive across worker types, firm sizes and sectors.”

Automation, in other words, forces a more pervasive, slower-acting and much less visible phenomenon than the robots-are-eating-our-jobs talk is preparing us for…

The result, Bessen says, is an added strain on the social safety net that it is currently woefully unprepared to handle. As more and more firms join the automation goldrush—a 2018 McKinsey survey of 1,300 companies worldwide found that three-quarters of them had either begun to automate business processes or planned to do so next year—the number of workers forced out of firms seems likely to tick up, or at least hold steady. What is unlikely to happen, per this research, is an automation-driven mass exodus of jobs.

This is a double-edged sword: While it’s obviously good that thousands of workers are unlikely to be fired in one fell swoop when a process is automated at a corporation, it also means the pain of automation is distributed in smaller, more personalized doses, and thus less likely to prompt any sort of urgent public response. If an entire Amazon warehouse were suddenly automated, it might spur policymakers to try to address the issue; if automation has been slowly hurting us for years, it’s harder to rally support for stemming the pain…

Brian Merchant on the ironic challenge of addressing the slow-motion, trickle-down social, economic, and cultural threats of automation– that they will accrue gradually, like erosion, not catastrophically… making it harder to generate a sense of urgency around creating a response: “There’s an Automation Crisis Underway Right Now, It’s Just Mostly Invisible.”

* T. S. Eliot, “The Hollow Men”

###

As we think systemically, we might recall that it was on this date in 1994 that Ken McCarthy, Marc Andreessen, and Mark Graham held the first conference to focus on the commercial potential of the World Wide Web.

 

 

Written by LW

November 5, 2019 at 1:01 am

“O brave new world”*…

 

law and AI

 

With the arrival of autonomous weapons systems (AWS)[1] on the 21st century battlefield, the nature of warfare is poised for dramatic change.[2] Overseen by artificial intelligence (AI), fueled by terabytes of data and operating at lightning-fast speed, AWS will be the decisive feature of future military conflicts.[3] Nonetheless, under the American way of war, AWS will operate within existing legal and policy guidelines that establish conditions and criteria for the application of force.[4] Even as the Department of Defense (DoD) places limitations on when and how AWS may take action,[5] the pace of new conflicts and adoption of AWS by peer competitors will ultimately push military leaders to empower AI-enabled weapons to make decisions with less and less human input.[6] As such, timely, accurate, and context-specific legal advice during the planning and operation of AWS missions will be essential. In the face of digital-decision-making, mere human legal advisors will be challenged to keep up!

Fortunately, at the same time that AI is changing warfare, the practice of law is undergoing a similar AI-driven transformation.[7]

From The Judge Advocate General’s Corps’ The Reporter: “Autonomous Weapons Need Autonomous Lawyers.”

As I finish drafting this post [on October 5], I’ve discovered that none of the links are available any longer; the piece, along with the referenced articles within it (also from The Reporter), was apparently removed from public view while I was drafting this, from a Reporter web page that, obviously, had opened for me earlier.  You will find other references to (and excerpts from/comments on) the article here, here, and here.  I’m leaving the original links in, in case they become active again…

* Shakespeare, The Tempest

###

As we wonder if this can end well, we might recall that it was on this date in 1983 that Ameritech executive Bob Barnett made a phone call from a car parked near Soldier Field in Chicago, officially launching the first cellular network in the United States.


Barnett (foreground, in the car) and his audience

 

Written by LW

October 13, 2019 at 1:01 am

“How about a little magic?”*…

 

sorcerers apprentice

 

Once upon a time (bear with me if you’ve heard this one), there was a company which made a significant advance in artificial intelligence. Given their incredibly sophisticated new system, they started to put it to ever-wider uses, asking it to optimize their business for everything from the lofty to the mundane.

And one day, the CEO wanted to grab a paperclip to hold some papers together, and found there weren’t any in the tray by the printer. “Alice!” he cried (for Alice was the name of his machine learning lead) “Can you tell the damned AI to make sure we don’t run out of paperclips again?”…

What could possibly go wrong?

[As you’ll read in the full and fascinating article, a great deal…]

Computer scientists tell the story of the Paperclip Maximizer as a sort of cross between the Sorcerer’s Apprentice and the Matrix; a reminder of why it’s crucially important to tell your system not just what its goals are, but how it should balance those goals against costs. It frequently comes with a warning that it’s easy to forget a cost somewhere, and so you should always check your models carefully to make sure they aren’t accidentally turning into Paperclip Maximizers…

But this parable is not just about computer science. Replace the paper clips in the story above with money, and you will see the rise of finance…

Yonatan Zunger tells a powerful story that’s not (only) about AI: “The Parable of the Paperclip Maximizer.”
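The computer-science moral of the parable, that a system must be told not just its goal but what pursuing that goal may cost, fits in a few lines.  A minimal sketch (hypothetical numbers; nothing here comes from Zunger’s essay): the same optimizer run twice, once with costs invisible to it and once with an explicit cost term…

import math

# Toy illustration of the Paperclip Maximizer lesson: an optimizer told only
# to maximize output will spend every resource it can reach; an explicit
# cost term makes it stop at a sensible point. All numbers are invented.

def paperclips(spend):
    # Diminishing returns: each extra unit of resource yields fewer clips.
    return 100 * math.sqrt(spend)

def naive_objective(spend):
    return paperclips(spend)                    # costs are invisible here

def balanced_objective(spend, cost_per_unit=10.0):
    return paperclips(spend) - cost_per_unit * spend

def best_spend(objective, budget=1000):
    # Pick the resource outlay that the objective scores highest.
    return max(range(budget + 1), key=objective)

print(best_spend(naive_objective))     # 1000: spends the whole budget, every time
print(best_spend(balanced_objective))  # 25: stops where marginal cost exceeds marginal value

The naive version exhausts its budget because nothing in its objective says resources matter; forget (or mis-weight) the cost term and you are back to paperclips… or, per Zunger’s closing move, to finance.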

* Mickey Mouse, The Sorcerer’s Apprentice

###

As we’re careful what we wish for (and how we wish for it), we might recall that it was on this date in 1631 that the Puritans in the recently-chartered Massachusetts Bay Colony issued a General Court Ordinance that banned gambling: “whatsoever that have cards, dice or tables in their houses, shall make away with them before the next court under pain of punishment.”

source

 

Written by LW

March 22, 2019 at 1:01 am

“Outward show is a wonderful perverter of the reason”*…

 

facial analysis

Humans have long hungered for a shorthand to help in understanding and managing other humans.  From phrenology to the Myers-Briggs Test, we’ve tried dozens of shortcuts… and tended to find that at best they weren’t very helpful; at worst, they reinforced inaccurate stereotypes, and so led to results that were unfair and ineffective.  Still, the quest continues– these days powered by artificial intelligence.  What could go wrong?…

Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, as it began testing a new surveillance program called the Screening of Passengers by Observation Techniques program, or Spot for short.

While developing the program, they consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train “behavior detection officers” to scan faces for signs of deception.

But when the program was rolled out in 2007, it was beset with problems. Officers were referring passengers for interrogation more or less at random, and the small number of arrests that came about were on charges unrelated to terrorism. Even more concerning was the fact that the program was allegedly used to justify racial profiling.

Ekman tried to distance himself from Spot, claiming his method was being misapplied. But others suggested that the program’s failure was due to an outdated scientific theory that underpinned Ekman’s method; namely, that emotions can be deduced objectively through analysis of the face.

In recent years, technology companies have started using Ekman’s method to train algorithms to detect emotion from facial expressions. Some developers claim that automatic emotion detection systems will not only be better than humans at discovering true emotions by analyzing the face, but that these algorithms will become attuned to our innermost feelings, vastly improving interaction with our devices.

But many experts studying the science of emotion are concerned that these algorithms will fail once again, making high-stakes decisions about our lives based on faulty science…

“Emotion detection” has grown from a research project to a $20bn industry; learn more about why that’s a cause for concern: “Don’t look now: why you should be worried about machines reading your emotions.”

* Marcus Aurelius, Meditations

###

As we insist on the individual, we might recall that it was on this date in 1989 that Tim Berners-Lee submitted a proposal to CERN for developing a new way of linking and sharing information over the Internet.

It was the first time Berners-Lee proposed a system that would ultimately become the World Wide Web, though his proposal was a relatively vague request to research the details and feasibility of such a system.  He followed up on November 12, 1990 with a proposal that much more directly detailed the actual implementation of the World Wide Web.

source

 
