(Roughly) Daily

Posts Tagged ‘AI’

“I am so clever that sometimes I don’t understand a single word of what I am saying”*…

Humans claim to be intelligent, but what exactly is intelligence? Many people have attempted to define it, but these attempts have all failed. So I propose a new definition: intelligence is whatever humans do.

I will attempt to prove this new definition is superior to all previous attempts to define intelligence. First, consider humans’ history. It is a story of repeated failures. First humans thought the Earth was flat. Then they thought the Sun went around the Earth. Then they thought the Earth was the center of the universe. Then they thought the universe was static and unchanging. Then they thought the universe was infinite and expanding. Humans were wrong about alchemy, phrenology, bloodletting, creationism, astrology, numerology, and homeopathy. They were also wrong about the best way to harvest crops, the best way to govern, the best way to punish criminals, and the best way to cure the sick.

I will not go into the many ways humans have been wrong about morality. The list is long and depressing. If humans are so smart, how come they keep being wrong about everything?

So, what does it mean to be intelligent?…

Arram Sabeti (@arram) gave a prompt to GPT-3, a machine-learning language model; it wrote: “Are Humans Intelligent? A Salty AI Op-Ed.”
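
For the technically curious, running a prompt like Sabeti’s takes only a few lines. Below is a minimal sketch, assuming OpenAI’s Python client as it existed in the GPT-3 era; the prompt wording and sampling parameters are illustrative guesses, not Sabeti’s actual settings.

```python
# A minimal sketch of prompting GPT-3, using OpenAI's Python client
# as it existed at the time (the legacy Completion API).
import openai

openai.api_key = "YOUR_API_KEY"  # assumes an API key from OpenAI

response = openai.Completion.create(
    engine="davinci",      # the original GPT-3 base model
    prompt="Are Humans Intelligent? A Salty AI Op-Ed\n\n",
    max_tokens=300,        # length of the generated continuation
    temperature=0.8,       # higher values yield more surprising text
)
print(response.choices[0].text)
```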

(image above: source)

* Oscar Wilde

###

As we hail our new robot overlords, we might recall that it was on this date in 1814 that London suffered “The Great Beer Flood Disaster” when the metal bands on an immense vat at Meux’s Horse Shoe Brewery snapped, releasing a tidal wave of 3,555 barrels of porter (571 tons– more than 1 million pints), which swept away the brewery walls, flooded nearby basements, and collapsed several adjacent tenements. While there were reports of over twenty fatalities resulting from poisoning by porter fumes or alcohol coma, it appears that the death toll was eight, all of them victims of the destruction the huge wave of beer caused in the structures surrounding the brewery.

(The U.S. had its own vat mishap in 1919, when a Boston molasses plant suffered a similar failure, creating a heavy wave of molasses moving at an estimated 35 mph; it killed 21 and injured 150.)

Meux’s Horse Shoe Brewery

source

“We must be free not because we claim freedom, but because we practice it”*…

 

[image: algorithm]

 

There is a growing sense of unease around algorithmic modes of governance (‘algocracies’) and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception of freedom as well as a broader conception of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology…

From a pre-print of John Danaher’s (@JohnDanaher) chapter in the forthcoming Oxford Handbook of Philosophy of Technology, edited by Shannon Vallor: “Freedom in an Age of Algocracy”… a little dense, but very useful.

[image above: source]

* William Faulkner

###

As we meet the new boss, same as the old boss, we might recall that it was on this date in 1962 that telephone and television signals were first relayed in space via the communications satellite Echo 1– basically a big metallic balloon that simply bounced radio signals off its surface.  Simple, but effective.

Fully inflating the sphere on the ground would have required forty thousand pounds (18,144 kg) of air, so it was inflated in space, where only a few pounds of gas were needed to keep it filled.

Fun fact: Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.

source

 

Written by LW

February 24, 2020 at 1:01 am

“It is forbidden to kill; therefore all murderers are punished unless they kill in large numbers and to the sound of trumpets”*…

 


Francis Bacon, Study after Velázquez’s Portrait of Pope Innocent X, 1953

 

Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done…

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed…

Technological proliferation is not a list of principles. It is a deep, multivalent historical process with many radically different stakeholders over many different time-scales. People who invent technology never get to set the rules for what is done with it. A “non-evil” Google, built by two Stanford dropouts, is just not the same entity as modern Alphabet’s global multinational network, with its extensive planetary holdings in clouds, transmission cables, operating systems, and device manufacturing.

It’s not that Google and Alphabet become evil just because they’re big and rich. Frankly, they’re not even all that “evil.” They’re just inherently involved in huge, tangled, complex, consequential schemes, with much more variegated populations than had originally been imagined. It’s like the ethical difference between being two parish priests and becoming Pope.

Of course the actual Pope will confront Artificial Intelligence. His response will not be “is it socially beneficial to the user-base?” but rather, “does it serve God?” So unless you’re willing to morally out-rank the Pope, you need to understand that religious leaders will use Artificial Intelligence in precisely the way that televangelists have used television.

So I don’t mind the moralizing about AI. I even enjoy it as a metaphysical game, but I do have one caveat about this activity, something that genuinely bothers me. The practitioners of AI are not up-front about the genuine allure of their enterprise, which is all about the old-school Steve-Jobsian charisma of denting the universe while becoming insanely great. Nobody does AI for our moral betterment; everybody does it to feel transcendent.

AI activists are not everyday brogrammers churning out grocery-code. These are visionary zealots driven by powerful urges they seem unwilling to confront. If you want to impress me with your moral authority, gaze first within your own soul.

Excerpted from the marvelous Bruce Sterling’s essay “Artificial Morality,” a contribution to the Provocations series, a project of the Los Angeles Review of Books in conjunction with UCI’s “The Future of the Future: The Ethics and Implications of AI” conference.

* Voltaire

###

As we agonize over algorithms, we might recall that it was on this date in 1872 that Luther Crowell patented a machine for the manufacture of accordion-sided, flat-bottomed paper bags (#123,811).  That said, Margaret E. Knight might more accurately be considered the “mother of the modern shopping bag”; she had perfected square bottoms two years earlier.

source

 

“Surveillance is permanent in its effects, even if it is discontinuous in its action”*…

 


China’s facial recognition technology identifies visitors in a display at the Digital China Exhibition in Fuzhou, Fujian province, earlier this year

 

Collective wisdom is that China is becoming a kind of all-efficient Technocratic Leviathan thanks to the combination of machine learning and authoritarianism. Authoritarianism has always been plagued with problems of gathering and collating information and of being sufficiently responsive to its citizens’ needs to remain stable. Now, the story goes, a combination of massive data gathering and machine learning will solve the basic authoritarian dilemma. When every transaction that a citizen engages in is recorded by tiny automatons riding on the devices they carry in their hip pockets, when cameras on every corner collect data on who is going where, who is talking to whom, and use facial recognition technology to distinguish ethnicity and identify enemies of the state, a new and far more powerful form of authoritarianism will emerge. Authoritarianism, then, can emerge as a more efficient competitor that can beat democracy at its home game (some fear this; some welcome it).

The theory behind this is one of strength reinforcing strength – the strengths of ubiquitous data gathering and analysis reinforcing the strengths of authoritarian repression to create an unstoppable juggernaut of nearly perfectly efficient oppression. Yet there is another story to be told – of weakness reinforcing weakness. Authoritarian states were always particularly prone to the deficiencies identified in James Scott’s Seeing Like a State – the desire to make citizens and their doings legible to the state, by standardizing and categorizing them, and reorganizing collective life in simplified ways, for example by remaking cities so that they were not organic structures that emerged from the doings of their citizens, but instead grand chessboards with ordered squares and boulevards, reducing all complexities to a square of planed wood. The grand state bureaucracies that were built to carry out these operations were responsible for multitudes of horrors, but also for the crumbling of the Stalinist state into a Brezhnevian desuetude, where everyone pretended to be carrying on as normal because everyone else was carrying on too. The deficiencies of state action, and its need to reduce the world into something simpler that it could comprehend and act upon created a kind of feedback loop, in which imperfections of vision and action repeatedly reinforced each other.

So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One is that, while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling out, for example, particular groups that are regarded as problematic for special police attention, leading them to be more liable to be arrested and so on), the bias may feed upon itself.

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than against overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, then there will be ways for people to point out these inefficiencies or problems.

These correction tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist…

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency to bad decision making, and reducing further the possibility of negative feedback that could help correct against errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighur today. The second will involve more ordinary self-ramifying errors that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but that may also be more pernicious, and more damaging to the political health and viability of the regime for just that reason.

So in short, this conjecture would suggest that  the conjunction of AI and authoritarianism (has someone coined the term ‘aithoritarianism’ yet? I’d really prefer not to take the blame), will have more or less the opposite effects of what people expect. It will not be Singapore writ large, and perhaps more brutal. Instead, it will be both more radically monstrous and more radically unstable…

Henry Farrell (@henryfarrell) makes the case that the “automation of authoritarianism” may backfire on China (and on the regimes to which it is exporting its surveillance technology): “Seeing Like a Finite State Machine.”
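
The bias-feedback mechanism at the heart of Farrell’s argument can be made concrete with a toy simulation. In the minimal Python sketch below (every number is invented for illustration; none is drawn from real policing data), two districts have identical true incident rates, but patrol attention is allocated in proportion to past recorded incidents. Since incidents are only recorded where patrols look, the initial skew in the data never self-corrects.

```python
import random

def simulate(rounds=20, true_rate=0.1, patrols_per_round=100, seed=1):
    """Toy model of a self-reinforcing data feedback loop.

    Two districts share the SAME true incident rate, but the historical
    record starts out skewed. Each round, patrols are allocated in
    proportion to recorded incidents, and incidents are only recorded
    where patrols look -- so the bias in the data feeds on itself.
    """
    random.seed(seed)
    recorded = [5, 10]  # biased starting data: district 1 over-represented
    for _ in range(rounds):
        total = sum(recorded)
        patrols = [round(patrols_per_round * r / total) for r in recorded]
        for d in (0, 1):
            # each patrol observes an incident with the same true rate
            hits = sum(random.random() < true_rate for _ in range(patrols[d]))
            recorded[d] += hits
    return recorded

# Despite identical true rates, the record stays roughly 1:2 --
# the system keeps "confirming" the bias it started with.
print(simulate())
```

In a democracy, Farrell argues, someone eventually points at output like this and objects; in an authoritarian system, the objection itself is suppressed, and the loop runs on.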

See also: “China Government Spreads Uyghur Analytics Across China.”

* Michel Foucault, Discipline and Punish: The Birth of the Prison

###

As we ponder privacy, we might recall that it was on this date in 1769 that the first patent was issued (in London, to John Bevan) for Venetian blinds.  Invented centuries before in Persia, then brought to Venice through trade, they became popular in Europe and then the U.S., both as a way to manage outside light and as an early privacy technology.

source

 

Written by LW

December 11, 2019 at 1:01 am

“Not with a bang but a whimper”*…

 

[image: automation]

 

What actually happens to workers when a company deploys automation? The common assumption seems to be that the employee simply disappears wholesale, replaced one-for-one with an AI interface or an array of mechanized arms.

Yet given the extensive punditeering, handwringing, and stump-speeching around the “robots are coming for our jobs” phenomenon—which I will never miss an opportunity to point out is falsely represented—research into what happens to the individual worker remains relatively thin. Studies have attempted to monitor the impact of automation on wages in aggregate or to correlate employment with levels of robotization.

But few in-depth investigations have been made into what happens to each worker after their companies roll out automation initiatives. Earlier this year, though, a paper authored by economists James Bessen, Maarten Goos, Anna Salomons, and Wiljan Van den Berge set out to do exactly that…

What emerges is a portrait of workplace automation that is ominous in a less dramatic manner than we’re typically made to understand. For one thing, there is no ‘robot apocalypse’, even after a major corporate automation event. Unlike mass layoffs, automation does not appear to immediately and directly send workers packing en masse.

Instead, automation increases the likelihood that workers will be driven away from their previous jobs at the companies—whether they’re fired, or moved to less rewarding tasks, or quit—and causes a long-term loss of wages for the employee.

The report finds that “firm-level automation increases the probability of workers separating from their employers and decreases days worked, leading to a 5-year cumulative wage income loss of 11 percent of one year’s earnings.” That’s a pretty significant loss.

Worse still, the study found that even in the Netherlands, which has a generous social safety net compared to, say, the United States, workers were able to offset only a fraction of those losses with benefits provided by the state. Older workers, meanwhile, were more likely to retire early—deprived of years of income they may have been counting on.

Interestingly, the effects of automation were felt similarly through all manner of company—small, large, industrial, services-oriented, and so on. The study covered all non-finance sector firms, and found that worker separation and income loss were “quite pervasive across worker types, firm sizes and sectors.”

Automation, in other words, is a more pervasive, slower-acting, and much less visible phenomenon than the robots-are-eating-our-jobs talk is preparing us for…

The result, Bessen says, is an added strain on the social safety net that it is currently woefully unprepared to handle. As more and more firms join the automation goldrush—a 2018 McKinsey survey of 1,300 companies worldwide found that three-quarters of them had either begun to automate business processes or planned to do so next year—the number of workers forced out of firms seems likely to tick up, or at least hold steady. What is unlikely to happen, per this research, is an automation-driven mass exodus of jobs.

This is a double-edged sword: While it’s obviously good that thousands of workers are unlikely to be fired in one fell swoop when a process is automated at a corporation, it also means the pain of automation is distributed in smaller, more personalized doses, and thus less likely to prompt any sort of urgent public response. If an entire Amazon warehouse were suddenly automated, it might spur policymakers to try to address the issue; if automation has been slowly hurting us for years, it’s harder to rally support for stemming the pain…

Brian Merchant on the ironic challenge of addressing the slow-motion, trickle-down social, economic, and cultural threats of automation– that they will accrue gradually, like erosion, not catastrophically… making it harder to generate a sense of urgency around creating a response: “There’s an Automation Crisis Underway Right Now, It’s Just Mostly Invisible.”
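
One clarifying note on the headline statistic: the 11 percent is denominated in one year’s earnings, cumulated over five years– not an 11 percent cut to every year’s pay. A back-of-the-envelope sketch (the salary figure is invented for illustration):

```python
annual_earnings = 40_000                  # hypothetical pre-automation salary
cumulative_loss = 0.11 * annual_earnings  # 11% of ONE year's earnings...
avg_per_year = cumulative_loss / 5        # ...spread over the 5-year window

print(f"cumulative loss: {cumulative_loss:,.0f}")  # 4,400
print(f"average per year: {avg_per_year:,.0f}")    # 880
```

Real money, but spread thin– which is exactly the kind of diffuse, low-visibility pain Merchant describes.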

* T. S. Eliot, “The Hollow Men”

###

As we think systemically, we might recall that it was on this date in 1994 that Ken McCarthy, Marc Andreessen, and Mark Graham held the first conference to focus on the commercial potential of the World Wide Web.

 

 

Written by LW

November 5, 2019 at 1:01 am
