(Roughly) Daily

Posts Tagged ‘machine learning

“I am so clever that sometimes I don’t understand a single word of what I am saying”*…

Humans claim to be intelligent, but what exactly is intelligence? Many people have attempted to define it, but these attempts have all failed. So I propose a new definition: intelligence is whatever humans do.

I will attempt to prove this new definition is superior to all previous attempts to define intelligence. First, consider humans’ history. It is a story of repeated failures. First humans thought the Earth was flat. Then they thought the Sun went around the Earth. Then they thought the Earth was the center of the universe. Then they thought the universe was static and unchanging. Then they thought the universe was infinite and expanding. Humans were wrong about alchemy, phrenology, bloodletting, creationism, astrology, numerology, and homeopathy. They were also wrong about the best way to harvest crops, the best way to govern, the best way to punish criminals, and the best way to cure the sick.

I will not go into the many ways humans have been wrong about morality. The list is long and depressing. If humans are so smart, how come they keep being wrong about everything?

So, what does it mean to be intelligent?…

Arram Sabeti (@arram) gave a prompt to GPT-3, a machine-learning language model; it wrote: “Are Humans Intelligent?- a Salty AI Op-Ed.”

(image above: source)

* Oscar Wilde

###

As we hail our new robot overlords, we might recall that it was on this date in 1814 that London suffered “The Great Beer Flood Disaster” when the metal bands on an immense vat at Meux’s Horse Shoe Brewery snapped, releasing a tidal wave of 3,555 barrels of Porter (571 tons– more than 1 million pints), which swept away the brewery walls, flooded nearby basements, and collapsed several adjacent tenements. While there were reports of over twenty fatalities resulting from poisoning by the porter fumes or alcohol coma, it appears that the death toll was 8, and those from the destruction caused by the huge wave of beer in the structures surrounding the brewery.

(The U.S. had its own vat mishap in 1919, when a Boston molasses plant suffered similarly-burst bands, creating a heavy wave of molasses moving at a speed of an estimated 35 mph; it killed 21 and injured 150.)

Meux’s Horse Shoe Brewery

source

“We must be free not because we claim freedom, but because we practice it”*…

 

algorithm

 

There is a growing sense of unease around algorithmic modes of governance (‘algocracies’) and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology…

From a pre-print of John Danaher‘s (@JohnDanaher) chapter in the forthcoming Oxford Handbook on the Philosophy of Technology, edited by Shannon Vallor: “Freedom in an Age of Algocracy “… a little dense, but very useful.

[image above: source]

* William Faulkner

###

As we meet the new boss, same as the old boss, we might recall that it was on this date in 1962 that telephone and television signals were first relayed in space via the communications satellite Echo 1– basically a big metallic balloon that simply bounced radio signals off its surface.  Simple, but effective.

Forty thousand pounds (18,144 kg) of air was required to inflate the sphere on the ground; so it was inflated in space.  While in orbit it only required several pounds of gas to keep it inflated.

Fun fact: the Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.

200px-Echo-1 source

 

Written by LW

February 24, 2020 at 1:01 am

“Surveillance is permanent in its effects, even if it is discontinuous in its action”*…

 

Facial recognition

China’s facial recognition technology identifies visitors in a display at the Digital China Exhibition in Fuzhou, Fujian province, earlier this year

 

Collective wisdom is that China is becoming a kind of all-efficient Technocratic Leviathan thanks to the combination of machine learning and authoritarianism. Authoritarianism has always been plagued with problems of gathering and collating information and of being sufficiently responsive to its citizens’ needs to remain stable. Now, the story goes, a combination of massive data gathering and machine learning will solve the basic authoritarian dilemma. When every transaction that a citizen engages in is recorded by tiny automatons riding on the devices they carry in their hip pockets, when cameras on every corner collect data on who is going where, who is talking to whom, and uses facial recognition technology to distinguish ethnicity and identify enemies of the state, a new and far more powerful form of authoritarianism will emerge. Authoritarianism then, can emerge as a more efficient competitor that can beat democracy at its home game (some fear this; some welcome it).

The theory behind this is one of strength reinforcing strength – the strengths of ubiquitous data gathering and analysis reinforcing the strengths of authoritarian repression to create an unstoppable juggernaut of nearly perfectly efficient oppression. Yet there is another story to be told – of weakness reinforcing weakness. Authoritarian states were always particularly prone to the deficiencies identified in James Scott’s Seeing Like a State – the desire to make citizens and their doings legible to the state, by standardizing and categorizing them, and reorganizing collective life in simplified ways, for example by remaking cities so that they were not organic structures that emerged from the doings of their citizens, but instead grand chessboards with ordered squares and boulevards, reducing all complexities to a square of planed wood. The grand state bureaucracies that were built to carry out these operations were responsible for multitudes of horrors, but also for the crumbling of the Stalinist state into a Brezhnevian desuetude, where everyone pretended to be carrying on as normal because everyone else was carrying on too. The deficiencies of state action, and its need to reduce the world into something simpler that it could comprehend and act upon created a kind of feedback loop, in which imperfections of vision and action repeatedly reinforced each other.

So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One – that while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling e.g. particular groups that are regarded as problematic out for particular police attention, leading them to be more liable to be arrested and so on), the bias may feed upon itself.

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against it (although they will find it harder to mobilize against algorithms than overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, then there will be ways for people to point out these inefficiencies or problems.

These correction tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist…

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency to bad decision making, and reducing further the possibility of negative feedback that could help correct against errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighur today. The second will involve more ordinary self-ramifying errors, that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but that may also be more pernicious, and more damaging to the political health and viability of the regime for just that reason.

So in short, this conjecture would suggest that  the conjunction of AI and authoritarianism (has someone coined the term ‘aithoritarianism’ yet? I’d really prefer not to take the blame), will have more or less the opposite effects of what people expect. It will not be Singapore writ large, and perhaps more brutal. Instead, it will be both more radically monstrous and more radically unstable…

Henry Farrell (@henryfarrell) makes that case that the “automation of authoritarianism” may backfire on China (and on the regimes to which it is exporting it’s surveillance technology): “Seeing Like a Finite State Machine.”

See also: “China Government Spreads Uyghur Analytics Across China.”

* Michel Foucault, Discipline and Punish: The Birth of the Prison

###

As we ponder privacy, we might recall that it was on this date in 1769 that the first patent was issued (in London, to John Bevan) for Venetian blinds.  Invented centuries before in Persia, then brought back to Venice through trade, they became popular in Europe, then the U.S. as both a manager of outside light and as an early privacy technology.

venetian blinds source

 

Written by LW

December 11, 2019 at 1:01 am

“Man is not born to solve the problem of the universe, but to find out what he has to do; and to restrain himself within the limits of his comprehension”*…

 

Half a century ago, the pioneers of chaos theory discovered that the “butterfly effect” makes long-term prediction impossible. Even the smallest perturbation to a complex system (like the weather, the economy or just about anything else) can touch off a concatenation of events that leads to a dramatically divergent future. Unable to pin down the state of these systems precisely enough to predict how they’ll play out, we live under a veil of uncertainty.

But now the robots are here to help…

In new computer experiments, artificial-intelligence algorithms can tell the future of chaotic systems.  For example, researchers have used machine learning to predict the chaotic evolution of a model flame front like the one pictured above.  Learn how– and what it may mean– at “Machine Learning’s ‘Amazing’ Ability to Predict Chaos.”

* Johann Wolfgang von Goethe

###

As we contemplate complexity, we might recall that it was on this date in 1961 that Robert Noyce was issued patent number 2981877 for his “semiconductor device-and-lead structure,” the first patent for what would come to be known as the integrated circuit.  In fact another engineer, Jack Kilby, had separately and essentially simultaneously developed the same technology (Kilby’s design was rooted in germanium; Noyce’s in silicon) and had filed a few months earlier than Noyce… a fact that was recognized in 2000 when Kilby was Awarded the Nobel Prize– in which Noyce, who had died in 1990, did not share.

Noyce (left) and Kilby (right)

 source

 

 

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…

 

When warning about the dangers of artificial intelligence, many doomsayers cite philosopher Nick Bostrom’s paperclip maximizer thought experiment. [See here for an amusing game that demonstrates Bostrom’s fear.]

Imagine an artificial intelligence, he says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr Bostrom argues, but poses an existential threat to humanity.

Harvard cognitive scientist Joscha Bach, in a tongue-in-cheek tweet, has countered this sort of idea with what he calls “The Lebowski Theorem”:

No superintelligent AI is going to bother with a task that is harder than hacking its reward function.

Why it’s cool to take Bobby McFerrin’s advice at: “The Lebowski Theorem of machine superintelligence.”

* Alan Kay

###

As we get down with the Dude, we might send industrious birthday greetings to prolific writer Anthony Trollope; he was born on this date in 1815.  Trollope wrote 47 novels, including those in the “Chronicles of Barsetshire” and “Palliser” series (along with short stories and occasional prose).  And he had a successful career as a civil servant; indeed, among his works the best known is surely not any of his books, but the iconic red British mail drop, the “pillar box,” which he invented in his capacity as Postal Surveyor.

 The end of a novel, like the end of a children’s dinner-party, must be made up of sweetmeats and sugar-plums.  (source)

 

%d bloggers like this: