(Roughly) Daily

Posts Tagged ‘algorithms’

“We must be free not because we claim freedom, but because we practice it”*…

 


 

There is a growing sense of unease around algorithmic modes of governance (‘algocracies’) and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception of freedom as well as a broader conception of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology…

From a pre-print of John Danaher’s (@JohnDanaher) chapter in the forthcoming Oxford Handbook of Philosophy of Technology, edited by Shannon Vallor: “Freedom in an Age of Algocracy”… a little dense, but very useful.

[image above: source]

* William Faulkner

###

As we meet the new boss, same as the old boss, we might recall that it was on this date in 1962 that telephone and television signals were first relayed in space via the communications satellite Echo 1– basically a big metallic balloon that simply bounced radio signals off its surface.  Simple, but effective.

Forty thousand pounds (18,144 kg) of air would have been required to inflate the sphere on the ground, so it was inflated in space.  While in orbit, it required only a few pounds of gas to remain inflated.

Fun fact: the Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.

source

 

Written by LW

February 24, 2020 at 1:01 am

“The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge”*…

 


After the fall of the Berlin Wall, East German citizens were offered the chance to read the files kept on them by the Stasi, the much-feared Communist-era secret police service. To date, it is estimated that only 10 percent have taken the opportunity.

In 2007, James Watson, the co-discoverer of the structure of DNA, asked that he not be given any information about his APOE gene, one allele of which is a known risk factor for Alzheimer’s disease.

Most people tell pollsters that, given the choice, they would prefer not to know the date of their own death—or even the future dates of happy events.

Each of these is an example of willful ignorance. Socrates may have made the case that the unexamined life is not worth living, and Hobbes may have argued that curiosity is mankind’s primary passion, but many of our oldest stories actually describe the dangers of knowing too much. From Adam and Eve and the tree of knowledge to Prometheus stealing the secret of fire, they teach us that real-life decisions need to strike a delicate balance between choosing to know, and choosing not to.

But what if a technology came along that shifted this balance unpredictably, complicating how we make decisions about when to remain ignorant? That technology is here: It’s called artificial intelligence.

AI can find patterns and make inferences using relatively little data. Only a handful of Facebook likes are necessary to predict your personality, race, and gender, for example. Another computer algorithm claims it can distinguish between homosexual and heterosexual men with 81 percent accuracy, and homosexual and heterosexual women with 71 percent accuracy, based on their picture alone. An algorithm named COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) can predict criminal recidivism from data like juvenile arrests, criminal records in the family, education, social isolation, and leisure activities with 65 percent accuracy…
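The kind of inference the excerpt describes can be sketched in spirit with a toy model: a logistic regression that predicts a binary trait from a handful of binary “like” features. Everything below is invented for illustration — the synthetic data and the simple gradient-descent training are assumptions, not the actual methods behind the studies cited above.

```python
import math
import random

random.seed(0)

# Synthetic "users": five binary likes each; the label is (artificially)
# correlated with the first two likes, so a linear model can learn it.
def make_user():
    likes = [random.randint(0, 1) for _ in range(5)]
    label = 1 if likes[0] + likes[1] >= 1 else 0
    return likes, label

train = [make_user() for _ in range(200)]

# Minimal logistic regression trained with per-example gradient descent.
w = [0.0] * 5
b = 0.0
lr = 0.5
for _ in range(300):
    for x, y in train:
        z = b + sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of label 1
        err = p - y                      # gradient of log-loss w.r.t. z
        b -= lr * err
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]

def predict(x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if z > 0 else 0

accuracy = sum(predict(x) == y for x, y in train) / len(train)
```

On this deliberately easy synthetic data the model learns the pattern almost perfectly; the point is only that a few sparse binary signals can carry a surprising amount of predictive information, which is what makes the real-world results above possible.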

Knowledge can sometimes corrupt judgment, and we often choose to remain deliberately ignorant in response.  But in an age of all-knowing algorithms, how do we choose not to know?  Two scientists at the Max Planck Institute for Human Development argue that “We Need to Save Ignorance From AI.”

* Daniel J. Boorstin

###

As we consider closing our eyes, we might send discoverable birthday greetings to Tim Bray; he was born on this date in 1955.  A seminal software developer and entrepreneur, he is probably best known as the co-author of the original specifications for XML and XML namespaces, open standards that fueled the growth of the internet (by setting down simple rules for encoding documents in a format that is both human-readable and machine-readable), and as the co-founder of the Open Text Corporation, which released the Open Text Index, one of the first popular commercial web search engines.

source

 

Written by LW

June 21, 2018 at 1:01 am

“The karma of humans is AI”*…

 

The black box… penetrable?

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”
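The contrast the excerpt draws can be made concrete with a toy example: for a simple linear scorer, each feature’s contribution to a decision is directly readable as weight × value — an explanation a deep network cannot offer. The weights and features below are purely hypothetical, invented for illustration; they come from no real parole or lending system.

```python
# Hypothetical linear scoring model: each feature's contribution to the
# final score is simply weight * value, so the "reasoning" is transparent.
weights = {"prior_arrests": 1.2, "age_factor": -0.8, "employed": -0.5}
person  = {"prior_arrests": 2,   "age_factor": 1,    "employed": 0}

contributions = {f: weights[f] * person[f] for f in weights}
score = sum(contributions.values())

for feature, c in contributions.items():
    print(f"{feature}: {c:+.1f}")
print(f"total score: {score:+.1f}")
```

With a deep network, no such per-feature decomposition falls out of the model: the decision emerges from millions of learned parameters with no individually interpretable meaning, which is exactly the inscrutability the article describes.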

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior…

No one really knows how the most advanced algorithms do what they do. That could be a problem: “The Dark Secret at the Heart of AI.”

* Raghu Venkatesh

###

As we get to know our new overlords, we might spare a thought for the painter, sculptor, architect, musician, mathematician, engineer, inventor, physicist, chemist, anatomist, botanist, geologist, cartographer, and writer– the archetypal Renaissance Man– Leonardo da Vinci.  Quite possibly the greatest genius of the last millennium, he died on this date in 1519.

Self-portrait in red chalk, circa 1512-15

source

Written by LW

May 2, 2017 at 1:01 am
