(Roughly) Daily

Posts Tagged ‘artificial intelligence’

“The appearance of new species naturally and the appearance of new inventions by artifice are both responses to need”*…




Our reign as sole understanders of the cosmos is rapidly coming to an end. We should not be afraid of this. The revolution that has just begun may be understood as a continuation of the process whereby the Earth nurtures the understanders, the beings that will lead the cosmos to self-knowledge. What is revolutionary about this moment is that the understanders of the future will not be humans but cyborgs that will have designed and built themselves from the artificial intelligence systems we have already constructed. These will soon become thousands then millions of times more intelligent than us.

The term cyborg was coined by Manfred Clynes and Nathan Kline in 1960. It refers to a cybernetic organism: an organism as self-sufficient as one of us but made of engineered materials. I like this word and definition because it could apply to anything ranging in size from a microorganism to a pachyderm, from a microchip to an omnibus. It is now commonly taken to mean an entity that is part flesh, part machine. I use it here to emphasize that the new intelligent beings will have arisen, like us, from Darwinian evolution. They will not, at first, be separate from us; indeed, they will be our offspring because the systems we made turned out to be their precursors.

We need not be afraid because, initially at least, these inorganic beings will need us and the whole organic world to continue to regulate the climate, keeping Earth cool to fend off the heat of the sun and safeguard us from the worst effects of future catastrophes. We shall not descend into the kind of war between humans and machines that is so often described in science fiction because we need each other. Gaia will keep the peace.

This is the age I call the “Novacene.” I’m sure that one day a more appropriate name will be chosen, something more imaginative, but for now I’m using Novacene to describe what could be one of the most crucial periods in the history of our planet and perhaps even of the cosmos…

The father of the Gaia principle with a provocative take on the coming age of hyperintelligence: “Gaia Will Soon Belong to the Cyborgs.”

See also: “Is Moore’s Law Evidence for a New Stage in Human Evolution?”

For more background on (and some criticism of) Lovelock’s Gaia Hypothesis see “Earth’s Holy Fool?–Some scientists think that James Lovelock’s Gaia theory is nuts, but the public love it. Could both sides be right?”

[image above: source]

* James Lovelock


As we scrutinize systems, we might send closely-observed birthday greetings to Franklin Henry Giddings; he was born on this date in 1855.  An economist and political scientist by training, he was instrumental in the emergence of sociology from philosophy (of which it had been considered a branch) into a discipline of its own, and a champion of the use of statistics.  He is probably best remembered for his concept of “consciousness of kind” (rooted in Adam Smith’s concept of “sympathy,” or shared moral reactions), which is a state of mind wherein one conscious being recognizes another as being of like mind.  All human motives, he suggested, organize themselves around consciousness of kind as a determining principle.  Association leads to conflict which leads to consciousness of kind through communication, imitation, toleration, co-operation, and alliance.  Eventually, he argued, a group achieves a self-consciousness of its own (as opposed to individual self-consciousness) from which traditions and social values can arise.

source


“It is forbidden to kill; therefore all murderers are punished unless they kill in large numbers and to the sound of trumpets”*…



Francis Bacon, Study after Velazquez’s Portrait of Pope Innocent X, 1953


Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done…

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed…

Technological proliferation is not a list of principles. It is a deep, multivalent historical process with many radically different stakeholders over many different time-scales. People who invent technology never get to set the rules for what is done with it. A “non-evil” Google, built by two Stanford dropouts, is just not the same entity as modern Alphabet’s global multinational network, with its extensive planetary holdings in clouds, transmission cables, operating systems, and device manufacturing.

It’s not that Google and Alphabet become evil just because they’re big and rich. Frankly, they’re not even all that “evil.” They’re just inherently involved in huge, tangled, complex, consequential schemes, with much more variegated populations than had originally been imagined. It’s like the ethical difference between being two parish priests and becoming Pope.

Of course the actual Pope will confront Artificial Intelligence. His response will not be “is it socially beneficial to the user-base?” but rather, “does it serve God?” So unless you’re willing to morally out-rank the Pope, you need to understand that religious leaders will use Artificial Intelligence in precisely the way that televangelists have used television.

So I don’t mind the moralizing about AI. I even enjoy it as a metaphysical game, but I do have one caveat about this activity, something that genuinely bothers me. The practitioners of AI are not up-front about the genuine allure of their enterprise, which is all about the old-school Steve-Jobsian charisma of denting the universe while becoming insanely great. Nobody does AI for our moral betterment; everybody does it to feel transcendent.

AI activists are not everyday brogrammers churning out grocery-code. These are visionary zealots driven by powerful urges they seem unwilling to confront. If you want to impress me with your moral authority, gaze first within your own soul.

Excerpted from the marvelous Bruce Sterling‘s essay “Artificial Morality,” a contribution to the Provocations series, a project of the Los Angeles Review of Books in conjunction with UCI’s “The Future of the Future: The Ethics and Implications of AI” conference.

* Voltaire


As we agonize over algorithms, we might recall that it was on this date in 1872 that Luther Crowell patented a machine for the manufacture of accordion-sided, flat-bottomed paper bags (#123,811).  That said, Margaret E. Knight might more accurately be considered the “mother of the modern shopping bag”; she had perfected square bottoms two years earlier.



“How about a little magic?”*…


[image above: the sorcerer’s apprentice]


Once upon a time (bear with me if you’ve heard this one), there was a company which made a significant advance in artificial intelligence. Given their incredibly sophisticated new system, they started to put it to ever-wider uses, asking it to optimize their business for everything from the lofty to the mundane.

And one day, the CEO wanted to grab a paperclip to hold some papers together, and found there weren’t any in the tray by the printer. “Alice!” he cried (for Alice was the name of his machine learning lead) “Can you tell the damned AI to make sure we don’t run out of paperclips again?”…

What could possibly go wrong?

[As you’ll read in the full and fascinating article, a great deal…]

Computer scientists tell the story of the Paperclip Maximizer as a sort of cross between the Sorcerer’s Apprentice and the Matrix; a reminder of why it’s crucially important to tell your system not just what its goals are, but how it should balance those goals against costs. It frequently comes with a warning that it’s easy to forget a cost somewhere, and so you should always check your models carefully to make sure they aren’t accidentally turning into Paperclip Maximizers…
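The failure mode the parable warns about can be sketched in a few lines of illustrative Python (a toy of my own, not from Zunger’s article): an objective that names a goal but omits its costs will always prefer the runaway plan, while adding an explicit cost term restores sanity.

```python
# Toy "paperclip maximizer": each plan is (clips made, resources consumed).
# The names and numbers here are hypothetical, purely for illustration.

def naive_objective(clips_made):
    # Goal only: more paperclips is always better, at any cost.
    return clips_made

def balanced_objective(clips_made, resources_used, cost_per_unit=0.5):
    # The same goal, traded off against an explicit cost term.
    return clips_made - cost_per_unit * resources_used

def best_plan(objective, plans):
    # Pick whichever plan the objective scores highest.
    return max(plans, key=objective)

plans = [
    (10, 2),            # make a few clips cheaply
    (1000, 500),        # a reasonable production run
    (10**9, 10**10),    # convert everything in sight into paperclips
]

runaway = best_plan(lambda p: naive_objective(p[0]), plans)
sane = best_plan(lambda p: balanced_objective(p[0], p[1]), plans)

print(runaway)  # the naive objective picks the world-eating plan
print(sane)     # the cost-aware objective picks the moderate one
```

The point of the sketch is that nothing in the "runaway" branch is a bug: the optimizer is doing exactly what its objective asked. The forgotten cost term, not the optimization, is the error.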

But this parable is not just about computer science. Replace the paper clips in the story above with money, and you will see the rise of finance…

Yonatan Zunger tells a powerful story that’s not (only) about AI: “The Parable of the Paperclip Maximizer.”

* Mickey Mouse, The Sorcerer’s Apprentice


As we’re careful what we wish for (and how we wish for it), we might recall that it was on this date in 1631 that the Puritans in the recently-chartered Massachusetts Bay Colony issued a General Court Ordinance that banned gambling: “whatsoever that have cards, dice or tables in their houses, shall make away with them before the next court under pain of punishment.”

Mass gambling source


Written by LW

March 22, 2019 at 1:01 am

“Outward show is a wonderful perverter of the reason”*…


[image above: facial analysis]

Humans have long hungered for a short-hand to help in understanding and managing other humans.  From phrenology to the Myers-Briggs Test, we’ve tried dozens of short-cuts… and tended to find that at best they weren’t very helpful; at worst, they reinforced inaccurate stereotypes, leading to results that were unfair and ineffective.  Still, the quest continues– these days powered by artificial intelligence.  What could go wrong?…

Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, as it began testing a new surveillance program called the Screening of Passengers by Observation Techniques program, or Spot for short.

While developing the program, they consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train “behavior detection officers” to scan faces for signs of deception.

But when the program was rolled out in 2007, it was beset with problems. Officers were referring passengers for interrogation more or less at random, and the small number of arrests that came about were on charges unrelated to terrorism. Even more concerning was the fact that the program was allegedly used to justify racial profiling.

Ekman tried to distance himself from Spot, claiming his method was being misapplied. But others suggested that the program’s failure was due to an outdated scientific theory that underpinned Ekman’s method; namely, that emotions can be deduced objectively through analysis of the face.

In recent years, technology companies have started using Ekman’s method to train algorithms to detect emotion from facial expressions. Some developers claim that automatic emotion detection systems will not only be better than humans at discovering true emotions by analyzing the face, but that these algorithms will become attuned to our innermost feelings, vastly improving interaction with our devices.

But many experts studying the science of emotion are concerned that these algorithms will fail once again, making high-stakes decisions about our lives based on faulty science…

“Emotion detection” has grown from a research project to a $20bn industry; learn more about why that’s a cause for concern: “Don’t look now: why you should be worried about machines reading your emotions.”

* Marcus Aurelius, Meditations


As we insist on the individual, we might recall that it was on this date in 1989 that Tim Berners-Lee submitted a proposal to CERN for developing a new way of linking and sharing information over the Internet.

It was the first time Berners-Lee proposed a system that would ultimately become the World Wide Web; but that proposal was a relatively vague request to research the details and feasibility of such a system.  He later submitted a proposal on November 12, 1990 that much more directly detailed the actual implementation of the World Wide Web.

source


“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it”*…


[image above: robot writer]


Recently, OpenAI announced its latest breakthrough, GPT-2, a language model that can write essays to a prompt, answer questions, and summarize longer works… sufficiently successfully that OpenAI has said that it’s too dangerous to release the code (lest it result in “deepfake news” or other misleading mischief).

Scott Alexander contemplates the results.  His conclusion:

a brain running at 5% capacity is about as good as the best AI that the brightest geniuses working in the best-equipped laboratories in the greatest country in the world are able to produce in 2019. But:

We believe this project is the first step in the direction of developing large NLP systems without task-specific training data. That is, we are developing a machine language system in the generative style with no explicit rules for producing text. We hope for future collaborations between computer scientists, linguists, and machine learning researchers.

A boring sentiment from an interesting source: the AI wrote that when asked to describe itself. We live in interesting times.

His complete post, eminently worthy of reading in full: “Do Neural Nets Dream of Electric Hobbits?”

[image above, and another account of OpenAI’s creation: “OpenAI says its new robo-writer is too dangerous for public release“]

* Eliezer Yudkowsky


As we take the Turing Test, we might send elegantly-designed birthday greetings to Steve Jobs; he was born on this date in 1955.  While he is surely well-known to every reader here, let us note for the record that he was instrumental in developing the Macintosh, the computer that took Apple to unprecedented levels of success.  After leaving the company he started with Steve Wozniak, Jobs continued his personal computer development at his NeXT Inc.  In 1997, Jobs returned to Apple to lead the company into a new era based on NeXT technologies and consumer electronics.  Some of Jobs’ achievements in this new era include the iMac, the iPhone, the iTunes music store, the iPod, and the iPad.  Under Jobs’ leadership Apple was at one time the world’s most valuable company. (And, of course, he bought Pixar from George Lucas, and oversaw both its rise to animation dominance and its sale to Disney– as a product of which Jobs became Disney’s largest single shareholder.)

Jobs source


Written by LW

February 24, 2019 at 1:01 am
