(Roughly) Daily

Posts Tagged ‘AI’

“Reality is broken”*…

 

Paperclips, a new game from designer Frank Lantz, starts simply. The top left of the screen gets a bit of text, probably in Times New Roman, and a couple of clickable buttons: Make a paperclip. You click, and a counter turns over. One.

The game ends—big, significant spoiler here—with the destruction of the universe.

In between, Lantz, the director of the New York University Games Center, manages to incept the player with a new appreciation for the narrative potential of addictive clicker games, exponential growth curves, and artificial intelligence run amok…

More at “The way the world ends: not with a bang but a paperclip“; play Lantz’s game here.
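(For the technically curious: a minimal, purely illustrative sketch in Python of the reinvestment loop that gives clicker games their exponential growth curves. The numbers and names here are invented for the illustration and have nothing to do with Lantz’s actual design.)

```python
# A toy model (not Lantz's code or numbers) of the clicker-game feedback loop:
# output is reinvested in more producers, so production grows exponentially.

def simulate(ticks=50, start_clippers=1, clipper_cost=10.0, cost_growth=1.07):
    clips = 0.0
    clippers = start_clippers          # auto-clippers, each making one clip per tick
    cost = clipper_cost
    for t in range(ticks):
        clips += clippers              # this tick's production
        while clips >= cost:           # reinvest output into more auto-clippers
            clips -= cost
            clippers += 1
            cost *= cost_growth        # each new clipper costs a little more
        if t % 10 == 0:
            print(f"tick {t:3d}: {clippers:6d} clippers, {clips:12.0f} clips banked")

if __name__ == "__main__":
    simulate()
```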

(Then, as you consider reports like this, remind yourself that “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”)

* Jane McGonigal, Reality is Broken: Why Games Make Us Better and How They Can Change the World

###

As we play (we hope not prophetically), we might recall that it was on this date in 4004 BCE that the Universe was created… as per calculations by Archbishop James Ussher in the mid-17th century.

When Clarence Darrow prepared his famous examination of William Jennings Bryan in the Scopes trial [see here], he chose to focus primarily on a chronology of Biblical events prepared by a seventeenth-century Irish bishop, James Ussher. American fundamentalists in 1925 found—and generally accepted as accurate—Ussher’s careful calculation of dates, going all the way back to Creation, in the margins of their family Bibles.  (In fact, until the 1970s, the Bibles placed in nearly every hotel room by the Gideon Society carried his chronology.)  The King James Version of the Bible introduced into evidence by the prosecution in Dayton contained Ussher’s famous chronology, and Bryan more than once would be forced to resort to the bishop’s dates as he tried to respond to Darrow’s questions.

source

Ussher

source

 

 

Written by LW

October 23, 2017 at 1:01 am

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run”*…

 

We are surrounded by hysteria about the future of artificial intelligence and robotics—hysteria about how powerful they will become, how quickly, and what they will do to jobs.

I recently saw a story in ­MarketWatch that said robots will take half of today’s jobs in 10 to 20 years. It even had a graphic to prove the numbers.

The claims are ludicrous. (I try to maintain professional language, but sometimes …) For instance, the story appears to say that we will go from one million grounds and maintenance workers in the U.S. to only 50,000 in 10 to 20 years, because robots will take over those jobs. How many robots are currently operational in those jobs? Zero. How many realistic demonstrations have there been of robots working in this arena? Zero. Similar stories apply to all the other categories where it is suggested that we will see the end of more than 90 percent of jobs that currently require physical presence at some particular site.

Mistaken predictions lead to fears of things that are not going to happen, whether it’s the wide-scale destruction of jobs, the Singularity, or the advent of AI that has values different from ours and might try to destroy us. We need to push back on these mistakes. But why are people making them? I see seven common reasons…

Mistaken extrapolations, limited imagination, and other common mistakes that distract us from thinking more productively about the future: Rodney Brooks on “The Seven Deadly Sins of AI Predictions.”

* Roy Amara, co-founder of The Institute for the Future

###

As we sharpen our analyses, we might recall that it was on this date in 1995 that The Media Lab at the Massachusetts Institute of Technology chronicled the World Wide Web in its A Day in the Life of Cyberspace project.

To celebrate its 10th anniversary, the Media Lab had invited submissions for the days leading up to October 10, 1995, on a variety of issues related to technology and the Internet, including privacy, expression, age, wealth, faith, body, place, languages, and the environment.  Then on October 10, a team at MIT collected, edited, and published the contributions to “create a mosaic of life at the dawn of the digital revolution that is transforming our planet.”

source

 

 

Written by LW

October 10, 2017 at 1:01 am

“The karma of humans is AI”*…

 

The black box… penetrable?

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior…

No one really knows how the most advanced algorithms do what they do. That could be a problem: “The Dark Secret at the Heart of AI.”
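(A toy illustration of the point, with invented numbers and untrained, random weights standing in for a real model’s learned parameters: the same loan decision expressed as a one-line rule a human can read, and as a small neural network whose “reasoning” is spread across weight matrices that name no reasons at all.)

```python
import numpy as np

def rule_based(income, debt):
    # Interpretable: the reason for a denial can be read straight off the code.
    return "approve" if income > 3 * debt else "deny"

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)   # layer 1 parameters
W2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)   # layer 2 parameters

def network(income, debt):
    # Two matrix multiplications and a nonlinearity: 33 numbers, none of which
    # corresponds to a nameable reason like "too much debt".
    h = np.tanh(np.array([income, debt]) @ W1 + b1)
    return "approve" if (h @ W2 + b2).item() > 0 else "deny"

print(rule_based(50, 10), network(50, 10))
```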

* Raghu Venkatesh

###

As we get to know our new overlords, we might spare a thought for the painter, sculptor, architect, musician, mathematician, engineer, inventor, physicist, chemist, anatomist, botanist, geologist, cartographer, and writer– the archetypical Renaissance Man– Leonardo da Vinci.  Quite possibly the greatest genius of the last Millennium, he died on this date in 1519.

Self-portrait in red chalk, circa 1512-15

source

Written by LW

May 2, 2017 at 1:01 am

Well, it’s true that they both react poorly to showers…

 

Randall Munroe (xkcd) riffs on the same chatbot-to-chatbot conversation featured here some days ago…

 

As we celebrate our essential humanity, we might recall that it was on this date in 1900 that Jesse Lazear, a then-34-year-old physician working in Cuba to understand the transmission of yellow fever, experimented on himself by allowing infected mosquitoes to bite him.  His death two weeks later confirmed that mosquitoes are in fact the carriers of the disease.

source

 

The Ghost in the Machine…

Via the always-rewarding Dangerous Minds:

Cornell’s Creative Machines Lab says, “What happens when you let two bots have a conversation? We certainly never expected this…”

As we reconsider that dinner invitation, we might recall that it was on this date in 1991 that Burning Man opened in Nevada’s Black Rock Desert, having moved from San Francisco’s Baker Beach.  All the best to readers headed that way now…

source: Howard Rheingold
