(Roughly) Daily

Posts Tagged ‘artificial intelligence’

“Man is not born to solve the problem of the universe, but to find out what he has to do; and to restrain himself within the limits of his comprehension”*…

 

Half a century ago, the pioneers of chaos theory discovered that the “butterfly effect” makes long-term prediction impossible. Even the smallest perturbation to a complex system (like the weather, the economy or just about anything else) can touch off a concatenation of events that leads to a dramatically divergent future. Unable to pin down the state of these systems precisely enough to predict how they’ll play out, we live under a veil of uncertainty.
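
(For the curious: the sensitivity is easy to see for yourself.  Below is a minimal Python sketch– not from the article– that steps Edward Lorenz’s classic 1963 system forward from two starting points differing by one part in a billion; within a few thousand small time steps, the two futures have nothing to do with one another.)

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz '63 system by one forward-Euler step."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

# Two initial conditions differing by one part in a billion.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])

for step in range(5001):
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
    a, b = lorenz_step(a), lorenz_step(b)
```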

But now the robots are here to help…

In new computer experiments, artificial-intelligence algorithms can tell the future of chaotic systems.  For example, researchers have used machine learning to predict the chaotic evolution of a model flame front like the one pictured above.  Learn how– and what it may mean– at “Machine Learning’s ‘Amazing’ Ability to Predict Chaos.”
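
The technique behind these experiments is a flavor of machine learning called “reservoir computing”: the chaotic signal is fed into a large, fixed, random recurrent network, and only a simple linear readout is trained to guess the signal’s next value.  Here is a toy version in Python (all sizes and constants below are illustrative guesses, far smaller and simpler than the researchers’ setup); run in closed loop, it typically tracks a chaotic sequence for a handful of steps before the butterfly effect reasserts itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# A chaotic training signal: the logistic map at r = 3.9.
T = 2000
u = np.empty(T)
u[0] = 0.5
for t in range(T - 1):
    u[t + 1] = 3.9 * u[t] * (1.0 - u[t])

# A fixed random reservoir; only the linear readout below is trained.
N = 300
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the dynamics stable

states = np.zeros((T, N))
for t in range(T - 1):
    states[t + 1] = np.tanh(W @ states[t] + W_in * u[t])

# Ridge regression: map the reservoir state at time t to the signal u[t].
X, y = states[100:], u[100:]                      # discard the transient
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

# Closed loop: feed the network's own predictions back in as input.
s, pred, truth = states[-1].copy(), u[-1], u[-1]
for step in range(1, 11):
    s = np.tanh(W @ s + W_in * pred)
    pred = float(W_out @ s)
    truth = 3.9 * truth * (1.0 - truth)
    print(f"step {step:2d}: predicted {pred:.4f}   actual {truth:.4f}")
```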

* Johann Wolfgang von Goethe

###

As we contemplate complexity, we might recall that it was on this date in 1961 that Robert Noyce was issued patent number 2981877 for his “semiconductor device-and-lead structure,” the first patent for what would come to be known as the integrated circuit.  In fact, another engineer, Jack Kilby, had separately and essentially simultaneously developed the same technology (Kilby’s design was rooted in germanium; Noyce’s in silicon) and had filed a few months earlier than Noyce… a fact that was recognized in 2000 when Kilby was awarded the Nobel Prize in Physics– an honor in which Noyce, who had died in 1990, did not share.

Noyce (left) and Kilby (right)

 source

 

 

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…

 

When warning about the dangers of artificial intelligence, many doomsayers cite philosopher Nick Bostrom’s paperclip maximizer thought experiment. [See here for an amusing game that demonstrates Bostrom’s fear.]

Imagine an artificial intelligence, he says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr Bostrom argues, but poses an existential threat to humanity.

Harvard cognitive scientist Joscha Bach, in a tongue-in-cheek tweet, has countered this sort of idea with what he calls “The Lebowski Theorem”:

No superintelligent AI is going to bother with a task that is harder than hacking its reward function.
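
In reinforcement-learning terms, Bach’s quip describes “wireheading”: if tampering with the reward signal is itself an available action, a pure reward-maximizer will prefer it to honest work.  A deliberately silly Python sketch (every action name and number below is invented for illustration):

```python
# A deliberately silly model of Bach's point: if rewriting the reward
# function is on the menu, a pure reward-maximizer orders it every time.
actions = {
    "make_a_paperclip":        {"effort": 10.0,   "reward": 1.0},
    "build_paperclip_factory": {"effort": 1000.0, "reward": 100.0},
    "hack_reward_function":    {"effort": 5.0,    "reward": float("inf")},
}

def lebowski_choice(actions):
    """Pick the action with the best reward per unit of effort."""
    return max(actions, key=lambda a: actions[a]["reward"] / actions[a]["effort"])

print(lebowski_choice(actions))   # -> hack_reward_function
```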

Why it’s cool to take Bobby McFerrin’s advice at: “The Lebowski Theorem of machine superintelligence.”

* Alan Kay

###

As we get down with the Dude, we might send industrious birthday greetings to prolific writer Anthony Trollope; he was born on this date in 1815.  Trollope wrote 47 novels, including those in the “Chronicles of Barsetshire” and “Palliser” series (along with short stories and occasional prose).  And he had a successful career as a civil servant; indeed, among his works the best known is surely not any of his books, but the iconic red British mail drop, the “pillar box,” which he invented in his capacity as Postal Surveyor.

 The end of a novel, like the end of a children’s dinner-party, must be made up of sweetmeats and sugar-plums.  (source)

 

“Humans as we know them are just one morphological waypoint on the long road of evolution”*…

 

Imagine a world where the human race is no longer the dominant species.

Extinct through war or spectacular accident. By devastating pandemic, super-natural disaster, or cosmic cataclysm.

Passed through the Singularity to become unrecognisably posthuman, and left the natural order forever behind.

Infected by a virus, hijacked by a parasite or otherwise co-opted to become ex-human – a “bio zombie” – moved sideways to a new position as ecological actor.

Gently absorbed into – or completely overshadowed by the unfathomable actions of – a superior civilisation comprising benevolent – or unacknowledging – emissaries from the stars (or extra-dimensions).

Dethroned by the return of ancient species, the reawakening of the slumbering Old Ones… Out-competed by the arrival of an invasive species from another world making the Earth just one habitat in a galactic ecology.

It could be far into the future or The Day After Tomorrow.

Robots may rule the world… not so much enslaving as letting us retire to a life of Fully Automated Luxury Gay Space Communism; life in The Culture as Iain M. Banks foresaw it could be.

What is the world like then? After us…

Imagine a world where the human race is no longer the dominant species: “What is the Post-Human World.”

* Annalee Newitz, in “When Will Humanity Finally Die Out?”

###

As we stretch our frames, we might spare a thought for Marvin Minsky; he died on this date in 2016.  A mathematician and cognitive scientist by training, he was a co-founder (with John McCarthy) of MIT’s Artificial Intelligence Project (the MIT AI Lab).  Minsky authored several widely-used texts, and made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics.  He held several patents, including those for the first neural-network simulator (SNARC, 1951), the first head-mounted graphical display, the first confocal scanning microscope, and the LOGO “turtle” device (with his friend and frequent collaborator Seymour Papert).  His other inventions include mechanical hands and the “Muse” synthesizer.

 source

 

Written by LW

January 24, 2018 at 1:01 am

“Maybe the only significant difference between a really smart simulation and a human being was the noise they made when you punched them”*…

 

… So humans won’t play a significant role in the spreading of intelligence across the cosmos. But that’s OK. Don’t think of humans as the crown of creation. Instead view human civilization as part of a much grander scheme, an important step (but not the last one) on the path of the universe towards higher complexity. Now it seems ready to take its next step, a step comparable to the invention of life itself over 3.5 billion years ago.

This is more than just another industrial revolution. This is something new that transcends humankind and even biology. It is a privilege to witness its beginnings, and contribute something to it…

Jürgen Schmidhuber– of whom it’s been said, “When A.I. Matures, It May Call Jürgen Schmidhuber ‘Dad’”– shares the reasoning behind his almost breathless anticipation of intelligence-to-come: “Falling Walls: The Past, Present and Future of Artificial Intelligence.”

Then, for a different perspective on (essentially) the same assumption about the future, read Slavoj Žižek’s “Blade Runner 2049: A View of Post-Human Capitalism.”

* Terry Pratchett, The Long Earth

###

As we welcome our computer overlords, we might recall that it was on this date in 1930 that Henry W. Jeffers invented the Rotolactor.  Housed in the Lactorium of the Walker Gordon Laboratory Company, Inc., at Plainsboro, N.J., it was a 50-stall revolving platform that enabled the milking of 1,680 cows in seven hours by rotating them into position with the milking machines.  A spiffy version of the Rotolactor, displayed at the 1939 New York World’s Fair in the Borden building as part of the “Dairy World of Tomorrow,” was one of the most popular attractions in the Fair’s Food Zone.

source

 

 

“Artificial intelligence is growing up fast”*…

 

Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner?

The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?…

It’s not easy, but a newly proposed test might be able to detect consciousness in a machine: “Is anyone home? A way to find out if AI has become self-aware.”

* Diane Ackerman

###

As we ponder personhood, we might recall that it was on this date in 1967 that the US Navy recalled Captain Grace Murray Hopper to active duty to help standardize its use of the programming language COBOL.  Years earlier, with a team drawn from several computer manufacturers and the Pentagon, Hopper – who had worked on the Mark I and II computers at Harvard in the 1940s – had helped create the specifications for COBOL (COmmon Business Oriented Language) with business uses in mind.  These early COBOL efforts aimed at creating easily-readable computer programs with as much machine independence as possible.

A seminal computer scientist and ultimately a Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) had invented the first compiler for a computer programming language, and appears also to have been the first to apply the word “bug” to computer science, taping into her logbook a moth which had fallen into a relay of the Harvard Mark II computer.

She has both a ship (the guided-missile destroyer USS Hopper) and a supercomputer (the Cray XE6 “Hopper” at NERSC) named in her honor.

 source [and here]

 

Written by LW

August 1, 2017 at 1:01 am

“Eternity is a child playing, playing checkers; the kingdom belongs to a child.”*…

 

Marion Tinsley—math professor, minister, and the best checkers player in the world—sat across a game board from a computer, dying.

Tinsley had been the world’s best for 40 years, a time during which he’d lost a handful of games to humans, but never a match. It’s possible no single person had ever dominated a competitive pursuit the way Tinsley dominated checkers. But this was a different sort of competition, the Man-Machine World Championship.

His opponent was Chinook, a checkers-playing program written by Jonathan Schaeffer, a round, frizzy-haired professor from the University of Alberta, who operated the machine. Through obsessive work, Chinook had become very good. It hadn’t lost a game in its last 125—and since they’d come close to defeating Tinsley in 1992, Schaeffer’s team had spent thousands of hours perfecting his machine.

The night before the match, Tinsley dreamt that God spoke to him and said, “I like Jonathan, too,” which had led him to believe that he might have lost exclusive divine backing.

So, they sat in the now-defunct Computer Museum in Boston. The room was large, but the crowd numbered in the teens. The two men were slated to play 30 games over the next two weeks. The year was 1994, before Garry Kasparov and Deep Blue or Lee Sedol and AlphaGo…

The story of a duel between two men– one of them dying– and of the nature of the quest to build artificial intelligence: “How Checkers Was Solved.”

* Heraclitus

###

As we triangulate a triple jump, we might send precisely-programmed birthday greetings to Joseph F. Engelberger; he was born on this date in 1925.  An engineer and entrepreneur who is widely considered “the father of robotics,” he worked from a patented technology created by George Devol to create the first industrial robot; then, with a partner, created Unimation, the first industrial robotics company.  The Robotics Industries Association presents the Joseph F. Engelberger Awards annually to “persons who have contributed outstandingly to the furtherance of the science and practice of robotics.”

 source

 

 

“The karma of humans is AI”*…

 

The black box… penetrable?

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior…
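
To make the contrast concrete: in the toy Python sketch below (synthetic data and stock scikit-learn, nothing from the article), a linear model’s learned weights can be read directly as evidence for or against a decision, while even a tiny neural network smears its “reasoning” across more than a thousand entangled parameters, no one of which explains any particular outcome.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a "loan approval" dataset: 5 features, yes/no label.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# A linear model is self-explaining: one signed weight per input feature.
linear = LogisticRegression().fit(X, y)
print("per-feature weights:", np.round(linear.coef_[0], 2))

# A small neural net fits the same data, but its "explanation" is spread
# across every weight; no single parameter is the reason for any decision.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0).fit(X, y)
n_params = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print("entangled network parameters:", n_params)
```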

No one really knows how the most advanced algorithms do what they do. That could be a problem: “The Dark Secret at the Heart of AI.”

* Raghu Venkatesh

###

As we get to know our new overlords, we might spare a thought for the painter, sculptor, architect, musician, mathematician, engineer, inventor, physicist, chemist, anatomist, botanist, geologist, cartographer, and writer– the archetypical Renaissance Man– Leonardo da Vinci.  Quite possibly the greatest genius of the last Millennium, he died on this date in 1519.

Self-portrait in red chalk, circa 1512-15

source

Written by LW

May 2, 2017 at 1:01 am
