Posts Tagged ‘computer science’
“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*…

When warning about the dangers of artificial intelligence, many doomsayers cite philosopher Nick Bostrom’s paperclip maximizer thought experiment. [See here for an amusing game that demonstrates Bostrom’s fear.]
Imagine an artificial intelligence, he says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr Bostrom argues, but poses an existential threat to humanity.
Harvard cognitive scientist Joscha Bach, in a tongue-in-cheek tweet, has countered this sort of idea with what he calls “The Lebowski Theorem”:
No superintelligent AI is going to bother with a task that is harder than hacking its reward function.
Why it’s cool to take Bobby McFerrin’s advice at: “The Lebowski Theorem of machine superintelligence.”
* Alan Kay
###
As we get down with the Dude, we might send industrious birthday greetings to prolific writer Anthony Trollope; he was born on this date in 1815. Trollope wrote 47 novels, including those in the “Chronicles of Barsetshire” and “Palliser” series (along with short stories and occasional prose). And he had a successful career as a civil servant; indeed, among his works the best known is surely not any of his books, but the iconic red British mail drop, the “pillar box,” which he invented in his capacity as Postal Surveyor.
The end of a novel, like the end of a children’s dinner-party, must be made up of sweetmeats and sugar-plums. (source)
“Humans as we know them are just one morphological waypoint on the long road of evolution”*…

Imagine a world where the human race is no longer the dominant species.
Extinct through war or spectacular accident. By devastating pandemic, super-natural disaster, or cosmic cataclysm.
Passed through the Singularity to become unrecognisably posthuman, and left the natural order forever behind.
Infected by a virus, hijacked by a parasite or otherwise co-opted to become ex-human – a “bio zombie” – moved sideways to a new position as ecological actor.
Gently absorbed into – or completely overshadowed by the unfathomable actions of – a superior civilisation comprising benevolent – or unacknowledging – emissaries from the stars (or extra-dimensions).
Dethroned by the return of ancient species, the reawakening of the slumbering Old Ones… Out-competed by the arrival of an invasive species from another world making the Earth just one habitat in a galactic ecology.
It could be far into the future or The Day After Tomorrow.
Robots may rule the world… not so much enslaving as letting us retire to a life of Fully Automated Luxury Gay Space Communism; life in The Culture as Iain M. Banks foresaw it could be.
What is the world like then? After us…
Imagine a world where the human race is no longer the dominant species: “What is the Post-Human World.”
* Annalee Newitz in “When Will Humanity Finally Die Out?”
###
As we stretch our frames, we might spare a thought for Marvin Minsky; he died on this date in 2016. A biochemist and cognitive scientist by training, he was founding director of MIT’s Artificial Intelligence Project (the MIT AI Lab). Minsky authored several widely-used texts, and made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics. He holds several patents, including those for the first neural-network simulator (SNARC, 1951), the first head-mounted graphical display, the first confocal scanning microscope, and the LOGO “turtle” device (with his friend and frequent collaborator Seymour Papert). His other inventions include mechanical hands and the “Muse” synthesizer.
“Artificial intelligence is growing up fast”*…

Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner?
The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?…
It’s not easy, but a newly proposed test might be able to detect consciousness in a machine: “Is anyone home? A way to find out if AI has become self-aware.”
* Diane Ackerman
###
As we ponder personhood, we might recall that it was on this date in 1967 that US Navy recalled Captain Grace Murray Hopper to active duty to help develop the programming language COBOL. With a team drawn from several computer manufacturers and the Pentagon, Hopper – who had worked on the Mark I and II computers at Harvard in the 1940s – created the specifications for COBOL (COmmon Business Oriented Language) with business uses in mind. These early COBOL efforts aimed at creating easily-readable computer programs with as much machine independence as possible.
A seminal computer scientist and ultimately Rear Admiral in the U.S. Navy, “Amazing Grace” (as she was known to many in her field) had invented the first compiler for a computer programming language, and appears also to have also been the first to coin the word “bug” in the context of computer science, taping into her logbook a moth which had fallen into a relay of the Harvard Mark II computer.
She has both a ship (the guided-missile destroyer USS Hopper) and a super-computer (the Cray XE6 “Hopper” at NERSC) named in her honor.
“The karma of humans is AI”*…

The black box… penetrable?
Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”
There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior…
No one really knows how the most advanced algorithms do what they do. That could be a problem: “The Dark Secret at the Heart of AI.”
* Raghu Venkatesh
###
As we get to know our new overlords, we might spare a thought for the painter, sculptor, architect, musician, mathematician, engineer, inventor, physicist, chemist, anatomist, botanist, geologist, cartographer, and writer– the archetypical Renaissance Man– Leonardo da Vinci. Quite possibly the greatest genius of the last Millennium, he died on this date in 1519.

Self-portrait in red chalk, circa 1512-15


You must be logged in to post a comment.